Publications

ChatGPT is fun, but it is not funny! Humor is still challenging Large Language Models

This paper evaluates ChatGPT's ability to understand and reproduce human humor through exploratory experiments.

Recommended citation: Jentzsch, S., & Kersting, K. (2023). ChatGPT is fun, but it is not funny! Humor is still challenging Large Language Models. In Proceedings of the 14th Workshop on Computational Approaches to Subjectivity and Sentiment Analysis. Association for Computational Linguistics, USA. https://arxiv.org/abs/2306.04563

Requirements for Explainability and Acceptance of Artificial Intelligence in Collaborative Work

This paper is a collaboration between colleagues from different DLR institutes: Sabine Theis (leader of the Human Factors in Software Engineering group at the Institute for Software Technology), me (Intelligente Software Systeme group, same institute), Foteini Deligiannaki, Charles Berro, and Arne Raulf (DLR Institute for AI Security), and Carmen Bruder (DLR Institute of Aerospace Medicine).

Recommended citation: Theis, S., Jentzsch, S., Deligiannaki, F., Berro, C., Raulf, A. P., & Bruder, C. (2023, July). Requirements for Explainability and Acceptance of Artificial Intelligence in Collaborative Work. In International Conference on Human-Computer Interaction (pp. 355-380). Cham: Springer Nature Switzerland. https://link.springer.com/chapter/10.1007/978-3-031-35891-3_22

Gender Bias in BERT – Measuring and Analysing Biases through Sentiment Rating in a Realistic Downstream Classification Task

In this paper, we analyse how gender biases propagate through ML-based systems and how the size of the foundation model affects the magnitude of those biases.

Recommended citation: Jentzsch et al. (2022). Gender Bias in BERT – Measuring and Analysing Biases through Sentiment Rating in a Realistic Downstream Classification Task. In Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) (pp. 184-199). Association for Computational Linguistics. https://aclanthology.org/2022.gebnlp-1.20/

Semantics Derived Automatically from Language Corpora Contain Human-like Moral Choices

First introduction of the Moral Choice Machine.

Recommended citation: Jentzsch, S., Schramowski, P., Rothkopf, C., & Kersting, K. (2019, January). Semantics derived automatically from language corpora contain human-like moral choices. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 37-44). https://dl.acm.org/doi/10.1145/3306618.3314267