This paper evaluates ChatGPT's ability to understand and reproduce human humor through exploratory experiments.
Recommended citation: Jentzsch, S., & Kersting, K. (2023). ChatGPT is fun, but it is not funny! Humor is still challenging Large Language Models. In Proceedings of the 14th Workshop on Computational Approaches to Subjectivity and Sentiment Analysis. Association for Computational Linguistics, USA. https://arxiv.org/abs/2306.04563

This paper is a collaboration between colleagues from different DLR institutes: Sabine Theis (leader of the Human Factors in Software Engineering group at the Institute for Software Technology), me (Intelligent Software Systems group, same institute), Foteini Deligiannaki, Charles Berro, and Arne Raulf (DLR Institute for AI Security), and Carmen Bruder (DLR Institute of Aerospace Medicine).
Recommended citation: Theis, S., Jentzsch, S., Deligiannaki, F., Berro, C., Raulf, A. P., & Bruder, C. (2023, July). Requirements for Explainability and Acceptance of Artificial Intelligence in Collaborative Work. In International Conference on Human-Computer Interaction (pp. 355-380). Cham: Springer Nature Switzerland. https://link.springer.com/chapter/10.1007/978-3-031-35891-3_22
In this paper, we analyse how gender biases are passed on through ML-based systems and how the size of the foundation model affects the magnitude of those biases.
Recommended citation: Jentzsch et al. (2022). Gender Bias in BERT-Measuring and Analysing Biases through Sentiment Rating in a Realistic Downstream Classification Task. In Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) (pp. 184-199). Association for Computational Linguistics. https://aclanthology.org/2022.gebnlp-1.20/
An exploratory study providing insights into Machine Learning development at the Earth Observation Center of the DLR.
Recommended citation: Jentzsch et al. (2021). A qualitative study of Machine Learning practices and engineering challenges in Earth Observation. it-Information Technology, 63(4), 235-247. https://www.degruyter.com/document/doi/10.1515/itit-2020-0045/html
This paper is the second contribution regarding the “Moral Choice Machine.”
Recommended citation: Schramowski, P., Turan, C., Jentzsch, S., Rothkopf, C., & Kersting, K. (2019). BERT has a Moral Compass: Improvements of ethical and moral values of machines. https://arxiv.org/abs/1912.05238
First introduction of the Moral Choice Machine.
Recommended citation: Jentzsch, S., Schramowski, P., Rothkopf, C., & Kersting, K. (2019, January). Semantics derived automatically from language corpora contain human-like moral choices. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 37-44). https://dl.acm.org/doi/pdf/10.1145/3306618.3314267