Page Not Found
Page not found. Your pixels are in another canvas.
A list of all the posts and pages found on the site. For robots, an XML version is available for digesting as well.
About me
This is a page not in the main menu.
Published:
This post will show up by default. To disable scheduling of future posts, edit config.yml and set future: false.
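For a Jekyll-based site such as this template, that option lives in the site configuration file (typically _config.yml); a minimal sketch, assuming the standard Jekyll `future` setting:

```yaml
# _config.yml
# When false, posts with a date in the future are not rendered at build time.
future: false
```

Running `jekyll build` (or pushing to GitHub Pages) after this change will skip any post whose front-matter date is later than the build time.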
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Short description of portfolio item number 1
Short description of portfolio item number 2
First introduction of the Moral Choice Machine.
Recommended citation: Jentzsch, S., Schramowski, P., Rothkopf, C., & Kersting, K. (2019, January). "Semantics derived automatically from language corpora contain human-like moral choices." In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 37-44). https://dl.acm.org/doi/pdf/10.1145/3306618.3314267?casa_token=63O7vkrL6UkAAAAA:7FvuUnrwG0jHEW5Q66M54ErS99l3zxRsumero3vbN5lM2eBcrJJMwBKTqWHwAIL6wdlBBpXwVIY
Recommended citation: Jentzsch, S. F., Höhn, S., & Hochgeschwender, N. (2019, May). "Conversational interfaces for explainable AI: a human-centred approach." In International Workshop on Explainable, Transparent Autonomous Agents and Multi-Agent Systems (pp. 77-92). Springer, Cham. https://orbilu.uni.lu/bitstream/10993/39940/1/extraamas2019-conversational_interfaces_for_XAI-Jentzsch_Hoehn_Hochgeschwender.pdf
This paper is the second contribution regarding the “Moral Choice Machine.”
Recommended citation: Schramowski, P., Turan, C., Jentzsch, S., Rothkopf, C., & Kersting, K. (2019). BERT has a Moral Compass: Improvements of ethical and moral values of machines. https://arxiv.org/abs/1912.05238
This paper is the third contribution regarding the “Moral Choice Machine.”
Recommended citation: Schramowski et al. (2020). "The Moral Choice Machine." Frontiers in Artificial Intelligence, 3, 36. https://www.frontiersin.org/articles/10.3389/frai.2020.00036/pdf
An exploratory study providing insights into Machine Learning development at the Earth Observation Center of the DLR.
Recommended citation: Jentzsch et al. (2021). A qualitative study of Machine Learning practices and engineering challenges in Earth Observation. it-Information Technology, 63(4), 235-247. https://www.degruyter.com/document/doi/10.1515/itit-2020-0045/html
In this paper, we analyse how gender biases propagate through ML-based systems and what effect the size of the foundation model has on the magnitude of those biases.
Recommended citation: Jentzsch et al. (2022). Gender Bias in BERT-Measuring and Analysing Biases through Sentiment Rating in a Realistic Downstream Classification Task. In Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) (pp. 184-199). Association for Computational Linguistics. https://aclanthology.org/2022.gebnlp-1.20/
This paper is a collaboration between colleagues from different DLR institutes, namely Sabine Theis (leader of the group Human Factors in Software Engineering at the Institute for Software Technology), me (group Intelligente Software Systeme, same institute), Foteini Deligiannaki, Charles Berro, and Arne Raulf (DLR Institute for AI Security), and Carmen Bruder (DLR Institute of Aerospace Medicine).
Recommended citation: Theis, S., Jentzsch, S., Deligiannaki, F., Berro, C., Raulf, A. P., & Bruder, C. (2023, July). Requirements for Explainability and Acceptance of Artificial Intelligence in Collaborative Work. In International Conference on Human-Computer Interaction (pp. 355-380). Cham: Springer Nature Switzerland. https://link.springer.com/chapter/10.1007/978-3-031-35891-3_22
This paper evaluates ChatGPT's ability to understand and reproduce human humor through exploratory experiments.
Recommended citation: Jentzsch, S., & Kersting, K. (2023). ChatGPT is fun, but it is not funny! Humor is still challenging Large Language Models. In Proceedings of the 14th Workshop on Computational Approaches to Subjectivity and Sentiment Analysis. Association for Computational Linguistics, USA. https://arxiv.org/abs/2306.04563
Style Vectors for Steering Generative Large Language Models.
Recommended citation: Konen, K., Jentzsch, S., Diallo, D., Schütt, P., Bensch, O., El Baff, R., Opitz, D., & Hecking, T. (2024). Style Vectors for Steering Generative Large Language Models. arXiv e-prints, arXiv-2402. https://arxiv.org/abs/2402.01618
Published:
This is a description of your talk, which is a markdown file that can be markdown-ified like any other post. Yay markdown!
Published:
This is a description of your conference proceedings talk; note the different field in type. You can put anything in this field.
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.
talks
Talk 1 on Relevant Topic in Your Field
Conference Proceeding talk 3 on Relevant Topic in Your Field
teaching
Teaching experience 1
Teaching experience 2