Severin Engelmann

Severin Engelmann is a PhD Student at the Professorship of Cyber Trust.

With a background in philosophy of technology and computer science, Severin is an ethicist focusing on the ethics of digital platforms and systems. His research explores the transparency of digitalized reputation mechanisms in the Chinese Social Credit System and examines the feasibility of participatory governance of commercial social media platforms. Currently, he studies how non-experts in AI ethically evaluate AI inference-making in computer vision decision-making scenarios. In this project, he also investigates whether, and to what extent, participatory approaches to AI ethics can advance the ethical governance of algorithmic systems. Severin applies a multidisciplinary research methodology that combines conceptual and theoretical work, data-driven policy analyses, and experimental vignette studies.

Between January and May 2022, Severin was a visiting scholar at the School of Information at UC Berkeley, USA, hosted by Prof. Deirdre Mulligan.

Between April and August 2021, Severin was a visiting scholar in Prof. Anna Baumert's Moral Courage group at the Max Planck Institute for Research on Collective Goods in Bonn, Germany.

Research interests:

AI ethics, Chinese Social Credit System, participatory AI ethics.

E-mail: severin.engelmann@tum.de | severin.engelmann@berkeley.edu

Twitter: https://twitter.com/SeverinEngelma1

Publications

  1. Ullstein, C., Engelmann, S., Papakyriakopoulos, O., Hohendanner, M., & Grossklags, J. (2022) AI-competent individuals and laypeople tend to oppose facial analysis AI. Proceedings of the Second ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO), forthcoming. Author Version
  2. Engelmann, S., Scheibe, V., Battaglia, F., & Grossklags, J. (2022) Social media profiling continues to partake in the development of formalistic self-concepts. Social media users think so, too. Proceedings of the 5th AAAI/ACM Conference on AI, Ethics, and Society (AAAI/ACM AIES), pp. 238–252. Author Version Publisher Version (Open Access)
  3. Chen, M., Engelmann, S., & Grossklags, J. (2022) Ordinary people as moral heroes and foes: Digital role model narratives propagate social norms in China's Social Credit System. Proceedings of the 5th AAAI/ACM Conference on AI, Ethics, and Society (AAAI/ACM AIES), pp. 181–191. Publisher Version (Open Access)
  4. Engelmann, S., Ullstein, C., Papakyriakopoulos, O., & Grossklags, J. (2022) What People Think AI Should Infer from Faces. Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT), pp. 128–141. Author Version Appendix Publisher Version (Open Access)
  5. Cypris, N., Engelmann, S., Sasse, J., Grossklags, J., & Baumert, A. (2022) Intervening Against Online Hate Speech: A Case for Automated Counterspeech. IEAI Research Brief, Technical University of Munich.
  6. Engelmann, S., Chen, M., Dang, L., & Grossklags, J. (2021) Blacklists and Redlists in the Chinese Social Credit System: Diversity, Flexibility, and Comprehensiveness. Proceedings of the 4th AAAI/ACM Conference on AI, Ethics, and Society (AAAI/ACM AIES), pp. 78–88. Full paper; oral presentation. Author Version Publisher Version (Open Access)
  7. Engelmann, S., Grossklags, J., & Herzog, L. (2020) Should users participate in governing social media? Philosophical and technical considerations of democratic social media. First Monday, 25(12). Open Access
  8. Engelmann, S., & Grossklags, J. (2019) Setting the Stage: Towards Principles for Reasonable Image Inferences. Workshop on Fairness in User Modeling, Adaptation and Personalization (FairUMAP), 27th Conference on User Modeling, Adaptation and Personalization (ACM UMAP). Author Version Free Access (ACM Authorizer)
  9. Engelmann, S., Chen, M., Fischer, F., Kao, C., & Grossklags, J. (2019) Clear Sanctions, Vague Rewards: How China's Social Credit System Currently Defines “Good” and “Bad” Behavior. Proceedings of the 2nd ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*), Atlanta, Georgia, January 2019. Author Version Free Access (ACM Authorizer)
  10. Engelmann, S., Grossklags, J., & Papakyriakopoulos, O. (2018) A Democracy called Facebook? Participation as a Privacy Strategy on Social Media. Proceedings of the Annual Privacy Forum 2018. Lecture Notes in Computer Science (LNCS). Full paper. Author Version Publisher Version


Conference Talks & Panels

2022

  • Lightning talk on Narratives in the Chinese Social Credit System at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES) 2022, Oxford, United Kingdom.
  • Lightning talk on Formalistic Self-Concepts & Social Media Profiling at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES) 2022, Oxford, United Kingdom.
  • Talk on What People Think AI Should Infer from Faces at the ACM Conference on Fairness, Accountability, and Transparency (FAccT) 2022, Seoul, South Korea.
  • Talk on The Ethics of Intervening Against Hate Speech: A Case for Automated Counterspeech at the Ethics, Society, & Technology Unconference 2022, Stanford University, USA.

2021

  • Talk on the Epistemic Soundness of AI Personality Inferences Based on Visual Data at the CEPE/IACAP (International Association for Computing and Philosophy) Joint Conference 2021: The Philosophy and Ethics of Artificial Intelligence. Virtual conference.
  • Talk on Blacklists and Redlists in the Chinese Social Credit System: Diversity, Flexibility, and Comprehensiveness at the 4th AAAI/ACM Conference on AI, Ethics, and Society (AIES). Virtual conference.
  • Talk on What People Think AI Should Infer from Faces at the Ethics and Technology Lecture Series of the Munich Center for Technology in Society, Munich, Germany.

2019

  • Talk on Ethical Implications of Image-based User Modeling at FairUMAP, UMAP 2019, Larnaca, Cyprus.
  • Talk on Reasonable Image Inferences at Metaethics of AI & Self-learning Robots (Workshop), Venice International University & Ludwig Maximilian University of Munich, Venice, Italy.
  • Talk on Clear Sanctions, Vague Rewards: How the Chinese Social Credit System Currently Defines "Good" and "Bad" Behavior at ACM Conference on Fairness, Accountability, and Transparency (FAT*), Atlanta, USA.

Media

"What would it take to turn Facebook into a democracy?". Blog entry for blog Justice Everywhere together with Prof. Lisa Herzog. (March 4, 2019)

"China's Social Credit System won't tell you what's right". TechCrunch reports on our paper "Clear Sanctions, Vague Rewards: How China's Social Credit System Currently Defines "Good" and "Bad" Behavior. (January 28, 2019)

Awards

Weizenbaum Student Prize 2018 (October 2018). Awarded for his master's thesis on Facebook's capacity to generate data narratives.


Teaching

Seminars

  • The Value of Privacy
  • Trust in Automated Decision-Making
  • Transparency of Algorithmic Systems

Lecture

  • IT and Society


Contact

Email: severin.engelmann@tum.de

Phone: +49 (89) 289 - 17746

Twitter: https://twitter.com/SeverinEngelma1