Teaching at the Professorship of Cyber Trust
Winter Term 2025/26
Course Instructor: Prof. Jens Grossklags, Ph.D.
Description:
The seminar explores the nascent and growing field of the economics of privacy and cyber security and related security/risk governance aspects. Personal information has become a primary economic good for legitimate companies and is collected for countless purposes. For example, targeted advertisements, personalization, and price discrimination are enabled by the automated wholesale accumulation of users’ trails, both online and offline. Given this background, the key objective of the seminar is a better understanding of the current and future marketplace for personal information. We will draw on methods from computer science as well as the economic and behavioral sciences to contribute to a rigorous understanding of current privacy and security challenges and of approaches to addressing them.
SPECIAL FOCUS TOPIC WINTER 2025/2026: In this seminar, we will focus on the economic implications of privacy regulations in response to increasing concerns about the various forms of online tracking and data misuse. Seminar theses will explore topics such as assessing the effectiveness of existing privacy regulations (e.g., GDPR, CCPA), analyzing their impact on businesses and consumers, evaluating compliance strategies adopted by companies, and investigating how regulatory frameworks influence innovation in privacy-preserving technologies.
Course objectives:
Seminar meetings will be held during the lecture period of the semester. The presentation of results will follow in the final weeks of the lecture period.
Meetings take place in Garching; online/hybrid participation is not possible.
Students are expected to deliver a concise report and a comprehensive presentation about their findings. The exact timeline will be discussed in the introductory sessions. Collaboration in two-person teams is possible with the instructor’s agreement.
Requirement:
No specific prior knowledge is required. A general interest in interdisciplinary privacy and security topics is highly desirable. The seminar language is English.
Notes:
🔜 Application via http://docmatching.in.tum.de/
- There will be no pre-course meeting!
- Information and materials will be made available via Moodle.
- Regular seminar meetings are planned for Tuesdays, 14:15 - 16:45, in room 00.13.054 (CIT building, Garching).
- According to the policy of our chair, deregistration from courses is possible until the first regular course meeting by written notice to the instructor. Furthermore, regular attendance and participation in seminar meetings are compulsory and form part of the assessment.
- Note for MMT students: This course is recognized at TUM SOM as module [IN8031] Informatics Advanced Seminar Courses for Management (Specialization in Technology: Informatics (major) or Catalogue of Elective Modules: Informatics (advanced)). Please double-check with your program manager that this module aligns with your curriculum prior (!) to course registration.
TUM Online: Course Description
Course Instructor: Chiara Ullstein
Description:
The seminar aims to familiarize students with the EU AI Act (the EU’s artificial intelligence regulation) and to explore how participatory AI can help operationalize selected provisions of the EU AI Act. By working with the AI Act, students learn to read a legal text, understand what it means for an AI system to comply with specific provisions of the Act, and familiarize themselves with approaches to participatory AI.
Participatory AI can play a critical role in operationalizing AI regulation by fostering an inclusive framework that draws on diverse stakeholders’ collective expertise, values, and perspectives. This approach is seen as valuable because AI technologies are not developed or deployed in a vacuum; they affect a broad spectrum of social, economic, and ethical realms. By involving a wide range of participants, including technologists, policymakers, ethicists, business leaders, civil society, laypeople, and others, Participatory AI may help ensure that AI development and AI governance are well informed and reflect societal values. The exploration of approaches to Participatory AI for meeting selected Articles of the EU AI Act is the focus of this seminar.
First, the EU AI Act and Participatory AI will be introduced. Students will learn to understand the structure of the Act and how to interpret it. As part of the introduction to the AI Act, participants will also learn about its context, the legislative processes of the European Union, and the current stage of AI Act enforcement procedures. Participants will present either provisions of the AI Act or a paper on Participatory AI. The presentations serve to develop a foundational understanding of the EU AI Act and Participatory AI.
Then, teams of two students select one article from the EU AI Act and develop a strategy or suggestions for how participatory AI could help ensure, or contribute to, legal compliance with that article. Teams will first present a proposal in the seminar. After receiving feedback, student teams develop their work into a seminar paper. At the end of the seminar, they will give a presentation of their work and hand in the seminar paper.
The goal of the seminar is to familiarize students with the AI regulation developed by the EU and to sensitize them to approaches to Participatory AI. For the teamwork, the goal is to bring together students from the Informatics & Mathematics Department and the Governance Department to foster interdisciplinary discussion.
Course objectives:
Understand what the AI Act is about and how it influences the development of AI systems. Become familiar with the analysis, critical reading, and application of legislative text.
Understand what Participatory AI is and how it can be beneficially applied to realize provisions of the EU AI Act.
Deliverables
- Presentation (English; 2x 15 min presentations per person + 5 min discussion)
- Written Seminar Paper (English; ~3000 words per student)
Requirement:
Interest in AI regulation and/or experience with the development of AI; interest in approaches to Participatory AI.
IMPORTANT:
Participation is reserved for students from the Governance Department (50%) and Informatics & Mathematics Department (50%).
🔜 Application via docmatching.in.tum.de
Pre-course meeting:
Date: July 9, 2025, 6:00 pm
Location (meeting room or link to online session): https://tum-conf.zoom-x.de/j/9366159409?pwd=dXF2VURzNlI3Nm9lQnNrUEl0ekZKQT09
Meeting ID: 936 615 9409
Passcode: 194415
Kick-off Meeting:
Date: October 27, 2025
Location (meeting room): tba
Planned course timeline:
Mon, 27.10.25 Introduction to the AI Act and Participatory AI (14:00-18:00)
Tue, 28.10.25 Exploration of the AI Act and approaches to participation (14:00-18:00)
Fri, 12.12.25 Student presentations on the AI Act / Participatory AI (10:00-18:30)
Sat, 13.12.25 Student presentations on the AI Act / Participatory AI & students’ proposals of seminar thesis topics (10:00-18:30)
Fri, 09.01.26 Mandatory online feedback session
Fri, 23.01.26 Mandatory online feedback session
Fri, 27.02.26 Mandatory online feedback session
Thu, 12.03.26 Submission of seminar thesis and presentation of work (10:00-18:30)
Location (meeting room): tba
Information and materials will be made available via Moodle
Note: According to the policy of our chair, deregistration from courses is possible until the first regular course meeting by written notice to the instructor. Furthermore, regular attendance and participation in seminar meetings are compulsory and form part of the assessment.
TUM Online: Course Description
Course Instructor: Michel Hohendanner
Description:
The seminar aims to familiarize students with participatory approaches to AI ethics and to explore how applications that use gamification and/or educational principles can support research and development in AI and sustainable HCI.
AI technologies are not developed or deployed in a vacuum; they affect a broad spectrum of social, economic, and ethical realms. By involving a wide range of participants, including technologists, policymakers, ethicists, business leaders, civil society, laypeople, and others, participation may help ensure that AI development and AI governance are well-informed and reflect societal values.
On the methodological side, a prominent example of gamifying moral questions with regard to AI is the “Moral Machine Experiment” (https://www.moralmachine.net/). The platform gathers people’s perspectives on moral decisions made by machine intelligence, such as self-driving cars. The experiment presents moral dilemmas in which a driverless car must choose the lesser of two evils, such as causing an accident involving two passengers or five pedestrians. Users judge which outcome they think is more acceptable. Another example, which instead aims to explain how specific AI systems (here, facial processing AI) work and what their limitations are, and thereby also to question their use, is the project “How Normal am I” (https://www.hownormalami.eu/).
Building on such prior work, students explore the current AI application landscape with a focus on its social and ethical dimensions and impact. In parallel, they explore methodological means of working towards desirable developments, including principles of participation for AI in software development. The final goal is to build an application in the spirit of the Moral Machine Experiment, aimed at different social and ethical issues across the current AI application landscape.
The seminar is divided into two phases:
Phase A: In the first phase, students explore (1) principles of participation in AI, (2) spotlights on social and ethical issues inherent in the current AI application landscape, and (3) methodological examples of how gamified participation in AI research and development can work towards desirable developments, together with the limitations of these approaches. Participants will present reports or papers from one of these three areas. The presentations serve to develop a foundational understanding of the problem space this seminar aims to change and of possible tools to foster desirable processes.
Phase B: In the second phase, student teams select one specific social/ethical issue area or type of current AI application that they want to address in a practical project. Teams will first develop and present a concept proposal for a web/software application that uses gamification principles to foster or operationalize participatory AI. Then, teams build an interactive, functional prototype of this application.
At the end of the seminar, teams will give a presentation/demonstration of their work and, after integrating feedback from the other student groups into their applications, hand in a project report.
Course objectives:
Understand how participation in AI research and development can be beneficially applied using gamification and/or educational principles.
Reflect on the capabilities and risks of current AI technologies. Build an application, i.e., a research/didactic tool, with a focus on communicating and exploring social/ethical issues prompted by current AI applications.
Deliverables
- Presentation of selected articles/papers (English; 10 min presentation per person + 5 min discussion)
- Web/software application, written project documentation (group; English; ~12 pages in total), and presentation of the group project (English; 10 min presentation + 5 min discussion per student)
Please note that the first presentation will be graded individually. For the group project, the group members will be graded collectively. The group grade will account for the concept, functionality, and usability of the developed application (not the code itself), the written project documentation, and the group presentation.
Requirement:
Interest in participatory approaches in AI research and development; interest in digital/AI ethics and the social impacts of AI.
Requirements for participation are solid programming skills, experience with web/software development, a high commitment to collaborating on a group project, and a high dedication to self-learning in order to build a functioning tool. The instructor will give feedback on the concept, but not on the implementation.
IMPORTANT:
🔜 Application via docmatching.in.tum.de
Pre-course meeting:
Date: July 16, 2025, 6:00 pm
Location (meeting room or link to online session): https://tum-conf.zoom-x.de/j/67001853568?pwd=eS2U6WQETUaita3ZhrZLeXlr181xlH.1
Meeting ID: 670 0185 3568
Passcode: 697305
Kick-off Meeting:
Date: October 29, 2025
Location (meeting room): tba
Planned course timeline:
Wed, 29.10.25 Introduction (15:00-18:00)
Wed, 05.11.25 Students choose presentation topic (16:00-17:00)
Fri, 05.12.25 Student presentations (10:00-18:30)
Sat, 06.12.25 Student presentations & group building (10:00-18:30)
Fri, 09.01.26 Mandatory feedback session
Fri, 23.01.26 Mandatory feedback session
Fri, 27.02.26 Mandatory feedback session
Fri, 13.03.26 Presentations of applications & joint feedback
Fri, 20.03.26 Submission of project report
Location (meeting room): tba
Information and materials will be made available via Moodle
Note: According to the policy of our chair, deregistration from courses is possible until the first regular course meeting by written notice to the instructor. Furthermore, regular attendance and participation in seminar meetings are compulsory and form part of the assessment.
TUM Online: Course Description
Weekly group meeting of the Chair of Cyber Trust for members and guests of the chair. The seminar includes research discussions and talks about topics related to the activities of the chair.