Robust Machine Learning
This course (CIT423004) builds upon the knowledge you gained in the lecture Machine Learning (IN2064). We will study the vulnerabilities of neural networks to adversarial perturbations, examining how models can be attacked and how to defend them.
Information
- Lecture/Exercise: The course alternates between lectures and exercises, with one appointment per week (Mondays, 14:00 - 16:00)
- Required knowledge: Content of our Machine Learning lecture
- IMPORTANT: The course cannot be taken if MLGS has been taken before SS2025
All announcements will be made on the Piazza forum, which can be accessed via the link on the course's Moodle page.
Please do not send any questions about organizational matters via e-mail.
If you have problems accessing the Moodle course, contact l.schwinn [at] tum.de.
Tentative list of topics
- Aspects of robustness in machine learning
- Attacks on neural networks
- Defenses against adversarial attacks
- Certification methods for robustness guarantees
- Robustness of large language models
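To give a flavor of the "attacks on neural networks" topic, the sketch below implements the Fast Gradient Sign Method (FGSM), one of the simplest adversarial attacks, against a toy logistic-regression classifier. The model, weights, and perturbation budget `eps` are illustrative choices made here for the example, not course material; the course covers attacks on neural networks in full generality.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """FGSM: perturb x by eps in the sign of the input gradient of the loss.

    For binary cross-entropy L = -[y log p + (1-y) log(1-p)] with
    p = sigmoid(w.x + b), the input gradient is dL/dx = (p - y) * w.
    """
    p = sigmoid(w @ x + b)
    grad = (p - y) * w              # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad)  # one signed gradient step of size eps

# Toy example: a point correctly classified as class 1 (illustrative values).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])   # w @ x + b = 1.5, so p_clean ~ 0.82
y = 1.0

x_adv = fgsm_attack(x, y, w, b, eps=1.0)
p_clean = sigmoid(w @ x + b)
p_adv = sigmoid(w @ x_adv + b)
```

With this budget the perturbed point crosses the decision boundary: the clean input is classified as class 1, while the adversarial input is pushed to the other side, even though each coordinate moved by at most `eps`.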