Seminar: Selected Topics in Machine Learning Research (IN2107, IN4872)


Join us for the information event on Monday, 08.02.2021 at 14:00. The meeting will be held online.

IMPORTANT! To apply, fill out the Google Form AND register via the matching system!


  • Preliminary meeting: Monday, 08.02.2021, 14:00, BigBlueButton (Slides: slides)
  • Kick-off meeting: 15.04.2021, 11:00-12:00
  • Seminar: 15.07.2021, 9:00-17:00 and 16.07.2021, 9:00-17:00


This seminar is intended for Master's students only. You should have attended (and passed) the Machine Learning lecture (IN2064). Having attended Machine Learning for Graphs and Sequential Data (IN2323, formerly Mining Massive Datasets) is a plus.


The amount of research in machine learning has grown exponentially over the last few years, uncovering many promising and successful research directions. In this seminar we will select and discuss a diverse set of topics from current research. The seminar lets students get acquainted with current machine learning research, explore new fields and ideas, and analyze and critique recent publications.

To this end, each student will receive 2-5 research papers, which they should carefully read and analyze. Starting from these, they should explore the surrounding literature and summarize their findings, criticism, and research ideas in a 4-page paper (double column). The students will then review each other's work to give valuable feedback. Finally, each student will prepare a 25-minute presentation and present their work during a block seminar at the end of the semester.

Possible topics

There are more topics than students, so there should be plenty of choice for everyone, e.g.:

  • Adversarial Robustness (on graphs, on CNNs, non-Lp-bounded perturbations, ...)
  • Robustness Verification (Randomized Smoothing, ...)
  • Sparse Neural Networks
  • Transformers
  • Uncertainty Estimation
  • Transfer Learning
  • Neural Network Ensembles
  • Object-centric Deep Learning
  • Scalable Attention Models