Statistical Foundations of Learning (IN2378)

The success of machine learning in the past few years has been unprecedented. It is natural to wonder why popular machine learning algorithms perform well on such a wide range of problems. This course will take a foundational perspective on learning and describe the mathematical principles that are useful for theoretically analysing the performance of machine learning algorithms.

The first part of the course will focus on statistical learning theory, which provides the foundation for a systematic study of supervised learning. The following topics will be covered:

  • Vapnik-Chervonenkis (VC) theory
  • Probably Approximately Correct (PAC) framework
  • Empirical risk minimization and loss functions
  • Analysis of Nearest Neighbor (NN) and Support Vector Machine (SVM) algorithms
  • Boosting
  • Online learning

The second part of the course will give a brief overview of the theoretical foundations of unsupervised learning, particularly clustering. The following topics will be covered:

  • Clustering: Analysis of k-means and k-medoids clustering
  • Graph clustering: Analysis of spectral clustering
  • Random graphs and the stochastic block model

This course invites students from all disciplines who are curious to gain a deeper theoretical understanding of machine learning.

Note: the lectures will be held in person. Sessions may be streamed on TUM live, but there will be no possibility of interaction in the livestream.

Previous Knowledge Expected

Familiarity with machine learning (IN2064, IN2332, or equivalent) and knowledge of probability and linear algebra.

Languages of Instruction

English

Recommended Reading

Main text for the first part:

  • Shai Shalev-Shwartz and Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.