Data Analytics and Machine Learning Group
TUM School of Computation, Information and Technology
Technische Universität München

News

Four papers accepted at ICLR 2026

23.02.2026


Our group will present four papers at ICLR 2026. Congratulations!

Edit-Based Flow Matching for Temporal Point Processes
(David Lüdke*, Marten Lienen*, Marcel Kollovieh and Stephan Günnemann)

What if modeling event sequences didn't require processing them one event at a time? Temporal point processes are fundamental for modeling events in continuous time—from financial transactions to social network activity—yet most approaches rely on autoregressive generation. Recent diffusion-inspired methods offer a compelling alternative by jointly transforming noise into data through insertions and deletions, but they still lack the expressivity to efficiently navigate sequence space. In this work, we introduce EdiTPP, which adds substitution as a third atomic operation within a continuous-time Markov chain framework. This seemingly simple addition has profound effects: substitutions act as shortcuts that bypass costly delete-insert pairs, reducing total edit operations by ~17% while achieving up to 4× faster sampling. Our unconditionally trained model flexibly handles both unconditional generation and conditional tasks like forecasting—without task-specific retraining. Empirically, EdiTPP achieves state-of-the-art results across synthetic and real-world benchmarks while offering a principled compute-quality tradeoff at inference time. Overall, our work demonstrates that the right choice of elementary operations can fundamentally improve how we generate structured sequences in continuous time.
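The effect of adding substitution as an atomic edit can be illustrated with a toy edit-distance computation (a minimal sketch, not the paper's actual CTMC model): when substitutions are disallowed, every mismatched event costs a delete plus an insert, so allowing substitution halves the cost of each mismatch.

```python
def edit_cost(a, b, allow_substitution):
    """Minimum number of atomic edits turning sequence a into b.
    With substitutions disabled, a mismatch costs a delete plus an insert."""
    sub = 1 if allow_substitution else 2
    n, m = len(a), len(b)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if a[i - 1] == b[j - 1] else sub
            d[i][j] = min(d[i - 1][j] + 1,         # delete
                          d[i][j - 1] + 1,         # insert
                          d[i - 1][j - 1] + cost)  # match / substitute
    return d[n][m]

noise = ["x", "y", "z"]
data  = ["a", "y", "c"]
print(edit_cost(noise, data, allow_substitution=False))  # 4: two delete-insert pairs
print(edit_cost(noise, data, allow_substitution=True))   # 2: two substitutions
```

The toy sequences and cost model here are hypothetical; the point is only that substitution provides a shortcut through sequence space that delete-insert pairs cannot match.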

Discrete Bayesian Sample Inference for Graph Generation
(Ole Petersen*, Marcel Kollovieh*, Marten Lienen and Stephan Günnemann)

Generating graphs is hard. They show up everywhere (molecules, knowledge graphs, networks), but they are discrete and unordered, which makes them challenging to generate. We introduce GraphBSI, a one-shot graph generator that takes a Bayesian approach: instead of "noising and denoising" graphs, it refines a probabilistic belief over graphs in a continuous parameter space until it converges to a discrete graph. This makes handling discrete structure much more natural. On the theory side, we formulate Bayesian Sample Inference (BSI) as an SDE and derive a noise-controlled family that preserves the right marginals using a score approximation, and we show connections to Bayesian Flow Networks and diffusion models.
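The belief-refinement idea can be sketched with a deliberately simple toy (the function name and update rule below are hypothetical, not the paper's SDE formulation): each potential edge carries a continuous logit that accumulates noisy evidence, and only the final thresholding step commits to a discrete graph.

```python
import random

def refine_edge_beliefs(target_edges, n_nodes, steps=200, noise=1.0, seed=0):
    """Toy belief refinement: keep one logit per potential edge and nudge it
    with noisy evidence about the (hidden) target adjacency. The belief lives
    in a continuous space; only the final threshold yields a discrete graph."""
    rng = random.Random(seed)
    pairs = [(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)]
    logits = {p: 0.0 for p in pairs}  # 0.0 = maximally uncertain belief
    for _ in range(steps):
        for p in pairs:
            signal = 1.0 if p in target_edges else -1.0
            logits[p] += 0.1 * (signal + rng.gauss(0, noise))  # noisy evidence
    return {p for p in pairs if logits[p] > 0}  # commit to a discrete graph

target = {(0, 1), (1, 2)}
print(refine_edge_beliefs(target, n_nodes=4) == target)
```

The design point this illustrates: because the belief state is continuous throughout, standard continuous-space machinery (here trivial additive updates, in the paper an SDE) applies directly, and discreteness only enters at the very end.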

Sampling-aware Adversarial Attacks Against Large Language Models
(Tim Beyer, Yan Scholten, Leo Schwinn*, Stephan Günnemann*)

Adversarial robustness of large language models is typically evaluated using single, greedy generations, despite the repeated stochastic sampling which occurs in real-world applications. This paper shows that ignoring sampling fundamentally overestimates LLM safety. We introduce a sampling-aware perspective that treats sampling as a first-class attack component and frames adversarial attacks as a compute-constrained resource allocation problem between prompt optimization and generation. By reallocating compute from optimization to sampling, we demonstrate dramatic improvements: existing attacks become up to two orders of magnitude more efficient and achieve up to +37 p.p. higher attack success rates at equal compute. Analyzing the full distribution of output harmfulness reveals that most optimization strategies primarily suppress refusals rather than increasing harm severity, explaining why sampling is so effective. Finally, we propose a label-free, model-agnostic entropy-maximization objective that is explicitly designed for sampling-aware attacks and uncovers tail risks missed by standard objectives. Overall, our results establish sampling as essential for realistic LLM safety evaluation and attack design at scale.
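Why reallocating compute toward sampling pays off follows from elementary probability (the budget and success rates below are hypothetical, not figures from the paper): with n independent samples at per-sample success rate p, the chance of at least one harmful generation is 1 - (1 - p)^n, which grows rapidly in n even when p is small.

```python
def success_prob(p_single, n_samples):
    """Probability that at least one of n stochastic generations succeeds,
    assuming independent samples with per-sample success rate p_single."""
    return 1 - (1 - p_single) ** n_samples

# Hypothetical budget of 100 units, where one optimization step and one
# generated sample each cost 1 unit. Heavy optimization yields a strong
# prompt (p = 0.30) but leaves budget for only 1 greedy generation; light
# optimization yields a weaker prompt (p = 0.05) with 80 samples left over.
heavy = success_prob(0.30, 1)
light = success_prob(0.05, 80)
print(f"heavy optimization, 1 sample:   {heavy:.2f}")
print(f"light optimization, 80 samples: {light:.2f}")
```

Under these assumed numbers the lightly optimized prompt wins decisively, which is the intuition behind treating attacks as a resource-allocation problem between optimization and sampling.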

Model Collapse Is Not a Bug but a Feature in Machine Unlearning for LLMs
(Yan Scholten, Sophie Xhonneux, Leo Schwinn*, Stephan Günnemann*)

What if one of the most critical failure modes in generative AI could be turned into a feature? In this work, we explore how model collapse -- typically considered a bug -- can actually help us unlearn data safely and effectively. Current unlearning methods take a counterintuitive approach: they optimize against the very data they are supposed to unlearn, with negative effects for privacy and safety. Our approach challenges this paradigm and instead draws inspiration from model collapse -- the phenomenon where generative models trained on their own outputs gradually degrade in quality. By carefully guiding this collapse process, we can transform what was once a failure mode into a powerful mechanism for unlearning targeted information from LLMs. Our method achieves unlearning without reusing sensitive data, supported by both theoretical analysis and empirical evidence. Overall, our work opens exciting new directions in trustworthy AI: leveraging collapse to enable safer and more principled unlearning in LLMs and beyond.
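The collapse phenomenon itself can be demonstrated in miniature (a toy Gaussian model, not the paper's actual LLM procedure): refitting a model to its own finite samples, round after round, steadily destroys information about the original distribution.

```python
import random

def collapse_variance(n_rounds=100, n_samples=20, seed=1):
    """Toy model collapse: a Gaussian 'model' repeatedly retrained on its own
    samples. Finite-sample estimation shrinks the fitted variance round after
    round, so information about the original data is progressively lost."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0
    history = [sigma]
    for _ in range(n_rounds):
        xs = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        mu = sum(xs) / n_samples
        sigma = (sum((x - mu) ** 2 for x in xs) / n_samples) ** 0.5  # biased MLE
        history.append(sigma)
    return history

hist = collapse_variance()
print(f"initial sigma: {hist[0]:.3f}, after 100 rounds: {hist[-1]:.3f}")
```

In the unlearning setting the analogous observation is that this degradation, when deliberately steered toward the targeted information, erases it without ever optimizing against the sensitive data itself.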



Informatik 26 - Data Analytics and Machine Learning


Prof. Dr. Stephan Günnemann

Technische Universität München
TUM School of Computation, Information and Technology
Department of Computer Science
Boltzmannstr. 3
85748 Garching 

Secretariat:
Room 00.11.057
Phone: +49 89 289-17256
Fax: +49 89 289-17257
