News

Four papers accepted at NeurIPS 2022

16.09.2022


Our group will present four papers at this year's NeurIPS. The works cover graph neural networks and ML robustness/certification. Links to the papers/preprints will follow soon!

  • Jan Schuchardt, Stephan Günnemann
    Invariance-Aware Randomized Smoothing Certificates
    Incorporating invariances/symmetries into neural networks, such as invariance under translation or rotation, is a key aspect of applying machine learning to real-world problems like molecular property prediction, medical imaging, protein folding or LiDAR classification. For the first time, we study how the invariances of a model can be leveraged to provably guarantee the robustness of its predictions. We propose the first gray-box approach, enhancing the powerful black-box randomized smoothing technique with white-box knowledge about invariances.
     
  • Yan Scholten, Jan Schuchardt, Simon Geisler, Aleksandar Bojchevski, Stephan Günnemann
    Randomized Message-Interception Smoothing: Gray-box Certificates for Graph Neural Networks
    Randomized smoothing is one of the most promising frameworks for certifying the robustness of machine learning models. Treating the ML model as a black box, it has extremely wide applicability and (unlike white-box certificates) does not require designing new certification techniques for every new model at hand. Yet, due to this black-box nature, randomized smoothing certificates are overly pessimistic since the underlying architecture (e.g. a GNN) is ignored. In this work, we propose the first gray-box certificate for GNNs, exploiting their core paradigm: the message-passing principle. (For contrast, a minimal sketch of plain black-box smoothing follows after this list.)
     
  • Felix Mujkanovic, Simon Geisler, Aleksandar Bojchevski, Stephan Günnemann
    Are Defenses for Graph Neural Networks Robust?
    A cursory reading of the literature suggests that we have made a lot of progress in designing effective adversarial defenses for Graph Neural Networks. Yet, the standard methodology has a serious flaw: virtually all of the defenses are evaluated against non-adaptive attacks, leading to overly optimistic robustness estimates. We perform a thorough robustness analysis of the most popular defenses. The results are sobering: most defenses show no or only marginal improvement over an undefended baseline. We advocate custom adaptive attacks as the gold standard and outline the lessons we learned from successfully designing such attacks (a generic adaptive-attack sketch follows after this list).
     
  • Leon Hetzel, Simon Boehm, Niki Kilbertus, Stephan Günnemann, Mohammad Lotfollahi, Fabian J Theis
    Predicting Single-Cell Perturbation Responses for Unseen Drugs
    Perturbation screens lie at the core of drug discovery. However, scaling high-throughput screens (HTSs) to measure cellular responses for many drugs remains challenging due to technical limitations and, more importantly, the cost of such multiplexed experiments. To overcome these limitations, we propose leveraging routinely performed bulk RNA HTS data and incorporating molecular priors. Our method, chemCPA, is flexible with respect to these priors and can incorporate any (pretrained) GNN or molecular fingerprints such as RDKit features (see the fingerprint sketch after this list). ChemCPA can enrich single-cell data meaningfully and is able to predict perturbation effects for unseen drugs.
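For readers unfamiliar with randomized smoothing, the snippet below is a minimal sketch of the plain black-box variant (a majority vote under Gaussian input noise, as in Cohen et al., 2019) that the first two papers refine with gray-box knowledge. The classifier, noise level, and sample count are placeholders, not details of the papers above.

    # Minimal sketch of plain black-box randomized smoothing (not the gray-box
    # certificates above): the model is only queried for labels, and nothing
    # about its architecture or invariances is used.
    import numpy as np

    def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000, num_classes=10):
        """Majority vote of `base_classifier` under isotropic Gaussian input noise."""
        counts = np.zeros(num_classes, dtype=int)
        for _ in range(n_samples):
            noisy_x = x + sigma * np.random.randn(*x.shape)  # perturb the input
            counts[base_classifier(noisy_x)] += 1            # black-box label query
        return int(np.argmax(counts))                        # smoothed prediction

A certificate then follows from a lower bound on the majority-class probability (for example, an l2 radius of sigma * Phi^-1(p_A) in the Gaussian case); the gray-box certificates above tighten such guarantees by additionally exploiting model invariances or the message-passing structure.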
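The adaptive-vs.-non-adaptive distinction from the third paper can be illustrated with a generic sketch: an adaptive attack differentiates through the full defended pipeline instead of transferring perturbations crafted on an undefended surrogate. The snippet below is a plain l-infinity PGD on continuous inputs for illustration only; `defended_model`, the budget, and the step size are hypothetical, and the paper itself also concerns discrete graph-structure perturbations.

    # Hypothetical adaptive attack sketch: gradients flow through the defense itself.
    import torch
    import torch.nn.functional as F

    def adaptive_pgd(defended_model, x, y, eps=0.1, step=0.02, steps=40):
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(defended_model(x_adv), y)  # defense inside the forward pass
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = (x_adv + step * grad.sign()).detach()     # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)          # project back into the eps-ball
        return x_adv

A non-adaptive evaluation would instead compute the gradients on an undefended surrogate and merely transfer the resulting perturbation, which is exactly the overly optimistic setting the paper criticizes.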
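As a concrete example of the molecular priors mentioned for chemCPA, the snippet below computes a Morgan fingerprint with RDKit; the SMILES string, radius, and bit length are illustrative choices, not the exact featurization used in the paper.

    # Illustrative molecular fingerprint as a fixed drug prior (not chemCPA code).
    import numpy as np
    from rdkit import Chem
    from rdkit.Chem import AllChem

    mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")               # aspirin, as an example
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)  # radius-2 Morgan bits
    drug_features = np.array(list(fp), dtype=np.int8)               # 2048-dim binary drug embedding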

