News

Four papers accepted at ICLR 2023

06.02.2023


Our group will present four papers at this year's ICLR, including one spotlight. Congratulations to all co-authors!

  • Jan Schuchardt, Tom Wollschläger, Aleksandar Bojchevski, Stephan Günnemann
    Localized Randomized Smoothing for Collective Robustness Certification
    (selected for spotlight presentation)
    Thus far, the robustness certification literature has primarily been concerned with proving that a simple classifier mapping an input to a label is robust to adversarial attacks. However, many real-world tasks like image segmentation, node classification, or machine translation involve making multiple predictions simultaneously. Proving robustness for such tasks requires significantly more care, as attacks on different predictions can interfere with each other or even be mutually exclusive. In this work, we develop a novel localized randomized smoothing scheme that generalizes our prior work on collective robustness certification and can be applied to arbitrary models. We experimentally demonstrate that this approach can Pareto-dominate existing certificates, offering both higher accuracy and stronger robustness guarantees. (For background, a minimal sketch of the single-prediction smoothing certificate that this work generalizes follows after this list.)
     
  • Nicholas Gao, Stephan Günnemann
    Sampling-free Inference for Ab-Initio Potential Energy Surface Networks
    Access to the potential energy surface is critical in modern computational chemistry to accurately model the behavior of molecules. To compute the energy of a molecule, its associated Schrödinger equation must be solved. Neural networks are a promising tool for this, as they have been shown to provide accurate solutions while generalizing to different structures. But despite this joint solving, inference remains expensive due to Monte Carlo integration (illustrated in the second sketch after this list). In this work, we improve such neural-network solutions in two ways: 1) we tackle the inference problem by learning a surrogate that avoids the numerical integration, and 2) we improve the neural network architectures, inspired by quantum mechanical calculations.
     
  • Lukas Gosch, Daniel Sturm, Simon Geisler, Stephan Günnemann
    Revisiting Robustness in Graph Machine Learning
    Graph Neural Networks (GNNs) are susceptible to small changes to the graph structure, often termed adversarial. However, it is unclear whether these perturbations preserve the semantic content of the graph and hence are truly adversarial. Therefore, we introduce a semantic-aware robustness notion for graphs and uncover: (i) prevalent perturbation models violate the unchanged-semantics assumption; (ii) surprisingly, all assessed GNNs show over-robustness, that is, robustness beyond the point of semantic change; (iii) there is no robustness-accuracy tradeoff for classifying an inductively added node. Consequently, we develop a simple yet effective method to reduce over-robustness in practice.
     
  • Raffaele Paolino, Aleksandar Bojchevski, Stephan Günnemann, Gitta Kutyniok, Ron Levie
    Unveiling the sampling density in non-uniform geometric graphs
    A powerful framework for studying graphs is to consider them as geometric graphs: nodes are randomly sampled from an underlying metric space, and any pair of nodes is connected if their distance is less than a specified neighborhood radius. Currently, the literature mostly focuses on uniform sampling and a constant neighborhood radius. However, real-world graphs are likely to be better represented by a model in which both the sampling density and the neighborhood radius can vary over the latent space. For instance, in a social network, communities can be modeled as densely sampled areas and hubs as nodes with a larger neighborhood radius. We introduce geometric graphs with hubs, an effective model for real-world graphs (a toy generator is sketched after this list), and retrieve the sampling density by which those graphs are sampled from continuous latent spaces. Finally, we present exemplary applications in which the learnt density is used to 1) correct the graph shift operator and improve performance on a variety of tasks, 2) improve pooling, and 3) extract knowledge from networks.
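
For background on the first paper, here is a minimal sketch of the standard single-prediction randomized smoothing certificate (in the style of Cohen et al., 2019) that localized smoothing generalizes to collective predictions. The function names, the base classifier f, and all parameters are illustrative, not the paper's actual code:

import numpy as np
from scipy.stats import binomtest, norm

def certify(f, x, sigma=0.25, n=1000, alpha=0.001, num_classes=10):
    # Certify an L2 radius around x for the smoothed classifier
    # g(x) = argmax_c P(f(x + eps) = c), with eps ~ N(0, sigma^2 I).
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(n):
        eps = np.random.normal(0.0, sigma, size=x.shape)
        counts[f(x + eps)] += 1
    top = int(counts.argmax())
    # One-sided lower confidence bound on the top-class probability
    # (Clopper-Pearson, via the two-sided interval at level 1 - 2*alpha).
    p_lower = binomtest(int(counts[top]), n).proportion_ci(1 - 2 * alpha).low
    if p_lower <= 0.5:
        return top, 0.0  # abstain: no certificate possible
    # Certified L2 radius: sigma * Phi^{-1}(p_lower).
    return top, sigma * norm.ppf(p_lower)

A collective certificate must reason about many such predictions at once; the localized scheme additionally exploits that each prediction typically depends only on a local part of the input.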

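For the second paper, a toy illustration of why inference with a neural wavefunction is expensive and what a surrogate avoids. psi and local_energy are stand-in callables, and the simplistic Metropolis sampler is an assumption for illustration:

import numpy as np

def mc_energy(psi, local_energy, dim, n_samples=10_000, burn_in=1_000, step=0.2):
    # Estimate E = E_{r ~ psi(r)^2}[E_local(r)] by Metropolis sampling.
    # This Monte Carlo integral is what makes inference slow: every new
    # geometry requires thousands of wavefunction evaluations.
    r = np.random.randn(dim)
    estimates = []
    for i in range(n_samples + burn_in):
        proposal = r + step * np.random.randn(dim)
        # Accept with probability min(1, psi(proposal)^2 / psi(r)^2).
        if np.random.rand() < (psi(proposal) ** 2) / (psi(r) ** 2):
            r = proposal
        if i >= burn_in:
            estimates.append(local_energy(r))
    return float(np.mean(estimates))

# A surrogate replaces the entire loop above with a single forward pass
# of a network trained to map geometry -> energy:
#     energy_hat = surrogate_net(geometry)   # hypothetical model
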
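And for the last paper, a minimal generator for a geometric graph with hubs: a non-uniform sampling density (a dense community plus a sparse background) and per-node neighborhood radii. The mixture density and the edge rule (connect if the distance is below either endpoint's radius) are illustrative assumptions, not the paper's exact model:

import numpy as np

rng = np.random.default_rng(0)
n = 300

# Non-uniform density: half the nodes form a tight community,
# the rest are spread uniformly over the unit square.
community = rng.normal(loc=[0.3, 0.3], scale=0.05, size=(n // 2, 2))
background = rng.uniform(0.0, 1.0, size=(n - n // 2, 2))
points = np.vstack([community, background])

# Hubs: a few nodes receive a much larger neighborhood radius.
radii = np.full(n, 0.05)
hubs = rng.choice(n, size=5, replace=False)
radii[hubs] = 0.3

# Connect i and j if their distance is below max(radii[i], radii[j]).
dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
adj = dists < np.maximum(radii[:, None], radii[None, :])
np.fill_diagonal(adj, False)
print("nodes:", n, "edges:", int(adj.sum()) // 2)

Recovering the density from an observed graph, rather than generating a graph from a known density as above, is the inverse problem the paper addresses.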