Robust Machine Learning

Topics: Robust & Reliable Machine Learning, Adversarial Machine Learning, Robust Data Analytics

In most real-world applications, the collected data is rarely of high quality; it is often noisy, prone to errors, or vulnerable to manipulation. Corrupted sensors, faulty measurement devices, and adversarial data manipulations are only a few examples. Standard machine learning and data analytics methods often fail in such scenarios. For example, even slight deliberate perturbations of the input data (so-called adversarial perturbations) can lead to dramatically different outputs of a machine learning model. Such vulnerabilities significantly hinder the applicability of these models, leading to unintuitive and unreliable results, and they open the door for attackers who can exploit them.
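The effect of a small deliberate perturbation can be illustrated with a minimal sketch in the style of the fast gradient sign method, here applied to a hand-built logistic-regression classifier. The weights, input, and perturbation budget below are purely illustrative assumptions, not taken from any model discussed on this page.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative fixed linear classifier: predict class 1 if sigmoid(w.x + b) > 0.5.
w = np.array([2.0, -1.0])
b = 0.0

x = np.array([0.3, 0.1])         # clean input, classified as class 1
clean_pred = sigmoid(w @ x + b)  # > 0.5, i.e. class 1

# For true label y = 1, the gradient of the logistic loss w.r.t. the
# input is (sigmoid(w.x + b) - 1) * w; stepping along its sign
# increases the loss as much as possible per coordinate.
grad = (sigmoid(w @ x + b) - 1.0) * w
eps = 0.3                        # small per-coordinate perturbation budget
x_adv = x + eps * np.sign(grad)  # adversarially perturbed input

adv_pred = sigmoid(w @ x_adv + b)  # < 0.5: the predicted class flips
```

Although each coordinate of the input changes by at most 0.3, the classifier's decision flips, which is exactly the kind of brittleness that robust training and certification methods aim to prevent.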

The goal of our research is to design robust machine learning techniques that automatically handle various forms of errors and corruptions as well as changes in the underlying data distribution. Overall, this leads to models that can be used reliably, enabling their deployment even in sensitive application domains.

Selected Publications

  • Lukas Gosch, Daniel Sturm, Simon Geisler, Stephan Günnemann
    Revisiting Robustness in Graph Machine Learning
    International Conference on Learning Representations (ICLR), 2023
  • Simon Geisler, Johanna Sommer, Jan Schuchardt, Aleksandar Bojchevski, Stephan Günnemann
    Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness
    International Conference on Learning Representations (ICLR), 2022
  • Simon Geisler, Tobias Schmidt, Hakan Şirin, Daniel Zügner, Aleksandar Bojchevski, Stephan Günnemann
    Robustness of Graph Neural Networks at Scale
    Neural Information Processing Systems (NeurIPS), 2021
  • Jan Schuchardt, Aleksandar Bojchevski, Johannes Gasteiger, Stephan Günnemann
    Collective Robustness Certificates: Exploiting Interdependence in Graph Neural Networks
    International Conference on Learning Representations (ICLR), 2021
  • Simon Geisler, Daniel Zügner, Stephan Günnemann
    Reliable Graph Neural Networks via Robust Aggregation
    Neural Information Processing Systems (NeurIPS), 2020
  • Aleksandar Bojchevski, Stephan Günnemann
    Certifiable Robustness to Graph Perturbations
    Neural Information Processing Systems (NeurIPS), 2019
  • Daniel Zügner, Stephan Günnemann
    Certifiable Robustness and Robust Training for Graph Convolutional Networks
    ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2019
  • Daniel Zügner, Stephan Günnemann
    Adversarial Attacks on Graph Neural Networks via Meta Learning
    International Conference on Learning Representations (ICLR), 2019
  • Richard Kurle, Stephan Günnemann, Patrick van der Smagt
    Multi-Source Neural Variational Inference
    AAAI Conference on Artificial Intelligence, 2019
  • Daniel Zügner, Amir Akbarnejad, Stephan Günnemann
    Adversarial Attacks on Neural Networks for Graph Data (Best Research Paper Award)
    ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2018
  • Richard Leibrandt, Stephan Günnemann
    Making Kernel Density Estimation Robust towards Missing Values in Highly Incomplete Multivariate Data without Imputation
    SIAM International Conference on Data Mining (SDM), 2018
  • Aleksandar Bojchevski, Stephan Günnemann
    Bayesian Robust Attributed Graph Clustering: Joint Learning of Partial Anomalies and Group Structure
    AAAI Conference on Artificial Intelligence, 2018
  • Aleksandar Bojchevski, Yves Matkovic, Stephan Günnemann
    Robust Spectral Clustering for Noisy Data: Modeling Sparse Corruptions Improves Latent Embeddings
    ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2017