Our group will present four papers at this year's ICLR (incl. one spotlight). Congratulations to all co-authors!
- Jan Schuchardt, Tom Wollschläger, Aleksandar Bojchevski, Stephan Günnemann
Localized Randomized Smoothing for Collective Robustness Certification
(selected for spotlight presentation)
Thus far, the robustness certification literature has been primarily concerned with proving that a single classifier mapping an input to a label is robust to adversarial attacks. However, many real-world tasks like image segmentation, node classification or machine translation involve making multiple predictions simultaneously. Proving robustness for such tasks requires significantly more care, as attacks on different predictions can interfere with each other or even be mutually exclusive. In this work, we develop a novel *localized randomized smoothing scheme* that generalizes our prior work on collective robustness certification and can be applied to arbitrary models. We experimentally demonstrate that this approach can Pareto-dominate existing certificates, offering both higher accuracy and stronger robustness guarantees.
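For intuition, the classic (non-localized, single-prediction) randomized smoothing idea the paper builds on can be sketched in a few lines: classify many noisy copies of the input, take a majority vote, and convert the vote margin into a certified radius. This is a simplified illustration of standard randomized smoothing, not the paper's localized scheme; `smoothed_predict` and the toy classifier are hypothetical, and a real certificate would use a statistical lower bound on the vote probability rather than the raw estimate.

```python
import random
from statistics import NormalDist

def smoothed_predict(f, x, sigma=0.25, n=2000, seed=0):
    """Monte Carlo estimate of the smoothed classifier
    g(x) = argmax_c P[f(x + eps) = c] with eps ~ N(0, sigma^2),
    plus the certified L2 radius sigma * Phi^{-1}(p) when p > 1/2."""
    rng = random.Random(seed)
    votes = {}
    for _ in range(n):
        c = f(x + rng.gauss(0.0, sigma))
        votes[c] = votes.get(c, 0) + 1
    top = max(votes, key=votes.get)
    # A real certificate replaces this plug-in estimate with a
    # high-confidence lower bound (e.g. Clopper-Pearson).
    p_hat = min(votes[top] / n, 1.0 - 1e-6)
    radius = sigma * NormalDist().inv_cdf(p_hat) if p_hat > 0.5 else 0.0
    return top, radius
```

For a toy 1-D classifier such as `f = lambda x: int(x > 0)`, calling `smoothed_predict(f, 0.3)` returns the majority label together with a positive certified radius.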
- Nicholas Gao, Stephan Günnemann
Sampling-free Inference for Ab-Initio Potential Energy Surface Networks
Access to the potential energy surface is critical in modern computational chemistry for accurately modeling the behavior of molecules. To compute the energy of a molecule, its associated Schrödinger equation must be solved. Neural networks are a promising tool for this, as they have been shown to provide accurate solutions while generalizing to different structures. However, despite this joint solving, inference remains expensive due to Monte Carlo integration. In this work, we improve such neural-network solutions in two ways: 1) we tackle the inference problem by learning a surrogate that avoids the numerical integration, and 2) we improve the neural network architecture with ideas inspired by quantum mechanical calculations.
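The cost gap the paragraph describes can be illustrated with a toy analogy: estimating an expectation by Monte Carlo sampling versus evaluating a sampling-free surrogate directly. Here the "surrogate" is a known closed form standing in for the learned energy network of the paper; the function names and the integrand are purely illustrative.

```python
import math
import random

def mc_energy(theta, n=20000, seed=0):
    """Expensive baseline: Monte Carlo estimate of the expectation
    E_{x ~ N(0,1)}[cos(theta * x)], a stand-in for the Monte Carlo
    energy integration needed at inference time."""
    rng = random.Random(seed)
    return sum(math.cos(theta * rng.gauss(0.0, 1.0)) for _ in range(n)) / n

def surrogate_energy(theta):
    """Sampling-free surrogate: for this toy integrand the Gaussian
    expectation has the closed form exp(-theta^2 / 2), so the energy
    is a direct function of the parameters -- no sampling needed.
    (In the paper, a learned network plays this role.)"""
    return math.exp(-theta ** 2 / 2)
```

The surrogate returns in one evaluation what the baseline approximates with thousands of samples, which is the point of sampling-free inference.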
- Lukas Gosch, Daniel Sturm, Simon Geisler, Stephan Günnemann
Revisiting Robustness in Graph Machine Learning
Graph Neural Networks (GNNs) are susceptible to small, often termed adversarial, changes to the graph structure. However, it is unclear whether these perturbations preserve the semantic content of the graph and hence are truly adversarial. Therefore, we introduce a semantic-aware robustness notion for graphs and uncover: (i) prevalent perturbation models violate the unchanged-semantics assumption; (ii) surprisingly, all assessed GNNs show over-robustness, that is, robustness beyond the point of semantic change; (iii) there is no robustness-accuracy tradeoff for classifying an inductively added node. Consequently, we develop a simple yet effective method to reduce over-robustness in practice.
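The structure perturbations in question can be pictured as flipping entries of the adjacency matrix under a budget. The following is a hypothetical helper for illustration only: it flips random edges, whereas real attacks choose the flips that maximally change a GNN's predictions, and the paper's point is that such flips may also change the graph's semantics.

```python
import random

def flip_edges(adj, budget, seed=0):
    """Toy structure perturbation: flip `budget` node pairs of a
    symmetric 0/1 adjacency matrix (absent edges are added,
    present edges are deleted), preserving symmetry."""
    rng = random.Random(seed)
    n = len(adj)
    pert = [row[:] for row in adj]  # work on a copy
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    for i, j in rng.sample(pairs, budget):
        pert[i][j] = pert[j][i] = 1 - pert[i][j]
    return pert
```

Each unit of budget toggles one undirected edge, i.e. two symmetric matrix entries.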
- Raffaele Paolino, Aleksandar Bojchevski, Stephan Günnemann, Gitta Kutyniok, Ron Levie
Unveiling the sampling density in non-uniform geometric graphs
A powerful framework for studying graphs is to consider them as geometric graphs: nodes are randomly sampled from an underlying metric space, and a pair of nodes is connected if their distance is less than a specified neighborhood radius. Currently, the literature mostly focuses on uniform sampling and a constant neighborhood radius. However, real-world graphs are likely to be better represented by a model in which the sampling density and the neighborhood radius can both vary over the latent space. For instance, in a social network, communities can be modeled as densely sampled areas, and hubs as nodes with a larger neighborhood radius. We introduce geometric graphs with hubs, an effective model for real-world graphs, and recover the sampling density by which those graphs are sampled from continuous latent spaces. Finally, we present exemplary applications in which the learnt density is used to 1) correct the graph shift operator and improve performance on a variety of tasks, 2) improve pooling, and 3) extract knowledge from networks.
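A toy instance of such a model can be sampled in a few lines: nodes are drawn non-uniformly from the unit square (a denser region plays the role of a community), a fraction of nodes receive a larger radius (the hubs), and a pair is connected when its distance falls below one of the two radii. All parameter names and the specific connection rule here are illustrative assumptions, not the paper's definitions.

```python
import math
import random

def sample_hub_graph(n, hub_frac=0.1, r=0.15, r_hub=0.35, seed=0):
    """Toy geometric graph with hubs: non-uniform node positions in
    the unit square, node-specific radii, and an edge whenever the
    pairwise distance is below the larger of the two radii."""
    rng = random.Random(seed)
    pts, rad = [], []
    for _ in range(n):
        if rng.random() < 0.5:
            # denser sampling near (0.5, 0.5) mimics a community
            pts.append((0.5 + rng.gauss(0.0, 0.1),
                        0.5 + rng.gauss(0.0, 0.1)))
        else:
            pts.append((rng.random(), rng.random()))
        rad.append(r_hub if rng.random() < hub_frac else r)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if math.dist(pts[i], pts[j]) < max(rad[i], rad[j])]
    return pts, rad, edges
```

Nodes drawn near the dense center end up with many neighbors even at the small radius, while hub nodes connect across the whole square; recovering the underlying density from the observed graph alone is the inverse problem the paper addresses.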