Hierarchical Randomized Smoothing

This page links to additional material for our paper

Hierarchical Randomized Smoothing
Yan Scholten, Jan Schuchardt, Aleksandar Bojchevski, Stephan Günnemann
Conference on Neural Information Processing Systems (NeurIPS), 2023

Links

[PDF | Talk | Slides | Poster | Code]

Abstract

Real-world data is complex and often consists of objects that can be decomposed into multiple entities (e.g. images into pixels, graphs into interconnected nodes). Randomized smoothing is a powerful framework for making models provably robust against small changes to their inputs: it guarantees robustness of the majority vote when random noise is added before classification. Yet, certifying robustness on such complex data via randomized smoothing is challenging when adversaries do not arbitrarily perturb entire objects (e.g. images) but only a subset of their entities (e.g. pixels). As a solution, we introduce hierarchical randomized smoothing: we partially smooth objects by adding random noise only to a randomly selected subset of their entities. By adding noise in a more targeted manner than existing methods, we obtain stronger robustness guarantees while maintaining high accuracy. We instantiate hierarchical smoothing using different noising distributions, yielding novel robustness certificates for discrete and continuous domains. We experimentally demonstrate the importance of hierarchical smoothing in image and node classification, where it yields superior robustness-accuracy trade-offs. Overall, hierarchical smoothing is an important contribution towards models that are both certifiably robust to perturbations and accurate.
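The prediction procedure described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's exact construction: the function name, the parameters, and the choice of Gaussian noise on a uniformly sampled subset of entities are all assumptions made for illustration.

```python
import numpy as np

def hierarchical_smooth_predict(x, classifier, num_samples=100,
                                subset_frac=0.3, sigma=0.25, rng=None):
    """Illustrative sketch of hierarchical randomized smoothing:
    for each Monte-Carlo sample, select a random subset of the object's
    entities (here: entries of the array x) and add noise only to them,
    then return the majority vote over the classifier's predictions."""
    rng = np.random.default_rng(rng)
    n = x.shape[0]
    k = max(1, int(subset_frac * n))  # number of entities to smooth
    votes = {}
    for _ in range(num_samples):
        idx = rng.choice(n, size=k, replace=False)  # randomly selected entities
        noisy = x.copy()
        # Partial smoothing: noise is added only on the selected subset,
        # the remaining entities are left untouched.
        noisy[idx] += rng.normal(0.0, sigma, size=k)
        y = classifier(noisy)
        votes[y] = votes.get(y, 0) + 1
    return max(votes, key=votes.get)  # majority vote
```

For instance, with a toy classifier that thresholds the mean of its input, the smoothed prediction is obtained by calling `hierarchical_smooth_predict(x, classifier)`. The actual certificates in the paper additionally depend on the chosen noising distribution and hold for both discrete and continuous domains.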

Cite

@inproceedings{scholten2023hierarchical,
    title={Hierarchical Randomized Smoothing},
    author={Yan Scholten and Jan Schuchardt and Aleksandar Bojchevski and Stephan G{\"u}nnemann},
    booktitle={Advances in Neural Information Processing Systems, {NeurIPS}},
    year={2023}
}