Theoretical foundations for Graph Neural Networks

Graph-structured data are ubiquitous in many domains, including social network analysis, bioinformatics, chemistry, and communications engineering. In recent years, Graph Neural Networks (GNNs) have emerged as powerful tools for learning on such data by extending deep learning approaches from the Euclidean setting (e.g., image processing) to graphs. Typical tasks are graph-level or node-level classification problems. Although empirically successful, GNNs exhibit behavior that still lacks a rigorous explanation.

This project aims to advance the theoretical foundations of GNNs by analysing them in the infinite-width regime through the lens of Neural Tangent Kernels (NTKs), as well as within the framework of classical statistical learning theory. In recent work, we show that, under distributional assumptions (e.g., Stochastic Block Models), generalization error bounds based on transductive Rademacher complexity become an informative description of GNN behavior. In the infinite-width analysis, we focus on semi-supervised problems, specifically node classification, and derive the NTK exactly; this makes it possible to study the role of depth in GNNs and to explain several empirical observations about their behavior. A small illustrative sketch of such a kernel computation is given below.
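To make the infinite-width object concrete, the following is a minimal NumPy sketch of the standard NTK recursion for a vanilla graph convolutional network with ReLU activations on a toy graph. It is only an illustration under common assumptions (symmetrically normalized adjacency with self-loops, He-style scaling of the ReLU kernel maps, NTK parameterization); the exact formulation and normalization used in the preprints below may differ, and all names in the snippet are illustrative.

    import numpy as np

    def relu_kernels(sigma):
        """Arc-cosine kernel maps: E[relu(u)relu(v)] and E[relu'(u)relu'(v)]
        for (u, v) Gaussian with covariance read off `sigma`, scaled by 2
        (He-style) so that variances are preserved across layers."""
        d = np.sqrt(np.clip(np.diag(sigma), 1e-12, None))
        norm = np.outer(d, d)
        cos = np.clip(sigma / norm, -1.0, 1.0)
        theta = np.arccos(cos)
        e = norm * (np.sin(theta) + (np.pi - theta) * cos) / np.pi
        e_dot = (np.pi - theta) / np.pi
        return e, e_dot

    def gcn_ntk(S, X, depth):
        """NTK of an infinitely wide GCN with `depth` layers.
        S: normalized adjacency (n x n), X: node features (n x d)."""
        sigma = S @ (X @ X.T) @ S.T        # covariance after the first layer
        ntk = sigma.copy()                 # NTK of a one-layer (linear) GCN
        for _ in range(depth - 1):
            e, e_dot = relu_kernels(sigma)
            sigma = S @ e @ S.T            # propagate covariance through S
            ntk = S @ (ntk * e_dot) @ S.T + sigma   # chain rule + new layer term
        return ntk

    # Toy example: NTK of a 3-layer GCN on a 3-node graph
    A = np.array([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])
    S = A + np.eye(3)
    deg = S.sum(axis=1)
    S = S / np.sqrt(np.outer(deg, deg))    # symmetric normalization with self-loops
    X = np.random.default_rng(0).normal(size=(3, 4))
    print(gcn_ntk(S, X, depth=3))

For depth 1 the kernel reduces to S X X^T S^T; each additional layer applies the ReLU kernel maps and one more aggregation with S, which makes the effect of depth directly visible in the resulting kernel.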

Funding

DFG Priority Programme "Theoretical Foundations of Deep Learning" (SPP 2298). The work on GNNs is one part of the project funded through SPP 2298.

Publications / Preprints

  • P. Esser, L. Vankadara, D. Ghoshdastidar. Learning theory can (sometimes) explain generalisation in graph neural networks. [Preprint]
  • M. Sabanayagam, P. Esser, D. Ghoshdastidar. New insights into graph convolutional networks using neural tangent kernels. [Preprint]