Previous talks at the SCCS Colloquium

Martin Meinel: Efficient Implementation of Deep Convolutional Gaussian Processes



Convolutional Neural Networks achieve state-of-the-art results in image classification. However, they also come with certain drawbacks, such as their vulnerability to adversarial attacks and their lack of uncertainty estimates.

These drawbacks can be tackled with Bayesian inference, the concept underlying predictions with Gaussian Processes. Deep convolutional networks can be represented as Gaussian Processes in the limit where the number of convolutional filters approaches infinity. The main disadvantage of predicting with Gaussian Processes is the computational effort that grows with the training set: the kernel matrix has to be extended for every new sample, and this large kernel matrix then has to be inverted. This thesis explains in detail what a Gaussian Process is and how it can be used for regression. Furthermore, a Gaussian Process is presented that is equivalent to a convolutional neural network in the limit of infinitely many filters. To construct the kernel matrix for this process from given training data, Iterative SVD, Soft Impute, Matrix Factorization, and the Nyström method are investigated; these increase efficiency by exploiting the low-rank structure of the matrix.
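The Gaussian Process regression mentioned above can be sketched in a few lines of NumPy. This is a minimal illustration, not the thesis code; the squared-exponential kernel and its hyperparameters are illustrative choices. The Cholesky factorization of the kernel matrix is where the cubic cost in the number of training samples appears.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel; hyperparameters are illustrative.
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def gp_posterior(X, y, X_star, lengthscale=1.0, noise=1e-2):
    # Standard GP regression: posterior mean and covariance at X_star.
    K = rbf_kernel(X, X, lengthscale) + noise * np.eye(len(X))
    K_s = rbf_kernel(X, X_star, lengthscale)
    K_ss = rbf_kernel(X_star, X_star, lengthscale)
    L = np.linalg.cholesky(K)  # O(n^3): the cost the abstract refers to
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    cov = K_ss - v.T @ v
    return mean, cov

# Toy 1-D regression problem (illustrative data).
X = np.linspace(0.0, 1.0, 5)[:, None]
y = np.sin(2.0 * np.pi * X[:, 0])
mean, cov = gp_posterior(X, y, X, lengthscale=0.3)
```

The posterior covariance is what provides the uncertainty estimates that plain convolutional networks lack.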
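Of the four techniques listed, the Nyström method is the simplest to sketch: it approximates the full n-by-n kernel matrix from a subset of m landmark columns, which pays off when the matrix is effectively low-rank. The function names and the landmark count below are hypothetical, chosen only for illustration.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    # Squared-exponential kernel (illustrative choice).
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-0.5 * sq / lengthscale**2)

def nystrom_approx(kernel_fn, X, m, rng):
    # Nystroem approximation K ≈ C W^+ C^T from m random landmark points.
    idx = rng.choice(len(X), size=m, replace=False)
    C = kernel_fn(X, X[idx])        # n x m: kernel against landmarks
    W = kernel_fn(X[idx], X[idx])   # m x m: kernel among landmarks
    # rcond truncates tiny singular values of W for numerical stability.
    return C @ np.linalg.pinv(W, rcond=1e-6) @ C.T

rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 30)[:, None]
K = rbf_kernel(X, X)
K_hat = nystrom_approx(rbf_kernel, X, m=10, rng=rng)
rel_err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
```

Because the smooth RBF kernel on this interval has rapidly decaying eigenvalues, ten landmarks already reproduce the 30-by-30 matrix almost exactly, which is precisely the low-rank structure the abstract says these methods exploit.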

Master's thesis submission talk. Martin is advised by Felix Dietrich.