Previous talks at the SCCS Colloquium

Victor-Constantin Stroescu: Constructing Transformations Between Pre-trained Neural Networks

This thesis describes our attempt to implement transformations between neural networks, as presented in "Transformations between deep neural networks" by Tom Bertalan, Felix Dietrich, and Ioannis G. Kevrekidis, for established, pre-trained networks such as ResNet and AlexNet. By constructing these transformations, we aim to establish equivalence classes among widely used, pre-trained models. Our implementation follows the approach of that paper: diffusion maps with a Mahalanobis-like metric. We also use Whitney's embedding theorem to determine how many measurements are needed from one network to reconstruct all features of the other; for a feature manifold of intrinsic dimension d, 2d + 1 generic measurements suffice. For this purpose we draw on models trained for tasks such as text interpretation, image classification, and speech recognition. The models used in the representational experiments depended on the availability of pre-trained networks, but we also present procedures for constructing the transformation between networks that operate on popular data types, such as text, sound, and image data.
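To make the two ingredients named above concrete, the sketch below combines a Mahalanobis-like distance built from local covariances (in the spirit of the Singer–Coifman metric) with diffusion maps computed on those distances, plus the 2d + 1 measurement count given by Whitney's theorem. This is a minimal, illustrative numpy sketch under our own assumptions: the covariance estimation, the kernel-scale heuristic, and all function names are hypothetical, not the implementation from the paper or the thesis.

import numpy as np

def whitney_measurements(intrinsic_dim):
    # Whitney's embedding theorem: a smooth d-dimensional manifold embeds
    # generically in R^(2d+1), so 2*d + 1 measurements suffice.
    return 2 * intrinsic_dim + 1

def mahalanobis_like_distances(Y, covs, reg=1e-6):
    # Squared Mahalanobis-like distances between observations:
    #   d2(y_i, y_j) = 0.5 * (y_i - y_j)^T (C_i^+ + C_j^+) (y_i - y_j),
    # with C_i a local covariance around sample i (how to estimate it is
    # an assumption left to the user here).
    n, d = Y.shape
    inv_covs = [np.linalg.pinv(C + reg * np.eye(d)) for C in covs]
    D2 = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            diff = Y[i] - Y[j]
            D2[i, j] = D2[j, i] = 0.5 * diff @ (inv_covs[i] + inv_covs[j]) @ diff
    return D2

def diffusion_maps(D2, n_coords=2, eps=None):
    # Diffusion maps on a precomputed squared-distance matrix.
    if eps is None:
        eps = np.median(D2[D2 > 0])        # common kernel-scale heuristic
    W = np.exp(-D2 / eps)
    q = W.sum(axis=1)
    W = W / np.outer(q, q)                 # density normalization (alpha = 1)
    P = W / W.sum(axis=1)[:, None]         # row-stochastic Markov matrix
    evals, evecs = np.linalg.eig(P)
    order = np.argsort(-evals.real)
    evals, evecs = evals.real[order], evecs.real[:, order]
    # drop the trivial constant eigenvector, scale coordinates by eigenvalues
    return evecs[:, 1:n_coords + 1] * evals[1:n_coords + 1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Y = rng.normal(size=(200, 5))          # stand-in for network activations
    covs = [np.eye(5)] * len(Y)            # identity: plain Euclidean case
    coords = diffusion_maps(mahalanobis_like_distances(Y, covs))
    print(whitney_measurements(2), coords.shape)   # 5 (200, 2)

In a setting like the one the abstract describes, Y would hold the activations of one pre-trained network at n inputs, and covs[i] could, for instance, be estimated from the spread of activations under small perturbations of input i; the identity covariances in the example reduce the metric to the plain Euclidean case.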

Master's thesis presentation (Informatics). Victor is advised by Felix Dietrich.