Previous talks at the SCCS Colloquium

Dhia Bouassida: Converting Neural Networks to Sampled Networks

Neural networks have become dominant in machine learning due to their ability to automatically learn complex patterns. They are traditionally trained using iterative gradient-based optimizers, with Adam being one of the most widely used methods. Sampled Networks, introduced by Bolager et al., offer an alternative in which network parameters are constructed directly from sampled pairs of data points, eliminating iterative optimization and improving efficiency and interpretability.
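To make the pair-sampling idea concrete, the following Python sketch constructs one ReLU hidden layer from sampled pairs of data points. It is a minimal illustration under stated assumptions, not Bolager et al.'s exact procedure: the function names are hypothetical, and the normalization here is chosen so that each neuron's pre-activation is 0 at the first point of its pair and 1 at the second.

import numpy as np

def sample_relu_layer(X, n_neurons, rng=None):
    """Construct one hidden layer from sampled data point pairs.

    Sketch of the pair-sampling idea: each neuron's weight points
    from one data point to another, and its bias places the ReLU
    kink at the first point. Scaling constants are an assumption
    and may differ from those in Bolager et al.
    """
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    W = np.empty((n_neurons, X.shape[1]))
    b = np.empty(n_neurons)
    for k in range(n_neurons):
        # Draw a pair of distinct data points (x1, x2).
        i, j = rng.choice(n, size=2, replace=False)
        x1, x2 = X[i], X[j]
        diff = x2 - x1
        w = diff / (np.dot(diff, diff) + 1e-12)  # direction between the pair
        W[k] = w
        b[k] = np.dot(w, x1)                     # ReLU kink sits at x1
    return W, b

def forward_hidden(X, W, b):
    """ReLU hidden activations: 0 at each neuron's x1, 1 at its x2."""
    return np.maximum(X @ W.T - b, 0.0)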

The primary contribution of this thesis is an algorithm that converts traditionally trained neural networks into sampled networks. The emphasis is on converting a two-layer neural network with the ReLU activation function into a sampled network. The converted sampled network should closely match the trained network both in its parameters, namely weights and biases, and in its output. We introduce multiple approaches for converting both the hidden and the output layer parameters of the trained network. Numerical experiments compare the proposed approaches with respect to how closely the parameters and outputs of the converted sampled network match those of the original trained network, as well as the runtime of the conversion algorithm.
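The abstract does not specify the conversion approaches themselves, but one natural way to match the trained network's output once hidden parameters are fixed is a least-squares fit of the output layer. The sketch below (hypothetical helper names; not necessarily one of the thesis's approaches) fits output weights over fixed ReLU activations and measures the output discrepancy between the two networks.

import numpy as np

def fit_output_layer(H, Y, reg=1e-8):
    """Fit output weights and bias by ridge-regularized least squares,
    given fixed hidden activations H and target outputs Y."""
    A = np.hstack([H, np.ones((H.shape[0], 1))])  # append bias column
    # Regularized normal equations for numerical stability.
    coef = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
    return coef[:-1], coef[-1]  # output weights, output bias

def output_mse(y_trained, y_sampled):
    """Mean squared discrepancy between trained and converted outputs."""
    return np.mean((y_trained - y_sampled) ** 2)

With the hidden layer fixed, this reduces the output-layer problem to a convex fit, which is one reason sampled networks avoid iterative optimization entirely.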

Bachelor's presentation. Dhia is advised by Erik Bolager and Dr. Felix Dietrich.