Kushal Dhungana: Comparison of Training Techniques in Neural Architecture Search

SCCS Colloquium


Neural Architecture Search (NAS) has been gaining popularity because it automates the design of neural networks and can deliver customized architectures for specific requirements. Although NAS is expensive and time-consuming, it can search through a large space of architectures and eventually deliver the best one for a given task. We compare the results of two training techniques within neural architecture search, an iterative gradient-based method (Adam) and a sampling method (SWIM), for approximating different functions, and try to understand whether depth is relevant for multi-layer perceptrons. In most cases, we found that a single hidden layer with more neurons is sufficient for approximation using SWIM, whereas Adam requires more layers and a sufficient number of neurons to reach a similar approximation quality. The results show that the SWIM method is comparable to, if not better than, the iterative optimization method in our approximation experiments, and that obtaining a trained network is one to two orders of magnitude faster with SWIM than with Adam.
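To make the comparison concrete, here is a minimal Python sketch contrasting the two training regimes on a toy one-dimensional regression task: a sample-then-solve network in the spirit of SWIM, where the hidden weights are constructed from pairs of data points and only the linear output layer is fitted in closed form, versus the same architecture trained iteratively with Adam. This is a simplified illustration under stated assumptions, not the thesis code: the pair-based weight construction, the constants, and all hyperparameters are placeholders, and the actual SWIM algorithm selects point pairs more carefully.

    # Toy sketch: sample-then-solve training (SWIM-like) vs. Adam.
    # Not the exact SWIM algorithm; constants and hyperparameters are
    # illustrative assumptions only.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-3, 3, 512).reshape(-1, 1)
    y = np.sin(2 * x)                       # target function to approximate
    n_hidden = 200

    # --- Sampling approach: hidden weights from pairs of data points,
    # --- then a single least-squares solve for the output layer.
    i, j = rng.integers(0, len(x), (2, n_hidden))
    diff = x[j] - x[i]                      # direction between each pair
    diff[np.abs(diff) < 1e-8] += 1e-8       # guard against identical pairs
    w = (diff / np.abs(diff) ** 2).T        # hidden weights, shape (1, n_hidden)
    b = -(w * x[i].T).sum(axis=0)           # biases anchor activations at x[i]
    h = np.tanh(x @ w + b)                  # hidden features, shape (512, n_hidden)
    H = np.hstack([h, np.ones((len(x), 1))])
    coef, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output layer
    print("sampled-net RMSE:", float(np.sqrt(np.mean((H @ coef - y) ** 2))))

    # --- Gradient-based baseline: same architecture trained with Adam
    # --- (requires PyTorch; learning rate and step count are arbitrary).
    import torch

    torch.manual_seed(0)
    net = torch.nn.Sequential(torch.nn.Linear(1, n_hidden), torch.nn.Tanh(),
                              torch.nn.Linear(n_hidden, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    xt = torch.as_tensor(x, dtype=torch.float32)
    yt = torch.as_tensor(y, dtype=torch.float32)
    for _ in range(2000):                   # iterative optimization loop
        opt.zero_grad()
        loss = ((net(xt) - yt) ** 2).mean()
        loss.backward()
        opt.step()
    print("Adam-net RMSE:", float(loss.sqrt()))

Note that the sampled network involves no training loop at all, only a weight construction and one least-squares solve; this absence of iterative optimization is what underlies the speedup reported above.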

Master's thesis presentation. Kushal is advised by Dr. Felix Dietrich.