Downsizing Neural Networks

Sahel Mohammad Iqbal
Published at 03:30 PM

It is a general trend that as neural networks get better at their respective jobs, the number of parameters they have also goes up, and with it the resources required to train them and run inference. As an example, current state-of-the-art face detection models have upwards of 100 million weights. In this talk we'll look at two methods for reducing the number of parameters of a neural network: pruning and using quaternions. I'll also cover the research questions that I aim to explore in my M.Sc. project, which is concerned with applying these two methods simultaneously to achieve an extremely slimmed-down (convolutional) neural network.
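The abstract names the two methods without going into detail, so here is a rough sense of each. The sketches below are my own illustrations, not necessarily the formulations covered in the talk. The simplest pruning variant is unstructured magnitude pruning, which zeroes out the weights with the smallest absolute values; real pipelines usually prune gradually and fine-tune between steps, but a minimal NumPy version looks like this:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # Threshold at the k-th smallest absolute value; everything at or below it is cut.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) > threshold, weights, 0.0)

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64))
w_pruned = magnitude_prune(w, sparsity=0.9)
print(f"nonzero weights: {np.count_nonzero(w_pruned)} / {w.size}")
```

Quaternion networks get their savings from weight sharing in the Hamilton product: one quaternion weight stores 4 real numbers but mixes 4 input channels into 4 output channels, a job that would take a full 4x4 block of 16 real weights in an ordinary dense layer. A small sketch of the arithmetic behind that 4x parameter reduction (the layer sizes are arbitrary examples):

```python
def hamilton_product(q, p):
    """Product of quaternions q = a + bi + cj + dk and p = w + xi + yj + zk."""
    a, b, c, d = q
    w, x, y, z = p
    return (
        a * w - b * x - c * y - d * z,  # real part
        a * x + b * w + c * z - d * y,  # i
        a * y - b * z + c * w + d * x,  # j
        a * z + b * y - c * x + d * w,  # k
    )

# Parameter count for a layer with 32 quaternion inputs and 32 quaternion outputs:
n_in, n_out = 32, 32
real_params = (4 * n_in) * (4 * n_out)  # ordinary dense layer on the 4-channel groups
quat_params = 4 * n_in * n_out          # quaternion layer: 4 reals per connection
print(real_params, quat_params, real_params // quat_params)  # 16384 4096 4
```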

No slides are available for this talk.