Our project focuses on implementing and researching various neural network pruning techniques, particularly extending the lottery ticket hypothesis to structured pruning.
An illustration of various structured pruning strategies.
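The project's actual pruning code is not shown on this page; as an illustrative sketch (all names hypothetical), structured pruning removes entire units rather than individual weights — for example, ranking the output neurons of a linear layer by L2 norm and dropping the weakest ones:

```python
import numpy as np

def structured_prune_rows(W, ratio):
    """Zero out the lowest-L2-norm rows (output neurons) of a weight matrix.

    Because whole rows are removed, the surviving network can be reshaped
    into a genuinely smaller dense layer, unlike unstructured pruning.
    """
    norms = np.linalg.norm(W, axis=1)          # importance score per neuron
    k = int(ratio * W.shape[0])                # number of neurons to remove
    drop = np.argsort(norms)[:k]               # indices of the weakest rows
    mask = np.ones(W.shape[0], dtype=bool)
    mask[drop] = False
    return W * mask[:, None], mask

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))
W_pruned, mask = structured_prune_rows(W, 0.5)  # half the neurons removed
```

Channel pruning in a convolutional layer follows the same idea, with filters scored instead of rows.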
Lottery Ticket Hypothesis
"A randomly-initialized, dense neural network contains a subnetwork that is initialized such that—when trained in isolation—it can match the test accuracy of the original network after training for at most the same number of iterations." (Frankle & Carbin, ICLR 2019)
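The standard procedure for finding such a subnetwork is iterative magnitude pruning with weight rewinding. As a minimal sketch (the training step is mocked here, and all names are hypothetical): train, prune the smallest-magnitude surviving weights, then rewind the survivors to their initial values to obtain the candidate "winning ticket":

```python
import numpy as np

def lottery_ticket_round(w_init, w_trained, mask, prune_frac=0.2):
    """One round of iterative magnitude pruning with rewinding.

    Among currently surviving weights, prune the prune_frac with the
    smallest trained magnitude, then reset survivors to their initial
    values so the subnetwork can be retrained from (near) scratch.
    """
    alive = np.flatnonzero(mask)
    k = int(prune_frac * alive.size)
    order = alive[np.argsort(np.abs(w_trained[alive]))]  # weakest first
    new_mask = mask.copy()
    new_mask[order[:k]] = 0.0
    return w_init * new_mask, new_mask                   # rewound ticket

rng = np.random.default_rng(1)
w_init = rng.normal(size=100)
w_trained = w_init + rng.normal(scale=0.5, size=100)  # stand-in for training
mask = np.ones(100)
w_ticket, mask = lottery_ticket_round(w_init, w_trained, mask, 0.2)
```

In practice this round is repeated several times, retraining between rounds, until the target sparsity is reached.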
We aim to reduce model size while maintaining accuracy and well-calibrated uncertainty estimates, and to decrease training time.
Accuracy vs Pruning Ratio for Different Techniques
Uncertainty
To better understand calibration and the expected calibration error (ECE), we performed out-of-distribution (OOD) detection: a model trained on CIFAR-10 was evaluated on the CIFAR-100 dataset.
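ECE itself is the bin-weighted gap between a model's confidence and its accuracy. A minimal sketch of the metric (variable names are hypothetical, not the project's code):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted average of |accuracy - confidence| over confidence bins."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()     # empirical accuracy in the bin
            conf = confidences[in_bin].mean()  # mean predicted confidence
            ece += (in_bin.sum() / n) * abs(acc - conf)
    return ece

# Perfectly calibrated: 75% confidence, 75% correct -> ECE = 0
conf_cal = np.full(100, 0.75)
correct_cal = np.array([1.0] * 75 + [0.0] * 25)
ece_cal = expected_calibration_error(conf_cal, correct_cal)

# Overconfident: 85% confidence, only 45% correct -> ECE = 0.4
conf_over = np.full(100, 0.85)
correct_over = np.array([1.0] * 45 + [0.0] * 55)
ece_over = expected_calibration_error(conf_over, correct_over)
```

A low ECE means the model's reported confidence can be trusted as a probability, which is exactly what the OOD experiment probes.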
Our analysis shows that pruning redistributes the model's confidence: after heavy pruning, fewer CIFAR-100 "man" images are misclassified into the CIFAR-10 "deer" class, improving the model's reliability and robustness.
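A common baseline for this kind of OOD detection is the maximum softmax probability (MSP): in-distribution inputs tend to receive peaked, confident predictions, while OOD inputs receive flatter ones. A toy sketch (the logits below are fabricated for illustration, not experimental data):

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def msp_scores(logits):
    """Maximum softmax probability: higher = more in-distribution-like."""
    return softmax(logits).max(axis=1)

# Peaked logits (in-distribution-like) vs. flat logits (OOD-like)
id_logits = np.array([[5.0, 0.0, 0.0], [0.0, 6.0, 0.0]])
ood_logits = np.array([[0.3, 0.2, 0.1], [0.1, 0.0, 0.2]])
id_scores = msp_scores(id_logits)
ood_scores = msp_scores(ood_logits)
```

Thresholding the MSP score then separates the two populations; a well-calibrated pruned model should widen this separation.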
References
Bayesian Deep Learning via Subnetwork Inference. Proceedings of the 38th International Conference on Machine Learning (ICML), 2021.