Please use this identifier to cite or link to this item: http://lrcdrs.bennett.edu.in:80/handle/123456789/1459
Title: Genetic Algorithm based Approach to Compress and Accelerate the Trained Convolution Neural Network Model
Authors: Agarwal, Mohit
Gupta, Suneet Kumar
Biswas, K. K.
Keywords: Deep Convolution, Neural Network, Genetic algorithm, Model compression, Model acceleration, Fitness function
Issue Date: 2022
Abstract: Although transfer learning has been employed successfully with pre-trained models based on large convolutional neural networks, the demand for huge storage space makes it unattractive to deploy these solutions on edge devices with limited storage and computational power. A number of researchers have proposed Convolution Neural Network compression models to address such issues. In this paper, a genetic algorithm-based approach is employed to reduce the size of a Convolution Neural Network model by selecting a subset of convolutional filters and nodes in the dense layers, while maintaining the accuracy levels of the original models. Specifically, the AlexNet, VGG16, and ResNet50 architectures have been taken up for model reduction, and it has been shown that, without compromising on accuracy, huge gains can be made in terms of reduced storage space. The paper also shows that using this approach an additional reduction in storage space of around 38% could be achieved even for SqueezeNet, which is an already compressed model. The paper further reports a substantial reduction in inference time on standard datasets such as MNIST, CIFAR-10, and CIFAR-100 for all the compressed models mentioned above. For CIFAR-100, the reduction in time is almost double that of other results reported in the literature.
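The abstract describes evolving binary masks over convolutional filters and dense-layer nodes, scored by a fitness function that trades accuracy against model size. The following minimal sketch illustrates that idea with a toy stand-in fitness; the filter count, population settings, and proxy fitness are all illustrative assumptions, not the paper's actual hyperparameters (which would evaluate the pruned network's validation accuracy).

```python
import random

random.seed(0)

N_FILTERS = 32       # hypothetical filter count for one conv layer
POP_SIZE = 20        # illustrative GA settings
GENERATIONS = 30
MUTATION_RATE = 0.05

def fitness(mask):
    """Toy stand-in for the paper's fitness function.

    In the actual approach this would be the validation accuracy of the
    model pruned according to `mask`, traded off against model size.
    Here we pretend only the first half of the filters matter for
    accuracy and penalise every filter that is kept.
    """
    kept = sum(mask)
    useful = sum(mask[: N_FILTERS // 2])
    accuracy_proxy = useful / (N_FILTERS // 2)
    size_penalty = kept / N_FILTERS
    return accuracy_proxy - 0.5 * size_penalty

def random_mask():
    # 1 = keep the filter, 0 = prune it
    return [random.randint(0, 1) for _ in range(N_FILTERS)]

def tournament(pop):
    # Binary tournament selection: fitter of two random individuals wins
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # Single-point crossover of two parent masks
    point = random.randrange(1, N_FILTERS)
    return p1[:point] + p2[point:]

def mutate(mask):
    # Flip each bit with a small probability
    return [1 - g if random.random() < MUTATION_RATE else g for g in mask]

def evolve():
    pop = [random_mask() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop = [mutate(crossover(tournament(pop), tournament(pop)))
               for _ in range(POP_SIZE)]
    return max(pop, key=fitness)

best = evolve()
print(sum(best), "filters kept out of", N_FILTERS)
```

The surviving mask would then be used to physically drop the pruned filters and nodes, shrinking both the stored weights and the inference-time computation.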
URI: https://doi.org/10.1007/s13042-022-01768-4
http://lrcdrs.bennett.edu.in:80/handle/123456789/1459
Appears in Collections:Journal Articles_SCSET



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.