Neural networks (NNs) with compact model sizes find applications in mobile and wearable computing. A famous example is SqueezeNet, which achieves the same accuracy as AlexNet with 50x fewer parameters. It has inspired several follow-ups and architectural variants, but these designs were built upon ad hoc arguments and justified only experimentally; why SqueezeNet works so efficiently remains a mystery. In this work, we attempt to provide a scientific explanation for the superior performance of SqueezeNet. The function of the fire module, a key component of SqueezeNet, is analyzed in detail. We study the evolution of cross-entropy values across layers and use visualization tools to shed light on the network's behavior with several illustrative examples.
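To make the fire module concrete, the following is a minimal NumPy sketch of its squeeze-then-expand structure: a 1x1 "squeeze" layer reduces the channel count, and parallel 1x1 and 3x3 "expand" layers restore it. The channel sizes below are illustrative assumptions, not values taken from this paper, and the code is a simplified forward pass, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def conv1x1(x, w):
    """1x1 convolution: a per-pixel linear mix of channels.
    x: (C_in, H, W), w: (C_out, C_in) -> (C_out, H, W)."""
    c_in, h, wd = x.shape
    return (w @ x.reshape(c_in, h * wd)).reshape(-1, h, wd)

def conv3x3(x, w):
    """3x3 convolution with zero padding, stride 1.
    x: (C_in, H, W), w: (C_out, C_in, 3, 3) -> (C_out, H, W)."""
    c_in, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((w.shape[0], h, wd))
    for i in range(3):
        for j in range(3):
            # Sum each kernel tap's contribution over input channels.
            out += np.einsum('oc,chw->ohw', w[:, :, i, j],
                             xp[:, i:i + h, j:j + wd])
    return out

def fire_module(x, w_sq, w_e1, w_e3):
    """Squeeze to few channels with 1x1 filters, then expand with
    parallel 1x1 and 3x3 branches whose outputs are concatenated."""
    s = relu(conv1x1(x, w_sq))       # squeeze layer
    e1 = relu(conv1x1(s, w_e1))      # expand, 1x1 branch
    e3 = relu(conv3x3(s, w_e3))      # expand, 3x3 branch
    return np.concatenate([e1, e3], axis=0)

# Hypothetical sizes for illustration: 96 channels in, squeeze to 16,
# expand to 64 + 64 = 128 channels out.
x = rng.standard_normal((96, 8, 8))
w_sq = rng.standard_normal((16, 96)) * 0.1
w_e1 = rng.standard_normal((64, 16)) * 0.1
w_e3 = rng.standard_normal((64, 16, 3, 3)) * 0.1

y = fire_module(x, w_sq, w_e1, w_e3)
print(y.shape)  # (128, 8, 8)

# The squeeze layer is the source of the parameter saving: compare the
# fire module's weights with a direct 96 -> 128 3x3 convolution.
fire_params = w_sq.size + w_e1.size + w_e3.size   # 11776
direct_3x3 = 128 * 96 * 3 * 3                     # 110592
print(fire_params, direct_3x3)
```

With these (assumed) sizes, the fire module uses roughly 9x fewer weights than the direct 3x3 layer it replaces, which illustrates where SqueezeNet's parameter savings come from.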
Ruiyuan Lin, Yuhang Xu, Hamza Ghani, Muhan Li, C.-C. Jay Kuo, "Demystify squeeze networks and go beyond," Proc. SPIE 11510, Applications of Digital Image Processing XLIII, 115100O (21 August 2020); https://doi.org/10.1117/12.2567544