SqueezeNet

Deep neural network for image classification, released in 2016

From Wikipedia, the free encyclopedia

SqueezeNet is a deep neural network for image classification released in 2016. It was developed by researchers at DeepScale, the University of California, Berkeley, and Stanford University. In designing SqueezeNet, the authors' goal was to create a smaller neural network with fewer parameters while achieving competitive accuracy. Their best-performing model matched AlexNet's accuracy on ImageNet classification with a model as much as 510 times smaller.[1]

SqueezeNet
Original author(s): Forrest Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, Bill Dally, Kurt Keutzer
Initial release: 22 February 2016
Stable release: v1.1 (6 June 2016)
Repository: github.com/DeepScale/SqueezeNet
Type: Deep neural network
License: BSD license

Version history


SqueezeNet was originally released on February 22, 2016.[2] This original version of SqueezeNet was implemented on top of the Caffe deep learning software framework. Shortly thereafter, the open-source research community ported SqueezeNet to a number of other deep learning frameworks. On February 26, 2016, Eddie Bell released a port of SqueezeNet for the Chainer deep learning framework.[3] On March 2, 2016, Guo Haria released a port of SqueezeNet for the Apache MXNet framework.[4] On June 3, 2016, Tammy Yang released a port of SqueezeNet for the Keras framework.[5] In 2017, companies including Baidu, Xilinx, Imagination Technologies, and Synopsys demonstrated SqueezeNet running on low-power processing platforms such as smartphones, FPGAs, and custom processors.[6][7][8][9]

As of 2018, SqueezeNet ships "natively" as part of the source code of a number of deep learning frameworks such as PyTorch, Apache MXNet, and Apple CoreML.[10][11][12] In addition, third party developers have created implementations of SqueezeNet that are compatible with frameworks such as TensorFlow.[13] Below is a summary of frameworks that support SqueezeNet.

Framework | SqueezeNet Support | References
--- | --- | ---
Apache MXNet | Native | [11]
Apple CoreML | Native | [12]
Caffe2 | Native | [14]
Keras | 3rd party | [5]
MATLAB Deep Learning Toolbox | Native | [15]
ONNX | Native | [16]
PyTorch | Native | [10]
TensorFlow | 3rd party | [13]
Wolfram Mathematica | Native | [17]

Relationship to other networks

AlexNet

SqueezeNet was originally described in SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size.[1] AlexNet is a deep neural network whose parameters occupy 240 MB, while SqueezeNet's occupy just 5 MB. A model this small fits more easily into computer memory and is more easily transmitted over a computer network. SqueezeNet is not, however, a "squeezed version of AlexNet"; rather, it is an entirely different DNN architecture.[18] What the two networks have in common is that both achieve approximately the same accuracy when evaluated on the ImageNet image classification validation dataset.
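The practical impact of the size gap can be sketched with back-of-the-envelope arithmetic (the 240 MB and 5 MB figures are the ones compared above; the link speed is an illustrative assumption, not from the paper):

```python
# Parameter-file sizes from the AlexNet/SqueezeNet comparison.
alexnet_mb = 240.0
squeezenet_mb = 5.0

# Raw size ratio: about 48x smaller before any model compression.
ratio = alexnet_mb / squeezenet_mb
print(f"size ratio: {ratio:.0f}x")

# Illustrative transfer time over an assumed 10 Mbit/s (1.25 MB/s) link.
link_mb_per_s = 1.25
print(f"AlexNet:    {alexnet_mb / link_mb_per_s:.0f} s")   # 192 s
print(f"SqueezeNet: {squeezenet_mb / link_mb_per_s:.0f} s")  # 4 s
```

The headline 510x figure in the paper additionally applies model compression to the 5 MB file, as described in the next section.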

Model compression

Model compression (e.g. quantization and pruning of model parameters) can be applied to a deep neural network after it has been trained.[19] In the SqueezeNet paper, the authors demonstrated that a model compression technique called Deep Compression can be applied to SqueezeNet to further reduce the size of the parameter file from 5 MB to 500 KB.[1] Deep Compression has also been applied to other DNNs, such as AlexNet and VGG.[20]
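The two techniques mentioned, pruning and quantization, can be illustrated on a toy weight list. This is a minimal sketch of the general idea only, not the Deep Compression pipeline itself (which also uses weight sharing and Huffman coding):

```python
import random

random.seed(0)
# Toy "layer": 1,000 float32 weights (4 bytes each).
weights = [random.gauss(0.0, 1.0) for _ in range(1000)]

# Pruning: drop weights whose magnitude falls below a threshold,
# keeping (index, value) pairs for the survivors.
threshold = 1.0
sparse = [(i, w) for i, w in enumerate(weights) if abs(w) >= threshold]

# Quantization: map each surviving weight to an 8-bit level (1 byte)
# spanning the observed min/max range.
lo = min(w for _, w in sparse)
hi = max(w for _, w in sparse)
quantized = [(i, round(255 * (w - lo) / (hi - lo))) for i, w in sparse]

dense_bytes = 4 * len(weights)               # 4,000 bytes as float32
compressed_bytes = len(quantized) * (2 + 1)  # 2-byte index + 1-byte level
print(f"kept {len(quantized)} of {len(weights)} weights")
print(f"{dense_bytes} B -> {compressed_bytes} B "
      f"({dense_bytes / compressed_bytes:.1f}x smaller)")
```

Because both steps are applied after training, the network's architecture is unchanged; only the stored parameter file shrinks, which is how SqueezeNet's 5 MB file is reduced to roughly 500 KB.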

Variants

Some of the members of the original SqueezeNet team have continued to develop resource-efficient deep neural networks for a variety of applications. A few of these works are noted in the following table. As with the original SqueezeNet model, the open-source research community has ported and adapted these newer "squeeze"-family models for compatibility with multiple deep learning frameworks.

DNN Model | Application | Original Implementation | Other Implementations
--- | --- | --- | ---
SqueezeDet[21][22] | Object detection on images | TensorFlow[23] | Caffe,[24] Keras[25][26][27]
SqueezeSeg[28] | Semantic segmentation of LIDAR | TensorFlow[29] | —
SqueezeNext[30] | Image classification | Caffe[31] | TensorFlow,[32] Keras,[33] PyTorch[34]
SqueezeNAS[35][36] | Neural architecture search for semantic segmentation | PyTorch[37] | —

In addition, the open-source research community has extended SqueezeNet to other applications, including semantic segmentation of images and style transfer.[38][39][40]
