Survey of Deep Neural Network Model Compression
Author: LEI Jie, GAO Xin, SONG Jie, WANG Xing-Lu, SONG Ming-Li

Fund Project: National Natural Science Foundation of China (61572428, U1509206)

Abstract:

Deep neural networks have consistently surpassed traditional methods on a variety of computer vision tasks. Although deep neural networks are very powerful, their large number of weights consumes considerable storage and computation time, making them hard to deploy on resource-constrained hardware platforms such as mobile systems. The number of weights in a deep neural network reflects its complexity to an extent, but according to recent research not all of the weights contribute to performance; some are redundant and may even degrade it. This survey systematically summarizes recent research achievements of domestic and foreign researchers on network pruning, network distillation, and network decomposition. Furthermore, it compares the compression performance of these methods on several public deep neural networks. Finally, future work and challenges in this research area are discussed.
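As a rough illustration of two of the three technique families the abstract names, the following is a minimal NumPy sketch (not taken from the paper; the matrix size, pruning threshold, and rank are arbitrary assumptions for demonstration) of magnitude-based weight pruning and low-rank SVD decomposition of a single dense layer:

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((256, 512)).astype(np.float32)  # hypothetical dense-layer weights

    # Network pruning: zero out weights whose magnitude falls below a threshold.
    # Real methods choose the threshold per layer or via sensitivity analysis.
    threshold = 0.5
    mask = np.abs(W) >= threshold
    W_pruned = W * mask
    print(f"pruning sparsity: {1 - mask.mean():.1%}")

    # Network decomposition: approximate W with a rank-r factorization, replacing
    # one 256x512 layer by two smaller layers of shapes 256xr and rx512.
    r = 32
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * s[:r]   # 256 x r
    B = Vt[:r, :]          # r x 512
    W_lowrank = A @ B
    print(f"decomposition keeps {(A.size + B.size) / W.size:.1%} of the parameters")
    print(f"relative reconstruction error: "
          f"{np.linalg.norm(W - W_lowrank) / np.linalg.norm(W):.3f}")

Network distillation, the third family, instead trains a small student network to match a large teacher's softened outputs, and is omitted here because it requires a full training loop.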

Get Citation

Lei J, Gao X, Song J, Wang XL, Song ML. Survey of deep neural network model compression. Journal of Software, 2018, 29(2): 251-266 (in Chinese).
History
  • Received: May 02, 2017
  • Revised: July 24, 2017
  • Online: November 29, 2017