Journal of Software:2020.31(1):67-81

Survey on Generating Adversarial Examples
PAN Wen-Wen,WANG Xin-Yu,SONG Ming-Li,CHEN Chun
(School of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China)
Received:November 19, 2018    Revised:July 08, 2019
Abstract: Deep learning has recently been widely applied to image classification and image recognition, where it achieves satisfactory results and has become a key component of many AI applications. In the course of the ongoing study of model accuracy, recent work has introduced the concept of "adversarial examples": by adding small perturbations to original samples, these methods can drastically reduce the accuracy of the original deep classification model, thereby defeating deep learning. This provides new ideas for attackers of deep learning systems and poses new requirements for defenders. After introducing the origin and principles of adversarial example generation, this paper surveys recent research and literature on the topic and, according to their algorithmic principles, divides the classic generation algorithms into two categories: perturbing all pixels and perturbing only some pixels. The algorithms are then further classified by secondary criteria: targeted versus non-targeted, black-box versus white-box, and visible versus invisible perturbations. Representative methods of each category are evaluated experimentally on the MNIST dataset to examine their respective advantages and disadvantages. Finally, the paper summarizes the challenges in generating adversarial examples, identifies directions in which the field may develop, and discusses the technique's prospects.
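To illustrate the core idea in the abstract (adding a small, deliberately chosen perturbation that flips a classifier's decision), the following is a minimal sketch of a fast-gradient-sign-style attack on a toy logistic-regression model in NumPy. This is not the paper's code: the model, weights, sample, and epsilon are all hypothetical, chosen only to show the mechanism on a single sample.

```python
import numpy as np

# Assumed "trained" weights of a toy linear (logistic-regression) classifier.
w = np.array([1.0, 1.0, 1.0, 1.0])
b = 0.0

def predict_prob(x):
    """Sigmoid probability that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm(x, y, eps):
    """FGSM-style perturbation: x' = x + eps * sign(dL/dx).

    For the logistic loss, the input gradient is (p - y) * w,
    where p is the predicted probability of class 1.
    """
    p = predict_prob(x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Clean sample with true label y = 1; the classifier gets it right.
x = np.array([0.5, 0.5, 0.5, 0.5])
y = 1.0
print(predict_prob(x) > 0.5)      # → True  (classified as class 1)

# Small per-pixel perturbation flips the prediction to class 0.
x_adv = fgsm(x, y, eps=0.6)
print(predict_prob(x_adv) > 0.5)  # → False (misclassified)
```

Because every component of the input is perturbed, this corresponds to the survey's "perturbing all pixels" category; the partial-pixel methods instead restrict which components of `x` may change.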
CLC number: TP18
Foundation items:National Natural Science Foundation of China (61572426, 61572428)
Citation:


PAN Wen-Wen,WANG Xin-Yu,SONG Ming-Li,CHEN Chun.Survey on Generating Adversarial Examples.Journal of Software,2020,31(1):67-81