Relationship Between Prediction Uncertainty and Adversarial Robustness

Author biographies:

CHEN Si-Hong (1997-), male, M.S. candidate; research interests: machine learning;
SHEN Hao-Jing (1997-), male, M.S. candidate; research interests: machine learning;
WANG Ran (1987-), female, Ph.D., associate professor; research interests: machine learning, pattern recognition;
WANG Xi-Zhao (1963-), male, Ph.D., professor, doctoral supervisor; research interests: machine learning

Corresponding author:

WANG Xi-Zhao, wizhaowang@ieee.org

CLC number:

TP181

Funding:

National Natural Science Foundation of China (61732011, 62176160, 61976141, 61772344); Natural Science Foundation of Shenzhen University (827-000230); Interdisciplinary Innovation Team of Shenzhen University




Abstract:

Adversarial robustness describes a model's ability to resist adversarial examples, and adversarial training is a common method for improving it. However, adversarial training reduces the model's accuracy on clean samples, a phenomenon known as the accuracy-robustness problem. Moreover, because adversarial examples must be generated during training, adversarial training significantly increases the network's training time. This work studies the relationship between prediction uncertainty and adversarial robustness, and draws the following conclusion: the greater the prediction uncertainty, the greater the adversarial robustness. The conclusion is explained as follows: the decision boundary of a model trained with cross-entropy is not perfect; minimizing the cross-entropy may make the classification regions of some classes narrow, leaving samples of those classes vulnerable to adversarial attacks. If, instead, the information entropy of the model's output is maximized during training, the classification regions become more balanced, i.e., the decision boundary is kept as far as possible from the data of every class, which makes the samples harder to attack. Based on this finding, a new method is proposed to improve a model's adversarial robustness by increasing the uncertainty of its predictions: while maintaining the model's accuracy, the entropy of its predictions is made larger. Extensive experiments and simplified model derivations on the MNIST, CIFAR-10, and CIFAR-100 datasets confirm the statistical relationship that adversarial robustness increases with the model's prediction uncertainty. The proposed method can also be combined with adversarial training to further improve a model's adversarial robustness.
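The abstract describes a training objective that keeps the cross-entropy small while making the entropy of the model's predictions large. A minimal NumPy sketch of such a combined loss is given below; the function name `entropy_regularized_loss` and the weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy_regularized_loss(logits, labels, lam=0.1):
    """Cross-entropy minus lam times the mean prediction entropy.

    Minimizing this loss fits the labels while keeping the output
    distribution as uncertain (high-entropy) as accuracy allows.
    """
    p = softmax(logits)                       # (n, k) predicted probabilities
    n = len(labels)
    ce = -np.log(p[np.arange(n), labels] + 1e-12).mean()
    ent = -(p * np.log(p + 1e-12)).sum(axis=-1).mean()
    return ce - lam * ent

# With lam = 0 the loss reduces to plain cross-entropy; any lam > 0
# rewards higher-entropy (more uncertain) predictions.
logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 3.0, 0.3]])
labels = np.array([0, 1])
plain = entropy_regularized_loss(logits, labels, lam=0.0)
regularized = entropy_regularized_loss(logits, labels, lam=0.5)
```

Since the prediction entropy is strictly positive for non-degenerate outputs, the regularized loss is always below the plain cross-entropy for the same logits, so gradient descent is pushed toward softer, more balanced output distributions.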

Cite this article

CHEN Si-Hong, SHEN Hao-Jing, WANG Ran, WANG Xi-Zhao. Relationship between prediction uncertainty and adversarial robustness. Journal of Software, 2022, 33(2): 524-538

History
  • Received: 2020-08-08
  • Revised: 2020-09-14
  • Online: 2022-01-25
  • Published: 2022-02-06
Copyright: Institute of Software, Chinese Academy of Sciences
Address: 4 South Fourth Street, Zhongguancun, Haidian District, Beijing 100190
Tel: 010-62562563  Fax: 010-62562533  Email: jos@iscas.ac.cn