
20200408-08 Adversarial Attacks and Defenses: How Do Machine Learning Models Become Battle-Hardened Through the Adversarial Arms Race?



Time: 19:00 (Beijing Time), Wednesday, April 8, 2020

Topic: Adversarial Attacks and Defenses: How Do Machine Learning Models Become Battle-Hardened Through the Adversarial Arms Race?

Host: Mingkui Tan (South China University of Technology)


Speaker: Zhanxing Zhu (Peking University)

Title: Adversarial Training for Deep Learning: A Framework for Improving Robustness, Generalization and Interpretability


Speaker: Xingjun Ma (The University of Melbourne)

Title: Recent advances in adversarial machine learning: defense, transferable and camouflaged attacks


Panel Topics:

1. How should we view the vulnerability of deep learning models? Does higher accuracy necessarily imply greater vulnerability (as claimed in some work, e.g. Tsipras, D., ICLR 2019)?

2. What is the essence of an adversarial attack? Is it a security problem or a machine learning robustness problem? If it is a robustness problem, can it be avoided by strictly preprocessing the input data?

3. What is the essence of adversarial defense? Is it enhancing representation power by adding samples, or sacrificing accuracy to improve robustness? Can the robustness and accuracy of deep models be improved at the same time?

4. For an "attack sample", how do we decide that it is an attack rather than simply a new sample? Adversarial examples are very close to the original data, so how can we detect whether an input is adversarial? In other words, consider the pseudo-attack problem: a tiger is transformed into a cat by some transformation, the machine decides it is not a tiger, and indeed it no longer is one; should it still be counted as an attack sample?

5. What role do adversarial attacks and defenses play in industrial applications?

6. How should we view adversarial attacks and defenses on graph data?

7. What is the ultimate goal of adversarial attacks and defenses?

8. What research directions for adversarial attacks and defenses are worth pursuing in the future?


Panelists:

Zhanxing Zhu (Peking University), Xingjun Ma (The University of Melbourne), Hang Su (Tsinghua University), Bo Han (Hong Kong Baptist University)


*Questions related to the theme are welcome in the comments below; the host and panelists will pick several of the most popular ones and add them to the panel topics!

Speaker: Zhanxing Zhu (Peking University)

Time: 19:00 (Beijing Time), Wednesday, April 8, 2020

Title: Adversarial Training for Deep Learning: A Framework for Improving Robustness, Generalization and Interpretability


Speaker Bio:

Dr. Zhanxing Zhu is an assistant professor at the School of Mathematical Sciences, Peking University, and is also affiliated with the Center for Data Science, Peking University. He obtained his Ph.D. in machine learning from the University of Edinburgh in 2016. His research interests cover machine learning and its applications in various domains. Currently he mainly focuses on deep learning theory and optimization algorithms, reinforcement learning, and applications in traffic, computer security, computer graphics, medicine, and healthcare. He has published more than 40 papers in top AI journals and conferences such as NeurIPS, ICML, CVPR, ACL, IJCAI, AAAI, and ECML. He was named a 2019 Alibaba DAMO Young Fellow and was a Best Paper Finalist at the top computer security conference ACM CCS 2018.


Homepage:

https://sites.google.com/view/zhanxingzhu/


Abstract:

Deep learning has achieved tremendous success in various application areas. Unfortunately, recent works show that an adversary can fool deep learning models into producing incorrect predictions by maliciously manipulating the inputs. The resulting manipulated samples are called adversarial examples. This robustness issue dramatically hinders the deployment of deep learning, particularly in safety-critical scenarios.
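To make the notion of malicious input manipulation concrete, below is a minimal PyTorch sketch of one classic construction, the Fast Gradient Sign Method (FGSM). The model, the data, and the perturbation budget `epsilon` are placeholders; this is an illustrative example of an adversarial attack in general, not a method specific to this talk.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Craft adversarial examples with a single gradient-sign step (FGSM).

    Assumes x is an image batch scaled to [0, 1] and y holds ground-truth labels.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each pixel by epsilon in the direction that increases the loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the perturbed input a valid image.
    return x_adv.clamp(0.0, 1.0).detach()
```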

In this talk, I will first introduce various approaches for constructing adversarial examples. I will then present a framework, known as adversarial training, for improving the robustness of deep networks against adversarial examples. Several proposed approaches for improving and accelerating adversarial training will be introduced from the perspectives of Bayesian inference and optimal control theory. We also find that adversarial training can help enhance the interpretability of CNNs. Moreover, I will show that this adversarial learning framework can be extended into an effective regularization strategy that improves generalization in semi-supervised learning.
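At its core, adversarial training is a min-max procedure: an inner loop crafts adversarial examples on the fly (for example with multi-step PGD), and an outer loop updates the model on them. The following is a minimal sketch of the standard PGD-based recipe in the spirit of Madry et al., not the accelerated or Bayesian variants discussed in the talk; the model, data loader, optimizer, and hyperparameters are placeholders.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=10):
    """Multi-step L-infinity PGD: iterated FGSM with projection onto the epsilon ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the epsilon ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0.0, 1.0)
    return x_adv.detach()

def adversarial_training_epoch(model, loader, optimizer, device="cuda"):
    """One epoch of PGD adversarial training: inner maximization, then outer minimization."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)          # inner max: find worst-case inputs
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)  # outer min: fit the model on them
        loss.backward()
        optimizer.step()
```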


References:

[1] Dinghuai Zhang*, Tianyuan Zhang*, Yiping Lu*, Zhanxing Zhu and Bin Dong. You Only Propagate Once: Accelerating Adversarial Training Using Maximal Principle. 33rd Annual Conference on Neural Information Processing Systems. [NeurIPS 2019]

[2] Tianyuan Zhang, Zhanxing Zhu. Interpreting Adversarial Trained Convolutional Neural Networks. 36th International Conference on Machine Learning. [ICML 2019]

[3] Bing Yu*, Jingfeng Wu*, Jinwen Ma and Zhanxing Zhu. Tangent-Normal Adversarial Regularization for Semi-supervised Learning. The 30th IEEE Conference on Computer Vision and Pattern Recognition. [CVPR 2019] (Oral)

[4] Nanyang Ye, Zhanxing Zhu. Bayesian Adversarial Learning. 32nd Annual Conference on Neural Information Processing Systems. [NeurIPS 2018]

Speaker: Xingjun Ma (The University of Melbourne)

Time: 19:50 (Beijing Time), Wednesday, April 8, 2020

Title: Recent advances in adversarial machine learning: defense, transferable and camouflaged attacks


Speaker Bio:

Xingjun Ma is a research fellow and an associate lecturer in the School of Computing and Information Systems at The University of Melbourne. He obtained his B.Eng., M.Eng., and Ph.D. degrees from Jilin University, Tsinghua University, and The University of Melbourne, respectively. He is an active researcher in adversarial machine learning, deep learning, and computer vision, and has published 10+ papers in top-tier conferences including ICML, ICLR, CVPR, ICCV, AAAI, and IJCAI. He also serves as a program committee member or reviewer for a number of conferences and journals such as ICML, ICLR, NeurIPS, ECCV, KDD, AAAI, TPAMI, TNNLS, and TKDE. He was invited to give a tutorial on adversarial machine learning at the 32nd Australasian Joint Conference on Artificial Intelligence (AI 2019) in Adelaide.


Homepage:

http://www.xingjunma.com


Abstract:

The discovery of adversarial examples (attacks) has raised serious concerns about the security and reliability of machine learning models in safety-critical applications. This has motivated a body of work on developing either new attacks that explore the adversarial vulnerability of machine learning models, or effective defenses that train robust models against adversarial attacks. In this seminar, I will introduce three of our recent works in this "arms race" between adversarial attack and defense: 1) a new state-of-the-art defense method, Misclassification Aware adveRsarial Training (MART); 2) a new attack method, the Skip Gradient Method (SGM), which crafts highly transferable attacks by manipulating the skip connections of ResNets; and 3) a new framework, Adversarial Camouflage (AdvCam), which camouflages adversarial attacks as stealthy natural styles in the physical world.
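As a rough illustration of what "transferable attacks" means in practice, one can craft adversarial examples on a white-box surrogate model and check whether they also fool an independently trained target model. The sketch below is a generic transfer-based attack, not the SGM method itself; the torchvision models and the `pgd_attack` helper from the earlier sketch are illustrative placeholders, and input normalization is omitted for brevity.

```python
import torch
from torchvision import models

# Two ImageNet classifiers standing in for a white-box surrogate and a
# black-box target; any pair of independently trained models would do.
surrogate = models.resnet18(weights="IMAGENET1K_V1").eval()
target = models.densenet121(weights="IMAGENET1K_V1").eval()

def transfer_success_rate(x, y):
    """Craft adversarial examples on the surrogate, then measure the fraction
    that also fool the unseen target model (the transfer success rate)."""
    x_adv = pgd_attack(surrogate, x, y)              # white-box attack on the surrogate
    with torch.no_grad():
        fooled = target(x_adv).argmax(dim=1) != y    # misclassified by the target?
    return fooled.float().mean().item()
```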


References:

[1] Yisen Wang, Difan Zou, Jinfeng Yi, James Bailey, Xingjun Ma and Quanquan Gu. "Improving Adversarial Robustness Requires Revisiting Misclassified Examples", In Proc. International Conference on Learning Representations (ICLR'2020), Addis Ababa, Ethiopia, 2020.

[2] Dongxian Wu, Yisen Wang, Shu-Tao Xia, James Bailey and Xingjun Ma. "Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets", In Proc. International Conference on Learning Representations (ICLR'2020), Addis Ababa, Ethiopia, 2020.

[3] Ranjie Duan, Xingjun Ma, Yisen Wang, James Bailey, Kai Qin, Yun Yang. "Adversarial Camouflage: Hiding Adversarial Examples with Natural Styles", In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR'2020), Seattle, Washington, 2020.

Panelist: Hang Su (Tsinghua University)


Bio:

Hang Su is an associate research professor in the Department of Computer Science and Technology at Tsinghua University. His research focuses on adversarial machine learning, interpretability theory for artificial intelligence, and computer vision. He has published more than 50 papers in top international AI conferences and journals such as CVPR, ECCV, and NIPS, and has received the Platinum Best Paper Award at ICME 2018, the Best Paper Award at AVSS 2012, and the Young Scientist Award at MICCAI 2012.


Homepage:

http://www.suhangss.me

Panelist: Bo Han (Hong Kong Baptist University)


Bio:

Bo Han is an assistant professor in the Department of Computer Science at Hong Kong Baptist University, a member of the Data Analytics and Artificial Intelligence Lab, and a visiting scientist at the RIKEN Center for Advanced Intelligence Project (RIKEN AIP), a Japanese national research institute. His main research interests are machine learning and deep learning. Before joining Hong Kong Baptist University, he was a postdoctoral researcher at RIKEN AIP in Prof. Masashi Sugiyama's group, where he led the development of robust deep learning methods for noisy data (labels and samples); this work won the 2019 RIKEN BAIHO Award for outstanding achievement. He received his master's degree (signal processing) from Ocean University of China in 2014 and his Ph.D. (computer science) from the University of Technology Sydney in 2019. He has long served as a program committee member and reviewer for top machine learning conferences (ICML, NeurIPS, AISTATS, and ICLR) and journals (JMLR, TPAMI, and MLJ), and has been selected as an Area Chair for NeurIPS'20.


Homepage:

https://bhanml.github.io/

Host: Mingkui Tan (South China University of Technology)


Host Bio:

Mingkui Tan is a professor and doctoral supervisor at South China University of Technology. He received his bachelor's degree in environmental science and engineering in 2006 and his master's degree in control science and engineering in 2009, both from Hunan University, and his Ph.D. in computer science from Nanyang Technological University, Singapore, in 2014. He then worked as a senior research fellow in computer vision at the School of Computer Science, the University of Adelaide, Australia. In 2018 he was selected for Guangdong Province's "Pearl River Talent Team" program. Since returning to China full-time in September 2016, he has led several major projects, including a National Natural Science Foundation of China Young Scientists project and a Guangdong Province key R&D project on next-generation artificial intelligence. Professor Tan has long worked on machine learning and deep learning, with a solid foundation in the structural optimization and theoretical analysis of deep neural networks. In recent years, his work as first or corresponding author has been published in top AI conferences such as NIPS, ICML, ACML, AAAI, CVPR, and IJCAI, and in leading journals such as IEEE TNNLS, IEEE TIP, IEEE TSP, IEEE TKDE, and JMLR.


Homepage:

https://tanmingkui.github.io/


How to join VALSE online Webinar 20-08:


Long-press or scan the QR code below to follow the VALSE WeChat official account (valse_wechat), then reply "08期" (issue 08) to the account to receive the live-stream link.



Special thanks to the main organizers of this Webinar:

Organizing AC: Shiguang Shan (Institute of Computing Technology, Chinese Academy of Sciences)

Co-organizing ACs: Nannan Wang (Xidian University), Meng Yang (Sun Yat-sen University), Heng Yang (Shenzhen Aimo Technology Co., Ltd.)

Responsible AC: Wanli Ouyang (The University of Sydney)


Notes on the revised VALSE Webinar format:

Since January 2019, VALSE Webinars have moved from the previous single-speaker format to one of two formats:

1) Themed panel session: each event centers on one discussion theme. Two outstanding speakers are first invited to give talks related to the theme (30 minutes each), followed by a 30-minute discussion of the theme joined by 2-3 additional invited panelists.

2) Invited talk: a senior expert gives a systematic, in-depth presentation of his or her research in a familiar area. The talk lasts 50 minutes, followed by 10 minutes of host-speaker interaction and 10 minutes of open Q&A.


How to participate:

1. VALSE Webinar events are held on an online live-streaming platform. During the event the speaker uploads slides or shares the screen; the audience can see the slides, hear the speaker, and interact with the speaker through the chat function.

2. To participate, please follow the VALSE WeChat official account (valse_wechat) or join a VALSE QQ group. Groups A through K are currently full, so apart from speakers and other invited guests, you can only apply to join VALSE group M (group number: 531846386).

*Note: when applying to join a VALSE QQ group, you must provide your name, affiliation, and role; all three are required. After joining, please set your group nickname to your real name in the form name-role-affiliation. Roles: university or research-institute staff (T), industry R&D (I), Ph.D. student (D), master's student (M).

3. The speaker will start the live stream about 5 minutes before the event begins; the audience can join by clicking the stream link. Windows PCs, Macs, mobile phones, and other devices are supported.

4. During the event, please refrain from off-topic chat so as not to disrupt the session.

5. If you cannot hear the audio or see the video during the event, leaving and rejoining the stream usually resolves the problem.

6. Please join from a fast network connection; a wired connection is preferred.

7. The VALSE WeChat official account publishes the announcement and live-stream link for the following week's Webinar every Thursday.

8. With the speaker's permission, the slides of each Webinar talk are posted as [slides] at the bottom of the corresponding announcement on the VALSE website.

9. With the speaker's permission, the video of each Webinar talk is uploaded to the VALSE channel on iQIYI; follow "Valse Webinar" on iQIYI to watch.


Zhanxing Zhu [slides]

Xingjun Ma [slides]
