Speaker: Bo Li (UIUC)
Talk title: Secure Learning via Reasoning and Inference

Speaker: Huan Zhang (CMU)
Talk title: Efficient and Provable Robustness Verification of Neural Networks

Panelists: Bo Li (UIUC), Huan Zhang (CMU), Cihang Xie (UC Santa Cruz), Xingxing Wei (Beihang University)

Panel topics:
1. In real-world settings, do we need to treat adversarial examples as a genuine threat? Are there industry examples of AI systems being hit by adversarial attacks?
2. Robust accuracy remains unsatisfactory (~60% on CIFAR-10), yet the improvement from each new work keeps shrinking (~1%). Has adversarial training reached a bottleneck?
3. Unlike standard training, adversarial training shows a large robust generalization gap (high robustness on the training set, low robustness on the test set) and suffers from severe overfitting (robust accuracy keeps dropping after the first learning-rate decay). Why is model robustness so hard to achieve?
4. Can neural networks with current architectures (CNNs, Transformers, etc.) solve the robustness problem, or do the models themselves have inherent robustness flaws?
5. What is the next breakthrough for the adversarial learning field as a whole?
6. Beyond robustness research (e.g., attacks and defenses), can adversarial examples bring other benefits to society? Attacks for good?
7. Is there a connection between adversarial examples and model interpretability?

*Feel free to post topic-related questions in the comments below; the host and panelists will pick several of the most popular ones and add them to the panel discussion!

Speaker: Bo Li (UIUC)
Time: Wednesday, June 16, 2021, 21:00 (Beijing time)
Title: Secure Learning via Reasoning and Inference

Speaker bio:
Dr. Bo Li is an assistant professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign, and the recipient of the Symantec Research Labs Fellowship, Rising Stars, the MIT Technology Review TR-35 award, the Intel Rising Star award, several Amazon Research Awards, and best paper awards at several machine learning and security conferences. Previously she was a postdoctoral researcher at UC Berkeley. Her research focuses on both theoretical and practical aspects of security, machine learning, privacy, game theory, and adversarial machine learning. She has designed several robust learning algorithms, scalable frameworks for achieving robustness for a range of learning methods, and a privacy-preserving data publishing system. Her work has been featured by major publications and media outlets such as Nature, Wired, Fortune, and the New York Times.

Homepage: http://boli.cs.illinois.edu/

Abstract:
Advances in machine learning have led to rapid and widespread deployment of learning-based inference and decision making for safety-critical applications, such as autonomous driving and security diagnostics. Current machine learning systems, however, assume that training and test data follow the same, or similar, distributions, and do not consider active adversaries manipulating either distribution. Recent work has demonstrated that motivated adversaries can circumvent anomaly detection or other machine learning models at test time through evasion attacks, or can inject well-crafted malicious instances into training data to induce errors at inference time through poisoning attacks. In this talk, I will describe my recent research on security and privacy problems in machine learning systems. In particular, I will introduce several adversarial attacks in different domains, and discuss potential defense approaches and principles, focusing on leveraging human knowledge as extrinsic information to improve the reasoning ability of machine learning models, towards practical robust learning systems with robustness guarantees.

References:
[1] Nezihe Merve Gürel*, Xiangyu Qi*, Luka Rimanic, Ce Zhang, Bo Li. "Knowledge-Enhanced Machine Learning Pipeline against Diverse Adversarial Attacks." ICML 2021.
[2] Zhuolin Yang, et al. "End-to-End Robustness for Sensing-Reasoning Machine Learning Pipelines." CCS 2021.
[3] Yulong Cao, Ningfei Wang, Chaowei Xiao, Dawei Yang, Jin Fang, Ruigang Yang, Qi Alfred Chen, Mingyan Liu, Bo Li. "Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks." IEEE Symposium on Security and Privacy (Oakland) 2021.
[4] Zhuolin Yang, Zhaoxi Chen, Tiffany Cai, Xinyun Chen, Bo Li, Yuandong Tian. "Understanding Robustness in Teacher-Student Setting: A New Perspective." AISTATS 2021.
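As a concrete illustration of the evasion attacks mentioned in the abstract, here is a minimal sketch of the classic fast gradient sign method (FGSM), one standard way to craft test-time adversarial examples. This is illustrative only, not a method from the talk: the toy model, input shapes, and epsilon budget below are placeholder assumptions.

```python
# Minimal FGSM evasion-attack sketch (Goodfellow et al., 2015).
# The model, input, and label below are placeholders; any differentiable
# classifier trained with cross-entropy works the same way.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Return an adversarial example within an L-inf ball of radius epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that maximally increases the loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixels in a valid range

# Usage with a toy linear "classifier" on CIFAR-sized 3x32x32 inputs:
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)    # placeholder image batch
y = torch.tensor([0])           # placeholder label
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())  # perturbation never exceeds epsilon
```

Poisoning attacks, by contrast, corrupt the training data rather than the test input, so they cannot be captured by a single gradient step like this.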
Speaker: Huan Zhang (CMU)
Time: Wednesday, June 16, 2021, 21:30 (Beijing time)
Title: Efficient and Provable Robustness Verification of Neural Networks

Speaker bio:
Huan Zhang is a postdoctoral researcher in the Computer Science Department at Carnegie Mellon University (CMU). His research focuses on the robustness of machine learning, covering different types of models (such as deep neural networks and tree-based models) and spanning adversarial attacks, adversarial defenses, and rigorous robustness verification. He received his Ph.D. from the University of California, Los Angeles (UCLA) in 2020, advised by Prof. Cho-Jui Hsieh, who has made major contributions to machine learning optimization and support vector machines. Huan Zhang has published dozens of papers at top machine learning conferences such as NeurIPS, ICML, and ICLR; his papers have been cited more than four thousand times, and he held an IBM PhD Fellowship from 2017 to 2019.

Homepage: http://huan-zhang.com

Abstract:
Because neural networks generally lack robustness, adversarial examples are easy to find in many applications built on deep neural networks. Although common adversarial defenses can resist today's typical attacks, it is hard to prove that these methods remain effective against arbitrary attacks; in practice, many defenses are eventually broken by new attack methods. Robustness verification algorithms can rigorously prove a neural network's robustness, guaranteeing that no attack within the threat model can find an adversarial example. This talk introduces the basic concepts of neural network robustness verification and presents a highly efficient and general bound propagation algorithm for rigorously certifying robustness. The method can efficiently verify relatively large neural networks, running two to three orders of magnitude faster than traditional methods in some cases; it can also be used during training to improve a network's defense against arbitrary attacks within the threat model. Finally, I will discuss the opportunities and challenges in neural network robustness verification.

References:
[1] Shiqi Wang*, Huan Zhang*, Kaidi Xu*, Xue Lin, Suman Jana, Cho-Jui Hsieh, Zico Kolter. "Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Complete and Incomplete Neural Network Verification." arXiv:2103.06624. (* indicates equal contribution)
[2] Huan Zhang*, Tsui-Wei Weng*, Pin-Yu Chen, Cho-Jui Hsieh, Luca Daniel. "Efficient Neural Network Robustness Certification with General Activation Functions." NeurIPS 2018.
[3] Huan Zhang, Hongge Chen, Chaowei Xiao, Sven Gowal, Robert Stanforth, Bo Li, Duane Boning, Cho-Jui Hsieh. "Towards Stable and Efficient Training of Verifiably Robust Neural Networks." ICLR 2020.
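To make the bound propagation idea in the abstract concrete, here is a minimal sketch of interval bound propagation (IBP), the simplest bound-propagation scheme and a looser relative of the CROWN-style methods in the references above. The two-layer random network and the perturbation budget are illustrative assumptions, not a trained model from the talk.

```python
# Minimal interval bound propagation (IBP) sketch for a small ReLU network.
# IBP computes provable (if loose) output bounds; CROWN-style methods
# tighten them with per-neuron linear relaxations.
import numpy as np

def ibp_linear(W, b, lo, hi):
    """Propagate the box [lo, hi] through y = W @ x + b."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius  # worst case over the input box
    return c - r, c + r

def ibp_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Toy 2-layer network with random weights (placeholder, not a trained model).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

x = np.array([0.5, -0.2, 0.1])  # input point
eps = 0.01                      # L-inf perturbation budget (threat model)
lo, hi = ibp_linear(W1, b1, x - eps, x + eps)
lo, hi = ibp_relu(lo, hi)
lo, hi = ibp_linear(W2, b2, lo, hi)
print("certified output bounds:", lo, hi)
# If the lower bound of the true class's logit exceeds the upper bounds of
# all other logits, no attack within the eps-ball can flip the prediction.
```

The same bounds can be folded into the training loss (certified training), which is how verification also improves a network's defense against arbitrary attacks within the threat model.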
Panelist: Cihang Xie (UC Santa Cruz)
Bio:
Cihang Xie is an Assistant Professor of Computer Science and Engineering at the University of California, Santa Cruz. His research interest lies at the intersection of computer vision and machine learning, with the goal of building robust and explainable AI systems. Cihang received his Ph.D. from Johns Hopkins University, advised by Bloomberg Distinguished Professor Alan Yuille. He was awarded the Facebook Fellowship in 2020.
Homepage: https://cihangxie.github.io/

Panelist: Xingxing Wei (Beihang University)
Bio:
Xingxing Wei is an associate professor at the Institute of Artificial Intelligence, Beihang University. From 2017 to 2019 he was a postdoctoral researcher at the Institute for Artificial Intelligence / Department of Computer Science, Tsinghua University. He received his bachelor's degree from Beihang University and his Ph.D. from Tianjin University, and previously worked as a senior computer vision algorithm engineer at Alibaba. His main research interests are the theory and applications of adversarial machine learning, and computer vision; he has published more than 30 papers at top AI conferences and in top journals, including CVPR, ECCV, IJCAI, AAAI, ACM MM, TCYB, TMM, and TGRS. With his team, he won the championship of the CAAD CTF international adversarial-example competition held at the DEF CON hacker conference. He has repeatedly served on the program committees of top international AI conferences. As principal investigator, he has been funded by multiple national and provincial/ministerial programs, including a subtask of the "New Generation Artificial Intelligence" 2030 Major Project, NSFC General and Young Scientists programs, a China Postdoctoral Science Foundation Special Grant and General Program, and the CCF-Tencent Rhino-Bird Fund.
Homepage: https://sites.google.com/site/xingxingwei1988/

Host: Yisen Wang (Peking University)
Host bio:
Yisen Wang is an assistant professor and doctoral advisor at Peking University. His research covers machine learning theory and algorithms, focusing on adversarial robustness, graph learning, and the theory of weakly/self-supervised learning. He has published more than 30 papers at top AI venues, including ICML, NeurIPS, and ICLR, with several selected as Oral or Spotlight presentations. He leads national projects including an NSFC grant and a JCJQ subtask, and serves as an Area Chair for NeurIPS 2021 and an Associate Editor of Neurocomputing. He received the Baidu Scholarship (2017) and was nominated for the ACM China Doctoral Dissertation Award (2019).
Homepage: https://yisenwang.github.io/

How to join VALSE online talk 21-16:
Long-press or scan the QR code below to follow the "VALSE" WeChat official account (valse_wechat), then reply "16期" to receive the livestream link.

Special thanks to the main organizer of this Webinar:
Organizing AC: Yisen Wang (Peking University)

How to participate:
1. VALSE Webinars run on a live-streaming platform: the speaker uploads slides or shares their screen, and the audience can view the slides, hear the speaker, and interact through the chat.
2. To participate, follow the VALSE WeChat official account valse_wechat, or join a VALSE QQ group (groups A through N are currently full; apart from speakers and other invited guests, applicants can only join VALSE group Q, group number 698303207).
*Note: applications to join a VALSE QQ group must include your name, affiliation, and status; all three are required. After joining, please set your group nickname to your real name, status, and affiliation. Status codes: university or research staff, T; industry R&D, I; Ph.D. student, D; master's student, M.
3. About 5 minutes before the event, the speaker opens the livestream; click the link to join. Windows PCs, Macs, phones, and other devices are supported.
4. Please avoid off-topic messages during the event so it can proceed smoothly.
5. If you cannot hear the audio or see the video, leaving and rejoining usually solves the problem.
6. A fast network connection is strongly recommended; prefer a wired connection.
7. The VALSE WeChat account publishes the announcement and livestream link for the next week's Webinar every Thursday.
8. Webinar slides (with the speaker's permission) are posted as [slides] at the bottom of each talk announcement on the VALSE website.
9. Webinar videos (with the speaker's permission) are posted on VALSE's Bilibili and Xigua Video channels; search for "VALSE Webinar" to watch.

Huan Zhang [slides]