
VALSE Paper Digest, Issue 111: Rethinking Attention-Model Explainability

2023-04-23 16:51 | Posted by: Cheng Yi (Institute of Computing Technology, CAS) | Views: 528 | Comments: 0


To help practitioners in the vision and learning community keep up with the latest developments and frontier techniques in the field, VALSE has launched the Paper Digest column, which releases one or two recorded videos per week, each giving a detailed walkthrough of a single top-conference or top-journal paper. This issue features work on Explainable AI from City University of Hong Kong. The work was supervised by Dr. Haoliang Li, and the video was recorded by the paper's first author, Yibing Liu.


Paper title: Rethinking Attention-Model Explainability through Faithfulness Violation Test

Authors: Yibing Liu (City University of Hong Kong), Haoliang Li (City University of Hong Kong), Yangyang Guo (National University of Singapore), Chenqi Kong (City University of Hong Kong), Jing Li (The Hong Kong Polytechnic University), Shiqi Wang (City University of Hong Kong)

Watch on Bilibili:

https://www.bilibili.com/video/BV1tm4y1y73t/


Abstract:

Attention mechanisms are dominating the explainability of deep models. They produce probability distributions over the input, which are widely deemed as feature-importance indicators. However, in this paper, we find one critical limitation in attention explanations: weakness in identifying the polarity of feature impact. This can be misleading: features with higher attention weights may not faithfully contribute to model predictions; instead, they can impose suppression effects. With this finding, we reflect on the explainability of current attention-based techniques, such as Attention Gradient and LRP-based attention explanations. We first propose an actionable diagnostic methodology (henceforth the faithfulness violation test) to measure the consistency between explanation weights and the impact polarity. Through extensive experiments, we then show that most tested explanation methods are unexpectedly hindered by the faithfulness violation issue, especially raw attention. Empirical analyses of the factors affecting violation issues further provide useful observations for adopting explanation methods in attention models.
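The core idea of the test can be illustrated with a minimal sketch, not the paper's actual implementation: the function name, the toy model, and the feature-ablation scheme below are all illustrative assumptions. A violation is counted when an explanation weight is positive but ablating the feature actually raises the prediction (the feature was suppressive), or vice versa:

```python
def faithfulness_violation_rate(predict, x, weights, baseline=0.0):
    """Fraction of features whose explanation weight disagrees in sign
    with the feature's actual impact on the prediction.

    predict : callable mapping a feature list to a scalar score
    x       : list of input feature values
    weights : explanation weights (e.g. attention), one per feature
    baseline: value used to "remove" a feature (an illustrative choice)
    """
    base = predict(x)
    violations = 0
    for i, w in enumerate(weights):
        x_pert = list(x)
        x_pert[i] = baseline                 # ablate feature i
        impact = base - predict(x_pert)      # > 0: feature supported the score
        # Violation: weight says "positive importance" but the feature
        # actually suppressed the prediction, or the reverse.
        if impact != 0 and (w > 0) != (impact > 0):
            violations += 1
    return violations / len(weights)

# Toy linear model: score = 2*x0 - 3*x1 + 0.5*x2
model = lambda v: 2 * v[0] - 3 * v[1] + 0.5 * v[2]
x = [1.0, 1.0, 1.0]
# Attention-like weights are non-negative, so they cannot encode that
# x1 suppresses the score -> one violation out of three features.
attn = [0.4, 0.45, 0.15]
print(faithfulness_violation_rate(model, x, attn))  # -> 0.3333333333333333
```

Because attention weights are constrained to be non-negative, they cannot signal suppression at all, which is exactly the polarity blindness the paper examines.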


Reference:

[1] Yibing Liu, Haoliang Li, Yangyang Guo, Chenqi Kong, Jing Li, Shiqi Wang, “Rethinking Attention-Model Explainability through Faithfulness Violation Test”. International Conference on Machine Learning (ICML), 2022.


Paper link:

https://proceedings.mlr.press/v162/liu22i.html


Code link:

https://github.com/BierOne/Attention-Faithfulness


Speaker bio:

Yibing Liu is currently a second-year Ph.D. student in the Department of Computer Science at City University of Hong Kong. Before that, he received his B.E. degree from the School of Computer Science and Technology at Shandong University in 2020. His research focuses on understanding the internal workings of various machine learning algorithms and designing tools to make them explainable and robust.



Special thanks to the main organizers of this Paper Digest issue:

Monthly rotating ACs: Di Lin (Tianjin University), Qian Zheng (Zhejiang University)


How to participate

1. VALSE's weekly Webinar is hosted on the Bilibili live-streaming platform. Search for VALSE_Webinar on Bilibili and follow us!

Live stream:

https://live.bilibili.com/22300737

Past videos:

https://space.bilibili.com/562085182/


2. VALSE Webinars usually take place on Wednesday evenings at 20:00, though the time occasionally shifts to accommodate speakers in other time zones. To stay informed, follow the VALSE WeChat official account (valse_wechat) or join the VALSE QQ S group (group number: 317920537).


*Note: when applying to join the VALSE QQ group, you must provide your name, affiliation, and role; all three are required. After joining, please set your group nickname to your real name, role, and affiliation. Role codes: T for university and research-institute staff, I for industry R&D, D for Ph.D. students, M for master's students.


3. The VALSE WeChat official account normally announces the next week's Webinar on Thursdays.


4. You can also visit the VALSE homepage (http://valser.org/) to view Webinar information directly. Slides for each report (with the speaker's permission) are posted at the bottom of the corresponding announcement on the VALSE website.
