
VALSE Paper Quick Review, Issue 168: Sparse Transformer for Image Deraining



To help practitioners in the vision and learning community keep up with the latest developments and frontier advances in the field, VALSE has launched the Paper Quick Review column, which releases one or two recorded videos per week, each giving a detailed presentation of a single recent top-conference or top-journal paper. This issue features work on image deraining from Nanjing University of Science and Technology. The work was supervised by Prof. Jinshan Pan, and the video was recorded by the first author, Xiang Chen.


Paper title:

Learning A Sparse Transformer Network for Effective Image Deraining

Authors:

Xiang Chen (Nanjing University of Science and Technology), Hao Li (Nanjing University of Science and Technology), Mingqiang Li (Information Science Academy of China Electronics Technology Group Corporation), Jinshan Pan (Nanjing University of Science and Technology)


Watch on Bilibili:

https://www.bilibili.com/video/BV17D42157Lc/



Paper abstract:

Transformer-based methods have achieved significant performance in image deraining as they can model the non-local information which is vital for high-quality image reconstruction. In this paper, we find that most existing Transformers usually use all similarities of the tokens from the query-key pairs for feature aggregation. However, if the tokens from the query differ from those of the key, the self-attention values estimated from these tokens are also involved in feature aggregation, which accordingly interferes with clear image restoration. To overcome this problem, we propose an effective DeRaining network, the Sparse Transformer (DRSformer), that can adaptively keep the most useful self-attention values for feature aggregation so that the aggregated features better facilitate high-quality image reconstruction. Specifically, we develop a learnable top-k selection operator to adaptively retain the most crucial attention scores from the keys for each query for better feature aggregation. Simultaneously, as the naive feed-forward network in Transformers does not model the multi-scale information that is important for latent clear image restoration, we develop an effective mixed-scale feed-forward network to generate better features for image deraining. To learn an enriched set of hybrid features, which combines local context from CNN operators, we equip our model with a mixture-of-experts feature compensator to present a cooperative refinement deraining scheme. Extensive experimental results on commonly used benchmarks demonstrate that the proposed method achieves favorable performance against state-of-the-art approaches.
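
To make the core idea of sparse attention concrete, below is a minimal PyTorch sketch of top-k sparse self-attention in the spirit of the abstract above: only the k largest query-key similarities per query are kept for feature aggregation, and the rest are masked out before the softmax. The function name topk_sparse_attention and the hard thresholding are illustrative assumptions, not the authors' code; the actual learnable top-k selection operator is available in the official DRSformer repository linked below.

# Minimal sketch of top-k sparse self-attention (assumed names, not the authors' implementation).
import torch


def topk_sparse_attention(q, k, v, top_k):
    """q, k, v: (batch, heads, tokens, dim); keep only the top_k scores per query."""
    scale = q.shape[-1] ** -0.5
    attn = (q @ k.transpose(-2, -1)) * scale  # (B, H, N, N) query-key similarities

    # Keep the top_k largest scores for each query; set the rest to -inf so they
    # receive zero weight after the softmax (a hard, non-learnable variant of the
    # learnable top-k selection described in the paper).
    topk_vals, _ = attn.topk(top_k, dim=-1)
    threshold = topk_vals[..., -1:].expand_as(attn)
    attn = attn.masked_fill(attn < threshold, float("-inf"))

    attn = attn.softmax(dim=-1)
    return attn @ v  # aggregated features


if __name__ == "__main__":
    q = torch.randn(1, 4, 16, 32)
    k = torch.randn(1, 4, 16, 32)
    v = torch.randn(1, 4, 16, 32)
    out = topk_sparse_attention(q, k, v, top_k=4)
    print(out.shape)  # torch.Size([1, 4, 16, 32])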


References:

[1] Xiang Chen, Hao Li, Mingqiang Li, Jinshan Pan, "Learning A Sparse Transformer Network for Effective Image Deraining", in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2023), Vancouver, Canada, June 2023.


Paper link:

https://openaccess.thecvf.com/content/CVPR2023/papers/Chen_Learning_a_Sparse_Transformer_Network_for_Effective_Image_Deraining_CVPR_2023_paper.pdf


Code link:

https://github.com/cschenxiang/DRSformer


About the speaker:

Xiang Chen is a Ph.D. student at Nanjing University of Science and Technology, supervised by Prof. Jinshan Pan. His main research interest is image deraining. He has published several first-author papers at top international conferences such as CVPR, ECCV, and AAAI, and has received multiple honors, including the Huawei Device Camera Academic Star award.


Homepage:

https://cschenxiang.github.io/



Special thanks to the main organizer of this Paper Quick Review:

Monthly rotating AC: Di Hu (Renmin University of China)


How to participate

1. The weekly VALSE Webinar is hosted on the Bilibili live-streaming platform. Search for VALSE_Webinar on Bilibili and follow us!

Live stream:

https://live.bilibili.com/22300737

Past recordings:

https://space.bilibili.com/562085182/ 


2. The VALSE Webinar usually takes place on Wednesday evenings at 20:00 (GMT+8), though the time is occasionally adjusted to accommodate speakers in other time zones. To stay informed, please follow the VALSE WeChat official account (valse_wechat) or join VALSE QQ group S (group number: 317920537).


*Note: when applying to join a VALSE QQ group, you must provide your name, affiliation, and identity; all three are required. After joining, please set your group nickname to your real name, identity, and affiliation. Identity codes: university or research-institute staff, T; industry R&D, I; Ph.D. student, D; master's student, M.


3. The VALSE WeChat official account generally publishes the announcement for the following week's Webinar every Thursday.


4. You can also check Webinar information directly on the VALSE homepage: http://valser.org/. The slides of each Webinar talk (with the speaker's permission) are posted at the bottom of the corresponding announcement on the VALSE website.
