To help practitioners in the vision and learning community keep up with the latest developments and frontier techniques in a timely manner, VALSE has launched the Paper Preview (论文速览) column, which each week releases one or two recorded videos, each giving a detailed walkthrough of a single recent paper from a top conference or journal. This issue of the VALSE Paper Preview features work on adversarial defense via image resampling from the Centre for Frontier AI Research (CFAR), A*STAR, Singapore. The work was supervised by Dr. Qing Guo, and the video was recorded by the first author, Yue Cao.

Paper title: IRAD: Implicit Representation-driven Image Resampling against Adversarial Attacks

Authors: Yue Cao (A*STAR, Nanyang Technological University), Tianlin Li (Nanyang Technological University), Xiaofeng Cao (Jilin University), Ivor Tsang (A*STAR, Nanyang Technological University), Yang Liu (Nanyang Technological University), Qing Guo (A*STAR)

Bilibili link: https://www.bilibili.com/video/BV1JWK5eLE9y/
Copy the link into your browser, or click "Read the original" to go to the viewing page.

Abstract: We introduce a novel approach to counter adversarial attacks, namely, image resampling. Image resampling transforms a discrete image into a new one, simulating the process of scene recapturing or rerendering as specified by a geometrical transformation. The underlying rationale behind our idea is that image resampling can alleviate the influence of adversarial perturbations while preserving essential semantic information, thereby conferring an inherent advantage in defending against adversarial attacks. To validate this concept, we present a comprehensive study on leveraging image resampling to defend against adversarial attacks. We have developed basic resampling methods that employ interpolation strategies and coordinate shifting magnitudes. Our analysis reveals that these basic methods can partially mitigate adversarial attacks. However, they come with apparent limitations: the accuracy of clean images noticeably decreases, while the improvement in accuracy on adversarial examples is not substantial. We propose implicit representation-driven image resampling (IRAD) to overcome these limitations. First, we construct an implicit continuous representation that enables us to represent any input image within a continuous coordinate space. Second, we introduce SampleNet, which automatically generates pixel-wise shifts for resampling in response to different inputs. Furthermore, we can extend our approach to the state-of-the-art diffusion-based method, accelerating it with fewer time steps while preserving its defense capability. Extensive experiments demonstrate that our method significantly enhances the adversarial robustness of diverse deep models against various attacks while maintaining high accuracy on clean images. (A toy sketch of the basic resampling idea is included at the end of this post.)

References:
[1] Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy, "Explaining and harnessing adversarial examples," International Conference on Learning Representations (ICLR), 2015.
[2] Neil Anthony Dodgson, "Image resampling," Technical Report, University of Cambridge, Computer Laboratory, 1992.

Paper link: https://arxiv.org/abs/2310.11890
Code link: https://github.com/tsingqguo/irad

Speaker bio: Yue Cao is currently a Ph.D. student at Nanyang Technological University, with research interests focused on Trustworthy AI.

Special thanks to the main organizer of this Paper Preview issue:
Monthly rotating AC: Wenhan Yang (Pengcheng Laboratory)
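As promised above, here is a minimal, illustrative sketch of the "basic" resampling defense the abstract describes: every pixel's sampling coordinate is perturbed by a small random shift and the image is re-interpolated bilinearly. The function name `random_resample`, the uniform-noise shifts, and the default `shift_mag` value are assumptions made for illustration only; the authors' actual method (IRAD) instead builds an implicit continuous representation and predicts pixel-wise shifts with SampleNet (see the code link above).

```python
# Toy sketch of a basic image-resampling defense (not the authors' IRAD):
# shift each pixel's sampling coordinate by a small random amount, then
# resample with bilinear interpolation.
import torch
import torch.nn.functional as F


def random_resample(images: torch.Tensor, shift_mag: float = 0.01) -> torch.Tensor:
    """Resample a batch of images (N, C, H, W) with random pixel-wise coordinate shifts.

    shift_mag is in normalized coordinates ([-1, 1] spans the image), so 0.01
    corresponds to sub-pixel shifts on typical image sizes.
    """
    n, _, h, w = images.shape
    # Identity sampling grid in normalized [-1, 1] coordinates, shape (N, H, W, 2).
    ys = torch.linspace(-1.0, 1.0, h, device=images.device)
    xs = torch.linspace(-1.0, 1.0, w, device=images.device)
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
    grid = torch.stack((grid_x, grid_y), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    # Random pixel-wise shifts; IRAD predicts these shifts with SampleNet instead.
    shifts = (torch.rand_like(grid) * 2.0 - 1.0) * shift_mag
    # Bilinear interpolation at the shifted coordinates.
    return F.grid_sample(images, grid + shifts, mode="bilinear",
                         padding_mode="border", align_corners=True)


if __name__ == "__main__":
    x = torch.rand(2, 3, 224, 224)   # stand-in for (possibly adversarial) inputs
    print(random_resample(x).shape)  # torch.Size([2, 3, 224, 224])
```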