To help practitioners in the vision and learning community keep up with the latest developments and frontier techniques, VALSE has launched the "Paper Quick Review" series, which each week releases one or two recorded videos, each giving a detailed walkthrough of a single recent paper from a top conference or journal. This issue features a work on source-free domain adaptation from Sun Yat-sen University and Nanjing University. The work was completed by master's student Ziyi Zhang under the supervision of Associate Professor Guanbin Li, and the video is recorded by first author Ziyi Zhang.

Paper title: Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning

Authors: Ziyi Zhang (Nanjing University), Weikai Chen (Tencent America), Hui Cheng (Sun Yat-sen University), Zhen Li (The Chinese University of Hong Kong, Shenzhen), Siyuan Li (Westlake University), Liang Lin (Sun Yat-sen University), Guanbin Li (Sun Yat-sen University)

Bilibili video link:

Abstract:
We investigate a practical domain adaptation task, called source-free domain adaptation (SFUDA), where the source-pretrained model is adapted to the target domain without access to the source data. Existing techniques mainly leverage self-supervised pseudo labeling to achieve class-wise global alignment [1] or rely on local structure extraction that encourages feature consistency among neighborhoods [2]. While impressive progress has been made, both lines of methods have their own drawbacks - the "global" approach is sensitive to noisy labels while the "local" counterpart suffers from source bias. In this paper, we present Divide and Contrast (DaC), a new paradigm for SFUDA that strives to connect the good ends of both worlds while bypassing their limitations. Based on the prediction confidence of the source model, DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals under an adaptive contrastive learning framework. Specifically, the source-like samples are utilized for learning global class clustering thanks to their relatively clean labels. The more noisy target-specific data are harnessed at the instance level for learning the intrinsic local structures. We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch. Extensive experiments on VisDA, Office-Home, and the more challenging DomainNet have verified the superior performance of DaC over current state-of-the-art approaches.

Paper information:
Z. Zhang, W. Chen, H. Cheng, Z. Li, S. Li, L. Lin, and G. Li. Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning. In NeurIPS, 2022.

Paper link: https://arxiv.org/abs/2211.06612
Code link: https://github.com/ZyeZhang/DaC.git

Speaker bio:
Ziyi Zhang is a second-year master's student at the School of Artificial Intelligence, Nanjing University. His main research interests are reinforcement learning and computer vision, in particular environment-based reinforcement learning and transfer learning methods.

Special thanks to the main organizers of this Paper Quick Review:
Monthly rotating ACs: Di Lin (Tianjin University), Peng Hu (Sichuan University)

How to participate:
1. VALSE's weekly Webinar is streamed live on Bilibili; search for VALSE_Webinar on Bilibili and follow us! Live stream: https://live.bilibili.com/22300737; past videos: https://space.bilibili.com/562085182/
2. VALSE Webinars are usually held at 20:00 on Wednesdays (GMT+8), occasionally shifted to accommodate a speaker's time zone. To stay informed, follow the VALSE WeChat official account (valse_wechat) or join VALSE QQ group S (group number: 317920537).
*Note: when applying to join the VALSE QQ group, you must provide your name, affiliation, and role - all three are required. After joining, please set your group name to your real name, affiliation, and role. Roles: T for university and research-institute staff; I for industry R&D; D for Ph.D. students; M for master's students.
3. The VALSE WeChat official account usually announces the following week's Webinar on Thursdays.
4. You can also find Webinar information directly on the VALSE homepage: http://valser.org/. Slides for each Webinar talk (with the speaker's permission) are posted at the bottom of the corresponding announcement on the VALSE website.
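The two mechanisms named in the abstract - splitting target data by the source model's prediction confidence, and aligning the two resulting groups with an MMD loss - can be sketched in a few lines. This is an illustrative NumPy sketch, not the authors' implementation (see the code link above for that); the threshold `tau`, the RBF kernel, and the function names are assumptions for illustration, and the paper additionally uses a memory bank of features rather than raw batches.

```python
import numpy as np

def divide_by_confidence(probs, tau=0.95):
    """Split target samples into source-like (confident) and
    target-specific (uncertain) groups by prediction confidence.

    probs: (N, C) array of softmax outputs from the source model.
    Returns two boolean masks over the N samples.
    """
    conf = probs.max(axis=1)          # max class probability per sample
    source_like = conf >= tau
    return source_like, ~source_like

def rbf_mmd2(x, y, sigma=1.0):
    """Squared Maximum Mean Discrepancy between feature sets x and y
    under an RBF kernel - the quantity DaC minimizes to reduce the
    mismatch between source-like and target-specific features."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()
```

In training, the source-like mask would select samples for class-wise (pseudo-label) contrastive clustering, the target-specific mask for instance-level contrast, with `rbf_mmd2` applied between the two groups' features as the alignment term.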