To help practitioners in the vision and learning community keep up with the latest developments and frontier advances in the field, VALSE has launched the Paper Preview column, which releases one or two recorded videos per week, each giving a detailed walkthrough of a single recent top-conference or top-journal paper. This issue of the VALSE Paper Preview features a nighttime optical flow work from Huazhong University of Science and Technology (HUST) and Huawei. The work was supervised by Prof. Luxin Yan and Associate Prof. Yi Chang, and the video was recorded by first author Hanyu Zhou.

Paper title: Exploring the Common Appearance-Boundary Adaptation for Nighttime Optical Flow

Authors: Hanyu Zhou (HUST), Yi Chang (HUST), Haoyue Liu (HUST), Wending Yan (Huawei), Yuxing Duan (HUST), Zhiwei Shi (HUST), Luxin Yan (HUST)

Bilibili video link:

Abstract: We investigate the challenging task of nighttime optical flow, which suffers from weakened texture and amplified noise. These degradations weaken discriminative visual features, causing invalid motion feature matching. Existing methods typically employ domain adaptation to transfer knowledge from an auxiliary domain to the nighttime domain in either the input visual space or the output motion space. However, this direct adaptation is ineffective, since a large domain gap exists due to the intrinsically heterogeneous feature representations of the auxiliary and nighttime domains. To overcome this issue, we explore a common latent space as an intermediate bridge to reinforce feature alignment between the auxiliary and nighttime domains. In this work, we exploit two auxiliary domains, daytime images and events, and propose a novel common appearance-boundary adaptation framework for nighttime optical flow. In appearance adaptation, we employ intrinsic image decomposition to embed the auxiliary daytime image and the nighttime image into a reflectance-aligned common space. We find that the motion distributions of the two reflectance maps are very similar, enabling us to consistently transfer motion appearance knowledge from the daytime to the nighttime domain. In boundary adaptation, we theoretically derive the motion correlation formula between the nighttime image and accumulated events within a spatiotemporal gradient-aligned common space.
We find that the correlations of the two spatiotemporal gradient maps exhibit a significant discrepancy, enabling us to contrastively transfer boundary knowledge from the event to the nighttime domain. Moreover, appearance adaptation and boundary adaptation are complementary, since they jointly transfer global motion and local boundary knowledge to the nighttime domain. Extensive experiments verify the superiority of the proposed method.

References:
[1] Hanyu Zhou, Yi Chang, Haoyue Liu, Wending Yan, Yuxing Duan, Zhiwei Shi, Luxin Yan. Exploring the Common Appearance-Boundary Adaptation for Nighttime Optical Flow. International Conference on Learning Representations (ICLR), 2024.

Speaker bio: Hanyu Zhou is currently a Ph.D. student at Huazhong University of Science and Technology (HUST), advised by Prof. Luxin Yan and working closely with Prof. Yi Chang. Before that, he received a B.Eng. degree from Central South University (CSU) in 2019. His research interests include motion estimation, robotic vision, event cameras, and multimodal learning. His research focuses on multimodal scene motion perception in challenging scenes, with publications at ICLR, CVPR, AAAI, and ICRA. He also serves as a reviewer for several top conferences and journals, including CVPR, ECCV, TIP, and TCSVT.
Personal homepage: https://hyzhouboy.github.io/

Special thanks to the main organizers of this Paper Preview:
Quarterly AC in charge: Qian Zheng (Zhejiang University)
Monthly rotating ACs: Zunlei Feng (Zhejiang University), Bo Han (Hong Kong Baptist University)
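As a conceptual aside (this is not the authors' implementation, which is not shown in this post), the intuition behind the reflectance-aligned common space in the appearance adaptation can be illustrated with a minimal Retinex-style sketch in NumPy. The box-filter illumination estimate, the window size, and the toy images below are all assumptions for illustration only.

```python
import numpy as np

def box_blur(img, k=7):
    """Local mean via an integral image: a crude illumination estimate."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # leading zero row/column for window sums
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def retinex_decompose(img, k=7, eps=1e-6):
    """Retinex assumption: image = reflectance * illumination.
    Estimating illumination as a local mean leaves reflectance as the ratio."""
    illum = box_blur(img, k)
    return img / (illum + eps), illum

rng = np.random.default_rng(0)
day = rng.uniform(0.2, 1.0, size=(32, 32))  # toy daytime image
night = 0.1 * day                           # same scene under 10x dimmer light
r_day, _ = retinex_decompose(day)
r_night, _ = retinex_decompose(night)
# The two reflectance maps nearly coincide although the raw intensities
# differ by an order of magnitude, which is why a reflectance-aligned
# space narrows the day-night appearance gap.
print(np.abs(r_day - r_night).mean())
```

In the paper, the analogous decomposition serves to align daytime and nighttime features before motion knowledge transfer; here it only visualizes why such a common space is a better bridge than the raw intensity space.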