
VALSE Paper Review, Issue 141: Arbitrary Two-Hand Reconstruction via Attention Aggregation and Representation Decoupling

2023-10-25 14:24 | Published by: Cheng Yi (Institute of Computing Technology)


To help practitioners in the vision and learning community keep up with the latest developments and frontier techniques in the field, VALSE has launched the Paper Review series, which releases one or two recorded videos per week, each giving a detailed walkthrough of a single recent top-conference or top-journal paper. This issue of VALSE Paper Review features work on interacting two-hand pose estimation and reconstruction from Durham University and Tencent AI Lab, supervised by Dr. Shaoli Huang and Prof. Toby Breckon, and recorded by the first author, Zhengdi Yu, who was a master's student when the paper was accepted and is now a PhD student at Imperial College London.


Paper title: Attention Collaboration-based Regressor for Arbitrary Two-Hand Reconstruction

Author list:

Zhengdi Yu (Durham University), Shaoli Huang (Tencent AI Lab), Chen Fang (Tencent AI Lab), Toby Breckon (Durham University), Jue Wang (Tencent AI Lab)


Watch on Bilibili:

https://www.bilibili.com/video/BV1B84y197SD/


Paper abstract:

Reconstructing two hands from monocular RGB images is challenging due to frequent occlusion and mutual confusion. Existing methods mainly learn an entangled representation to encode two interacting hands, which is extremely fragile to impaired interaction, such as truncated hands, separate hands, or external occlusion. This paper presents ACR (Attention Collaboration-based Regressor), which makes the first attempt to reconstruct hands in arbitrary scenarios. To achieve this, ACR explicitly mitigates interdependencies between hands and between parts by leveraging center- and part-based attention for feature extraction. However, reducing interdependence helps relax the input constraints while weakening the mutual reasoning needed to reconstruct interacting hands. Thus, building on center attention, ACR also learns a cross-hand prior that handles interacting hands better. We evaluate our method on various types of hand reconstruction datasets. Our method significantly outperforms the best interacting-hand approaches on the InterHand2.6M dataset while yielding comparable performance with state-of-the-art single-hand methods on the FreiHAND dataset. Further qualitative results on in-the-wild and hand-object interaction datasets, as well as on web images and videos, demonstrate the effectiveness of our approach for arbitrary hand reconstruction.
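To make the abstract's core idea concrete, the sketch below illustrates what center- and part-based attention aggregation with decoupled per-hand regression could look like. This is a minimal, hypothetical PyTorch sketch, not the authors' implementation: the module names, tensor shapes, number of parts, and the 61-dimensional output (a common MANO pose + shape + camera parameterization) are all assumptions; see the official code link below for the real implementation.

```python
# Hypothetical sketch of center/part-based attention aggregation for
# per-hand feature extraction. All names, shapes, and the output size
# are assumptions for illustration, not the ACR authors' code.
import torch
import torch.nn as nn

class AttentionAggregationRegressor(nn.Module):
    def __init__(self, feat_dim=256, num_parts=16, mano_dim=61):
        super().__init__()
        # One center-based attention map per hand (left/right) and a set
        # of part-based attention maps, both predicted from the feature map.
        self.hand_attn = nn.Conv2d(feat_dim, 2, kernel_size=1)
        self.part_attn = nn.Conv2d(feat_dim, num_parts, kernel_size=1)
        # Per-hand regressor from aggregated features to (assumed)
        # MANO-style parameters; applied to each hand independently,
        # which is what decouples the two hands.
        self.regressor = nn.Linear(feat_dim * (1 + num_parts), mano_dim)

    def forward(self, feats):                    # feats: (B, C, H, W)
        hand_a = self.hand_attn(feats).flatten(2).softmax(-1)  # (B, 2, HW)
        part_a = self.part_attn(feats).flatten(2).softmax(-1)  # (B, P, HW)
        f = feats.flatten(2)                                   # (B, C, HW)
        # Attention-weighted pooling: one global feature per hand and one
        # feature per part (shared across hands in this simplified sketch).
        hand_feat = torch.einsum('bnh,bch->bnc', hand_a, f)    # (B, 2, C)
        part_feat = torch.einsum('bph,bch->bpc', part_a, f)    # (B, P, C)
        part_flat = part_feat.flatten(1)                       # (B, P*C)
        out = []
        for h in range(2):  # decoupled regression per hand
            out.append(self.regressor(
                torch.cat([hand_feat[:, h], part_flat], dim=-1)))
        return torch.stack(out, dim=1)           # (B, 2, mano_dim)
```

With the default sizes, `AttentionAggregationRegressor()(torch.randn(1, 256, 64, 64))` returns a (1, 2, 61) tensor of per-hand parameters. The cross-hand prior that the abstract mentions would be an additional module on top of this aggregation and is not sketched here.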


Paper information:

[1] Zhengdi Yu, Shaoli Huang, Chen Fang, Toby P. Breckon, Jue Wang, Attention Collaboration-based Regressor for Arbitrary Two-Hand Reconstruction, In CVPR, 2023.


Paper link:

https://arxiv.org/abs/2303.05938


Code link:

https://github.com/ZhengdiYu/Arbitrary-Hands-3D-Reconstruction


Speaker bio:

Zhengdi Yu is a PhD student at Imperial College London. This work was completed during his internship at the Visual Computing Center of Tencent AI Lab.



Special thanks to the main organizers of this Paper Review session:

Monthly rotating AC: Jianbo Jiao (University of Birmingham)

Quarterly rotating AC: Mang Ye (Wuhan University)


How to Participate

1. VALSE's weekly Webinar is hosted on the Bilibili live-streaming platform; search for VALSE_Webinar on Bilibili and follow us!

Live-stream address:

https://live.bilibili.com/22300737

Archive of past videos:

https://space.bilibili.com/562085182/ 


2. VALSE Webinars are usually held on Wednesday evenings at 20:00 (GMT+8), though the time occasionally shifts to accommodate speakers' time zones. To stay up to date, follow the VALSE WeChat official account (valse_wechat) or join VALSE QQ group S (group number: 317920537).


*Note: When applying to join the VALSE QQ group, you must provide your name, affiliation, and identity; all three are required. After joining, please set your screen name to your real name together with your identity and affiliation. Identity codes: university or research institution staff, T; industry R&D, I; PhD student, D; master's student, M.


3. The VALSE WeChat official account usually announces the following week's Webinar talk on Thursdays.


4. You can also check Webinar information directly on the VALSE homepage: http://valser.org/. With the speaker's permission, the slides for each Webinar talk are posted at the bottom of the corresponding announcement on the VALSE website.
