Speaker: Cewu Lu (Shanghai Jiao Tong University)
Title: 3D Pose Estimation and Knowledge Pose

Speaker: Chunyu Wang (Microsoft Research Asia)
Title: VoxelPose: Towards Multi-Camera 3D Human Pose Estimation in Wild Environment

Panelists: Cewu Lu (Shanghai Jiao Tong University), Chunyu Wang (Microsoft Research Asia), Xiaowei Zhou (Zhejiang University), Jie Zhang (Institute of Computing Technology, Chinese Academy of Sciences), Keze Wang (University of California, Los Angeles)

Panel topics:
1. How is human pose estimation related to, and different from, tasks such as face alignment (i.e., facial landmark localization) and clothing landmark detection? In which practical scenarios can human pose estimation be applied, and what challenging problems remain?
2. Many pose estimation methods assume that human bounding boxes are already given, so their performance depends heavily on pedestrian detection accuracy. How can a model account for the consistency between the two tasks of human box detection and pose estimation?
3. For the keypoint ambiguity problem in crowded scenes, can 2D and 3D training data be fully exploited? Can other tasks provide complementary information for human pose estimation? Moreover, since annotating 2D-3D human pose datasets is expensive, can self-supervised learning be used to mine information from large-scale unlabeled Internet data?
4. Multi-person pose estimation involves heavy occlusion and intertwined bodies. Humans can handle such scenes by reasoning, whereas conventional deep learning methods lack this ability. Will this become a future research hotspot, and what solutions are worth trying?
5. How can we handle complex scenes with large pose variation or unusual poses? Are hard example mining or generative models worth considering? For video-based human pose estimation, how can the computational complexity be reduced to guarantee real-time efficiency?
6. What lessons does human pose estimation offer for detecting and localizing important human organs in medical images?
* You are welcome to post topic-related questions in the comments below; the host and panelists will select some of the most popular ones and add them to the panel discussion!

Speaker: Cewu Lu (Shanghai Jiao Tong University)
Time: Thursday, August 20, 2020, 19:00 (Beijing time)
Title: 3D Pose Estimation and Knowledge Pose

Speaker bio:
Cewu Lu is a research professor at Shanghai Jiao Tong University and assistant dean of the Qing Yuan Research Institute. In 2019 he was named to MIT Technology Review's "35 Innovators Under 35" list and, in the same year, received the Qiu Shi Outstanding Young Scholar Award (the only AI recipient in the preceding three years). His long-term work spans research and system development in computer vision, intelligent robotics, and related areas. He has published more than 100 papers in venues such as Nature Machine Intelligence, TPAMI, CVPR, and ICCV, and serves as a reviewer in the AI area for Science and Nature. He is an Area Chair of CVPR 2020 and a founding executive director of the Chinese Young AI Scientists Alliance.

Homepage: mvig.sjtu.edu.cn

Abstract:
Estimating 3D human poses from a monocular RGB camera is fundamental and challenging. To address the lack of a global perspective in conventional top-down approaches, we introduce a novel form of supervision: Hierarchical Multi-person Ordinal Relations (HMOR). HMOR hierarchically encodes interaction information as ordinal relations of depths and angles, which captures body-part- and joint-level semantics while maintaining global consistency. The proposed method significantly outperforms state-of-the-art methods on publicly available multi-person 3D pose datasets. We also introduce new features in AlphaPose: it now supports full-body pose estimation (136 keypoints) and accurate tracking in real time (25 fps), achieves 76 mAP on the COCO dataset, and is the first open-source real-time pose tracker. In addition, we release a new full-body pose estimation dataset to promote the development of the community, and we propose a new "knowledge pose" built on our human activity knowledge engine.
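To make the idea of ordinal-relation supervision more concrete, here is a minimal sketch of a pairwise ordinal depth penalty in the spirit of HMOR. The function name, the ranking-loss form, and the margin parameter are illustrative assumptions rather than the paper's implementation, which additionally organizes such relations hierarchically over persons, body parts, and joints and also constrains angles.

```python
# A minimal, hypothetical sketch of a pairwise ordinal depth loss in the spirit of
# HMOR; NOT the authors' implementation. The real method organizes such relations
# hierarchically over persons, body parts and joints, and also constrains angles.
import numpy as np

def ordinal_depth_loss(pred_depth_a, pred_depth_b, gt_depth_a, gt_depth_b, margin=0.0):
    """Ranking-style penalty: if the ground truth says joint a lies in front of
    joint b, the predicted depths should preserve that ordering."""
    sign = np.sign(gt_depth_b - gt_depth_a)      # +1 if a is closer in GT, -1 if b is closer
    if sign == 0:                                # tied in GT: encourage similar predicted depths
        return abs(pred_depth_b - pred_depth_a)
    diff = sign * (pred_depth_b - pred_depth_a)  # positive when the predicted order agrees with GT
    return float(np.log1p(np.exp(-(diff - margin))))  # soft penalty, larger when the order is violated

# Toy usage: GT says person A's hip (2.0 m) is closer than person B's (3.0 m), but the
# prediction reverses the order, so the penalty exceeds that of a correctly ordered prediction.
print(ordinal_depth_loss(pred_depth_a=3.2, pred_depth_b=2.9, gt_depth_a=2.0, gt_depth_b=3.0))
```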
References:
[1] Jiefeng Li, Can Wang, Hao Zhu, Yihuan Mao, Hao-Shu Fang, Cewu Lu, "CrowdPose: Efficient Crowded Scenes Pose Estimation and A New Benchmark," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'2019), Long Beach, CA, June 2019.
[2] Hao-Shu Fang, Guansong Lu, Xiaolin Fang, Jianwen Xie, Yu-Wing Tai, Cewu Lu, "Weakly and Semi Supervised Human Body Part Parsing via Pose-Guided Knowledge Transfer," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'2018), Salt Lake City, Utah, June 2018.
[3] Hao-Shu Fang, Jinkun Cao, Yu-Wing Tai, Cewu Lu, "Pairwise Body-Part Attention for Recognizing Human-Object Interactions," in Proc. European Conference on Computer Vision (ECCV'2018), Munich, Germany, September 2018.
[4] Hao-Shu Fang, Shuqin Xie, Yu-Wing Tai, Cewu Lu, "RMPE: Regional Multi-person Pose Estimation," in Proc. IEEE International Conf. on Computer Vision (ICCV'2017), Venice, Italy, October 2017.

Speaker: Chunyu Wang (Microsoft Research Asia)
Time: Thursday, August 20, 2020, 19:30 (Beijing time)
Title: VoxelPose: Towards Multi-Camera 3D Human Pose Estimation in Wild Environment

Speaker bio:
Chunyu Wang is a senior researcher at Microsoft Research Asia. He received his Ph.D. from Peking University under the supervision of Prof. Yizhou Wang and his B.E. from Dalian University of Technology. He visited the University of California, Los Angeles (UCLA) for two years (2013 and 2015), working with Prof. Alan L. Yuille. His research focuses on 3D human pose estimation and tracking and their applications. His research results have been applied in several Microsoft products, including Microsoft Connected Store and Xiaoice.

Homepage: https://www.chunyuwang.org

Abstract:
Accurate 3D human pose estimation has been a longstanding goal in computer vision. Until now, however, it has achieved only limited success in easy scenarios such as studios with little occlusion. In this talk, I will present our recent work, VoxelPose, which allows us to reliably estimate and track people in crowded scenes. In contrast to previous efforts that need to establish cross-view correspondences from noisy and incomplete 2D pose estimates, we present an end-to-end solution that operates directly in 3D space, thereby avoiding incorrect hard decisions in the 2D space. To achieve this goal, the features from all camera views are warped and aggregated into a common 3D space and fed to a Cuboid Proposal Network (CPN) to coarsely localize all people. A Pose Regression Network (PRN) then estimates a detailed 3D pose for each proposal. The approach is robust to the occlusion that frequently occurs in practice. Without bells and whistles, it significantly outperforms the state of the art on the benchmark datasets.
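As a rough illustration of the feature warping and aggregation step described in the abstract, below is a simplified Python sketch that fills a voxel grid by projecting each voxel center into every camera view and averaging the sampled 2D heatmap responses. The build_voxel_volume function, its projection interface, and the cuboid_proposal_network / pose_regression_network calls are assumptions for illustration only, not the released VoxelPose code.

```python
# A simplified, hypothetical sketch of the "warp and aggregate into a common 3D space"
# step; NOT the released VoxelPose code. The projection interface, nearest-neighbour
# sampling and downstream network calls are assumptions made for illustration.
import numpy as np

def build_voxel_volume(heatmaps, projections, grid):
    """heatmaps:    list of per-view 2D joint heatmaps, each of shape (J, H, W)
       projections: list of functions mapping a 3D world point to (u, v) pixel coordinates
       grid:        (N, 3) array of voxel-center coordinates in the common world space
       returns:     (N, J) per-voxel features, averaged over all camera views."""
    num_joints = heatmaps[0].shape[0]
    volume = np.zeros((grid.shape[0], num_joints))
    for hm, project in zip(heatmaps, projections):
        for i, xyz in enumerate(grid):
            u, v = project(xyz)                                   # warp the voxel center into this view
            u = int(np.clip(round(u), 0, hm.shape[2] - 1))
            v = int(np.clip(round(v), 0, hm.shape[1] - 1))
            volume[i] += hm[:, v, u]                              # sample the heatmap response
    return volume / len(heatmaps)                                 # aggregate (average) across views

# Downstream stages (interfaces assumed, shown only as commented pseudocode):
# proposals = cuboid_proposal_network(volume)                     # coarsely localize each person
# poses = [pose_regression_network(volume, p) for p in proposals] # detailed 3D pose per proposal
```

The two-stage structure mirrors the abstract: proposals are generated first from the aggregated volume and a detailed 3D pose is then regressed inside each proposal, so no hard cross-view matching decisions ever have to be made in 2D.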
References:
[1] Hanyue Tu, Chunyu Wang, Wenjun Zeng, "VoxelPose: Towards Multi-Camera 3D Human Pose Estimation in Wild Environment," in Proc. European Conference on Computer Vision (ECCV'2020), 2020.
[2] Haibo Qiu, Chunyu Wang, Wenjun Zeng, "Cross View Fusion for 3D Human Pose Estimation," in Proc. IEEE International Conf. on Computer Vision (ICCV'2019), 2019.
[3] Zhe Zhang, Chunyu Wang, Wenhu Qin, Wenjun Zeng, "Fusing Wearable IMUs with Multi-View Images for Human Pose Estimation: A Geometric Approach," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'2020), 2020.
[4] Rongchang Xie, Chunyu Wang, Yizhou Wang, "MetaFuse: A Pre-trained Fusion Model for Human Pose Estimation," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'2020), 2020.

Panelist: Xiaowei Zhou (Zhejiang University)
Bio:
Xiaowei Zhou is a "Hundred Talents Program" research professor and Ph.D. supervisor at the State Key Laboratory of CAD&CG, Zhejiang University. He received his bachelor's degree from Zhejiang University in 2008 and his Ph.D. from the Hong Kong University of Science and Technology in 2013. From 2014 to 2017 he was a postdoctoral researcher at the GRASP Laboratory, University of Pennsylvania; in 2017 he was selected for a national-level young talent program and joined Zhejiang University. His research focuses on 3D computer vision and its applications in augmented reality, robotics, and related areas. He was a CVPR 2019 Best Paper Finalist and received the CVPR 2018 3DHUMANS Workshop Best Poster Award and the first prize of the "Lu Zengyong CAD&CG High-Tech Award". He serves as an Area Chair of CVPR 2021 and ACCV 2021 and as a Senior Program Committee member of AAAI 2020.
Homepage: http://xzhou.me

Panelist: Jie Zhang (Institute of Computing Technology, Chinese Academy of Sciences)
Bio:
Jie Zhang is an associate professor and master's supervisor at the Institute of Computing Technology, Chinese Academy of Sciences. His research covers deep learning and its applications in computer vision, with a focus on fundamental and applied work in face recognition, image segmentation, weakly/semi-supervised learning, and generative adversarial learning. He has published more than ten academic papers in well-known computer vision conferences and journals such as ICCV, CVPR, ECCV, and PR. His research results have been deployed in products including Huawei's mainstream phones (Mate 20, P20, Nova 3), remote social-security identity verification, and security surveillance, and his team received the Huawei Excellent Cooperation Award two years in a row. He is one of the authors of the open-source SeetaFace engine and has won one championship and three runner-up placements in international computer vision competitions.
Homepage: https://vipl.ict.ac.cn/people/~jzhang

Panelist: Keze Wang (University of California, Los Angeles)
Bio:
Keze Wang received his B.S. degree in software engineering from Sun Yat-sen University, Guangzhou, China, in 2012, and a dual Ph.D. degree from Sun Yat-sen University and The Hong Kong Polytechnic University under the supervision of Prof. L. Lin and Prof. L. Zhang. He is currently a postdoctoral scholar in the VCLA lab at UCLA. He has published around 20 academic papers in venues including TPAMI, TNNLS, CVPR, and TIP. He has rich experience in human-centric analysis (including 2D/3D human pose estimation), semi-supervised learning, and self-supervised learning.
Homepage: http://kezewang.com

Host: Hao Liu (Ningxia University)
Host bio:
Hao Liu received his Ph.D. in Control Science and Engineering from Tsinghua University. He is currently an associate professor, master's supervisor, and deputy head of the Computer Science Department in the School of Information Engineering at Ningxia University. His research interests include computer vision and pattern recognition. He has published more than 20 academic papers in authoritative journals and conferences such as IEEE T-PAMI and T-IP, including two first-author regular papers in T-PAMI. In 2020, nominated through the Chinese Association for Artificial Intelligence (CAAI), he was selected for the fifth Young Elite Scientists Sponsorship Program of the China Association for Science and Technology; he received the 2019 CAAI Outstanding Doctoral Dissertation Award and the 2018 Tsinghua University Outstanding Doctoral Dissertation Award. He serves as an Area Chair of IEEE ICME 2020 and a reviewer for T-PAMI, T-IP, CVPR, and AAAI. He is currently the principal investigator of an NSFC Young Scientists Fund project and a CAS "Light of West China" Young Scholar (Class A) project.
Homepage: https://haoliuphd.github.io/

How to join VALSE online Webinar 20-21:
Long-press or scan the QR code below to follow the "VALSE" WeChat official account (valse_wechat), then send "21期" to the account to receive the live-stream link.

Special thanks to the main organizers of this Webinar:
Organizing AC: Hao Liu (Ningxia University)
Responsible AC: Xinggang Wang (Huazhong University of Science and Technology)

How to participate:
1. VALSE Webinars are held on an online live-streaming platform. During the event, the speaker uploads slides or shares the screen; attendees can see the slides, hear the speaker's voice, and interact with the speaker through the chat function.
2. To participate, follow the VALSE WeChat official account (valse_wechat) or join a VALSE QQ group. Groups A through N are currently full; except for speakers and other guests, you may only apply to join VALSE group O (group number 1149026774).
* Note: When applying to join a VALSE QQ group, you must provide your name, affiliation, and identity; all three are required. After joining, please set your nickname to your real name, identity, and affiliation. Identity codes: T for university and research institute staff, I for industry R&D, D for doctoral students, M for master's students.
3. About 5 minutes before the event starts, the speaker opens the live stream; attendees just click the link to join. Windows PCs, Macs, mobile phones, and other devices are supported.
4. During the event, please refrain from posting irrelevant messages so as not to disrupt the session.
5. If you cannot hear the audio or see the video during the event, leaving and rejoining usually resolves the problem.
6. Please join from a fast network connection, preferably a wired one.
7. The VALSE WeChat official account publishes the announcement and live-stream link for the next week's Webinar every Thursday.
8. With the speaker's permission, the Webinar slides are posted as [slides] at the bottom of each announcement on the VALSE website.
9. With the speaker's permission, Webinar videos are uploaded to the VALSE channels on iQIYI, Bilibili, and Xigua Video; search for "Valse Webinar" to watch them.

Chunyu Wang [slides]