Speaker 1: Jiangbo Lu (ADSC of UIUC, Singapore)
Title: High-Quality Visual Correspondence Estimation with Simple, Generic and Efficient Techniques
Visual correspondence estimation is a cornerstone of numerous computer vision, graphics and robotics tasks. For instance, it can be used to estimate object motion, camera trajectory, or 3-D scene geometry, or to infer semantic labels from scenes. Correspondence algorithms hence serve as key building blocks for diverse high-level applications, including autonomous vehicles and robotics, urban modelling and monitoring, video surveillance, computational photography, and augmented reality. However, several key challenges remain for high-quality visual correspondence estimation, and diversified applications and performance requirements make the design of correspondence algorithms even more challenging. This talk will introduce a series of visual correspondence techniques recently developed by our group. We highlight that simple, generic and global formulations, when combined with efficient and effective optimization algorithms, are not only elegant but also highly competitive with, and often advantageous over, several complex and task-specific approaches. Specifically, I will present 1) a discrete pixel-labeling approach to dense visual correspondence estimation (e.g. stereo and optical flow) based on Markov random fields (MRFs) [1,2,3], 2) a unifying optimization framework for fast guided global interpolation that takes e.g. sparse depth or motion data as input [4], and 3) a coherence-based reliable feature matcher over wide baselines, important for e.g. camera pose estimation and 3-D reconstruction [5,6]. The talk will focus on the dense correspondence techniques.
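To make the discrete pixel-labeling idea concrete, here is a minimal illustrative sketch, not the SPM-BP or PatchMatch Filter algorithms themselves: stereo matching cast as disparity labeling under a truncated-linear 1-D MRF per scanline, minimized exactly with dynamic programming (Viterbi). All parameter values and names here are illustrative.

```python
import numpy as np

def scanline_stereo(left, right, max_disp=16, lam=2.0, trunc=4.0):
    """Toy discrete pixel-labeling stereo for a rectified grayscale pair.

    Each pixel's label is its disparity d in [0, max_disp). Per scanline,
    the 1-D MRF energy
        E(d) = sum_x |L(x) - R(x - d_x)|
             + lam * sum_x min(|d_x - d_{x+1}|, trunc)
    is minimized exactly by dynamic programming.
    """
    H, W = left.shape
    D = max_disp
    disp = np.zeros((H, W), dtype=np.int64)
    labels = np.arange(D)
    # Truncated-linear pairwise (smoothness) cost between all label pairs.
    V = lam * np.minimum(np.abs(labels[:, None] - labels[None, :]), trunc)
    for y in range(H):
        # Data cost: absolute intensity difference (invalid shifts get a high cost).
        data = np.full((W, D), 1e3)
        for d in range(D):
            data[d:, d] = np.abs(left[y, d:] - right[y, : W - d])
        # Forward pass: cumulative cost and backpointers.
        cost = data[0].copy()
        back = np.zeros((W, D), dtype=np.int64)
        for x in range(1, W):
            total = cost[:, None] + V          # (prev label, cur label)
            back[x] = np.argmin(total, axis=0)
            cost = data[x] + np.min(total, axis=0)
        # Backtrack the minimum-energy labeling along this scanline.
        disp[y, W - 1] = np.argmin(cost)
        for x in range(W - 2, -1, -1):
            disp[y, x] = back[x + 1, disp[y, x + 1]]
    return disp
```

On a synthetic pair where the left image is the right image shifted by 3 pixels, the recovered labeling is 3 across the valid region; the methods of [1,2,3] extend this basic labeling view to 2-D grids, continuous label spaces, and edge-aware filtering.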
[1] Y. Li, D. Min, M. S. Brown, M. N. Do, and J. Lu, “SPM-BP: Sped-up PatchMatch Belief Propagation for Continuous MRFs,” ICCV 2015. (Oral)
[2] J. Lu, Y. Li, H. Yang, D. Min, W. Eng, and M. N. Do, “PatchMatch Filter: Edge-Aware Filtering Meets Randomized Search for Correspondence Field Estimation,” TPAMI 2016; CVPR 2013. (Oral)
[3] H. Yang, W.-Y. Lin, and J. Lu, “Daisy Filter Flow: A Generalized Discrete Approach to Dense Correspondences,” CVPR 2014.
[4] Y. Li, D. Min, M. N. Do, and J. Lu, “Fast Guided Global Interpolation for Depth and Motion,” ECCV 2016. (Spotlight)
[5] W.-Y. Lin, S. Liu, N. Jiang, M. N. Do, P. Tan, and J. Lu, “RepMatch: Robust Feature Matching and Pose for Reconstructing Modern Cities,” ECCV 2016.
[6] W.-Y. Lin, M. Cheng, J. Lu, H. Yang, M. N. Do, and P. H. S. Torr, “Bilateral Functions for Global Motion Modeling,” ECCV 2014.
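For the guided global interpolation theme, the sketch below shows a generic guidance-weighted least-squares densification of sparse depth or motion samples. This is only the global model in the spirit of [4], not the paper's fast cascaded scheme; the weight form and all parameter values are assumptions for illustration.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def guided_interpolate(guide, sparse_vals, mask, lam=5.0, sigma=0.1):
    """Densify sparse samples by solving a guidance-weighted WLS system:

        min_u  sum_i m_i (u_i - v_i)^2  +  lam * sum_{i~j} w_ij (u_i - u_j)^2,
        w_ij = exp(-(g_i - g_j)^2 / (2 sigma^2)),

    where g is the guidance image, v the sparse samples, m the sample mask.
    Smoothing is strong within regions of similar guidance values and weak
    across guidance edges, so samples propagate without bleeding over edges.
    """
    H, W = guide.shape
    n = H * W
    idx = np.arange(n).reshape(H, W)
    g = guide.ravel()
    rows, cols, wts = [], [], []
    # 4-neighbor grid edges with guidance-dependent affinity weights.
    for i0, j0 in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
        i0, j0 = i0.ravel(), j0.ravel()
        w = np.exp(-(g[i0] - g[j0]) ** 2 / (2 * sigma ** 2))
        rows += [i0, j0]; cols += [j0, i0]; wts += [w, w]
    A = sp.csr_matrix((np.concatenate(wts),
                       (np.concatenate(rows), np.concatenate(cols))), shape=(n, n))
    L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A   # weighted graph Laplacian
    C = sp.diags(mask.ravel().astype(float))              # data-term weights
    b = (mask * sparse_vals).ravel()
    u = spsolve((C + lam * L).tocsc(), b)                 # sparse SPD linear solve
    return u.reshape(H, W)
```

On a guidance image with two constant regions and one sample in each, the solution fills each region with its own sample value, since the cross-edge weights are negligible.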
Jiangbo Lu is a Senior Research Scientist with the Advanced Digital Sciences Center (ADSC), a Singapore-based research center of the University of Illinois at Urbana-Champaign (UIUC). As the first technical staff member to join ADSC, he has been leading and working on use-inspired research projects that span basic research, applied research, and commercialization of technology. He has served, and continues to serve, as PI and Co-PI for several research and technology commercialization projects, most recently the core research project “Visual Modeling and Analytics of Dynamic Environments for the Masses”. He was an Associate Editor of IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) from 2012 to 2016, and received the 2012 TCSVT Best Associate Editor Award. He and his team won a DEMOguru Award at the DEMO Asia 2012 conference and the Best Paper Award at the IEEE ICCV 2009 Workshop on Embedded Computer Vision, among other honours. He is a Senior Member of the IEEE. He received his Ph.D. degree from the University of Leuven, Belgium, in 2009. His research interests include computer vision, visual computing, robotic vision, and computational imaging.