VALSE


20150527-16 Shuai Zheng: ImageSpirit - Verbal Guided Image Parsing

2015-5-24 19:28 | Posted by: Haiyong Zheng (Ocean University of China) | Views: 6260 | Comments: 0

Speaker 2: Shuai Zheng (University of Oxford)
Host: Haiyong Zheng (Ocean University of China)
Talk title: ImageSpirit: Verbal Guided Image Parsing (slides: http://valser.org/webinar/slide/slides/20150527/ShuaiZheng_Valse20150527.pdf)
Time: 21:00, May 27, 2015 (Beijing time)

Related publications:
[1] ImageSpirit: Verbal Guided Image Parsing, Ming-Ming Cheng, Shuai Zheng, Wen-Yan Lin, Vibhav Vineet, Paul Sturgess, Nigel Crook, Niloy Mitra, Philip Torr, ACM Transactions on Graphics (ACM TOG), 2014.
[2] Dense Semantic Image Segmentation with Objects and Attributes, Shuai Zheng, Ming-Ming Cheng, Jonathan Warrell, Paul Sturgess, Vibhav Vineet, Carsten Rother, Philip Torr, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
Abstract: Humans describe images in terms of nouns and adjectives, while algorithms operate on images represented as sets of pixels. Bridging this gap between how humans would like to access images versus their typical representation is the goal of image parsing, which involves assigning object and attribute labels to each pixel. In this work we propose treating nouns as object labels and adjectives as visual attribute labels. This allows us to formulate the image parsing problem as one of jointly estimating per-pixel object and attribute labels from a set of training images. We propose an efficient (interactive-time) solution. Using the extracted labels as handles, our system empowers a user to verbally refine the results. This enables hands-free parsing of an image into pixel-wise object/attribute labels that correspond to human semantics. Verbally selecting objects of interest enables a novel and natural interaction modality that can possibly be used to interact with new-generation devices (e.g. smart phones, Google Glass, living room devices). We demonstrate our system on a large number of real-world images with varying complexity. To help understand the trade-off compared to traditional mouse-based interactions, results are reported for both a large-scale quantitative evaluation and a user study. The related publication has been published in ACM Transactions on Graphics (TOG) and will be presented at ACM SIGGRAPH 2015.
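To make the label structure in the abstract concrete, the following is a minimal Python/NumPy sketch, not the authors' CRF-based method, of how per-pixel object (noun) labels and attribute (adjective) labels can coexist and be queried by a verbal command. The object and attribute names, the random scores, and the 0.5 threshold are all illustrative assumptions, not values from the paper.

import numpy as np

OBJECTS = ["bed", "cabinet", "wall", "floor"]      # hypothetical noun label set
ATTRIBUTES = ["wooden", "white", "textured"]       # hypothetical adjective label set

H, W = 4, 6                                        # tiny toy image

# Stand-ins for whatever per-pixel classifier or CRF produces the scores.
rng = np.random.default_rng(0)
object_scores = rng.random((H, W, len(OBJECTS)))       # (H, W, n_obj)
attribute_scores = rng.random((H, W, len(ATTRIBUTES))) # (H, W, n_attr)

# Object labels are mutually exclusive: one arg-max label per pixel.
object_map = object_scores.argmax(axis=-1)             # (H, W) integer labels

# Attribute labels are not exclusive: threshold each one independently per pixel.
attribute_map = attribute_scores > 0.5                 # (H, W, n_attr) booleans

# A verbal command such as "refine the wooden bed" can then be grounded by
# selecting pixels whose labels match both the noun and the adjective.
wooden_bed_mask = (object_map == OBJECTS.index("bed")) & \
                  attribute_map[..., ATTRIBUTES.index("wooden")]
print("pixels selected:", int(wooden_bed_mask.sum()))

The point of the sketch is only the factorization: one multi-class object field plus several binary attribute fields per pixel, which is what lets a spoken noun-adjective phrase pick out a region without mouse interaction.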
Speaker bio: Shuai Zheng (Kyle) is currently a DPhil student in the Torr Vision Group at Oxford, working on computer vision and machine learning with Professor Philip Torr. Before that, he worked with Professor Kaiqi Huang in Professor Tieniu Tan's group at the National Laboratory of Pattern Recognition (NLPR). He obtained an MEng in Pattern Recognition from the Chinese Academy of Sciences and a BEng in Information Engineering from the Beijing Institute of Technology. His research interests include semantic image segmentation, object recognition, probabilistic graphical models, and large-scale deep learning. http://www.robots.ox.ac.uk/~szheng/

