
20180328-6 仉尚航: Deep Understanding of Urban Traffic from Multiple City Cameras



Speaker: 仉尚航 Shanghang Zhang (Carnegie Mellon University)

Time: March 28, 2018 (Wednesday), 20:00 Beijing time

Title: Deep Understanding of Urban Traffic from Multiple City Cameras

Host: 杨恒 (Cambridge)


Abstract:

Deep understanding of urban traffic is of great significance for traffic management and route planning. In this talk, I introduce our research on extracting vehicle counts from streaming real-time videos captured by multiple low-resolution city cameras. The large-scale videos from these cameras have low spatial and temporal resolution, heavy occlusion, large perspective, and variable environmental conditions, which render most existing methods ineffective. To overcome these challenges, we propose the FCN-rLSTM network, which jointly estimates vehicle density and vehicle count by connecting fully convolutional networks (FCN) with long short-term memory (LSTM) networks in a residual learning fashion. This design leverages the strengths of FCNs for pixel-level spatial prediction and the strengths of LSTMs for learning complex temporal dynamics. The residual learning connection reformulates vehicle count regression as learning residual functions with reference to the sum of densities in each frame, which significantly accelerates network training.

To adapt the deep counting model to multiple city cameras, we further propose a new generalization bound for multi-camera domain adaptation in the setting where there are multiple cameras with labeled instances and one target camera with unlabeled instances. Interestingly, our theory also leads to an efficient learning strategy using adversarial neural networks: we show how to interpret it as learning feature representations that are invariant to the shifts across cameras while still being discriminative for the counting task. To this end, we propose two models: the first directly optimizes our bound, while the second is a smoothed approximation of the first, leading to a more data-efficient and task-adaptive model. The optimization problems of both models are minimax saddle-point problems that can be solved by adversarial training.

To evaluate the proposed methods, we collected and labeled a large-scale traffic video dataset containing 60 million frames from 212 webcams. Experimental results demonstrate the effectiveness and robustness of the proposed methods.
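
The two ideas in the abstract can be illustrated with short, hedged sketches. First, a minimal PyTorch sketch of the FCN-rLSTM idea as described above: an FCN regresses a per-frame density map, an LSTM models temporal dynamics across frames, and a residual connection adds the LSTM's correction to the integral of the density map to produce the count. The backbone depth, pooling size, and layer widths here are illustrative assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn

class FCNrLSTMSketch(nn.Module):
    def __init__(self, hidden_size=100):
        super().__init__()
        # Toy fully convolutional backbone that predicts a 1-channel density map
        # at the input resolution (the real FCN is much deeper).
        self.fcn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )
        # LSTM over per-frame features (here simply the density map pooled to 8x8)
        # to capture temporal dynamics across frames.
        self.pool = nn.AdaptiveAvgPool2d((8, 8))
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden_size, batch_first=True)
        self.count_head = nn.Linear(hidden_size, 1)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W)
        b, t, c, h, w = frames.shape
        x = frames.reshape(b * t, c, h, w)
        density = self.fcn(x)                                    # (b*t, 1, H, W)
        density_sum = density.sum(dim=(1, 2, 3)).reshape(b, t)   # integral per frame

        feats = self.pool(density).reshape(b, t, -1)             # (b, t, 64)
        lstm_out, _ = self.lstm(feats)
        residual = self.count_head(lstm_out).squeeze(-1)         # (b, t)

        # Residual connection: the LSTM learns a correction to the density sum,
        # so count regression becomes a residual function of the per-frame density.
        count = density_sum + residual
        return density.reshape(b, t, 1, h, w), count

# Illustrative forward pass; training would combine a pixel-wise density loss
# with a count loss on the second output.
maps, counts = FCNrLSTMSketch()(torch.randn(2, 5, 3, 64, 64))

Second, the multi-camera adaptation is described as a minimax saddle-point problem solved by adversarial training. A common way to implement such an objective (an assumption here, not necessarily the authors' exact formulation) is a gradient-reversal layer: a camera classifier learns to identify which camera a feature came from, while the reversed gradient pushes the feature extractor toward camera-invariant yet count-discriminative representations.

import torch
import torch.nn as nn
from torch.autograd import Function

class GradReverse(Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip (and scale) the gradient flowing back into the feature extractor.
        return -ctx.lambd * grad_output, None

def camera_adversarial_loss(features, camera_ids, classifier, lambd=1.0):
    # features: (N, D) per-frame features from the counting network;
    # camera_ids: (N,) integer id of the camera each frame came from
    # (the labeled source cameras plus the unlabeled target camera).
    logits = classifier(GradReverse.apply(features, lambd))
    return nn.functional.cross_entropy(logits, camera_ids)

# Illustrative usage with random tensors: 3 source cameras plus 1 target camera.
classifier = nn.Linear(64, 4)
feats = torch.randn(8, 64, requires_grad=True)
loss = camera_adversarial_loss(feats, torch.randint(0, 4, (8,)), classifier)
loss.backward()   # gradients reaching `feats` are reversed by GradReverse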


Related references:

1. Understanding Traffic Density from Large-Scale Web Camera Data. Shanghang Zhang, Guanhang Wu, Joao P. Costeira, José M. F. Moura. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

2. Multiple Source Domain Adaptation with Adversarial Learning. Shanghang Zhang, Han Zhao, Guanhang Wu, Joao P. Costeira, José M. F. Moura, Geoffrey J. Gordon. International Conference on Learning Representations (ICLR), invited to workshop, 2018.

3. FCN-rLSTM: Deep Spatio-Temporal Neural Networks for Vehicle Counting in City Cameras. Shanghang Zhang, Guanhang Wu, Joao P. Costeira, José M. F. Moura. International Conference on Computer Vision (ICCV), 2017.

4. Traffic Flow from a Low Frame Rate City Camera. Evgeny Toropov, Liangyan Gui, Shanghang Zhang, Satwik Kottur, José M. F. Moura. IEEE International Conference on Image Processing (ICIP), 2015.


Speaker bio:

Shanghang Zhang is currently a fifth-year PhD student at Carnegie Mellon University. Her research interests include computer vision and deep learning. She has been working on vehicle detection and counting, salient object segmentation, and image synthesis with GANs. She is the recipient of Adobe Academic Collaboration Funding, the Qualcomm Innovation Fellowship (QInF) Competition Finalist Award, and the Chiang Chen Overseas Graduate Fellowship. She serves as a reviewer for PLOS ONE, CVPR, ICCV, ECML-PKDD, CIS-RAM, etc. Prior to joining CMU, Shanghang completed her Master's degree at Peking University under the supervision of Prof. Wen Gao and Prof. Xiaodong Xie. She was also fortunate to intern at Adobe Research, working with Xiaohui Shen, Zhe Lin, and Radomír Méch.

Homepage:

https://www.shanghangzhang.com/


Special thanks to the main organizers of this Webinar:

VOOC committee member in charge: 杨恒 (Cambridge)

VODB coordinating director: 禹之鼎 (NVIDIA Research)


And here is a little Easter egg for everyone, please enjoy!


Important things are worth saying three times:

Some registration slots will be opened after the VALSE online Webinar on March 28!

Some registration slots will be opened after the VALSE online Webinar on March 28!

Some registration slots will be opened after the VALSE online Webinar on March 28!


Graduate students who plan to attend are especially reminded to confirm with their advisors as early as possible whether they may participate, so that they can register in time.


The VALSE organizing committee urges registered participants who already know they cannot attend to cancel their registration promptly, so that we can free up more registration slots for teachers and students who are sure to attend.


Thank you all for your understanding!


VALSE registration URL: http://valser.org/reg/2018/


Please copy the link into your browser to open it.


How to participate:

1. VALSE Webinar events are held on an online live-streaming platform. During the event, the speaker uploads slides or shares their screen; the audience can see the slides, hear the speaker's voice, and interact with the speaker through the chat function.

2. To participate, please follow the VALSE WeChat official account: valse_wechat, or join a VALSE QQ group (groups A, B, C, D, E, F, and G are currently full; except for speakers and other invited guests, you can only apply to join VALSE group H, group number: 701662399). The live-stream link will be posted in the VALSE WeChat account and the VALSE QQ groups on the day of the talk (every Wednesday).

*Note: when applying to join a VALSE QQ group, you must provide your name, affiliation, and status for verification; all three are required. After joining, please use your real name in the format name-status-affiliation. Status codes: T for faculty and staff at universities and research institutes; I for industry R&D; D for PhD students; M for Master's students.

3. About 10 minutes before the event starts, the speaker will open the live stream; the audience can join by clicking the live-stream link. Windows PCs, Macs, mobile phones, and other devices are supported.

4. During the event, please do not send virtual flowers or tips, and do not post irrelevant messages, so as not to disrupt the event.

5. If you cannot hear the audio or see the video during the event, exiting and re-entering usually solves the problem.

6. We strongly recommend joining from a fast network connection, preferably wired.

7. Every Monday, the VALSE WeChat official account posts a summary and the video of the previous week's Webinar (with the speaker's permission), and every Thursday it publishes the announcement of the next week's Webinar.
