
20180523-14 Jianchao Yang: WSNet: Learning Compact and Efficient Networks through Weight Sampling

2018-5-17 17:35 | Publisher: 程一-计算所 | Views: 3193 | Comments: 0

Summary: Speaker: Jianchao Yang (Toutiao AI Lab). Time: Wednesday, May 23, 2018, 11:00 AM (Beijing time). Title: WSNet: Learning Compact and Efficient Networks through Weight Sampling. Host: 庄连生 (University of Science and Technology of China)

Speaker: Jianchao Yang (Toutiao AI Lab)


Title: WSNet: Learning Compact and Efficient Networks through Weight Sampling



Dr. Jianchao Yang is currently director of Toutiao AI Lab in Silicon Valley. His expertise and research interests are in computer vision, machine learning, deep learning, and image/video processing. Before joining Toutiao, Jianchao was a manager and Principal Research Scientist at Snap, where he was one of the founding members of Snap Research, started in 2015. Prior to Snap, he was a Research Scientist at Adobe Research. He obtained his Ph.D. under the supervision of Professor Thomas Huang at the University of Illinois at Urbana-Champaign in 2011. He has published over 80 technical papers in top conferences and journals, which have attracted over 10K citations from the community. He is the recipient of the Best Student Paper Award at ICCV 2011 and was a Best Paper finalist at ECCV 2016. He and his collaborators have won multiple international competitions and challenges, including NIST VID, PASCAL VOC, ImageNet, and WebVision.



In this talk, we present a novel network architecture, termed WSNet, for learning compact and efficient deep convolutional neural networks. Existing approaches learn model parameters independently and, when needed, apply various model compression techniques to reduce model size. Instead, based on the observation that model parameters are highly redundant, WSNet proposes to learn a compact model by sampling convolution filters from a compact set of learnable parameters, which naturally enforces parameter sharing throughout the learning process. We demonstrate that our novel weight sampling strategy promotes both weight and computation sharing favorably. With the new architecture, we can learn smaller and more efficient networks with competitive accuracy compared to baseline conventional networks. We apply WSNet to both 1D CNNs and 2D CNNs for various recognition tasks. Extensive experiments on multiple audio classification datasets and ImageNet verify the effectiveness of WSNet. For 1D CNNs, our new models are up to 180X smaller and theoretically up to 16X faster than well-established baselines, without noticeable performance drop. For 2D CNNs, WSNet achieves comparable performance with state-of-the-art compact networks, but with fewer parameters and lower computation cost.
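The core idea above, sampling convolution filters as overlapping windows of one shared weight vector rather than storing an independent filter bank, can be sketched in a few lines of NumPy. This is a minimal illustration of the weight-sharing mechanism for a 1D convolution layer, not the paper's actual implementation; the names (`shared_weights`, `stride`) and the specific sizes are assumptions chosen for the example.

```python
import numpy as np

def sample_filters(shared_weights, num_filters, filter_len, stride):
    """Extract num_filters overlapping windows of length filter_len from a
    shared 1D weight vector, advancing by `stride` each time. Neighboring
    filters share filter_len - stride weights, enforcing parameter sharing."""
    assert len(shared_weights) >= (num_filters - 1) * stride + filter_len
    return np.stack([shared_weights[i * stride : i * stride + filter_len]
                     for i in range(num_filters)])

num_filters, filter_len, stride = 64, 9, 2
shared_len = (num_filters - 1) * stride + filter_len  # 135 shared weights
shared = np.random.randn(shared_len)

filters = sample_filters(shared, num_filters, filter_len, stride)
dense_params = num_filters * filter_len  # 576 weights in a conventional layer
print(filters.shape)                                     # (64, 9)
print(f"compression: {dense_params / shared_len:.1f}x")  # 4.3x fewer parameters
```

Because every filter is a view into the same learnable vector, gradients from all filters accumulate into the shared weights during training; shrinking the stride (or the shared vector) trades accuracy for a higher compression ratio, which is how the reported model-size reductions are obtained.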







1. VALSE Webinars are held on a live streaming platform. During the session, the speaker uploads slides or shares the screen; attendees can view the slides, hear the speaker's audio, and interact with the speaker through the chat function.

2. To participate, follow the VALSE WeChat public account valse_wechat, or join a VALSE QQ group (groups A through G are currently full; apart from speakers and other invited guests, you may only apply to join VALSE group H, group number: 701662399).

*Note: When applying to join a VALSE QQ group, you must provide your name, affiliation, and status for verification; all three are required. After joining, please use your real name, in the format name-status-affiliation. Status codes: university and research institute staff: T; industry R&D: I; Ph.D. students: D; Master's students: M.







