VALSE


20180523-14 Jianchao Yang: WSNet: Learning Compact and Efficient Networks through Weight Sampling

Posted: 2018-5-17 17:35 by 程一-计算所


Speaker: Jianchao Yang (Toutiao AI Lab)

Time: Wednesday, May 23, 2018, 11:00 AM (Beijing Time)

Title: WSNet: Learning Compact and Efficient Networks through Weight Sampling

Host: Liansheng Zhuang (University of Science and Technology of China)


Speaker Bio:

Dr. Jianchao Yang is currently director of Toutiao AI Lab in Silicon Valley. His expertise and research interests are in computer vision, machine learning, deep learning, and image/video processing. Before joining Toutiao, he was a manager and Principal Research Scientist at Snap, where he was one of the founding members of Snap Research, established in 2015. Prior to Snap, he was a Research Scientist at Adobe Research. He obtained his Ph.D. in 2011 under the supervision of Professor Thomas Huang at the University of Illinois at Urbana-Champaign. He has published over 80 technical papers in top conferences and journals, which have attracted over 10K citations from the community. He is the recipient of the Best Student Paper Award at ICCV 2011 and was a Best Paper finalist at ECCV 2016. He and his collaborators have won multiple international competitions and challenges, including NIST VID, PASCAL VOC, ImageNet, and WebVision.


Homepage:

http://www.ifp.illinois.edu/~jyang29/


Abstract:

In this talk, we present a novel network architecture, termed WSNet, for learning compact and efficient deep convolutional neural networks. Existing approaches learn model parameters independently and, when needed, apply various forms of model compression to reduce model size. Instead, based on the observation that model parameters are highly redundant, WSNet learns a compact model by sampling convolution filters from a compact set of learnable parameters, which naturally enforces parameter sharing throughout the learning process. We demonstrate that our novel weight sampling strategy promotes both weight and computation sharing favorably. With the new architecture, we can learn smaller and more efficient networks with accuracy competitive with conventional baseline networks. We apply WSNet to both 1D CNNs and 2D CNNs for various recognition tasks. Extensive experiments on multiple audio classification datasets and ImageNet verify the effectiveness of WSNet. For 1D CNNs, our new models are up to 180X smaller and theoretically up to 16X faster than well-established baselines, without noticeable performance drop. For 2D CNNs, WSNet achieves performance comparable to state-of-the-art compact networks, but with fewer parameters and lower computation cost.
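The core mechanism, sampling each convolution filter as an overlapping window of one shared compact parameter vector, lends itself to a short sketch. The PyTorch module below is a minimal illustration under stated assumptions, not the authors' implementation: the class name WeightSampledConv1d, the sample_stride parameter, and the wrapless window indexing are all illustrative choices, and the published method includes further sharing and computation-reuse techniques beyond this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightSampledConv1d(nn.Module):
    """Minimal WSNet-style weight sampling for a 1D convolution (a sketch).

    Instead of storing an independent (out_channels, in_channels, kernel_size)
    weight tensor, we keep one compact parameter vector `phi` and slice each
    filter out of it as a window taken at a fixed stride. Overlapping windows
    make the filters share parameters by construction.
    """

    def __init__(self, in_channels, out_channels, kernel_size, sample_stride=2):
        super().__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.kernel_size = kernel_size
        self.sample_stride = sample_stride
        filter_len = in_channels * kernel_size
        # Compact storage: roughly out_channels * sample_stride + filter_len
        # parameters instead of out_channels * filter_len.
        compact_len = out_channels * sample_stride + filter_len
        self.phi = nn.Parameter(torch.randn(compact_len) * 0.01)

    def forward(self, x):
        # Window i starts at i * sample_stride and spans one full filter.
        filter_len = self.in_channels * self.kernel_size
        starts = torch.arange(self.out_channels, device=x.device) * self.sample_stride
        idx = starts[:, None] + torch.arange(filter_len, device=x.device)[None, :]
        weight = self.phi[idx].view(self.out_channels, self.in_channels, self.kernel_size)
        return F.conv1d(x, weight)

# Usage: a drop-in replacement for nn.Conv1d on dummy 1D (e.g. audio) features.
layer = WeightSampledConv1d(in_channels=8, out_channels=32, kernel_size=5)
out = layer(torch.randn(1, 8, 100))  # -> shape (1, 32, 96)
```

With these toy settings the layer stores 104 parameters where an ordinary nn.Conv1d would store 1,280, roughly a 12X reduction, while remaining trainable end to end because each filter is just a differentiable view into `phi`.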


How to join VALSE online Webinar 18-14:


Long-press or scan the QR code below to follow the "VALSE" WeChat official account (valse_wechat), then reply "14期" to the account to receive the live-stream link.



Special thanks to the main organizers of this Webinar:

VOOC Committee Member in Charge: Ce Li (Lanzhou University of Technology)

VODB Coordinating Director: Haiyong Zheng (Ocean University of China)


How to participate:

1. VALSE Webinars run on a live-streaming platform. During the event the speaker uploads a PPT or shares their screen; the audience can see the slides, hear the speaker's voice, and interact with the speaker via the chat function;

2. To participate, follow the VALSE WeChat official account (valse_wechat) or join a VALSE QQ group (groups A, B, C, D, E, F, and G are currently full; except for speakers and other invited guests, you may only apply to join VALSE group H, group number: 701662399);

*Note: When applying to join a VALSE QQ group, you must provide your name, affiliation, and status; all three are required. After joining, please set your group nickname to your real name in the format Name-Status-Affiliation. Status codes: T for faculty and researchers at universities and research institutes; I for industry R&D staff; D for Ph.D. students; M for master's students.

3. About 5 minutes before the event starts, the speaker will open the live stream; click the live-stream link to join. Windows PCs, Macs, phones, and other devices are supported;

4. During the event, please refrain from off-topic chat so as not to disrupt the session;

5. If you cannot hear the audio or see the video during the event, exiting and rejoining usually resolves the problem;

6. Be sure to join over a fast network, preferably a wired connection;

7. Every Monday, the VALSE WeChat official account posts a summary and video (with the speaker's permission) of the previous week's Webinar; every Thursday, it posts the announcement and live-stream link for the following week's Webinar.


[slides]
