Speaker: Gang Yu (Tencent)
Title: Exploration and Application of GAN-Based Generation in Pan-Entertainment Mobile Scenarios

Speaker: Jie Lin (A*STAR Singapore)
Title: Fast, Compact and Energy-Efficient Neural Networks and Their Applications

Panelists: Gang Yu (Tencent), Jie Lin (A*STAR Singapore), Li Zhang (Fudan University), Peng Hu (Sichuan University)

Panel Topics:
1. What new opportunities and challenges arise as neural network deployment in the Internet of Things moves from mobile phones to microcontrollers with extremely limited resources?
2. Are there relatively mature applications of knowledge distillation / self-training for model lightweighting?
3. How can environment-aware contextual information be used in analyzing objects in videos or images captured on mobile devices?
4. Context-based visual recognition usually relies on global perception (e.g., Transformers) for better performance, but compared with conventional convolutional neural networks it requires more computation and memory. Do global-perception architectures / Transformers have practical deployment value on mobile devices?

*You are welcome to post topic-related questions in the comments below; the host and the panelists will select several of the most popular ones and add them to the panel topics!

Speaker: Gang Yu (Tencent)
Time: Wednesday, May 12, 2021, 20:00 (Beijing time)
Title: Exploration and Application of GAN-Based Generation in Pan-Entertainment Mobile Scenarios

Speaker Bio:
Gang Yu currently leads the AI algorithm work at Tencent's Guangying Lab (光影研究室). He received his Ph.D. from Nanyang Technological University, Singapore, in 2014. Before joining Tencent, he led the Detection group at Megvii Technology in Beijing, where his teams won first place in multiple challenges, including COCO, WiderFace, and ActivityNet. His main research interests include object detection, segmentation, GAN-based generation, and action and behavior analysis.

Homepage: http://www.skicyyu.org/

Abstract:
With the development of GAN-based generation techniques such as StyleGAN, the quality of GAN-generated content keeps improving, gradually opening the door to real-world deployment in pan-entertainment scenarios. Content generation on mobile devices brings many opportunities and challenges in these scenarios. This talk focuses on applications of GAN-based generation on mobile devices, covering deployed techniques such as face cartoonization, face reenactment, pose transfer, and font generation. Beyond specific applications, deploying GAN-based generation on mobile devices currently faces two general challenges: model miniaturization and stability. Because mobile devices impose tight compute constraints, how to miniaturize models is a problem of great interest in both industry and academia. From a deployment perspective, high stability of the generated results is also required, and how to improve generation quality remains challenging.

References:
[1] Xi Chen, Zuoxin Li, Ye Yuan, Gang Yu, Jian-Xin Shen, Donglian Qi. State-Aware Tracker for Real-Time Video Object Segmentation. CVPR, 2020.
[2] Guan'an Wang, Shuo Yang, Huanyu Liu, Zhicheng Wang, Yang Yang, Shuliang Wang, Gang Yu, Erjin Zhou, Jian Sun. High-Order Information Matters: Learning Relation and Topology for Occluded Person Re-Identification. CVPR, 2020.
[3] Changqian Yu, Jingbo Wang, Changxin Gao, Gang Yu, Chunhua Shen, Nong Sang. Context Prior for Scene Segmentation. CVPR, 2020.
[4] Yinda Xu, Zeyu Wang, Zuoxin Li, Ye Yuan, Gang Yu. SiamFC++: Towards Robust and Accurate Visual Tracking with Target Estimation Guidelines. AAAI, 2020.
[5] Lin Song, Yanwei Li, Zeming Li, Gang Yu, Hongbin Sun, Jian Sun, Nanning Zheng. Learnable Tree Filter for Structure-preserving Feature Transform. NeurIPS, 2019.

Speaker: Jie Lin (A*STAR Singapore)
Time: Wednesday, May 12, 2021, 20:30 (Beijing time)
Title: Fast, Compact and Energy-Efficient Neural Networks and Their Applications

Speaker Bio:
Jie Lin is currently a group leader and principal investigator at the Institute for Infocomm Research, A*STAR Singapore. His research interests include deep learning, AI accelerators, privacy-preserving machine learning, data compression, and computer vision. His Ph.D. work on compact global visual descriptors contributed a core invention to the MPEG-7 standard on Compact Descriptors for Visual Search (CDVS). The technology has been adopted by WeChat, Baidu, Boyun Vision, and others. At I2R, he works on the development of compact neural networks, which serve as the foundation for accelerating AI workloads and drive next-generation hardware-software co-optimized AI hardware, efficient AI on encrypted data, and more.

Homepage: https://lin-j.github.io/

Abstract:
The deployment of deep learning is moving from the cloud to the edge for a wide range of applications such as autonomous driving and IoT for smart healthcare. To tailor deep learning models for resource-limited edge platforms, we explore new methodologies for designing compact, fast, and energy-efficient neural networks capable of handling large-scale real-world problems at very low cost in terms of memory footprint, inference time, and power consumption. In this talk, I will briefly introduce our recent work on neural network compression, hardware-software co-optimized neural networks, and efficient neural networks for privacy-preserving applications.

References:
[1] Chunyun Chen, Zhe Wang, Xiaowei Chen, Jie Lin, Mohamed M. Sabry Aly. Efficient Tunstall Decoder for Deep Neural Network Compression. Design Automation Conference (DAC), 2021.
[2] Peng Hu, Xi Peng, Hongyuan Zhu, Mohamed M. Sabry Aly, Jie Lin*. OPQ: Compressing Deep Neural Networks with One-shot Pruning-Quantization. AAAI Conference on Artificial Intelligence (AAAI), 2021.
[3] Tianyi Zhang, Jie Lin*, Peng Hu, Bin Zhao, Mohamed M. Sabry Aly. PSRR-MaxpoolNMS: Pyramid Shifted MaxpoolNMS with Relationship Recovery. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
[4] Lile Cai, Bin Zhao, Zhe Wang, Jie Lin, Chuan Sheng Foo, Mohamed Sabry Aly, Vijay Chandrasekhar. MaxpoolNMS: Getting Rid of NMS Bottlenecks in Two-Stage Object Detectors. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[5] Yuxiao Lu, Jie Lin*, Chao Jin, Zhe Wang, Khin Mi Mi Aung, Xiaoli Li. FFConv: Fast Factorized Neural Network Inference on Encrypted Data. arXiv:2102.03494.

Panelist: Li Zhang (Fudan University)

Bio:
Li Zhang is a tenure-track Associate Professor at the School of Data Science, Fudan University. He was elected to the Shanghai Science & Technology 35 Under 35. Previously, he was a Research Scientist at the Samsung AI Center Cambridge and a Postdoctoral Research Fellow at the University of Oxford. Prior to joining Oxford, he received his Ph.D. in computer science from Queen Mary University of London. The aim of his research group at Fudan is to make machines see and to empower next-generation AI by striving for the most universal representations for understanding objects, scenes, and motion with mathematical models of neural networks.

Homepage: https://fudan-zvg.github.io/

Panelist: Peng Hu (Sichuan University)

Bio:
Peng Hu is an Associate Research Fellow at Sichuan University. He received his Ph.D. from Sichuan University in 2019. From 2019 to 2020, he was a Research Scientist at the Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR), Singapore. His main research interests are multi-view learning and neural network compression. He has published more than 20 papers in international conferences and journals such as CVPR, SIGIR, ACM MM, AAAI, TIP, TNNLS, and TCYB.

Homepage: https://penghu-cs.github.io/

Host: Tao Chen (Fudan University)

Host Bio:
Dr. Tao Chen received his Ph.D. from Nanyang Technological University, Singapore, in 2012. At the beginning of 2018, he was selected for the National High-Level Overseas Talent Plan and joined Fudan University as a tenure-track Professor. Before joining Fudan, he worked at top research institutions including the Singapore Science and Technology Bureau, the Singapore Intelligent Robot Laboratory, and the Singapore Institute for Infocomm Research, where he undertook and participated in a series of projects funded by the Singapore government and industry. In addition, from 2017 to 2019 Dr. Chen worked on the research and development of AI chipsets at the AI lab of the Huawei Asia-Pacific Research Institute and contributed to a number of AI products. To date, Dr. Chen has published more than 40 academic papers in CCF Class A or JCR Quartile One venues such as IEEE T-PAMI, T-IP, T-CYB, and ACM MM, and has been granted a US patent.
Homepage: http://homepage.fudan.edu.cn/eetchen/

How to join VALSE online Webinar 21-13:
Long-press or scan the QR code below to follow the "VALSE" WeChat official account (valse_wechat), then reply "13期" to the account to obtain the livestream link.

Special thanks to the main organizers of this Webinar:
Organizing AC: Tao Chen (Fudan University)
Co-organizing AC: Peng Hu (Sichuan University)
Responsible AC: Nannan Wang (Xidian University)

How to participate:
1. VALSE Webinars are held on an online livestreaming platform. During the event, the speaker uploads slides or shares their screen; the audience can see the slides, hear the speaker, and interact with the speaker through the chat function.
2. To participate, follow the VALSE WeChat official account valse_wechat or join a VALSE QQ group (groups A through N are currently full; apart from speakers and other invited guests, you may only apply to join VALSE group Q, group number 698303207).
*Note: When applying to join a VALSE QQ group, you must provide your name, affiliation, and identity; all three are required. After joining, please set your nickname to your real name, identity, and affiliation. Identity codes: T for university and research institute staff; I for industry R&D; D for Ph.D. students; M for master's students.
3. About 5 minutes before the event starts, the speaker opens the livestream; click the livestream link to join. Windows PCs, Macs, mobile phones, and other devices are supported.
4. During the event, please refrain from off-topic chatter so as not to disrupt the session.
5. If you cannot hear the audio or see the video during the event, exiting and rejoining usually solves the problem.
6. Please join from a fast network connection; a wired connection is preferred.
7. The VALSE WeChat official account publishes the announcement and livestream link for the following week's Webinar every Thursday.
8. The slides of a Webinar talk (with the speaker's permission) are posted under [slides] at the bottom of the corresponding announcement on the VALSE website.
9. The video of a Webinar talk (with the speaker's permission) is posted on the VALSE Bilibili and Xigua Video channels; search for "VALSE Webinar" to watch.