Webinar time: December 4, 2019 (Wednesday), 20:00 (Beijing time)
Topic: Learning by Analogy: Recent Advances in Few-shot Learning
Host: Cheng Deng (Xidian University)
Speaker: Yanwei Fu (Fudan University) — Talk title: One-shot learning in semantic embedding, and data augmentation
Speaker: Xuming He (ShanghaiTech University) — Talk title: Learning Structured Visual Concepts with Few-shot Supervision

Panel topics:
1. Few-shot vs. large-sample learning: how "few" is few, and how "large" is large? In which cases do we need purpose-built few-shot learning algorithms? During an agent's learning process, how should few-shot learning be combined with data of different sample sizes, and how can it transition to large-sample learning as data accumulates?
2. Introducing knowledge to compensate for insufficient data is a widely accepted trend in few-shot learning. What exactly counts as "knowledge"? What forms does it take? What kinds of "knowledge" actually work well today, and where do they come from?
3. Beyond the obvious shortage of trainable data, small sample sizes in real scenarios also introduce domain-gap problems. How should we view the challenge that the domain gap poses to few-shot learning?
4. Few-shot model performance is affected by randomness during training, and recent papers use backbones and experimental settings that are inconsistent with one another. How can different methods be compared fairly?
5. In few-shot learning, the quality of the trained model depends on the samples in the training set. What kind of few-shot dataset yields a good model?
6. Is one-shot learning essentially about overcoming model overfitting when training data is scarce? If so, how are traditional anti-overfitting techniques, such as feature selection, regularization, and increasing training-sample diversity, reflected in existing one-shot methods?
7. Machine learning (deep learning) is today mainly applied to massive-data problems; with too few samples, models tend to overfit and lack expressive power. In real scenarios where samples are hard to collect, how should these problems be handled, and how can overfitting be prevented?
8. How should the relatedness between tasks be taken into account in few-shot learning? How can few-shot methods be applied to tasks in new domains?
9. In zero-shot learning, what are the future trends for auxiliary information (attributes, word vectors, text descriptions, etc.)?
10. Can interpretable learning promote the development of zero-shot learning?

Panelists: Yanwei Fu (Fudan University), Xuming He (ShanghaiTech University), Ruiping Wang (Institute of Computing Technology, CAS), Zhanyu Ma (Beijing University of Posts and Telecommunications)

*You are welcome to post topic-related questions in the comments below; the host and panelists will select some of the most popular ones to add to the panel discussion!

Speaker: Yanwei Fu (Fudan University)
Talk time: December 4, 2019 (Wednesday), 20:00 (Beijing time)
Talk title: One-shot learning in semantic embedding, and data augmentation

Speaker bio:
Yanwei Fu, Ph.D., is a young researcher at Fudan University, a Distinguished Professor of Shanghai universities (Eastern Scholar), and a scholar of the national Young Thousand Talents Program. He studied at Queen Mary University of London from 2011 to 2014, where he received his Ph.D., and was a postdoctoral researcher at Disney Research, Pittsburgh, from January 2015 to July 2016. He received the 2017 ACM China Multimedia Rising Star Award and the IEEE ICME 2019 Best Paper Award. His main research areas include zero-shot and few-shot recognition and lifelong learning. He has published 50 papers in top international journals and conferences in computer vision, pattern recognition, machine learning, and multimedia, including IEEE TPAMI, IEEE TIP, CVPR, ECCV, and ICCV. He has filed more than 20 Chinese patent applications (10 granted) and holds 3 granted US patents, and has received a Google Excellent Student Scholarship and a Chinese Government Award for Outstanding Self-financed Students Abroad, among other honors. He serves as a long-term reviewer and program committee member for international journals and conferences such as IEEE TPAMI, IJCV, ACM MM, NIPS, and ICCV.

Homepage: http://yanweifu.github.io

Abstract:
The ability to quickly recognize and learn new visual concepts from limited samples enables humans to quickly adapt to new tasks and environments. This ability is enabled by semantic association of novel concepts with those that have already been learned and stored in memory. Computers can start to ascertain similar abilities by utilizing a semantic concept space.
A concept space is a high-dimensional semantic space in which similar abstract concepts appear close and dissimilar ones far apart. In this talk, we introduce some of our previous work on one-shot learning in semantic embedding, published in ECCV 2012, TPAMI 2014, CVPR 2016, and TPAMI 2019. Furthermore, data augmentation has also been investigated in our recent studies. In particular, we will briefly present the core ideas of our recent works published in TIP 2019, AAAI 2019, CVPR 2019, and NeurIPS 2019.

References:
[1] Yanwei Fu, Timothy M. Hospedales, Tao Xiang, Shaogang Gong. Attribute Learning for Understanding Unstructured Social Activity. ECCV 2012.
[2] Yanwei Fu, Timothy M. Hospedales, Tao Xiang, Shaogang Gong. Learning Multi-modal Latent Attributes. IEEE TPAMI, Feb 2014.
[3] Yanwei Fu, Leonid Sigal. Semi-supervised Vocabulary-informed Learning. CVPR 2016 (oral).
[4] Yanwei Fu, Xiaomei Wang, Hanze Dong, Yu-Gang Jiang, Meng Wang, Xiangyang Xue, Leonid Sigal. Vocabulary-informed Zero-shot and Open-set Learning. IEEE TPAMI, to appear.
[5] Zitian Chen, Yanwei Fu, Yinda Zhang, Yu-Gang Jiang, Xiangyang Xue, Leonid Sigal. Multi-level Semantic Feature Augmentation for One-shot Learning. IEEE Transactions on Image Processing (TIP), 2019.
[6] Zitian Chen, Yanwei Fu, Kaiyu Chen, Yu-Gang Jiang. Image Block Augmentation for One-Shot Learning. AAAI 2019.
[7] Zitian Chen, Yanwei Fu, Yu-Xiong Wang, Lin Ma, Wei Liu, Martial Hebert. Image Deformation Meta-Networks for One-Shot Learning. CVPR 2019 (oral).
[8] Satoshi Tsutsui, Yanwei Fu, David Crandall. Meta-Reinforced Synthetic Data for One-Shot Fine-Grained Visual Recognition. NeurIPS 2019.

Speaker: Xuming He (ShanghaiTech University)
Talk time: December 4, 2019 (Wednesday), 20:30 (Beijing time)
Talk title: Learning Structured Visual Concepts with Few-shot Supervision

Speaker bio:
Xuming He is currently an Associate Professor in the School of Information Science and Technology at ShanghaiTech University. He received his Ph.D. degree in computer science from the University of Toronto in 2008.
He held a postdoctoral position at the University of California, Los Angeles from 2008 to 2010. He then joined National ICT Australia (NICTA), where he was a Senior Researcher from 2013 to 2016, and was also an adjunct Research Fellow at the Australian National University from 2010 to 2016. His research interests include semantic segmentation, 3D scene understanding, and probabilistic graphical models. He has authored more than 50 papers in top-tier journals and conferences such as IEEE TPAMI, IEEE TIP, CVPR, ICCV, ECCV, NIPS, AAAI, and IJCAI. He serves as a senior program committee member for IJCAI 2019 and AAAI 2020, and as an area chair for ICCV 2019 and ECCV 2020.

Homepage: http://sist.shanghaitech.edu.cn/2018/0502/c2739a24302/page.htm

Abstract:
Despite the recent success of deep neural networks, it remains challenging to efficiently learn new visual concepts from limited training data. To address this problem, a prevailing strategy is to build a meta-learner that learns prior knowledge about learning from a small set of annotated data. However, most existing meta-learning approaches rely on a global representation of images or videos, which is sensitive to background clutter and difficult to interpret. In this talk, I will present our recent work on learning structured visual representations for scene understanding tasks in a few-shot setting. The first topic is few-shot action localization, in which we introduce a meta-learning method that utilizes sequence matching and correlations to learn to localize action instances. In the second half of the talk, we will discuss a new few-shot classification method based on a dual attention mechanism. Finally, we will briefly discuss our recent effort on applying meta-learning to low-shot problems. We will demonstrate the efficacy of our methods on several real-world datasets, including THUMOS14, ActivityNet, and MiniImageNet.

References:
[1] Fei-Fei, L.; Fergus, R.; and Perona, P. One-shot learning of object categories. TPAMI 2006.
[2] Lake, B.
M.; Salakhutdinov, R.; and Tenenbaum, J. B. Human-level concept learning through probabilistic program induction. Science 2015.
[3] Qiao, S.; Liu, C.; Shen, W.; and Yuille, A. L. Few-shot image recognition by predicting parameters from activations. In CVPR 2018.
[4] Vinyals, O.; Blundell, C.; Lillicrap, T.; Wierstra, D.; et al. Matching networks for one shot learning. In NIPS 2016.
[5] Snell, J.; Swersky, K.; and Zemel, R. Prototypical networks for few-shot learning. In NIPS 2017.
[6] Sung, F.; Yang, Y.; Zhang, L.; Xiang, T.; Torr, P. H.; and Hospedales, T. M. Learning to compare: Relation network for few-shot learning. In CVPR 2018.
[7] Satorras, V. G., and Estrach, J. B. Few-shot learning with graph neural networks. In ICLR 2018.
[8] Munkhdalai, T., and Yu, H. Meta networks. In ICML 2017.
[9] Mishra, N.; Rohaninejad, M.; Chen, X.; and Abbeel, P. A simple neural attentive meta-learner. In ICLR 2018.
[10] Santoro, A.; Bartunov, S.; Botvinick, M.; Wierstra, D.; and Lillicrap, T. Meta-learning with memory-augmented neural networks. In ICML 2016.
[11] Finn, C.; Abbeel, P.; and Levine, S. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML 2017.
[12] Andrychowicz, M.; Denil, M.; Gomez, S.; Hoffman, M. W.; Pfau, D.; Schaul, T.; and de Freitas, N. Learning to learn by gradient descent by gradient descent. In NIPS 2016.
[13] Ravi, S., and Larochelle, H. Optimization as a model for few-shot learning. In ICLR 2017.
[14] Yang, H.; He, X.; and Porikli, F. One-shot action localization by learning sequence matching network. In CVPR 2018.
[15] Yan, S.; Zhang, S.; and He, X. A dual attention network with semantic embedding for few-shot learning. In AAAI 2019.
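As background for the few-shot classification methods cited above, the nearest-centroid idea behind prototypical networks (reference [5]) can be sketched in a few lines of NumPy. This is a toy illustration, not either speaker's actual method: the random 2-D vectors stand in for learned image embeddings, and the episode sizes (5-way 1-shot) are chosen only for the example.

```python
import numpy as np

def prototypes(support, labels):
    # One "prototype" per class: the mean embedding of its support samples.
    classes = np.unique(labels)
    return classes, np.stack([support[labels == c].mean(axis=0) for c in classes])

def classify(queries, protos, classes):
    # Assign each query to the class of the nearest prototype (squared Euclidean).
    d = ((queries[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return classes[d.argmin(axis=1)]

# Toy 5-way 1-shot episode: 5 well-separated class centers in a 2-D "embedding space".
rng = np.random.default_rng(0)
centers = rng.normal(scale=10.0, size=(5, 2))
support = centers + rng.normal(scale=0.1, size=(5, 2))   # 1 support shot per class
sup_labels = np.arange(5)
queries = centers + rng.normal(scale=0.1, size=(5, 2))   # 1 query per class

classes, protos = prototypes(support, sup_labels)
pred = classify(queries, protos, classes)
print((pred == np.arange(5)).mean())  # accuracy on this toy episode
```

In a real prototypical network the embeddings come from a neural encoder trained episodically so that this nearest-centroid rule generalizes to novel classes; here the "embedding" is the raw 2-D point itself.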
Panelist: Ruiping Wang (Institute of Computing Technology, CAS)

Bio:
Ruiping Wang, Ph.D., is a professor and doctoral supervisor at the Institute of Computing Technology, Chinese Academy of Sciences. His research focuses on image/video object recognition and retrieval and visual scene understanding in real-world open environments. He has published more than 70 papers in mainstream international journals and conferences in the field, with over 3,400 Google Scholar citations, and holds 6 granted national invention patents. Around his recent research topics, he has co-organized and presented tutorials at ACCV 2014, CVPR 2015, ECCV 2016, ICIP 2017, and ICCV 2019. He serves on the editorial boards of Pattern Recognition (Elsevier), Neurocomputing (Elsevier), The Visual Computer (Springer), IEEE Access, and IEEE Biometrics Compendium; as an area chair of IEEE WACV 2018-2020, ICME 2019/2020, and IJCB 2020; as publication chair of IEEE FG 2018 and IJCB 2020; and as publicity chair of ICB 2019.

Homepage: http://vipl.ict.ac.cn/homepage/rpwang/index.htm

Panelist: Zhanyu Ma (Beijing University of Posts and Telecommunications)

Bio:
Zhanyu Ma received his Ph.D. from the KTH Royal Institute of Technology, Sweden, where he also held a postdoctoral position. He is currently an associate professor and doctoral supervisor at Beijing University of Posts and Telecommunications, an adjunct associate professor and doctoral supervisor at Aalborg University, Denmark, an IEEE Senior Member, a senior member, council member, and deputy secretary-general of the China Society of Image and Graphics, and a senior member of the China Computer Federation, where he is a member and deputy secretary-general of the Computer Vision Technical Committee. His main research interests are pattern recognition and machine learning and their applications in non-Gaussian probabilistic models, small-sample data modeling and analysis, computer vision, and multimedia signal processing. He has published more than 80 papers in top international journals and conferences, including IEEE TPAMI. He serves on the editorial board of IEEE Transactions on Vehicular Technology, and served as Technical Co-chair of SPLINE 2016 and Program Co-chair of IEEE MLSP 2018. He has led several projects, including an NSFC Excellent Young Scientists Fund grant and a National Key R&D Program subproject; holds 15 granted invention patents; received the first prize of the 7th Wu Wenjun AI Science and Technology Award (2017), the second prize of the Beijing Science and Technology Award (2017), the first prize of the 14th Beijing Youth Excellent Science and Technology Paper Award, and a best paper award at IEEE IC-NIDC; and was selected for the 2017 Beijing Nova Program.

Homepage: http://www.pris.net.cn/introduction/teacher/zhanyu_ma

Host: Cheng Deng (Xidian University)

Host bio:
Cheng Deng is a professor and doctoral supervisor at Xidian University, and a senior member of the China Society of Image and Graphics and of the China Computer Federation. He was named a Shaanxi Young Science and Technology Star in 2012, selected for the Ministry of Education's New Century Excellent Talents Program in 2012 and the Shaanxi Mid-career and Young Science and Technology Innovation Leading Talents Program in 2017, and received the Shaanxi Youth Science and Technology Award in 2018. His research focuses on theories and methods for collaborative computing over multimodal data. He has led nearly 30 research projects, including NSFC general projects, the MOST "863" Program, and Shaanxi key R&D projects; holds nearly 20 invention patents; and has published nearly 90 papers in top international journals such as IEEE T-NNLS, T-CYB, T-IP, and T-MM and top conferences such as ICML, NeurIPS, ICCV, CVPR, KDD, AAAI, and IJCAI. He serves as an associate editor of Pattern Recognition and Neurocomputing, as a (senior) program committee member of several international conferences, and as a reviewer for more than 20 international journals. His work has received a second-prize National Natural Science Award, two first-prize Shaanxi Science and Technology Awards, and a second-prize MOE Natural Science Award.

Homepage: http://see.xidian.edu.cn/faculty/chdeng

How to join VALSE online Webinar 19-29:
Press and hold or scan the QR code below to follow the "VALSE" WeChat official account (valse_wechat), then reply "29期" to the account to get the live-stream link.

Special thanks to the main organizers of this Webinar:
Organizing AC: Cheng Deng (Xidian University)
Co-organizing ACs: Meng Jian (Beijing University of Technology), Yue Ming (Beijing University of Posts and Telecommunications)
Responsible AC: Xiaoqiang Lu (Xi'an Institute of Optics and Precision Mechanics, CAS)

About the revised VALSE Webinar format:
Since January 2019, VALSE Webinar has replaced the previous single-speaker format with two possible formats:
1) Webinar panel session: each event has one discussion topic; two excellent speakers first give talks on the topic (30 minutes each), and then 2-3 additional guests join them for a 30-minute panel discussion on the topic.
2) Webinar invited talk: each event invites one senior expert to give a systematic, in-depth talk on their area of expertise, with a 50-minute talk, 10 minutes of host-speaker interaction, and 10 minutes of open Q&A.

How to participate:
1. VALSE
Webinar sessions are held on an online live-streaming platform; the speaker uploads slides or shares their screen, and the audience can see the slides, hear the speaker's voice, and interact with the speaker via the chat function.
2. To participate, follow the VALSE WeChat official account valse_wechat, or join a VALSE QQ group (groups A through K are currently full; except for speakers and other guests, applicants may only join VALSE group L, group number 641069169).
*Note: when applying to join a VALSE QQ group, you must provide your name, affiliation, and status; all three are required. After joining, please set your group name to your real name, status, and affiliation. Status codes: T for university and research institute staff; I for industry R&D; D for Ph.D. students; M for master's students.
3. About 5 minutes before the event starts, the speaker opens the live stream; click the stream link to join. Windows PCs, Macs, mobile phones, and other devices are supported.
4. During the event, please refrain from off-topic messages so as not to disturb the session.
5. If you cannot hear the audio or see the video during the event, leaving and rejoining usually solves the problem.
6. Be sure to join from a fast network connection; a wired connection is preferred.
7. The VALSE WeChat account publishes the announcement and stream link for the next week's Webinar every Thursday.
8. Webinar slides (with the speaker's permission) are posted under [slides] at the bottom of each announcement on the VALSE website.
9. Webinar videos (with the speaker's permission) are posted on the VALSE iQIYI channel; follow "Valse Webinar" on iQIYI to watch them.

Yanwei Fu [slides]
Xuming He [slides]