Speaker 2: Jack Valmadre (University of Oxford)
Time: 21:00 (Beijing time), Friday, 30 September 2016
Host: Shuai Zheng (郑帅)

Abstract: One-shot learning is usually tackled by using generative models or discriminative embeddings. Discriminative methods based on deep learning, which are very effective in other learning scenarios, are ill-suited for one-shot learning as they need large amounts of training data. In this paper, we propose a method to learn the parameters of a deep model in one shot. We construct the learner as a second deep network, called a learnet, which predicts the parameters of a pupil network from a single exemplar. In this manner we obtain an efficient feed-forward one-shot learner, trained end-to-end by minimizing a one-shot classification objective in a learning-to-learn formulation. In order to make the construction feasible, we propose a number of factorizations of the parameters of the pupil network. We demonstrate encouraging results by learning characters from single exemplars in Omniglot, and by tracking visual objects from a single initial exemplar in the Visual Object Tracking benchmark.

Speaker bio: Jack Valmadre has been a post-doc in Phil Torr's computer vision group at the University of Oxford since 2015. His research interests include object detection, object tracking, point correspondence and non-rigid 3D reconstruction. Jack obtained his PhD from the Queensland University of Technology under the supervision of Simon Lucey while based at the Australian research organisation CSIRO. He also spent a year as a visiting student at Carnegie Mellon University. He has published at CVPR, ECCV, ICCV and NIPS.
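To make the idea concrete, the core construction can be sketched in a few lines of NumPy: the learnet maps a single exemplar z to a small set of parameters, and the pupil layer's weight matrix is factorized so that only those few parameters depend on z. This is a toy sketch, not the paper's implementation: the fixed factors `M1`, `M2` and the learnet map `A` would be deep networks trained end-to-end in the actual method, and the dimensions `d`, `k` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 8  # feature dimension of the pupil layer (illustrative)
k = 4  # rank of the factorization (illustrative)

# Fixed factors shared across all exemplars; learned end-to-end in the
# paper, random here purely for illustration.
M1 = rng.standard_normal((d, k))
M2 = rng.standard_normal((k, d))

# Toy stand-in for the learnet: a fixed linear map (the paper uses a
# deep network trained with a one-shot classification objective).
A = rng.standard_normal((k, d))

def learnet(exemplar):
    """Predict the k exemplar-dependent parameters of the pupil layer."""
    return A @ exemplar

def pupil_forward(x, exemplar):
    """Pupil layer with factorized weight W(z) = M1 @ diag(learnet(z)) @ M2.

    The dense d-by-d weight matrix is never formed: only the k diagonal
    entries are predicted per exemplar, which is what makes the
    construction feasible.
    """
    w = learnet(exemplar)        # k predicted parameters
    return M1 @ (w * (M2 @ x))   # equals (M1 diag(w) M2) @ x

z = rng.standard_normal(d)  # single exemplar
x = rng.standard_normal(d)  # query input
y = pupil_forward(x, z)
```

The design point the sketch captures is that the learnet's output dimension scales with the factorization rank k rather than with the full d*d weight matrix of the pupil layer, keeping the feed-forward one-shot learner tractable.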
Vision And Learning SEminar