
20160930-33 Jack Valmadre: Learning feed-forward one-shot learners

2016-9-27 15:29 | Posted by: Cheng Yi (Institute of Computing Technology) | Views: 7543 | Comments: 0

Summary: Speaker 2: Jack Valmadre | Time: 21:00 (Beijing time), Friday, 30 September 2016 | Title: Learning feed-forward one-shot learners | Host: Zheng Shuai

Speaker 2: Jack Valmadre (University of Oxford)

One-shot learning is usually tackled by using generative models or discriminative embeddings. Discriminative methods based on deep learning, which are very effective in other learning scenarios, are ill-suited for one-shot learning as they need large amounts of training data. In this paper, we propose a method to learn the parameters of a deep model in one shot. We construct the learner as a second deep network, called a learnet, which predicts the parameters of a pupil network from a single exemplar. In this manner we obtain an efficient feed-forward one-shot learner, trained end-to-end by minimizing a one-shot classification objective in a learning to learn formulation. In order to make the construction feasible, we propose a number of factorizations of the parameters of the pupil network. We demonstrate encouraging results by learning characters from single exemplars in Omniglot, and by tracking visual objects from a single initial exemplar in the Visual Object Tracking benchmark.
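The core idea above can be illustrated with a minimal numpy sketch: a "learnet" maps a single exemplar to the parameters of a "pupil" layer, and a diagonal factorization W = M' diag(w(z)) M keeps the number of predicted parameters small. The single-linear-map learnet, the fixed random projections, and all names here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 16  # feature dimension of inputs and exemplars
k = 8   # rank of the factorization (number of predicted parameters)

# Fixed projections (learned end-to-end in the real method; random here).
M = rng.standard_normal((k, d)) / np.sqrt(d)        # projects the input down
M_prime = rng.standard_normal((d, k)) / np.sqrt(k)  # projects back up

# Hypothetical learnet: a single linear map from the exemplar z to the
# diagonal factor w(z). Predicting only k numbers instead of a full d-by-d
# weight matrix is what makes the construction feasible.
L = rng.standard_normal((k, d)) / np.sqrt(d)

def predict_pupil_params(z):
    """Learnet: predict the pupil layer's diagonal factor from one exemplar."""
    return L @ z  # w(z), shape (k,)

def pupil_forward(x, w):
    """Pupil layer with factorized weights W = M' diag(w(z)) M."""
    return M_prime @ (w * (M @ x))

z = rng.standard_normal(d)  # the single exemplar
x = rng.standard_normal(d)  # a query input
w = predict_pupil_params(z)
y = pupil_forward(x, w)
```

Here `y` equals applying the full matrix `M_prime @ np.diag(w) @ M` to `x`, but the learnet only has to emit the `k`-dimensional diagonal `w(z)`, so the feed-forward one-shot prediction stays cheap.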

Jack Valmadre has been a post-doc in Phil Torr's computer vision group at the University of Oxford since 2015. His research interests include object detection, object tracking, point correspondence and non-rigid 3D reconstruction. Jack obtained his PhD from the Queensland University of Technology under the supervision of Simon Lucey while based at the Australian research organisation CSIRO. He also spent a year as a visiting student at Carnegie Mellon University. He has published at CVPR, ECCV, ICCV and NIPS.

