VALSE


20150204-05 Weihong Deng | Meng Yang: Sparse Representation Classification

Published 2015-02-26 15:52 | Posted by: zhenghaiyong | Source: VALSE

Speaker 1: Weihong Deng (Beijing University of Posts and Telecommunications)
Host: Ming-Ming Cheng (Nankai University)
Time: 20:30, February 4, 2015 (Beijing time)
Title: Face Recognition Based on Extended Sparse Representation
Slides: DengWH.pptx
References:
  1. Weihong Deng, Jiani Hu, Jun Guo, "Extended SRC: Undersampled Face Recognition via Intra-Class Variant Dictionary," IEEE Trans. Pattern Anal. Mach. Intell., 34(9): 1864–1870, 2012.
  2. Weihong Deng, Jiani Hu, Jun Guo, "In Defense of Sparsity Based Face Recognition," CVPR 2013.
  • Abstract: The Sparse Representation-based Classification (SRC) algorithm offered a fresh approach to robust face recognition under pixel noise, illumination changes, and occlusion, and sparked an international wave of research on sparse-representation-based pattern recognition. However, SRC requires sufficient, well-controlled training samples for every class to satisfy its sparsity assumption, which limits its practical value in real-world settings. This talk introduces the Extended Sparse Representation-based Classification (ESRC) algorithm, which uses novel intra-class variant atoms to capture how images of the same subject vary, and decomposes a test sample via L1 minimization into a sparse linear combination of same-class training samples and intra-class variant bases. Experiments show that ESRC achieves high recognition accuracy even with very few (or a single) training sample per class. By further separating training samples into signal prototypes and variation components, the talk then introduces the Superposed Sparse Representation-based Classification (SSRC) algorithm, which addresses sparse representation classification under uncontrolled training samples. Together, the two models help make sparse representation practical for real-world face recognition.
  • Speaker bio: Weihong Deng is an associate professor and Ph.D. supervisor at Beijing University of Posts and Telecommunications (BUPT). He received his B.Eng. in Information Engineering in July 2004 and his Ph.D. in Signal and Information Processing in January 2010, both from BUPT; his dissertation, "Research on High-Accuracy Face Recognition Algorithms," won the 2011 Beijing Excellent Doctoral Dissertation Award. His research focuses on computer vision and pattern recognition, with image recognition as the central theme; in recent years he has published more than 40 papers in leading international journals and conferences including PAMI, PR, CVPR, and SIGIR. As principal investigator he has led several image recognition projects funded by the National Natural Science Foundation of China as well as industry-commissioned projects, and he reviews for more than ten international journals (IEEE TPAMI / TIP / TIFS / TNNLS, IJCV, PR, etc.). In 2013 he was selected for the BUPT Young Backbone Teacher Program, the Beijing Higher Education Young Talents Program, and the Ministry of Education's New Century Excellent Talents Program.
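The SRC decision rule described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: it assumes a plain ISTA solver for the L1-regularized coding step (the papers use generic L1-minimization solvers), and the names `ista_l1`, `src_classify`, and all parameter values are illustrative.

```python
import numpy as np

def ista_l1(D, y, lam=0.05, n_iter=300):
    """Solve min_x 0.5*||D x - y||^2 + lam*||x||_1 by ISTA (proximal gradient)."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2          # 1 / Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = x - step * (D.T @ (D @ x - y))          # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft-thresholding
    return x

def src_classify(D, labels, y, lam=0.05):
    """SRC: code y over all training columns, assign the class with smallest residual."""
    x = ista_l1(D, y, lam)
    residuals = {}
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)          # keep only class-c coefficients
        residuals[c] = np.linalg.norm(y - D @ xc)   # class-specific reconstruction residual
    return min(residuals, key=residuals.get)
```

To obtain the ESRC scheme, one would stack an intra-class variant dictionary B (e.g. difference images between samples of the same subject) as extra columns alongside D, let its coefficients participate in the reconstruction of every class residual, but exclude them from the class assignment itself.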

Speaker 2: Meng Yang (Shenzhen University)
References:
  1. M. Yang, L. Zhang, X.C. Feng, and D. Zhang, "Sparse representation based Fisher discrimination dictionary learning for image classification," International Journal of Computer Vision (IJCV), vol. 109, no. 3, pp. 209–232, 2014.
  2. M. Yang, D.X. Dai, L.L. Shen, and L. Van Gool, “Latent dictionary learning for sparse representation based classification,” in Proc. 27th IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
  • Abstract: In this talk, I will present our work on sparse-learning-based image classification. We develop a series of dictionary models, including Fisher discrimination dictionary learning (FDDL) and latent dictionary learning (LDL).
  1. In FDDL, we present a novel dictionary learning method based on the Fisher discrimination criterion. A structured dictionary, whose atoms correspond to the subject class labels, is learned, so that not only the representation residual but also the representation coefficients can be used to distinguish different classes.
  2. In LDL, we propose to learn a discriminative dictionary and build its relationship to class labels adaptively. Each dictionary atom is jointly learned with a latent vector that associates the atom with the representations of different classes. More specifically, we introduce a latent representation model in which the discrimination of the learned dictionary is exploited by minimizing the within-class scatter of the coding coefficients and the latent-value-weighted dictionary coherence.
FDDL learns a dictionary with a predefined structure, while LDL goes further by jointly learning the dictionary atoms and the dictionary structure. Both approaches have been evaluated in several applications, and the experimental results demonstrate their advantages.
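The structured-dictionary idea above can be sketched as plain class-wise dictionary learning: alternate ISTA sparse coding with a least-squares dictionary update for each class, then classify a test sample by the class whose sub-dictionary reconstructs it with the smallest residual. This deliberately omits FDDL's Fisher discrimination term on the coefficients and LDL's latent vectors, so it is an illustrative baseline rather than the published models; all names and parameters are hypothetical.

```python
import numpy as np

def soft(z, t):
    """Element-wise soft-thresholding operator."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_code(D, Y, lam=0.05, n_iter=50):
    """ISTA for min_X 0.5*||D X - Y||_F^2 + lam*||X||_1."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2
    X = np.zeros((D.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        X = soft(X - step * (D.T @ (D @ X - Y)), lam * step)
    return X

def learn_dictionary(Y, n_atoms=4, lam=0.05, n_outer=20, seed=0):
    """Alternate sparse coding and a least-squares dictionary update (unit-norm atoms)."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_outer):
        X = sparse_code(D, Y, lam)
        D = Y @ X.T @ np.linalg.inv(X @ X.T + 1e-6 * np.eye(n_atoms))
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D

def classify(dicts, y, lam=0.05):
    """Assign the class whose sub-dictionary yields the smallest residual."""
    res = {c: np.linalg.norm(y - D @ sparse_code(D, y[:, None], lam)[:, 0])
           for c, D in dicts.items()}
    return min(res, key=res.get)
```

In the full FDDL model, the coding stage would additionally penalize the within-class scatter of the coefficients and reward between-class scatter, which is what lets the coefficients themselves, not only the residuals, discriminate between classes.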
  • Speaker bio: Meng Yang received his Ph.D. in Computing from The Hong Kong Polytechnic University (2009–2012, advised by Associate Prof. Lei Zhang) and was a postdoctoral researcher at ETH Zurich (2012–2014, advised by Prof. Luc Van Gool); he is now an associate professor at Shenzhen University. From 2002 to 2009 he studied in the education-reform class and the School of Automation at Northwestern Polytechnical University, receiving his B.Eng. and M.Eng. degrees. Academic homepage: http://id.sciencenet.cn/u/mikemengyang. His research interests include sparse representation, dictionary learning, object recognition and detection, face recognition, and machine learning. His representative works include Collaborative Representation based Classification, Robust Sparse Coding, and Fisher Discrimination Dictionary Learning. He has published more than 20 international journal and conference papers, in journals such as International Journal of Computer Vision (IJCV), IEEE Trans. Neural Networks and Learning Systems (TNNLS), IEEE Trans. Image Processing (TIP), and Pattern Recognition (PR), including eight papers at ICCV, CVPR, and ECCV. He reviews for journals and conferences such as PAMI, TNNLS, TIP, PR, CVPR 2014/2015, and ECCV 2014. As of late January 2015, his Google Scholar citation count exceeded 1,400.
