
VALSE Paper Digest, Issue 151: Discriminative vs. Generative Classifiers

2023-11-28 19:27 | Published by: Cheng Yi (Institute of Computing Technology)


Paper Title:

Revisiting Discriminative vs. Generative Classifiers: Theory and Implications

Authors:

Chenyu Zheng (Renmin University of China), Guoqiang Wu (Shandong University), Fan Bao (Tsinghua University), Yue Cao (Beijing Academy of Artificial Intelligence), Chongxuan Li (Renmin University of China), Jun Zhu (Tsinghua University)


Watch on Bilibili:

https://www.bilibili.com/video/BV1eN411T78A/



Abstract:

A large-scale deep model pre-trained on massive labeled or unlabeled data transfers well to downstream tasks. Linear evaluation freezes the parameters of the pre-trained model and trains a linear classifier separately, which is efficient and attractive for transfer. However, little work has investigated the classifier used in linear evaluation beyond the default logistic regression. Inspired by the statistical efficiency of naive Bayes, the paper revisits the classical topic of discriminative vs. generative classifiers [1]. Theoretically, the paper considers the surrogate loss instead of the zero-one loss in its analyses and generalizes the classical results from binary cases to multiclass ones. We show that, under mild assumptions, multiclass naive Bayes requires O(log n) samples to approach its asymptotic error, while the corresponding multiclass logistic regression requires O(n) samples, where n is the feature dimension. To establish this, we present a multiclass H-consistency bound framework and an explicit bound for the logistic loss, which are of independent interest. Simulation results on a mixture of Gaussians validate our theoretical findings. Experiments on various pre-trained deep vision models show that naive Bayes consistently converges faster as the amount of data increases. Besides, naive Bayes shows promise in few-shot cases, and we observe the "two regimes" phenomenon in pre-trained supervised models. Our code is available at https://github.com/ML-GSAI/Revisiting-Dis-vs-Gen-Classifiers.
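The abstract's central claim — that a generative classifier (naive Bayes) can approach its asymptotic error with far fewer samples than discriminative logistic regression — can be illustrated with a small Gaussian-mixture simulation. The sketch below is our own minimal setup, not the paper's code: the dimension, class separation, and plain gradient-descent optimizer are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class Gaussian mixture: unit-variance Gaussians with shifted means.
n_dim, n_train, n_test = 50, 30, 2000
mu0, mu1 = np.zeros(n_dim), np.full(n_dim, 0.3)

def sample(n):
    # Balanced labels, then a Gaussian feature vector around the class mean.
    y = rng.permutation(np.repeat([0, 1], n // 2))
    x = rng.normal(np.where(y[:, None] == 1, mu1, mu0), 1.0)
    return x, y

Xtr, ytr = sample(n_train)
Xte, yte = sample(n_test)

def fit_predict_nb(Xtr, ytr, Xte):
    # Gaussian naive Bayes with known unit variance: estimate per-class
    # means and log-priors, then pick the class with the higher log-likelihood.
    means = [Xtr[ytr == c].mean(axis=0) for c in (0, 1)]
    priors = [np.log((ytr == c).mean()) for c in (0, 1)]
    ll = np.stack([-0.5 * ((Xte - m) ** 2).sum(axis=1) + p
                   for m, p in zip(means, priors)], axis=1)
    return ll.argmax(axis=1)

def fit_predict_lr(Xtr, ytr, Xte, lr=0.1, steps=500):
    # Logistic regression trained by full-batch gradient descent
    # (a deliberately simple stand-in for the discriminative classifier).
    A = np.hstack([Xtr, np.ones((len(Xtr), 1))])  # append bias column
    w = np.zeros(A.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-A @ w))
        w -= lr * A.T @ (p - ytr) / len(ytr)
    At = np.hstack([Xte, np.ones((len(Xte), 1))])
    return (At @ w > 0).astype(int)

acc_nb = (fit_predict_nb(Xtr, ytr, Xte) == yte).mean()
acc_lr = (fit_predict_lr(Xtr, ytr, Xte) == yte).mean()
print(f"naive Bayes: {acc_nb:.3f}, logistic regression: {acc_lr:.3f}")
```

Sweeping `n_train` over a range of small values and plotting both accuracies would reproduce, qualitatively, the convergence-speed gap the theory predicts; with only 30 training points in 50 dimensions, the generative model's stronger assumptions typically pay off.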


References:

[1] Ng, A. Y. and Jordan, M. I. On discriminative vs. generative classifiers: A comparison of logistic regression and naïve Bayes. In Advances in Neural Information Processing Systems, pp. 841–848, 2001.

[2] Awasthi, P., Mao, A., Mohri, M., and Zhong, Y. H-consistency bounds for surrogate loss minimizers. In International Conference on Machine Learning, volume 162, pp. 1117–1174, 2022.


About the Speaker:

Chenyu Zheng is a PhD student at the Gaoling School of Artificial Intelligence, Renmin University of China, advised by Chongxuan Li. His research interests lie in machine learning, including learning theory and generative models.


Homepage:

https://github.com/Chen-yu-Zheng



Special thanks to the main organizers of this paper digest session:

Monthly rotating AC: Xiao Wang (Beihang University)

Quarterly rotating AC: Lei Zhang (Chongqing University)
