VALSE Paper Quick Review, Issue 154: mPLUG-2: A Modularized Foundation Model

Published 2023-12-07 by Cheng Yi (Institute of Computing Technology)


Paper title:

mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video

Authors:

Haiyang Xu (Alibaba, co-first author), Qinghao Ye (Alibaba, co-first author), Ming Yan (Alibaba, corresponding author), Yaya Shi (Alibaba), Jiabo Ye (Alibaba), Yuanhong Xu (Alibaba), Chenliang Li (Alibaba), Bin Bi (Alibaba), Qi Qian (Alibaba), Wei Wang (Alibaba), Guohai Xu (Alibaba), Ji Zhang (Alibaba), Songfang Huang (Alibaba), Fei Huang (Alibaba), Jingren Zhou (Alibaba)


Bilibili video link:

https://www.bilibili.com/video/BV11c411Q75R/


Paper abstract:

Recent years have witnessed a big convergence of language, vision, and multi-modal pretraining. In this work, we present mPLUG-2, a new unified paradigm with modularized design for multi-modal pretraining, which can benefit from modality collaboration while addressing the problem of modality entanglement. In contrast to predominant paradigms of solely relying on sequence-to-sequence generation or encoder-based instance discrimination, mPLUG-2 introduces a multi-module composition network by sharing common universal modules for modality collaboration and disentangling different modality modules to deal with modality entanglement. It is flexible to select different modules for different understanding and generation tasks across all modalities including text, image, and video. Empirical study shows that mPLUG-2 achieves state-of-the-art or competitive results on a broad range of over 30 downstream tasks, spanning multi-modal tasks of image-text and video-text understanding and generation, and uni-modal tasks of text-only, image-only, and video-only understanding. Notably, mPLUG-2 shows new state-of-the-art results of 48.0 top-1 accuracy and 80.3 CIDEr on the challenging MSRVTT video QA and video caption tasks with a far smaller model size and data scale. It also demonstrates strong zero-shot transferability on vision-language and video-language tasks. We also introduce mPLUG-Owl, a modularized large language model based on mPLUG-2, which shows incredible zero-shot performance on many aspects.
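
The modularized composition described in the abstract is easiest to picture in code. Below is a minimal, hypothetical PyTorch sketch of the general pattern (a shared universal module for modality collaboration, disentangled per-modality modules to avoid entanglement, and per-task module selection). It is not the actual mPLUG-2 implementation; every class name, feature dimension, and task label here is an illustrative assumption.

import torch
import torch.nn as nn

class UniversalModule(nn.Module):
    """Shared module reused by every modality (modality collaboration)."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layer(x)

class ModularModel(nn.Module):
    """Composes per-modality modules with the shared universal module."""
    def __init__(self, dim: int = 256):
        super().__init__()
        # Disentangled modality-specific projections; keeping them in
        # separate modules is what avoids modality entanglement.
        self.encoders = nn.ModuleDict({
            "text": nn.Linear(300, dim),    # hypothetical token-feature width
            "image": nn.Linear(768, dim),   # hypothetical patch-feature width
            "video": nn.Linear(1024, dim),  # hypothetical frame-feature width
        })
        self.universal = UniversalModule(dim)  # shared across all modalities
        self.head = nn.Linear(dim, dim)        # stand-in for a generation head

    def forward(self, feats: torch.Tensor, modality: str,
                task: str = "understanding") -> torch.Tensor:
        # Select a module composition per modality and per task.
        h = self.encoders[modality](feats)
        h = self.universal(h)
        return self.head(h) if task == "generation" else h

model = ModularModel()
video_feats = torch.randn(2, 8, 1024)  # batch of 2 clips, 8 frames each
out = model(video_feats, modality="video", task="generation")
print(out.shape)  # torch.Size([2, 8, 256])

The design point the sketch illustrates: because the per-modality encoders are kept as separate modules while all of them route through one shared universal module, a single model can share knowledge across text, image, and video yet still swap in a different module composition for each understanding or generation task.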


References:

[1] Xu H, Ye Q, Yan M, et al. mPLUG-2: A modularized multi-modal foundation model across text, image and video[C]. ICML 2023.

[2] Ye Q, Xu H, Xu G, et al. mPLUG-Owl: Modularization empowers large language models with multimodality[J]. arXiv preprint arXiv:2304.14178, 2023.


Paper link:

https://arxiv.org/abs/2302.00402


Code link:

https://github.com/X-PLUG/mPLUG-2


Speaker bio:

Qinghao Ye is an Algorithm Engineer at DAMO Academy, Alibaba Group. He received his M.S. degree in Computer Science & Engineering from the University of California, San Diego in 2022. He has authored multiple publications at venues including ICCV, ACL, and ICML. According to Google Scholar, his work has been cited over 400 times and he has an h-index of 9.



Special thanks to the main organizers of this paper quick review:

Monthly rotating AC: Peng Hu (Sichuan University)

Quarterly rotating AC: Lei Zhang (Chongqing University)
