
VALSE Webinar 2024-06-05, Issue 15 of the year (No. 350 overall)



Speaker: Dongfang Liu (Rochester Institute of Technology)

Title: Prompt Tuning as Sustainable Fast Learner


Speaker: Guangyi Chen (Carnegie Mellon University / Mohamed bin Zayed University of Artificial Intelligence)

Title: Prompt Learning Meets Dense Context for Vision-Language Models


Speaker: Dongfang Liu (Rochester Institute of Technology)

Time: June 5, 2024 (Wednesday), 20:00 (Beijing Time)

Title: Prompt Tuning as Sustainable Fast Learner


Speaker bio:

Dr. Liu's research centers on artificial general intelligence, with the broader goal of building versatile AI systems that address pressing societal problems. His research has been funded by the National Science Foundation (NSF). He has published extensively at top venues, including CVPR, ECCV, ICCV, ICLR, NIPS, ICML, AAAI, IJCAI, ACL, EMNLP, WWW, and IROS, among others.

 

Beyond his research, Dr. Liu is actively engaged in the academic community. Since 2023 he has served as an Area Chair for CVPR and as a senior program committee member for AAAI and IJCAI. He is also an associate editor of IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), Multimedia Tools and Applications (MTAP), and the ACM Journal on Autonomous Transportation Systems (JATS).


Homepage:

https://dongfang-liu.github.io/


Abstract:

As large pre-trained models continue to grow, adapting them to downstream tasks via full fine-tuning has become computationally prohibitive. Prompt tuning is an efficient alternative: it updates only a small set of task-specific prompts while keeping most of the model parameters frozen. This talk examines the efficacy of prompt tuning in two domains. In computer vision, we introduce Effective and Efficient Visual Prompt Tuning (E2VPT), which leverages visual prompts, key-value prompts, and specialized pruning to achieve state-of-the-art performance on various vision benchmarks while updating only a fraction of the parameters that traditional fine-tuning requires. We then explore prompt tuning in inertial confinement fusion (ICF) research: our framework integrates prompt tuning with reservoir computing to accurately forecast hot-electron dynamics from limited experimental fusion data by tuning fusion-centric prompts, demonstrating the potential of prompt tuning in specialized scientific domains.
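To make the parameter-efficiency argument concrete, below is a minimal PyTorch sketch of visual prompt tuning on a frozen ViT backbone. It illustrates the general technique only, not E2VPT itself: the key-value prompts and pruning of [1] are omitted, and the backbone attribute names (patch_embed, cls_token, pos_embed, blocks, norm) assume a timm-style ViT.

```python
# Minimal sketch of visual prompt tuning: learn a handful of prompt
# tokens and a linear head while every backbone weight stays frozen.
# Backbone attribute names assume a timm-style ViT; E2VPT's key-value
# prompts and pruning are intentionally omitted.
import torch
import torch.nn as nn

class VisualPromptTuning(nn.Module):
    def __init__(self, backbone, num_prompts=10, embed_dim=768, num_classes=100):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False                     # freeze the backbone
        # the only trainable parameters: prompt tokens and a task head
        self.prompts = nn.Parameter(torch.randn(1, num_prompts, embed_dim) * 0.02)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        b = x.size(0)
        tokens = self.backbone.patch_embed(x)                     # (B, N, D)
        cls = self.backbone.cls_token.expand(b, -1, -1)           # (B, 1, D)
        tokens = torch.cat([cls, tokens], dim=1) + self.backbone.pos_embed
        # insert prompt tokens right after [CLS]; they carry no positions
        prompts = self.prompts.expand(b, -1, -1)                  # (B, P, D)
        tokens = torch.cat([tokens[:, :1], prompts, tokens[:, 1:]], dim=1)
        for blk in self.backbone.blocks:
            tokens = blk(tokens)
        tokens = self.backbone.norm(tokens)
        return self.head(tokens[:, 0])                            # predict from [CLS]
```

Only self.prompts and self.head receive gradients, so for a ViT-Base (~86M parameters) the optimizer sees well under 1% of the total, e.g. torch.optim.AdamW((p for p in model.parameters() if p.requires_grad), lr=1e-3).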

 

References:

[1] Han, C., Wang, Q., Cui, Y., Cao, Z., Wang, W., Qi, S., and Liu, D., "E^2VPT: An Effective and Efficient Approach for Visual Prompt Tuning". ICCV 2023.


Speaker: Guangyi Chen (Carnegie Mellon University / Mohamed bin Zayed University of Artificial Intelligence)

Time: June 5, 2024 (Wednesday), 20:30 (Beijing Time)

Title: Prompt Learning Meets Dense Context for Vision-Language Models


Speaker bio:

Guangyi Chen is currently a Postdoctoral Research Fellow at Carnegie Mellon University, Pittsburgh, USA, and Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), Abu Dhabi, UAE. He received his B.S. and Ph.D. degrees from the Department of Automation, Tsinghua University, China, in 2016 and 2021, respectively. His research interests include computer vision and machine learning, with particular expertise in causal representation learning, attention learning, and video understanding. He has published around 20 papers as first or co-first author in top-tier journals and conferences such as CVPR, ICCV, ICLR, ICML, ECCV, and IEEE TIP.

 

Homepage:

https://chengy12.github.io/

 

Abstract:

Recent advances indicate that large-scale pre-trained vision-language models (VLMs), such as CLIP, offer a promising alternative for high-quality visual representation learning using natural language supervision. To elicit the pre-trained knowledge of VLMs for downstream tasks, prompt learning, a key parameter-efficient fine-tuning method, has proven highly successful. However, a gap remains: language prompts typically convey coarse, high-level descriptions, whereas vision offers detailed, fine-grained context. This talk introduces how to bridge this gap and leverage dense visual context to enhance prompt learning. First, we show that multiple comprehensive prompts can be learned to describe diverse category characteristics, guided by dense visual context. Second, by transforming the pre-trained image-text matching task into a pixel-text matching task, we can learn prompts that facilitate dense prediction tasks such as segmentation and detection.
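As a concrete reference point for what prompt learning means here, below is a minimal sketch in the CoOp style that PLOT [1] builds on: a shared set of learnable context vectors replaces the hand-written prompt prefix, and only those vectors are trained. The attribute names (token_embedding, positional_embedding, ln_final, text_projection) follow the open-source OpenAI CLIP model; PLOT's multiple prompts with optimal-transport matching and DenseCLIP's pixel-text matching [2] are not reproduced.

```python
# Minimal sketch of CoOp-style prompt learning for CLIP: learnable
# context vectors are spliced in front of each class name's token
# embeddings, and only those vectors are optimized. Attribute names
# follow the open-source OpenAI CLIP implementation.
import torch
import torch.nn as nn
import clip  # pip install git+https://github.com/openai/CLIP.git

class PromptLearner(nn.Module):
    def __init__(self, clip_model, classnames, n_ctx=16):
        super().__init__()
        dtype = clip_model.dtype
        dim = clip_model.ln_final.weight.shape[0]
        self.ctx = nn.Parameter(torch.empty(n_ctx, dim, dtype=dtype))
        nn.init.normal_(self.ctx, std=0.02)               # the learnable prompt
        # tokenize "X X ... X <classname>." with n_ctx placeholder tokens
        texts = [" ".join(["X"] * n_ctx) + " " + c + "." for c in classnames]
        tokenized = torch.cat([clip.tokenize(t) for t in texts])     # (C, 77)
        with torch.no_grad():
            emb = clip_model.token_embedding(tokenized).type(dtype)  # (C, 77, D)
        self.register_buffer("prefix", emb[:, :1])           # SOS token
        self.register_buffer("suffix", emb[:, 1 + n_ctx:])   # class name + EOS
        self.register_buffer("tokenized", tokenized)

    def forward(self):
        ctx = self.ctx.unsqueeze(0).expand(self.prefix.size(0), -1, -1)
        return torch.cat([self.prefix, ctx, self.suffix], dim=1)     # (C, 77, D)

def encode_prompts(clip_model, prompt_emb, tokenized):
    """Run pre-embedded prompts through CLIP's frozen text encoder."""
    x = prompt_emb + clip_model.positional_embedding.type(clip_model.dtype)
    x = clip_model.transformer(x.permute(1, 0, 2)).permute(1, 0, 2)  # NLD <-> LND
    x = clip_model.ln_final(x).type(clip_model.dtype)
    # take features at each sequence's EOS position, project to joint space
    eos = tokenized.argmax(dim=-1)
    return x[torch.arange(x.size(0)), eos] @ clip_model.text_projection
```

Class scores are then cosine similarities between image features and these text features. PLOT extends this by learning several prompts per class and matching them to local visual features via optimal transport, while DenseCLIP instead compares text features against every pixel embedding for segmentation and detection.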

 

References:

[1] Guangyi Chen, Weiran Yao, Xiangchen Song, Xinyue Li, Yongming Rao, Kun Zhang, "PLOT: Prompt Learning with Optimal Transport for Vision-Language Models". ICLR 2023.

[2] Yongming Rao, Wenliang Zhao, Guangyi Chen, Yansong Tang, Zheng Zhu, Guan Huang, Jie Zhou, Jiwen Lu, "DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting". CVPR 2022.


Host: Tianfei Zhou (Beijing Institute of Technology)


Host bio:

Tianfei Zhou is a professor and Ph.D. supervisor at the School of Computer Science, Beijing Institute of Technology, and a recipient of a national-level young-talent program. He received his Ph.D. from Beijing Institute of Technology in 2017 and then conducted research at Lenovo Research, the Inception Institute of Artificial Intelligence (IIAI), and ETH Zurich, before joining the School of Computer Science at Beijing Institute of Technology in 2023. His main research interests are artificial intelligence and computer vision, and he has published more than 50 papers in top international journals and conferences in these areas, such as IEEE TPAMI, ICML, ICLR, NeurIPS, CVPR, and ICCV. His first-author work received the Best Paper Award at MICCAI 2022, a top conference in medical image analysis, and a Youth Outstanding Paper nomination at the World Artificial Intelligence Conference (2021); he is listed in Stanford's "World's Top 2% Scientists" ranking and has led teams to first place in seven international academic competitions. He serves as a guest editor of IEEE TCSVT and an associate editor of Multimedia Tools and Applications, among other roles.

 

Homepage:

https://www.tfzhou.com/



Special thanks to the main organizer of this Webinar:

Organizing AC: Tianfei Zhou (Beijing Institute of Technology)


How to Participate

1. VALSE's weekly Webinar is live-streamed on Bilibili. Search for VALSE_Webinar on Bilibili and follow us!

Live stream:

https://live.bilibili.com/22300737

Past recordings:

https://space.bilibili.com/562085182/ 


2. The VALSE Webinar usually takes place on Wednesday evenings at 20:00 (Beijing Time), though it is occasionally rescheduled to accommodate speakers' time zones. To stay up to date, follow the VALSE WeChat public account (valse_wechat) or join the VALSE QQ T group (group number: 863867505).


*Note: applications to join the VALSE QQ group must include your name, affiliation, and role; all three are required for verification. After joining, please set your group nickname to your real name, role, and affiliation. Roles: T for university and research-institute staff; I for industry R&D; D for Ph.D. students; M for master's students.


3. The VALSE WeChat public account generally announces the following week's Webinar each Thursday.


4. You can also find Webinar information directly on the VALSE homepage: http://valser.org/. Slides of each talk (with the speaker's permission) are posted at the bottom of the corresponding announcement on the VALSE website.



