
VALSE Webinar 25-22 (No. 393 overall): Federated Learning in the Era of Large Models: Advances and Challenges



Speaker: Mang Ye (Wuhan University)

Title: Adaptive Federated Fine-Tuning of Large Models


Speaker: Yang Liu (The Hong Kong Polytechnic University)

Title: Federated Domain Adaptation of Large Language Models


Speaker: Xiaoxiao Li (University of British Columbia)

Title: Efficient Federated Learning for Large Models: Advances in Aggregation, Communication, and Pruning


Speaker: Mang Ye (Wuhan University)

Talk time: Wednesday, July 30, 2025, 20:00 (Beijing time)

Title: Adaptive Federated Fine-Tuning of Large Models


Speaker bio:

Mang Ye is a Professor in the School of Computer Science at Wuhan University, Head of the Department of Intelligent Science, and a recipient of China's National High-Level Young Talents program. His research covers multimodal computing, federated learning, and medical artificial intelligence. He has published over 80 CCF-A papers as first or corresponding author, with more than 13,000 Google Scholar citations. He serves as an Associate Editor of CCF-A SCI journals including IEEE TIP and IEEE TIFS, and as an Area Chair for conferences such as CVPR, ICLR, NeurIPS, and ICML. He has led more than 10 research projects, including an NSFC-Hong Kong Joint Fund project and a task of a MOST National Key R&D Program. His honors include being listed among Stanford's "World's Top 2% Scientists" from 2021 to 2024 and the 2022 Baidu AI Chinese Young Scholar award.

 

Homepage:

https://marswhu.github.io/

 

Abstract:

Applications of large models face a bottleneck in which "the public domain has compute but no data, while private domains have data but no compute." Federated fine-tuning of large models offers a new learning paradigm for vertical-domain applications to break out of the "data-limited" and "privacy-compliance" dilemmas. It faces two main challenges: first, how to achieve efficient knowledge updating and adaptive domain fine-tuning while protecting data privacy; second, how to guarantee trustworthy generalization in a distributed environment while effectively coping with data heterogeneity and noise. Focusing on the federated fine-tuning of large models, this talk shares our team's latest results published at ICML 2025, exploring efficient adaptive federated fine-tuning methods that are domain-generalizable, modality-aligned, noise-balanced, and task-continual.
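
As a rough illustration of this setting (a generic sketch, not the specific methods of the ICML 2025 papers listed below), the following Python snippet shows parameter-efficient federated fine-tuning: each client updates only small adapter weights of a frozen LLM on its private data, and a server aggregates the adapters. All names are hypothetical.

```python
# Generic sketch of federated parameter-efficient fine-tuning, for intuition
# only; not the method of the referenced ICML 2025 papers. All names are
# hypothetical.
from typing import Dict, List

import numpy as np

# Small trainable adapter weights (e.g., LoRA-style); the LLM backbone stays frozen.
Adapter = Dict[str, np.ndarray]

def client_update(adapter: Adapter, grads: Adapter, lr: float = 1e-3) -> Adapter:
    """One local step on private data: only the adapter is updated and later shared."""
    return {k: w - lr * grads[k] for k, w in adapter.items()}

def server_aggregate(client_adapters: List[Adapter], num_samples: List[int]) -> Adapter:
    """FedAvg-style aggregation: sample-weighted average of adapter weights,
    so raw private data never leaves the clients."""
    total = float(sum(num_samples))
    return {
        k: sum((n / total) * a[k] for a, n in zip(client_adapters, num_samples))
        for k in client_adapters[0]
    }
```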

 

References:

[1] Yangxu Liao, Mang Ye, et al., "Splitting with Importance-aware Updating for Heterogeneous Federated Learning with Large Language Models," in ICML, 2025.

[2] Chengying Fang, Mang Ye, et al., "FedPHA: Federated Prompt Learning for Heterogeneous Client Adaptation," in ICML, 2025.

[3] Yihao Yang, Mang Ye, et al., "Federated Disentangled Tuning with Textual Prior Decoupling and Visual Dynamic Adaptation," in ICML, 2025.

[4] Xuankun Rong, Mang Ye, et al., "CAN: Leveraging Clients as Navigators for Generative Replay in Federated Continual Learning," in ICML, 2025.


Speaker: Yang Liu (The Hong Kong Polytechnic University)

Talk time: Wednesday, July 30, 2025, 20:30 (Beijing time)

Title: Federated Domain Adaptation of Large Language Models


Speaker bio:

Dr. Yang Liu is currently an Associate Professor (Presidential Young Scholar) in the Department of Computing, with a joint appointment in the Department of Data Science & Artificial Intelligence, at The Hong Kong Polytechnic University. Before joining PolyU, she was an Associate Professor at the Institute for AI Industry Research (AIR), Tsinghua University. Her research interests include machine learning, federated learning, trustworthy AI, statistical mechanics, and the industrial applications of these technologies. Dr. Liu holds over 30 patents, and her research has been published in top-tier scientific conferences and journals, accumulating over 20,000 citations. She is a co-author of "Federated Learning," the first comprehensive monograph on the subject. Her research has been recognized with multiple awards, including the IEEE Computer Society Best Paper Award, the AAAI-IAAI Innovative Applications of AI Award, the IJCAI Innovation Award, and the Frontiers of Science Award from the International Congress of Basic Science. Dr. Liu has served as an Associate Editor for ACM TIST since 2021 and as a Guest Editor for multiple journals. She has also co-chaired several workshops at leading AI conferences. In 2022, she was recognized by MIT Technology Review as one of the "Privacy-Preserving Computation Tech Innovators China".


Homepage:

https://sites.google.com/site/yangliuveronica/home


Abstract:

In an era where Artificial Intelligence (AI) is increasingly integrated into our daily lives, bringing the power of large language models (LLMs) to private domains remains a crucial challenge. Federated learning (FL) emerges as a promising paradigm for developing private intelligence, enabling AI models to be trained collaboratively on decentralized devices without exposing private data. This talk will delve into recent advances in federated learning, with a focus on key techniques and applications that foster domain adaptation of LLMs by enabling collaboration between LLMs and small domain models.
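
As one illustration of what LLM and small-domain-model collaboration can look like (a generic "logit offset" pattern, not necessarily the approach presented in this talk), the sketch below lets a federatedly trained small domain model steer a frozen general-purpose LLM at decoding time. All names are hypothetical.

```python
# Hypothetical illustration of LLM / small-domain-model collaboration at
# inference time; a generic pattern for intuition, not this talk's method.
import numpy as np

def domain_adapted_logits(llm_logits: np.ndarray,
                          small_tuned_logits: np.ndarray,
                          small_base_logits: np.ndarray,
                          alpha: float = 1.0) -> np.ndarray:
    """Shift the frozen LLM's next-token logits by the domain-specific offset
    that the small model learned via federated training on private data:
        logits = llm + alpha * (small_tuned - small_base)
    Only the small model ever touches the private data."""
    return llm_logits + alpha * (small_tuned_logits - small_base_logits)
```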


Speaker: Xiaoxiao Li (University of British Columbia)

Talk time: Wednesday, July 30, 2025, 21:00 (Beijing time)

Title: Efficient Federated Learning for Large Models: Advances in Aggregation, Communication, and Pruning


Speaker bio:

Dr. Xiaoxiao Li is an Assistant Professor in the Department of Electrical and Computer Engineering at the University of British Columbia, a Faculty Member at the Vector Institute, and an Adjunct Professor at Yale University. She holds a Canada Research Chair (Tier II) in Responsible AI and is recognized as a Canada CIFAR AI Chair. Dr. Li's research aims to enhance the trustworthiness and efficiency of AI models, bridging the gap between cutting-edge AI research and practical real-world applications, such as healthcare. Her current interests include mechanistic analysis of large language and vision-language models (LLMs/VLMs), developing hypothesis-driven evaluations, and advancing methodologies toward artificial general intelligence (AGI). Dr. Li has published over 40 papers at top ML/AI venues, including ICML, ICLR, NeurIPS, CVPR, ECCV, AAAI, and Nature Methods.


Homepage:

https://xxlya.github.io/


Abstract:

This talk provides an integrated overview of cutting-edge techniques to enhance the efficiency and performance of federated learning (FL), particularly for Large Language Models (LLMs). We begin by exploring how model merging strategies, such as those in FedSoup and Local Superior Soups, can create more robust and generalized models while accelerating convergence in decentralized environments. The presentation then pivots to the unique challenges of improving reasoning in LLMs in a federated setting, introducing a novel paradigm, FedTextGrad, which explores the use of textual prompts for gradient updates instead of traditional numerical methods. Finally, to tackle the significant deployment challenge of massive models, we will delve into extreme model compression with "DARE the Extreme," a powerful pruning technique that drastically reduces model size without compromising performance. This talk will offer attendees a cohesive narrative on advancing federated learning, from foundational algorithmic improvements to practical solutions for the era of large-scale, decentralized AI.
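
For intuition, here are simplified generic sketches of two ingredients named above: model-soup-style weight averaging and DARE-style drop-and-rescale sparsification of fine-tuned parameter deltas. They are not the FedSoup, Local Superior Soups, or "DARE the Extreme" algorithms themselves, and all names are hypothetical.

```python
# Simplified sketches of model-soup merging and DARE-style delta sparsification.
from typing import Dict, List

import numpy as np

Weights = Dict[str, np.ndarray]

def soup_merge(models: List[Weights]) -> Weights:
    """Uniformly average the weights of models fine-tuned from the same
    initialization (the basic "model soup" recipe)."""
    return {k: np.mean([m[k] for m in models], axis=0) for k in models[0]}

def dare_sparsify(base: Weights, finetuned: Weights,
                  drop_rate: float = 0.9, seed: int = 0) -> Weights:
    """Randomly drop most of the fine-tuned parameter deltas and rescale the
    survivors by 1 / (1 - drop_rate), preserving the delta in expectation."""
    rng = np.random.default_rng(seed)
    out = {}
    for k in base:
        delta = finetuned[k] - base[k]
        keep = rng.random(delta.shape) >= drop_rate
        out[k] = base[k] + keep * delta / (1.0 - drop_rate)
    return out
```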


Host: Liangqiong Qu (The University of Hong Kong)


Host bio:

Liangqiong Qu is an Assistant Professor in the School of Computing and Data Science at The University of Hong Kong. She received her Ph.D. in computer science from the University of Chinese Academy of Sciences and City University of Hong Kong, and was a postdoctoral researcher at Stanford University and the University of North Carolina. Her research focuses on computer vision and medical image analysis; she has published more than 60 papers in top international journals and conferences, including PNAS, Nature Machine Intelligence, Cancer Cell, Nature Methods, CVPR, ICLR, MICCAI, and AAAI. She received the Best Paper Award at the CHIL conference and the First Prize of the Liaoning Provincial Natural Science Academic Achievement Award.


Homepage:

https://liangqiong.github.io/



Special thanks to the main organizers of this VALSE Webinar:

Organizing AC: Liangqiong Qu (The University of Hong Kong)
