Speaker: Kun Zhang (Carnegie Mellon University & MBZUAI)
Title: Advances in Causal Representation Learning: Discovery of the Hidden World

Speaker: Peng Cui (Tsinghua University)
Title: Causality and Heterogeneity Perspectives on the Generalization Safety of Artificial Intelligence

Speaker: Kun Zhang (Carnegie Mellon University & MBZUAI)
Time: Wednesday, November 22, 2023, 20:00 (Beijing time)
Title: Advances in Causal Representation Learning: Discovery of the Hidden World

Speaker Bio:
Kun Zhang is currently on leave from Carnegie Mellon University (CMU), where he is an associate professor of philosophy and an affiliate faculty member in the machine learning department; he is working as a Professor of Machine Learning, Acting Chair of the Machine Learning Department, and Director of the Center for Integrative AI at Mohamed bin Zayed University of Artificial Intelligence (MBZUAI). He has been actively developing methods for automated causal discovery from various kinds of data and investigating machine learning problems, including transfer learning, representation learning, and reinforcement learning, from a causal perspective. He frequently serves as a senior area chair, area chair, or senior program committee member for major conferences in machine learning and artificial intelligence, including UAI, NeurIPS, ICML, IJCAI, AISTATS, and ICLR. He was a general & program co-chair of the first Conference on Causal Learning and Reasoning (CLeaR 2022), a program co-chair of the 38th Conference on Uncertainty in Artificial Intelligence (UAI 2022), and is a general co-chair of UAI 2023.

Homepage: https://www.andrew.cmu.edu/user/kunz1/

Abstract:
Causality is a fundamental notion in science, engineering, and even in machine learning. Causal representation learning aims to reveal the underlying high-level hidden causal variables and their relations. It can be seen as a special case of causal discovery, whose goal is to recover the underlying causal structure or causal model from observational data. The modularity property of a causal system implies minimal and independent changes of causal representations, and in this talk we show how such properties make it possible to recover the underlying causal representations from observational data with identifiability guarantees: under appropriate assumptions, the learned representations are consistent with the underlying causal process. Various problem settings are considered, involving independent and identically distributed (i.i.d.) data, temporal data, or data with distribution shift as input. We demonstrate when identifiable causal representation learning can benefit from flexible deep learning and when suitable parametric assumptions have to be imposed on the causal process, with various examples and applications.
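To make the problem setting concrete, here is a minimal NumPy sketch of the kind of data-generating process that causal representation learning targets. The two-variable latent graph z1 -> z2, the particular mixing map, and the domain parameters are illustrative assumptions of this sketch, not the constructions used in the referenced papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_domain(n, mech_strength):
    """Simulate one domain: hidden causal variables z1 -> z2, then nonlinear mixing."""
    z1 = rng.normal(size=n)                                      # root latent cause
    z2 = mech_strength * np.tanh(z1) + 0.5 * rng.normal(size=n)  # z1 -> z2; only this mechanism changes across domains
    z = np.column_stack([z1, z2])
    # Unknown nonlinear mixing g: R^2 -> R^2 (a fixed two-layer map, purely illustrative).
    W1 = np.array([[1.2, -0.7], [0.4, 1.1]])
    W2 = np.array([[0.9, 0.3], [-0.5, 1.3]])
    x = np.tanh(z @ W1.T) @ W2.T + 0.1 * z                       # observed variables
    return x, z

# Two domains that differ only in the z1 -> z2 mechanism ("minimal/independent changes").
(x_a, z_a), (x_b, z_b) = sample_domain(5000, 1.0), sample_domain(5000, -2.0)

# Each observed coordinate entangles both latents, and its relation to them shifts
# across domains; identifiable causal representation learning exploits such changes
# (plus assumptions on the mixing/causal process) to recover z1 and z2 from x alone.
for name, x, z in [("domain A", x_a, z_a), ("domain B", x_b, z_b)]:
    corr = np.corrcoef(np.column_stack([x, z]), rowvar=False)[:2, 2:]
    print(name, "corr(x_i, z_j):\n", np.round(corr, 2))
```

Running the sketch shows that the observation-latent correlations differ between the two domains; under the assumptions discussed in the talk, exactly this footprint of change is what makes the hidden variables recoverable.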
References:
[1] Peter Spirtes, Clark Glymour, Richard Scheines, "Causation, Prediction, and Search," MIT Press, Cambridge, MA, 2nd edition, 2001.
[2] Feng Xie, Ruichu Cai, Biwei Huang, Clark Glymour, Zhifeng Hao, Kun Zhang, "Generalized Independent Noise Condition for Estimating Linear Non-Gaussian Latent Variable Causal Graphs," Conference on Neural Information Processing Systems (NeurIPS) 2020. https://proceedings.neurips.cc/paper/2020/file/aa475604668730af60a0a87cc92604da-Paper.pdf
[3] Feng Xie, Biwei Huang, Zhengming Chen, Yangbo He, Zhi Geng, Kun Zhang, "Estimation of Linear Non-Gaussian Latent Hierarchical Structure," International Conference on Machine Learning (ICML) 2022. https://proceedings.mlr.press/v162/xie22a.html
[4] Biwei Huang*, Charles Low*, Feng Xie, Clark Glymour, Kun Zhang, "Latent Hierarchical Causal Structure Discovery with Rank Constraints," Conference on Neural Information Processing Systems (NeurIPS) 2022. https://openreview.net/pdf?id=lIeuKiTZsLY
[5] Weiran Yao, Guangyi Chen, Kun Zhang, "Temporally Disentangled Representation Learning," Conference on Neural Information Processing Systems (NeurIPS) 2022. https://openreview.net/pdf?id=Vi-sZWNA_Ue
[6] Lingjing Kong, Shaoan Xie, Weiran Yao, Yujia Zheng, Guangyi Chen, Petar Stojanov, Victor Akinwande, Kun Zhang, "Partial Disentanglement for Domain Adaptation," International Conference on Machine Learning (ICML) 2022. https://proceedings.mlr.press/v162/kong22a.html
[7] Shaoan Xie, Lingjing Kong, Mingming Gong, Kun Zhang, "Multi-Domain Image Generation and Translation with Identifiability Guarantees," International Conference on Learning Representations (ICLR) 2023. https://openreview.net/pdf?id=U2g8OGONA_V

Speaker: Peng Cui (Tsinghua University)
Time: Wednesday, November 22, 2023, 20:30 (Beijing time)
Title: Causality and Heterogeneity Perspectives on the Generalization Safety of Artificial Intelligence

Speaker Bio:
Peng Cui is a tenured associate professor and PhD advisor in the Department of Computer Science at Tsinghua University. His research interests focus on causality-inspired stable prediction and decision making, and large-scale network representation learning. Since 2016 he has worked on integrating ideas from causal statistics with the machine learning framework, proposing and developing a theory and methodology of causality-inspired stable learning, which has shown notable application value in scenarios such as smart healthcare and the internet economy. He has published more than 100 papers at top international conferences such as ICML and KDD and in journals such as Nature Machine Intelligence, and has received international conference or journal paper awards seven times. He serves on the editorial boards of international journals including IEEE TKDE, ACM TOMM, ACM TIST, IEEE TBD, and KAIS. He has received the Second Prize of the National Natural Science Award, the First Prize of the Natural Science Award of the Ministry of Education, and the CCF-IEEE CS Young Computer Scientist Award; he was selected as a Young Top-Notch Talent of the national Ten Thousand Talents Program and as an ACM Distinguished Scientist; he serves as a member of the 9th National Committee of the China Association for Science and Technology and as chair of the 23rd Academic Committee of CCF YOCSEF.

Abstract:
Recent advances in artificial intelligence, including large models exemplified by GPT, have achieved performance breakthroughs in many fields. However, when these systems and techniques are applied to risk-sensitive domains such as healthcare, justice, and industrial production, current AI shows serious deficiencies in out-of-distribution generalization. A deeper reason may lie in the foundation of today's statistical machine learning: correlational statistics is itself unstable, unexplainable, unfair, and untraceable. Compared with correlational statistics, causal statistics provides a stronger theoretical basis for guaranteeing out-of-distribution generalization, but how to integrate causal statistics into the machine learning framework remains an open and challenging fundamental problem. In this talk, the speaker will review the line of research that brings causal-statistical ideas into machine learning, and highlight recent progress on using invariance and heterogeneity to improve the out-of-distribution generalization ability of machine learning.
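As a deliberately simplified illustration of how heterogeneity across environments can expose an invariant predictor (in the spirit of invariance-based approaches such as reference [4] below, not the speaker's stable learning method itself), the NumPy sketch that follows simulates two environments in which a stable feature S and a spurious feature V both predict Y; the environment setup, coefficients, and the coefficient-stability check are assumptions of this sketch.

```python
import numpy as np
from itertools import chain, combinations

rng = np.random.default_rng(0)

def make_env(n, spurious_strength):
    """One environment: stable feature S causes Y; spurious feature V tracks Y
    with an environment-dependent strength."""
    S = rng.normal(size=n)
    Y = 1.0 * S + 0.1 * rng.normal(size=n)                # invariant mechanism: Y = S + noise
    V = spurious_strength * Y + 0.1 * rng.normal(size=n)  # spurious, environment-specific
    return np.column_stack([S, V]), Y

envs = [make_env(2000, 2.0), make_env(2000, -1.0)]        # heterogeneous environments
names = ["S", "V"]

def env_coeffs(subset):
    """Per-environment least-squares coefficients for a candidate feature subset."""
    betas = []
    for X, Y in envs:
        beta, *_ = np.linalg.lstsq(X[:, subset], Y, rcond=None)
        betas.append(beta)
    return np.array(betas)

# Check every non-empty feature subset: the subset whose coefficients barely move
# across environments ({S} here) is the invariant, stable predictor; any predictor
# leaning on V looks good in-distribution but shifts across environments.
for subset in chain.from_iterable(combinations(range(2), r) for r in (1, 2)):
    B = env_coeffs(list(subset))
    gap = np.abs(B[0] - B[1]).max()
    label = ",".join(names[j] for j in subset)
    print(f"{{{label}}}: per-env coefficients {np.round(B, 2).tolist()}, max gap {gap:.2f}")
```

The point of the toy example is only that exploiting heterogeneity, rather than pooling it away, is what reveals which relations stay invariant and therefore transfer out of distribution.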
References:
[1] Shen Z, Liu J, He Y, et al. Towards out-of-distribution generalization: A survey[J]. arXiv preprint arXiv:2108.13624, 2021.
[2] Rudin C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead[J]. Nature Machine Intelligence, 2019, 1(5): 206-215.
[3] Corbett-Davies S, Goel S. The measure and mismeasure of fairness: A critical review of fair machine learning[J]. arXiv preprint arXiv:1808.00023, 2018.
[4] Bühlmann P. Invariance, causality and robustness[J]. Statistical Science, 2020.
[5] Imbens G W, Rubin D B. Causal inference in statistics, social, and biomedical sciences[M]. Cambridge University Press, 2015.
[6] Cui P, Athey S. Stable learning establishes some common ground between causal inference and machine learning[J]. Nature Machine Intelligence, 2022, 4(2): 110-115.
[7] Xu R, Zhang X, Shen Z, et al. A theoretical analysis on independence-driven importance weighting for covariate-shift generalization[C]//International Conference on Machine Learning. PMLR, 2022: 24803-24829.
[8] Liu J, Hu Z, Cui P, et al. Heterogeneous risk minimization[C]//International Conference on Machine Learning. PMLR, 2021: 6804-6814.
[9] Liu J, Wu J, Pi R, et al. Measure the Predictive Heterogeneity[C]//The Eleventh International Conference on Learning Representations, 2023.
[10] Liu J, Wang T, Cui P, et al. On the Need for a Language Describing Distribution Shifts: Illustrations on Tabular Datasets[J]. arXiv preprint arXiv:2307.05284, 2023.

Host: Kun Kuang (Zhejiang University)

Host Bio:
Kun Kuang is an associate professor and PhD advisor in the College of Computer Science at Zhejiang University, and deputy director of its Department of Artificial Intelligence. His main research interests include causal inference, causality-inspired trustworthy learning, and intelligent justice. He has published more than 70 conference and journal papers in data mining and machine learning. He received the 2022 ACM SIGAI China Rising Star Award, the 2022 First Prize of the Science and Technology Progress Award of the Ministry of Education, the 2021 First Prize of the Science and Technology Progress Award of the Chinese Institute of Electronics, and support from the 2021 Young Elite Scientists Sponsorship Program of the China Association for Science and Technology.

Homepage: https://kunkuang.github.io/

Special thanks to the main organizer of this Webinar:
Organizing AC: Kun Kuang (Zhejiang University)

How to Participate
1. VALSE's weekly Webinars are streamed live on Bilibili; search for VALSE_Webinar on Bilibili and follow us! Live stream: https://live.bilibili.com/22300737; past recordings: https://space.bilibili.com/562085182/
2. VALSE Webinars are usually held on Wednesday evenings at 20:00 (Beijing time), with occasional adjustments due to speakers' time zones. To stay informed, follow the VALSE WeChat official account valse_wechat or join the VALSE QQ S group (group number: 317920537).
*Note: when applying to join the VALSE QQ group, you must provide your name, affiliation, and role; all three are required. After joining, please set your group nickname to your real name, role, and affiliation. Roles: university or research institute staff, T; industry R&D, I; PhD student, D; Master's student, M.
3. The VALSE WeChat official account usually announces the following week's Webinar talks every Thursday.
4. You can also find Webinar information on the VALSE homepage: http://valser.org/. With the speaker's permission, the slides of each Webinar talk are posted at the bottom of the corresponding announcement on the VALSE website.

Kun Zhang [slides]