Speaker: Lu Lu (陆路, UPenn)
Talk title: Learning operators using deep neural networks for diverse applications

Speaker: Jian-xun Wang (王建勋, University of Notre Dame)
Talk title: Leveraging physics-induced bias in scientific machine learning for computational mechanics

Panelists: Lu Lu (UPenn), Jian-xun Wang (University of Notre Dame), Zhiqin Xu (许志钦, Shanghai Jiao Tong University), Hao Sun (孙浩, Renmin University of China)

Panel topics:
1. On which problems do neural networks have potential advantages over traditional numerical methods?
2. Embedding prior physical knowledge into deep learning models and algorithms can effectively improve learning efficiency and capability. What are the important advances and trends in this direction?
3. What are the key theoretical questions in using neural networks to solve PDEs?
4. What are the main theoretical, algorithmic, and application challenges facing physics-informed neural networks, and are there ways to address them?
5. Visual data (especially video) are largely observations of the real physical world. How can physical information facilitate machine vision tasks such as 3D detection, object tracking, and relational modeling?

*You are welcome to post topic-related questions in the comments below; the moderator and panelists will add some of the most popular ones to the panel discussion.

Speaker: Lu Lu (UPenn)
Time: Wednesday, June 15, 2022, 20:00 (Beijing time)
Talk title: Learning operators using deep neural networks for diverse applications

Speaker bio:
Lu Lu is an Assistant Professor in the Department of Chemical and Biomolecular Engineering at the University of Pennsylvania. He is also a faculty member of the Penn Institute for Computational Science and of the Graduate Group in Applied Mathematics and Computational Science. Prior to joining Penn, he was an Applied Mathematics Instructor in the Department of Mathematics at the Massachusetts Institute of Technology from 2020 to 2021. He obtained his Ph.D. in Applied Mathematics at Brown University in 2020, master's degrees in Engineering, Applied Mathematics, and Computer Science at Brown University, and bachelor's degrees in Mechanical Engineering, Economics, and Computer Science at Tsinghua University in 2013. Lu has a multidisciplinary research background, with experience at the interface of applied mathematics, physics, computational biology, and computer science. The goal of his research is to model and simulate physical and biological systems at different scales by integrating modeling, simulation, and machine learning, and to provide strategies for system learning, prediction, optimization, and decision making in real time. His current research interest lies in scientific machine learning, including theory, algorithms, and software, and its applications to engineering, physical, and biological problems. His broad research interests focus on multiscale modeling and high-performance computing for physical and biological systems.

Homepage: https://lu.seas.upenn.edu/people/

Abstract:
It is widely known that neural networks (NNs) are universal approximators of continuous functions. However, a less known but powerful result is that a NN can accurately approximate any nonlinear continuous operator. This universal approximation theorem of operators is suggestive of the structure and potential of deep neural networks (DNNs) in learning continuous operators or complex systems from streams of scattered data. In this talk, I will present the deep operator network (DeepONet) to learn various explicit operators, such as integrals and fractional Laplacians, as well as implicit operators that represent deterministic and stochastic differential equations. More generally, DeepONet can learn multiscale operators spanning many scales and trained on diverse sources of data simultaneously. I will demonstrate the effectiveness of DeepONet for multiphysics and multiscale problems. I will also present several extensions of DeepONet for realistic diverse applications, such as DeepONet with proper orthogonal decomposition (POD-DeepONet) and multifidelity DeepONet.
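To make the branch-trunk idea behind DeepONet concrete for readers new to operator learning, here is a minimal, self-contained sketch in PyTorch. It is illustrative only and not the speaker's implementation (his group's DeepXDE library provides maintained DeepONet code); the class name, the number of sensors m, the basis dimension p, and the hidden width are assumed values chosen for the example.

```python
# A minimal DeepONet sketch (branch-trunk architecture); not the speaker's code.
# Hypothetical sizes: m sensor points, p basis functions, hidden width 64.
import torch
import torch.nn as nn


class DeepONet(nn.Module):
    """Approximates an operator G: u -> G(u), evaluated at query points y,
    as G(u)(y) ~ sum_k b_k(u) * t_k(y) + bias."""

    def __init__(self, m=100, p=40, width=64):
        super().__init__()
        # Branch net: encodes the input function u sampled at m fixed sensors.
        self.branch = nn.Sequential(
            nn.Linear(m, width), nn.Tanh(), nn.Linear(width, p)
        )
        # Trunk net: encodes the coordinate y where the output is evaluated.
        self.trunk = nn.Sequential(
            nn.Linear(1, width), nn.Tanh(), nn.Linear(width, p)
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, u_sensors, y):
        b = self.branch(u_sensors)             # (batch, p)
        t = self.trunk(y)                      # (batch, p)
        return (b * t).sum(-1, keepdim=True) + self.bias


# Example: 8 input functions sampled at 100 sensors, one query point each.
net = DeepONet()
u = torch.randn(8, 100)
y = torch.rand(8, 1)
print(net(u, y).shape)  # torch.Size([8, 1])
```

The key design point, reflected above, is that the network takes a whole input function (via its sensor values) plus a query coordinate, so one trained model maps functions to functions rather than points to points.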
Speaker: Jian-xun Wang (University of Notre Dame)
Time: Wednesday, June 15, 2022, 20:30 (Beijing time)
Talk title: Leveraging physics-induced bias in scientific machine learning for computational mechanics

Speaker bio:
Dr. Jian-xun Wang is an assistant professor of Aerospace and Mechanical Engineering at the University of Notre Dame. He received a Ph.D. in Aerospace Engineering from Virginia Tech in 2017 and was a postdoctoral scholar at UC Berkeley before joining Notre Dame in 2018. He is a recipient of the 2021 NSF CAREER Award. His research focuses on scientific machine learning, data-enabled computational modeling, Bayesian data assimilation, and uncertainty quantification.

Homepage: https://sites.nd.edu/jianxun-wang/

Abstract:
First-principles modeling and simulation of complex systems based on partial differential equations (PDEs) and numerical discretization have been developed for decades and achieved great success. Nonetheless, traditional numerical solvers face significant challenges in many practical scenarios, e.g., inverse problems, uncertainty quantification, design, and optimization. Moreover, for complex systems, the governing equations might not be fully known due to a lack of complete understanding of the underlying physics, in which case a first-principles numerical solver cannot be built. Recent advances in data science and machine learning, combined with the ever-increasing availability of high-fidelity simulation and measurement data, open up new opportunities for developing data-enabled computational mechanics models. Although state-of-the-art machine/deep learning techniques hold great promise, many challenges remain, e.g., the requirement of "big data", limited generalizability/extrapolability, and the lack of interpretability/explainability. On the other hand, there is often a wealth of prior knowledge about the systems, including physical laws and phenomenological principles, which can be leveraged in this regard. Thus, there is an urgent need for fundamentally new and transformative machine learning techniques, closely grounded in physics, to address the aforementioned challenges in computational mechanics problems. This talk will briefly discuss our recent developments in scientific machine learning for computational mechanics, focusing on several aspects of how to bake physics-induced bias into machine/deep learning models for data-enabled predictive modeling. Specifically, the following topics will be covered: (1) PDE-structure-preserving deep learning, where the neural network architectures are built by preserving mathematical structures of the (partially) known governing physics for predicting spatiotemporal dynamics; (2) physics-informed geometric deep learning for predictive modeling involving complex geometries and irregular domains.
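As a rough illustration of one common way to inject physics-induced bias, the sketch below adds a PDE-residual penalty to the training loss, in the spirit of physics-informed neural networks. This is a generic soft-constraint example, not the PDE-structure-preserving or geometric architectures described in the abstract; the 1D heat equation, the diffusivity nu, the network sizes, and the collocation sampling are all hypothetical choices made for the example.

```python
# A minimal sketch of a physics-informed residual loss (soft PDE constraint),
# shown for the 1D heat equation u_t = nu * u_xx; illustrative assumptions only.
import torch
import torch.nn as nn

nu = 0.01  # assumed diffusivity

# A small fully connected network mapping (x, t) -> u(x, t).
model = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)


def pde_residual_loss(model, xt):
    """Mean squared residual of u_t - nu * u_xx at collocation points xt = (x, t)."""
    xt = xt.clone().requires_grad_(True)
    u = model(xt)
    # First derivatives w.r.t. x (column 0) and t (column 1) via autograd.
    du = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = du[:, :1], du[:, 1:]
    # Second derivative w.r.t. x.
    u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x), create_graph=True)[0][:, :1]
    return ((u_t - nu * u_xx) ** 2).mean()


# The residual loss would be added to the usual data-fitting loss during training.
xt = torch.rand(1024, 2)  # random collocation points in [0, 1]^2
print(pde_residual_loss(model, xt).item())
```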
Panelist: Zhiqin Xu (许志钦, Shanghai Jiao Tong University)

Panelist bio:
Zhiqin Xu is a tenure-track associate professor at the Institute of Natural Sciences and the School of Mathematical Sciences, Shanghai Jiao Tong University. He received his bachelor's degree from Zhiyuan College, Shanghai Jiao Tong University in 2012 and his Ph.D. in applied mathematics from Shanghai Jiao Tong University in 2016. From 2016 to 2019 he was a postdoctoral researcher at New York University Abu Dhabi and the Courant Institute. His main research interests are machine learning and computational neuroscience. Together with collaborators, he discovered the frequency principle and the embedding principle of loss landscapes in deep learning. His papers have been published in journals and conferences such as the Journal of Machine Learning Research, AAAI, NeurIPS, and Communications in Computational Physics.

Homepage: https://ins.sjtu.edu.cn/people/xuzhiqin/

Panelist: Hao Sun (孙浩, Renmin University of China)

Panelist bio:
Hao Sun is a tenured associate professor and doctoral advisor at the Gaoling School of Artificial Intelligence, Renmin University of China, a young expert in a national high-level talent program, a research affiliate at MIT, and an adjunct professor at Northeastern University (USA). He received his Ph.D. in engineering mechanics from Columbia University in 2014 and conducted postdoctoral research at MIT (2014-2017), and previously served as a tenure-track assistant professor and doctoral advisor at the University of Pittsburgh (2017-2018) and Northeastern University (2018-2021). His research focuses on AI for Science, the mathematical foundations of AI, and interdisciplinary research between AI and science/engineering, including interpretable deep learning, physics-informed deep learning, symbolic reinforcement learning and reasoning, data-driven modeling and identification of complex dynamical systems, discovery of governing equations, and infrastructure health monitoring and intelligent management. He has led or co-led research projects totaling more than 22 million RMB funded by the U.S. National Science Foundation, Huawei, and others; published more than 50 papers in leading SCI journals (e.g., Nature Communications) and top computer science conferences (e.g., ICLR, IJCAI); given invited talks at MIT, Caltech, UC Berkeley, and other leading universities; and his work has been widely covered by dozens of well-known international media outlets (e.g., Fox News, MIT News, Science Daily). He serves as a section editor of the multidisciplinary journal PLOS ONE and as a reviewer for national science and technology awards. He was named to the Forbes U.S. "30 Under 30" list in Science in 2018 and one of the "Top Ten Outstanding Chinese-American Young Professionals" in 2019.

Homepage: https://gsai.ruc.edu.cn/addons/teacher/index/info.html?user_id=25&ruccode=20210163&ln=cn

Moderator: Gao Huang (黄高, Tsinghua University)

Moderator bio:
Gao Huang is an associate professor and doctoral advisor in the Department of Automation, Tsinghua University. He received his Ph.D. from Tsinghua University in 2015 and was a postdoctoral researcher in the Department of Computer Science at Cornell University from 2015 to 2018. His main research areas are deep learning and computer vision, and he proposed the widely used convolutional network DenseNet. He has published more than 70 papers at top international conferences such as NeurIPS, ICML, and CVPR and in several IEEE Transactions, with over 30,000 Google Scholar citations. His honors include the NSFC Excellent Young Scientists Fund, the CVPR Best Paper Award, the Alibaba DAMO Academy Young Fellow Award, the WAIC SAIL Pioneer Award, the Chinese Association of Automation Outstanding Doctoral Dissertation Award, selection among China's 100 Most Influential International Academic Papers, the CAAI Natural Science First Prize and Wu Wenjun Outstanding Youth Award, as well as selection as a BAAI (Beijing Academy of Artificial Intelligence) Scholar, an AI 2000 Most Influential Scholar, and one of the MIT Technology Review "Innovators Under 35" (Asia Pacific).

Homepage: http://www.gaohuang.net/

Special thanks to the main organizer of this Webinar:
Organizing AC: Gao Huang (Tsinghua University)

How to participate:
1. VALSE's weekly Webinars are streamed live on Bilibili; search for VALSE_Webinar on Bilibili and follow us! Live stream: https://live.bilibili.com/22300737; past recordings: https://space.bilibili.com/562085182/
2. VALSE Webinars are usually held on Wednesday evenings at 20:00 (Beijing time), occasionally adjusted for speakers' time zones. To stay informed, follow the VALSE WeChat official account (valse_wechat) or join VALSE QQ group R (group number: 137634472). *Note: when applying to join the VALSE QQ group, you must provide your name, affiliation, and role (all three are required). After joining, please set your nickname to your real name in the format name-role-affiliation. Roles: university or research institute staff, T; industry R&D, I; Ph.D. student, D; master's student, M.
3. The VALSE WeChat official account usually publishes the announcement of the next week's Webinar every Thursday.
4. You can also find Webinar information on the VALSE homepage: http://valser.org/. Slides of the Webinar talks (with the speakers' permission) are posted at the bottom of each report announcement on the website.

Slides: Lu Lu (陆路) [slide] | Jian-xun Wang (王建勋) [slide]