VALSE Webinar 2024 Issue 26 (No. 361 overall): Frontier Advances in Medical Artificial Intelligence

2024-9-5 11:05 | Published by: Cheng Yi (Institute of Computing Technology)


Speaker: Yongsheng Pan (Northwestern Polytechnical University)

Talk Title: Development and Applications of Cross-Modal Medical Image Generation Techniques


Speaker: Lin Gu (RIKEN AIP, Japan)

报告题目:Bridging the Gap Between Medical AI Research and Physicians, Patients, and Policymakers


Speaker: Jun Ma (University Health Network)

报告题目:Towards Biomedical Image Segmentation Foundation Models


Speaker: Yongsheng Pan (Northwestern Polytechnical University)

Talk Time: September 11, 2024 (Wednesday), 20:00 Beijing Time

Talk Title: Development and Applications of Cross-Modal Medical Image Generation Techniques


Speaker Bio:

Yongsheng Pan is a tenure-track professor at the School of Computer Science, Northwestern Polytechnical University (NPU). He received his B.S. and Ph.D. degrees from the School of Computer Science, NPU, in 2015 and 2021, respectively, and was a joint-training Ph.D. student at the University of North Carolina at Chapel Hill from 2017 to 2020. From 2021 to 2023 he continued his medical imaging research in Prof. Dinggang Shen's group at the School of Biomedical Engineering, ShanghaiTech University. He joined NPU in 2023 and now works on medical imaging and health computing at the National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technologies. He has published more than 30 papers in top journals and conferences such as IEEE TPAMI, Radiology, IEEE TMI, IEEE TIP, and MICCAI, and has won first, second, or third place in five academic competitions. He was selected for the 2021 Postdoctoral Innovative Talent Support Program ("Boxin Program"), has led a China Postdoctoral Science Foundation General Program project and a National Natural Science Foundation of China Young Scientists Fund project, and has participated in three National Key R&D Program projects of the Ministry of Science and Technology.


Homepage:

https://jszy.nwpu.edu.cn/yongshengpan

 

Abstract:

Medical imaging uses various imaging techniques to capture the internal structure and function of the human body, and plays an important role in disease diagnosis, treatment, and prognosis prediction. Because different types or subtypes of medical images reflect different information about a patient's body, clinicians often need several types or subtypes of images to obtain more comprehensive information and improve diagnostic accuracy. In practice, however, acquiring multi-modal imaging data is hampered by long acquisition times, high costs, and potentially increased radiation dose. There is therefore strong interest in using image processing techniques for cross-modal medical image synthesis, i.e., using medical images of one or several modalities to generate images of another modality or modalities. Although cross-modal synthesis can make multi-modal imaging diagnosis more convenient, it also faces technical challenges: synthesized images may differ markedly from real images in diagnostic performance, rendering them clinically unusable, and privacy and ethical constraints make high-quality multi-modal imaging data expensive to collect. Most researchers address these issues on the model side, improving the representational capacity of the model or designing task-specific constraints to raise the quality of synthesized images. The resulting cross-modal synthesis techniques have been applied to image acquisition, reconstruction, registration, segmentation, detection, and diagnosis, offering new ideas and methods for many problems. This talk introduces cross-modal image synthesis techniques in medical imaging and their applications.
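To make the cross-modal synthesis setting above concrete for readers new to the area, the sketch below shows a minimal PyTorch encoder-decoder generator trained with a voxel-wise L1 loss to map a source-modality volume (e.g., an MRI patch) to a target modality (e.g., CT). All module names, shapes, and hyperparameters are illustrative assumptions, not the speaker's actual method.

# Minimal illustrative sketch (assumed shapes and hyperparameters, not the speaker's method):
# a tiny 3D encoder-decoder generator trained with an L1 loss to translate one modality into another.
import torch
import torch.nn as nn

class CrossModalGenerator(nn.Module):
    """Toy 3D encoder-decoder mapping a source-modality volume to a target-modality volume."""
    def __init__(self, ch=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(ch, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One hypothetical training step on a random (source, target) patch pair,
# e.g., an MRI patch and its paired CT patch.
gen = CrossModalGenerator()
optimizer = torch.optim.Adam(gen.parameters(), lr=1e-4)
source = torch.randn(1, 1, 64, 64, 64)
target = torch.randn(1, 1, 64, 64, 64)
loss = nn.functional.l1_loss(gen(source), target)  # voxel-wise synthesis loss
loss.backward()
optimizer.step()

In practice such generators are usually strengthened with adversarial, perceptual, or task-specific diagnostic constraints, which is the direction the abstract describes.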


References:

[1] Disease-Image-Specific Learning for Diagnosis-Oriented Neuroimage Synthesis with Incomplete Multi-Modality Data, Y Pan, M Liu, Y Xia, D Shen, IEEE Trans Pattern Anal Mach Intell. 2022 Oct;44(10):6839-6853.

[2] Revealing anatomical structures in PET to generate CT for attenuation correction, Y Pan, F Liu, C Jiang, J Huang, Y Xia, D Shen, International Conference on Medical Image Computing and Computer-Assisted Intervention, 2023, 24-33.

[3] Draw Sketch, Draw Flesh: Whole-body Computed Tomography from Any X-ray Views, Y Pan, Y Ye, Y Xia, D Shen, International Journal of Computer Vision, 2nd-round revision, 2024.

[4] Generative Techniques in Medical Imaging (in Chinese), Y. Pan, H. Ma, Y. Xia, Journal of Image and Graphics, accepted, 2024.


Speaker: Lin Gu (RIKEN AIP, Japan)

Talk Time: September 11, 2024 (Wednesday), 20:30 Beijing Time

报告题目:Bridging the Gap Between Medical AI Research and Physicians, Patients, and Policymakers


Speaker Bio:

Dr. Lin Gu is currently a research scientist at RIKEN AIP, Japan, and a special researcher at the University of Tokyo. Prior to these roles, he joined the National Institute of Informatics in Tokyo in June 2016 and served as a regular visiting scholar at Kyoto University from 2016 to 2019. Before relocating to Japan, he was a Postdoctoral Research Fellow at the Bioinformatics Institute, A*STAR, Singapore. He completed his doctoral studies at the Australian National University and NICTA (now Data61) in 2014, focusing on hyperspectral imaging and colour science. Dr. Gu has authored over 90 papers in journals and conferences, including Nature Methods, PAMI, IJCV, CVPR, ECCV, ICCV, ICLR and MICCAI. He is also an associate editor for Pattern Recognition and an area chair for ICML, NeurIPS, and ICLR.

 

Homepage:

https://sites.google.com/view/linguedu/home

 

Abstract:

Despite the significant advancements in medical AI each year, few are translated into real-world clinical practice. One primary barrier is the background gap between engineers, physicians, patients, and policymakers. In this talk, I will begin by addressing the issue of uncertainties—an inherent part of clinical decision-making that is often overlooked in medical AI research. Moving forward, I will discuss how we are facilitating communication among physicians through a novel system designed to generate and retrieve imagined rather than actual images. Finally, I will introduce our ongoing efforts to translate machine learning successes into actionable insights for policymakers and insurance providers, with a focus on practical implications in real-world scenarios.

 

References:

[1] Hu X, Gu L, Kobayashi K, et al. Interpretable medical image visual question answering via multi-modal relationship graph learning[J]. Medical Image Analysis, 2024, 97: 103279.

[2] Zhang M, Hu X, Gu L, et al. A New Benchmark: Clinical Uncertainty and Severity Aware Labeled Chest X-Ray Images with Multi-Relationship Graph Learning[J]. IEEE Transactions on Medical Imaging, 2024. 

[3] Hu H, Xiong S, Zhang X, et al. The COVID-19 pandemic in various restriction policy scenarios based on the dynamic social contact rate[J]. Heliyon, 2023, 9(3).


Speaker: Jun Ma (University Health Network)

Talk Time: September 11, 2024 (Wednesday), 21:00 Beijing Time

报告题目:Towards Biomedical Image Segmentation Foundation Models


Speaker Bio:

Jun Ma is a Machine Learning Lead at University Health Network (UHN), Canada's No. 1 hospital. He received his Ph.D. degree in Mathematics from Nanjing University of Science and Technology in 2021. Before joining UHN in July 2024, he was a Postdoctoral Fellow at the University of Toronto, the Vector Institute, and the Peter Munk Cardiac Centre. His research interests include medical vision and machine learning. His work has been published in top journals, including Nature Methods, Lancet Digital Health, Nature Communications, and TPAMI. He has also placed in the top three, as first author, in more than 10 international medical image analysis challenges, and has organized multiple international competitions, such as the MICCAI 2021-2024 FLARE Challenges, the NeurIPS 2022 Cell Segmentation Challenge, and the CVPR 2024 MedSAM on Laptop Challenge. According to Google Scholar, his work has been cited over 7,000 times and he has an h-index of 25. His GitHub projects have garnered over 10,000 stars.

 

Homepage:

https://scholar.google.com.hk/citations?hl=en&user=bW1UV4IAAAAJ&view_op=list_works&sortby=pubdate

 

Abstract:

Accurate and efficient segmentation is a prerequisite for quantitative medical and biological image analysis. This talk will present some of our recent work towards building a promptable foundation model for medical image segmentation and deploying it in practice. We will also introduce the winning solutions from recent competitions on organ, pan-cancer, and cell segmentation.
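As background on the "promptable" segmentation idea mentioned above, here is a minimal sketch of querying a SAM-style model with a bounding-box prompt through the open-source segment-anything predictor interface. The checkpoint file, input image, and box coordinates are placeholders, and this illustrates the general interface rather than the speaker's exact inference code.

# Minimal sketch of promptable segmentation with a SAM-style model (e.g., a MedSAM-compatible
# ViT-B checkpoint is assumed). The checkpoint path, image file, and box prompt are placeholders.
import numpy as np
from skimage import io
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="medsam_vit_b.pth")  # placeholder checkpoint path
predictor = SamPredictor(sam)

image = io.imread("ct_slice.png")      # placeholder 2D RGB image, shape (H, W, 3), uint8
predictor.set_image(image)             # compute the image embedding once per image

box = np.array([50, 60, 200, 220])     # placeholder bounding-box prompt (x0, y0, x1, y1)
masks, scores, _ = predictor.predict(box=box, multimask_output=False)
print(masks.shape, scores)             # (1, H, W) binary mask and its predicted quality score

The key design point is that the expensive image embedding is computed once, after which many box or point prompts can be decoded cheaply, which is what makes such foundation models practical for interactive annotation.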

 

References:

[1] Ma, Jun, et al. "Segment anything in medical images." Nature Communications 15.1 (2024): 654.

[2] Ma, Jun, et al. "Unleashing the strengths of unlabeled data in pan-cancer abdominal organ quantification: the FLARE22 challenge." arXiv preprint arXiv:2308.05862 (in press, Lancet Digital Health).

[3] Ma, Jun, and Bo Wang. "Towards foundation models of biological image segmentation." Nature Methods 20.7 (2023): 953-955.

[4] Ma, Jun, et al. "The multimodality cell segmentation challenge: toward universal solutions." Nature Methods 21.6 (2024): 1103-1113.

[5] Ma, Jun, et al. "Abdomenct-1K: Is abdominal organ segmentation a solved problem?." IEEE Transactions on Pattern Analysis and Machine Intelligence 44.10 (2021): 6695-6714.


Host: Yutong Xie (University of Adelaide)


Host Bio:

Yutong Xie is a researcher at the Australian Institute for Machine Learning, University of Adelaide, and received B.S. and Ph.D. degrees from Northwestern Polytechnical University in 2016 and 2021, respectively. Her research focuses on intelligent computing for medical imaging, in particular efficient analysis and interpretation of medical data under limited annotations, multi-modal medical data analysis, and general-purpose models for medical data analysis. She has published more than 40 papers in top journals and conferences in the field, including IEEE TPAMI/TMI/TIP, MedIA, CVPR, ECCV, and MICCAI, with over 4,700 citations on Google Scholar and 4 ESI Highly Cited Papers. She received the 2023 Doctoral Dissertation Incentive Program award of the China Society of Image and Graphics and a CVPR 2024 Travel Award. She serves as an Area Chair for MICCAI 2023/2024 and as a reviewer for multiple top journals and conferences, and was recognized as an outstanding reviewer for IEEE TMI and CVPR 2023.

 

Homepage:

https://scholar.google.com/citations?user=ddDL9HMAAAAJ&hl=zh-CN



Special thanks to the main organizers of this Webinar:

Organizing AC: Yutong Xie (University of Adelaide)

Co-organizing AC: Yong Xia (Northwestern Polytechnical University)


How to Participate

1. The weekly VALSE Webinar is live-streamed on Bilibili. Search for VALSE_Webinar on Bilibili to follow us!

Live stream:

https://live.bilibili.com/22300737;

Past video archive:

https://space.bilibili.com/562085182/ 


2. VALSE Webinars are usually held on Wednesdays at 20:00 (Beijing Time), though the time occasionally shifts to accommodate speakers' time zones. To make it easier to join, please follow the VALSE WeChat official account (valse_wechat) or join the VALSE QQ T group (group number: 863867505).


*Note: When applying to join the VALSE QQ group, you must provide your name, affiliation, and role for verification; all three are required. After joining, please set your group nickname to your real name in the form name-role-affiliation. Roles: university or research institute staff (T); industry R&D (I); Ph.D. student (D); master's student (M).


3. The VALSE WeChat official account usually publishes the announcement of the next week's Webinar every Thursday.


4. You can also find Webinar information directly on the VALSE homepage: http://valser.org/. With the speaker's permission, the slides of each talk are posted at the bottom of the corresponding Webinar announcement on the VALSE website.

