Webinar time: Wednesday, August 14, 2019, 20:00 (Beijing time)
Topic: Visual Quality Enhancement
Host: Shuhang Gu (ETH Zurich, Switzerland)

Speaker: Yulun Zhang (Northeastern University, US)
Talk title: Efficient Neural Networks for Image Restoration

Speaker: Chao Dong (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences)
Talk title: Image Super Resolution with Generative Adversarial Networks

Panel topics:
1. What have been the major advances in image restoration and enhancement over the past twenty years? What are the future research directions?
2. Deep neural networks have achieved great success in image restoration and enhancement in recent years. Is research on traditional methods still meaningful?
3. What are the challenges in applying deep learning to image restoration?
4. What are the difficulties in deploying deep-learning-based restoration methods in real systems, and how can they be addressed?
5. What tasks does image restoration and enhancement cover? What are the development trends?

Panelists: Chen Change Loy (Nanyang Technological University, Singapore), Liquan Shen (Shanghai University), Chao Dong (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences), Yulun Zhang (Northeastern University, US)

*Questions related to the topic are welcome in the comments below; the host and panelists will select some of the most popular ones and add them to the panel discussion.

Speaker: Yulun Zhang (Northeastern University, US)
Time: Wednesday, August 14, 2019, 20:00 (Beijing time)
Talk title: Efficient Neural Networks for Image Restoration

Speaker bio:
Yulun Zhang is a PhD student in the Department of ECE, Northeastern University, USA, advised by Prof. Yun (Raymond) Fu. Before that, he received his master's degree from the Department of Automation, Tsinghua University, and his B.E. degree from the School of Electronic Engineering, Xidian University. His main research interests lie in image/video enhancement, generation, and understanding. He has published several papers at CVPR, ICCV, ECCV, and ICLR. He received the Best Student Paper Award at the IEEE International Conference on Visual Communication and Image Processing (VCIP) in 2015.

Homepage: http://yulunzhang.com
Google Scholar: https://scholar.google.com/citations?hl=en&user=ORmLjWoAAAAJ

Abstract:
Image restoration aims to recover high-quality (HQ) images from their corrupted low-quality (LQ) observations and plays a fundamental role in various high-level vision tasks. Recently, deep convolutional neural networks (CNNs) have shown an extraordinary capability for modeling image restoration applications. In this talk, I will mainly introduce three of our recent deep-network-based methods for image restoration.
First, I will introduce a residual dense network (RDN) that fully exploits all the hierarchical features of the original LQ image through the proposed residual dense block. Then, I will present a residual channel attention network (RCAN), which can go much deeper than previous CNN-based methods and adaptively learns more useful channel-wise features. Finally, I will introduce a residual non-local attention network (RNAN) for high-quality image restoration, in which local and non-local attention blocks are exploited to extract features that capture long-range dependencies between pixels and pay more attention to the challenging parts. We demonstrate the effectiveness of the proposed methods on various image restoration tasks, including image denoising, demosaicing, compression artifact reduction, and super-resolution. All the code is available at https://github.com/yulunzhang.

References:
[1] Yulun Zhang, Yapeng Tian, Yu Kong, Bineng Zhong, Yun Fu. Residual Dense Network for Image Super-Resolution. CVPR, 2018.
[2] Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, Yun Fu. Image Super-Resolution Using Very Deep Residual Channel Attention Networks. ECCV, 2018.
[3] Yulun Zhang, Kunpeng Li, Kai Li, Bineng Zhong, Yun Fu. Residual Non-local Attention Networks for Image Restoration. ICLR, 2019.

Speaker: Chao Dong (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences)
Time: Wednesday, August 14, 2019, 20:30 (Beijing time)
Talk title: Image Super Resolution with Generative Adversarial Networks

Speaker bio:
Chao Dong is currently an associate professor at the Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences. He received his Ph.D. degree from The Chinese University of Hong Kong in 2016, advised by Prof. Xiaoou Tang and Prof. Chen Change Loy. In 2014, he first introduced a deep learning method, SRCNN, into the super-resolution field. This seminal work was published in TPAMI and was chosen as one of the top ten "Most Popular Articles" of 2016.
His team won first place in the international super-resolution challenges NTIRE 2018, PIRM 2018, and NTIRE 2019. He worked at SenseTime from 2016 to 2018 as the team leader of the Image Quality Group, where he and his team developed the first deep-learning-based "digital zoom" for smartphone cameras. His Google Scholar citation count has surpassed 4,800. His current research focuses on low-level vision problems, such as image/video super-resolution, denoising, and enhancement. He serves as a senior reviewer for CVPR/ICCV/ECCV and TPAMI/IJCV/TIP.

Homepage: https://scholar.google.com.hk/citations?user=OSDCB0UAAAAJ&hl=zh-CN

Abstract:
Image super-resolution (SR) is a classical technique for enhancing image visual quality. SR methods based on Generative Adversarial Networks (GANs) have the potential to recover realistic textures and missing details. In this talk, I will introduce four representative GAN-based SR methods. As the seminal work, SRGAN (CVPR 2017) was the first to apply GANs to image SR. Later, at CVPR 2018, SFT-Net incorporated image segmentation maps as additional priors to generate semantically meaningful details. After that, the PIRM 2018 SR Challenge employed the Perceptual Index (PI) to evaluate perceptual quality; Enhanced SRGAN (ESRGAN) won the challenge on perceptual scores and became the new state of the art. At ICCV 2019, RankSRGAN further surpassed ESRGAN by employing a ranker network, which allows the generator to be optimized in the direction of non-differentiable perceptual metrics.
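The ranker idea mentioned above can be illustrated with a pairwise margin ranking loss: the ranker network is trained to score image pairs consistently with a non-differentiable perceptual metric, and the generator is then optimized against the differentiable ranker instead of the metric itself. Below is a minimal NumPy sketch of such a pairwise ranking objective; the function name and the margin value are illustrative assumptions, not taken from the RankSRGAN paper.

```python
import numpy as np

def margin_ranking_loss(score_better, score_worse, margin=0.5):
    """Pairwise margin ranking loss (illustrative).

    Penalizes the ranker whenever the image that the perceptual metric
    prefers does not out-score the other image by at least `margin`.
    """
    return float(np.maximum(0.0, margin - (score_better - score_worse)))

# Correctly ranked with enough separation: zero loss.
loss_ok = margin_ranking_loss(1.0, 0.2)   # max(0, 0.5 - 0.8) -> 0.0

# Separation smaller than the margin: positive loss pushes scores apart.
loss_bad = margin_ranking_loss(0.3, 0.2)  # max(0, 0.5 - 0.1) -> 0.4
```

Because this loss is differentiable with respect to the ranker's scores, gradients can flow through the ranker to the generator, which is what makes optimizing toward an otherwise non-differentiable metric possible.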
Panelist: Chen Change Loy (Nanyang Technological University, Singapore)

Bio:
Chen Change Loy is an associate professor at Nanyang Technological University, Singapore, an adjunct associate professor at The Chinese University of Hong Kong, and a visiting scholar at the Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences. He received his Ph.D. from Queen Mary University of London in 2010 and served as a research assistant professor at The Chinese University of Hong Kong from 2013 to 2018. His research focuses on computer vision, including face analysis, deep learning, and low-level image processing. He has published more than 100 papers in top international journals and conferences, with over 13,000 citations and an H-index of 50 (according to Google Scholar). He and his team have produced several landmark works on image/video super-resolution and quality enhancement, including SRCNN, SFTGAN, ESRGAN, and EDVR, and have also built several landmark computer vision datasets, including WIDER FACE (the largest and most challenging face detection benchmark), the PETA pedestrian attribute dataset, and the CompCars vehicle dataset. He and his team won all four tracks of the NTIRE 2019 Video Restoration and Enhancement challenge, the COCO Object Detection Challenge 2018, the PIRM-SR 2018 image super-resolution challenge, and the 2017 DAVIS Challenge on Video Object Segmentation; they also received the 2016 ACCV Best Application Paper Honorable Mention, a silver award at the 2016 Hong Kong ICT Awards, third place in ImageNet 2016 object recognition, and second place in ImageNet 2014 object detection. He is an associate editor of IJCV and IET Computer Vision, a reviewer for several top international conferences and journals, and served as an area chair for CVPR 2019, BMVC 2019, ECCV 2018, and BMVC 2018. He is an IEEE Senior Member.

Homepage: http://www.ntu.edu.sg/home/ccloy/
Google Scholar: https://scholar.google.com/citations?user=559LF80AAAAJ&hl=en

Panelist: Liquan Shen (Shanghai University)

Bio:
Liquan Shen is a research professor at Shanghai University. He has led four National Natural Science Foundation of China (NSFC) projects and five provincial/ministerial projects (including a Shanghai Science and Technology Commission key project, the Shanghai Natural Science Foundation, and a Shanghai key innovation project), and has been a major participant in two NSFC key projects and two Ministry of Science and Technology support projects. In the past five years, he has published 61 papers, including 15 first-author SCI papers (in international journals such as IEEE T-IP, IEEE T-CSVT, and IEEE T-MM; four were selected as ESI highly cited papers and one as an ESI hot paper). In total, he has published more than 110 papers, including 70 international journal papers (31 as first author), of which 20 appeared in IEEE/ACM Transactions or journals ranked JCR Q2 or above (15 as first author). He has authored two books ("2D and 3D Video Processing and Stereoscopic Display Technology", Science Press, 2010; "Depth-Enhanced 3D Video Processing Technology", Posts & Telecom Press, 2015). He has filed 12 invention patents, five of which have been granted. In the past five years, his work has received 1,426 SCI citations by others (1,666 in total), with a single paper cited up to 290 times (SCI, by others), along with 2,215 Scopus citations and 3,050 Google Scholar citations. Among his first-author papers, two works on high-definition compression were rated the most-cited paper of the past 10 years in IEEE T-MM (1/1751) and the most-cited paper of the past 5 years in IEEE T-CE (1/622); his free-viewpoint coding work was selected as a Featured Article of IEEE T-CSVT (Most Cited Articles, 2013-2018) and a Most Cited Article (2010-2015) of Signal Processing: Image Communication. He has successively received the Shanghai Rising-Star, Pujiang Scholar, and Shanghai Shuguang (Dawn) Scholar talent awards, and in 2014 received the NSFC Excellent Young Scientists Fund. He won the second prize of the Natural Science Award of the Chinese Institute of Electronics in 2018 (ranked first), the second prize of the Natural Science Award of the Ministry of Education in 2015 (ranked first), and the second prize of the Shanghai Science and Technology Progress Award in 2012.

How to join VALSE online seminar 19-19:
Long-press or scan the QR code below to follow the "VALSE" WeChat official account (valse_wechat), then reply "19期" to receive the live-stream link.

Special thanks to the main organizers of this Webinar:
Organizing AC: Jiaying Liu (Peking University)
Co-organizing ACs: Shuhang Gu (ETH Zurich, Switzerland), Mai Xu (Beihang University), Jian Zhang (Peking University Shenzhen Graduate School)
Responsible AC: Lin Zhang (Tongji University)

Notes on the revised VALSE Webinar format:
Since January 2019, VALSE Webinar has adopted two possible formats instead of the previous single-speaker format:
1) Webinar panel session: each event has a discussion theme; two excellent speakers first give talks on the theme (30 minutes each), and then 2-3 additional guests join them to discuss the theme together (30 minutes).
2) Webinar invited talk: each event invites one senior expert to give a systematic, in-depth introduction to his or her research area; the talk lasts 50 minutes, followed by a 10-minute discussion with the host and a 10-minute open Q&A.

How to participate:
1. VALSE Webinar events are held on an online live-streaming platform. Speakers upload slides or share their screens; the audience can see the slides, hear the speaker's voice, and interact with the speaker via chat.
2. To participate, follow the VALSE WeChat official account (valse_wechat) or join a VALSE QQ group (groups A through I are currently full; apart from speakers and other guests, you can only apply to join VALSE group J, group number: 734872379).
*Note: when applying to join a VALSE QQ group, you must provide your name, affiliation, and identity; all three are required. After joining, please set your group nickname to your real name, identity, and affiliation. Identity codes: university/research staff T; industry R&D I; PhD student D; master's student M.
3. About 5 minutes before the event starts, the speaker will begin the live stream; click the stream link to join. Windows PCs, Macs, mobile phones, and other devices are supported.
4. During the event, please refrain from off-topic chat so as not to disrupt the proceedings.
5. If you cannot hear the audio or see the video, exiting and rejoining usually solves the problem.
6. Please join from a fast network, preferably over a wired connection.
7. The VALSE WeChat official account publishes the announcement and stream link for the next week's Webinar every Thursday.
8. With the speaker's permission, the slides of each Webinar talk will be posted as [slides] at the bottom of the corresponding announcement on the VALSE website.
9. With the speaker's permission, the video of each Webinar talk will be uploaded to the VALSE iQIYI channel; follow "Valse Webinar" on iQIYI to watch.