The 8th Chinese Conference on Pattern Recognition and Computer Vision (PRCV 2025) will be held October 16–19, 2025 at the National Exhibition and Convention Center (Shanghai). PRCV is the premier academic event in pattern recognition and computer vision in China and an internationally recognized conference, included in the CCF recommended conference list (CCF-C). The conference is jointly sponsored by the China Society of Image and Graphics (CSIG), the Chinese Association for Artificial Intelligence (CAAI), the China Computer Federation (CCF), and the Chinese Association of Automation (CAA), and is organized by Shanghai Jiao Tong University.
PRCV 2025 aims to bring together researchers and practitioners in pattern recognition and computer vision from academia and industry, both at home and abroad, to exchange the latest advances in theoretical research and technical development. Dozens of leading experts in the field will attend. Keynote speeches by four academicians and international luminaries will chart the big picture; invited talks by eight national-level scholars and IEEE Fellows will survey the research frontier; and more than ten forums and tutorials, featuring over a hundred national-level scholars and industry experts, will cover academic frontiers, industrial applications, and technological innovation. A Women Scientists Forum, a Competition Forum, a Doctoral Student Forum, and other venues will give scholars worldwide a broad stage to present outstanding results and exchange ideas. More than ten pioneering companies will also exhibit at the conference, deepening industry-academia-research collaboration and helping the field of pattern recognition and computer vision scale new heights.
Josef Kittler, former President of the International Association for Pattern Recognition (IAPR), Fellow of the Royal Academy of Engineering, Distinguished Professor at the University of Surrey, IAPR Fellow, IEEE Fellow, IET Fellow
Presentation title:
Digital Content Forensics in the Context of Large Models
Speech abstract:
In the digital era, the rapid development of artificial intelligence, and especially the wide deployment of deep learning, has made the generation and editing of digital content more convenient and efficient than ever. However, the double-edged nature of this technology also brings new challenges to digital content forensics. Generative large models can produce realistic text, images, audio, and video, and are likely to be widely exploited for malicious purposes such as disinformation and deepfakes, threatening social order and information security. In the era of large models, forensics becomes more complex and demands more advanced techniques to keep pace with ever-improving forgery technology. To address online disinformation and high-quality fake content generated by large models, this talk introduces several key technologies and a holistic approach to digital content forensics. It focuses on the detection and forensics of traditional image tampering, the detection of facial deepfakes, the detection of the latest AIGC images and videos, and the detection and fact-checking of disinformation that spreads widely on the Web. For content generated by large models, we also take proactive measures at the source, editing model knowledge and constraining model outputs. These studies, explored from the perspectives of generalization, interpretability, and adversarial generative games, have achieved remarkable results and provide important methods and ideas for guaranteeing the authenticity and credibility of digital content in the era of large models.
Xiong Hongkai, Distinguished Professor at Shanghai Jiao Tong University, Cheung Kong Scholar of the Ministry of Education, recipient of the National Science Fund for Distinguished Young Scholars, leading talent of the National Ten Thousand Talents Program, Deputy Director of the Visual Big Data Technical Committee of the China Society of Image and Graphics, and member of the Chinese Institute of Electronics
Yang Jian, Professor at Nanjing University of Science and Technology, recipient of the National Science Fund for Distinguished Young Scholars, Deputy Director of the Pattern Recognition Technical Committee of the Chinese Association for Artificial Intelligence, Director of the Pattern Recognition Technical Committee of the Jiangsu Association for Artificial Intelligence, IAPR Fellow, national-level leading talent
Chen Xilin, Director and Party Secretary of the Institute of Computing Technology, Chinese Academy of Sciences, recipient of the National Science Fund for Distinguished Young Scholars, ACM/CCF/IAPR/IEEE Fellow. He has served as Director of the Key Laboratory of Intelligent Information Processing and Director of the Bureau of International Cooperation of the Chinese Academy of Sciences.