Applied Sciences, Vol. 13, Issue 21 (2023)
ARTICLE

GP-Net: Image Manipulation Detection and Localization via Long-Range Modeling and Transformers

Jin Peng, Chengming Liu, Haibo Pang, Xiaomeng Gao, Guozhen Cheng and Bing Hao

Abstract

With the rise of image manipulation techniques, an increasing number of individuals find it easy to manipulate image content. This presents a significant challenge to the integrity of multimedia data, fueling the advancement of image forgery detection research. Most current methods employ convolutional neural networks (CNNs) for image manipulation localization, yielding promising outcomes. Nevertheless, CNN-based approaches are limited in establishing explicit long-range relationships. Consequently, the image manipulation localization task requires a solution that adeptly builds global context while preserving a robust grasp of low-level details. In this paper, we propose GPNet to address this challenge. GPNet combines a Transformer and a CNN in parallel, efficiently building global dependencies while capturing low-level details. Additionally, we devise an effective fusion module, TcFusion, which proficiently amalgamates the feature maps generated by both branches. Extensive experiments on diverse datasets show that our network outperforms prevailing state-of-the-art manipulation detection and localization approaches.
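The abstract's core idea — a global-attention branch and a local-convolution branch run in parallel, with their feature maps fused before per-pixel prediction — can be sketched in a few lines of NumPy. This is a minimal illustration, not GPNet itself: the single-head attention, the 3×3 convolution, the concatenate-and-project fusion (a hypothetical stand-in for TcFusion), and all shapes and weight names are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_branch(tokens, wq, wk, wv):
    """Single-head self-attention: every spatial token attends to every
    other token, modeling the explicit long-range relationships that a
    CNN's local receptive field misses."""
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v

def conv_branch(tokens, h, w, kernel):
    """3x3 convolution over the spatial grid (shared scalar weight per
    tap, for brevity) -- a stand-in for the CNN branch that captures
    low-level local detail."""
    x = tokens.reshape(h, w, -1)
    pad = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * pad[i:i + h, j:j + w]
    return out.reshape(h * w, -1)

def fuse(a, b, w_fuse):
    """Toy fusion: concatenate the two branches channel-wise and project
    back to the original width (TcFusion's actual design is richer)."""
    return np.concatenate([a, b], axis=-1) @ w_fuse

# Tiny 8x8 feature map with 16 channels, flattened to 64 tokens.
h, w, c = 8, 8, 16
tokens = rng.standard_normal((h * w, c))
wq, wk, wv = (rng.standard_normal((c, c)) * 0.1 for _ in range(3))
kernel = rng.standard_normal((3, 3)) * 0.1
w_fuse = rng.standard_normal((2 * c, c)) * 0.1
w_head = rng.standard_normal((c, 1)) * 0.1

fused = fuse(attention_branch(tokens, wq, wk, wv),
             conv_branch(tokens, h, w, kernel),
             w_fuse)
# Per-pixel manipulation logits, reshaped back to the spatial grid.
mask = (fused @ w_head).reshape(h, w)
```

The key design point the sketch mirrors is that the branches run in parallel on the same input, so neither representation is bottlenecked through the other before fusion.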
