Paweł Korus and Jiwu Huang
IEEE International Workshop on Information Forensics and Security, 2016
2016-wifs-mrf.pdf (5 MB)
While it is commonly known that successful forensic detectors should combine clues from multiple forensic features, unsupervised multi-modal tampering localization remains an open problem. State-of-the-art fusion methods perform simple pixel-wise combination of the input tampering maps. In this study, we show that pixel-wise combination is sub-optimal and that successful fusion needs to model dependencies between neighboring pixels and exploit the content of the tampered image. We evaluate two methods based on conditional random fields and demonstrate that they can exploit image content and precisely delineate the shape of the forgery. In contrast to existing methods based on explicit image segmentation, such an approach does not fail on subtle object-removal forgeries, where meaningful segments do not exist. We also demonstrate that existing performance measures are insufficient to accurately assess tampering localization performance. Further work in this direction is needed.
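The paper's fusion models are conditional random fields; as a much-simplified illustration of why modeling neighborhood dependencies matters, the sketch below contrasts pixel-wise thresholded averaging with a toy Potts-style smoothing pass (a parallel ICM update on a unary-plus-pairwise energy). All function names and parameters here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fuse_pixelwise(maps):
    """Baseline fusion: average candidate tampering maps per pixel and threshold."""
    return (np.mean(maps, axis=0) > 0.5).astype(np.uint8)

def fuse_with_smoothness(maps, beta=0.8, iters=5):
    """Toy neighborhood-aware fusion (NOT the paper's CRF models).

    The unary cost comes from the averaged maps; a pairwise term charges
    `beta` per disagreeing 4-neighbor, encouraging spatially coherent labels.
    Labels are updated in parallel ICM fashion for a few iterations.
    """
    p = np.mean(maps, axis=0)              # per-pixel tampering probability
    eps = 1e-6
    unary0 = -np.log(1.0 - p + eps)        # cost of labeling a pixel "authentic"
    unary1 = -np.log(p + eps)              # cost of labeling it "tampered"
    labels = (p > 0.5).astype(np.uint8)    # initialize from pixel-wise fusion
    for _ in range(iters):
        padded = np.pad(labels, 1, mode="edge")
        neigh_sum = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                     padded[1:-1, :-2] + padded[1:-1, 2:]).astype(float)
        cost0 = unary0 + beta * neigh_sum          # neighbors labeled 1 disagree with 0
        cost1 = unary1 + beta * (4.0 - neigh_sum)  # neighbors labeled 0 disagree with 1
        labels = (cost1 < cost0).astype(np.uint8)
    return labels
```

On a synthetic example with a genuinely tampered square and an isolated false alarm shared by all input maps, pixel-wise averaging keeps the spurious pixel, while the pairwise term suppresses it without eroding the coherent tampered region.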
Copyright © 2016 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to pubs-permissions@ieee.org.
Supplementary materials for this paper include:
Slides from the conference presentation are available here: wifs-2016-presentation.pdf (5.2 MB).
The realistic image tampering dataset used in this study is available for download here.
Our implementation of selected methods used in this work can be found at github.com.