Leveraging Contrastive Learning for a Similarity-Guided Tampered Document Data Generation Pipeline


Detecting tampered text in document images is a challenging task due to data scarcity. To address this, previous work has attempted to generate tampered documents using rule-based methods. However, the resulting documents often suffer from limited variety and poor visual quality, typically leaving highly visible artifacts that are rarely observed in real-world manipulations. This undermines the model’s ability to learn robust, generalizable features and results in poor performance on real-world data. Motivated by this discrepancy, we propose a novel method for generating high-quality tampered document images. We first train an auxiliary network to compare text crops, leveraging contrastive learning with a novel strategy for defining positive pairs and their corresponding negatives. We also train a second auxiliary network to evaluate whether a crop tightly encloses the intended characters, without cutting off parts of characters or including parts of adjacent ones. Using a carefully designed generation pipeline that leverages both networks, we introduce a framework capable of producing diverse, high-quality tampered document images. We assess the effectiveness of our data generation pipeline by training multiple models on datasets derived from the same source images, generated using our method and existing approaches, under identical training protocols. Evaluating these models on various open-source datasets shows that our pipeline yields consistent performance improvements across architectures and datasets.


💡 Research Summary

The paper tackles the chronic shortage of large‑scale, realistic training data for document image tampering detection. Existing synthetic pipelines rely on simple rule‑based operations (copy‑move, splicing, insertion, inpainting, coverage) that often produce visible artifacts such as font mismatches, misaligned baselines, or background inconsistencies. Consequently, models trained on these datasets overfit to these artificial cues and fail to generalize to real‑world forgeries.
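To make the criticised rule-based baseline concrete, here is a minimal sketch of a naive copy-move operation (function name and interface are illustrative, not from the paper). A hard paste like this is exactly what tends to leave the visible seams, baseline misalignments, and background inconsistencies the authors describe.

```python
import numpy as np

def copy_move(img: np.ndarray, src: tuple, dst: tuple, size: tuple) -> np.ndarray:
    """Naive rule-based copy-move tamper: paste a source crop at a destination.

    src / dst are (row, col) top-left corners; size is (height, width).
    No blending or alignment is done, so the paste can leave visible
    artifacts (seams, mismatched backgrounds) around the destination.
    """
    h, w = size
    out = img.copy()
    out[dst[0]:dst[0] + h, dst[1]:dst[1] + w] = img[src[0]:src[0] + h, src[1]:src[1] + w]
    return out
```

Real pipelines add steps such as inpainting the source region or blending the paste boundary, but the core operation is this direct pixel copy.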

To bridge this gap, the authors introduce a two‑stage auxiliary‑network framework that guides a high‑quality tampered‑document generation pipeline.

  1. Similarity Network (Fθ) – Trained with contrastive learning, Fθ learns a visual similarity function between any two image crops. Positive pairs are defined as adjacent text or blank segments on the same OCR‑derived line that share the same width, height, and character count, and whose centers are within a small fraction of the average character width. Negative pairs are selected from different lines or different documents, ensuring a substantial vertical offset and similar aspect ratios; hard negatives are also generated by applying random shifts and visual perturbations to the anchor crop. The network uses two decoupled embedding heads (foreground and background) so that similarity between two text crops leverages both heads, while similarity involving a blank crop uses only the background head.
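The positive-pair criteria above can be sketched as a simple predicate over OCR-derived crops. The `Crop` fields, the vertical-center comparison, and the `frac` threshold are assumptions made for illustration; the paper only states that centers must lie within a small fraction of the average character width.

```python
from dataclasses import dataclass

@dataclass
class Crop:
    x: float          # horizontal center
    y: float          # vertical center
    w: float          # width
    h: float          # height
    n_chars: int      # OCR-reported character count
    line_id: int      # OCR line index

def is_positive_pair(a: Crop, b: Crop, frac: float = 0.25) -> bool:
    """Illustrative positive-pair check: same OCR line, identical width,
    height, and character count, and vertical centers within a small
    fraction (`frac`, an assumed value) of the average character width."""
    if a.line_id != b.line_id:
        return False
    if (a.w, a.h, a.n_chars) != (b.w, b.h, b.n_chars):
        return False
    avg_char_w = (a.w / max(a.n_chars, 1) + b.w / max(b.n_chars, 1)) / 2
    return abs(a.y - b.y) <= frac * avg_char_w
```

Negative pairs would be sampled from crops failing the line test (different lines or documents) while still matching in aspect ratio, with hard negatives produced by perturbing the anchor crop.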

  2. Bounding‑Box Quality Network (Gθ) – A lightweight CNN that predicts a quality score for a candidate crop, indicating whether the crop tightly encloses the intended characters without cutting off parts of characters or including parts of adjacent ones.

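One plausible way the generation pipeline could combine the two networks is to filter candidate crops by Gθ's quality score and then rank survivors by Fθ's similarity to the anchor. The function below is a toy sketch under that assumption; the thresholds, the selection rule, and the callable interfaces are all hypothetical, not taken from the paper.

```python
def select_candidate(anchor, candidates, sim_fn, quality_fn,
                     sim_thresh=0.8, qual_thresh=0.9):
    """Toy combination of the two auxiliary networks (all thresholds and
    the ranking rule are assumptions): keep candidates whose bounding-box
    quality passes `quality_fn` (a stand-in for G), then return the one
    that `sim_fn` (a stand-in for F) rates most similar to the anchor."""
    good = [c for c in candidates if quality_fn(c) >= qual_thresh]
    scored = [(sim_fn(anchor, c), c) for c in good]
    scored = [sc for sc in scored if sc[0] >= sim_thresh]
    if not scored:
        return None  # no candidate is both well-cropped and similar enough
    return max(scored, key=lambda sc: sc[0])[1]
```

In practice the networks would be CNN forward passes rather than plain callables, but the filter-then-rank structure is the part the sketch illustrates.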
