
ETD

Digital archive of theses defended at the University of Pisa

Thesis etd-11242025-140025


Thesis type
PhD thesis
Author
PURNEKAR, NISCHAY
URN
etd-11242025-140025
Title
Physical Domain Adversarial Attacks: Bridging Theory and Real World Threats
Scientific-disciplinary sector
ING-INF/05 - Information Processing Systems
Degree programme
National PhD Programme in Artificial Intelligence
Supervisors
tutor Prof. Barni, Mauro
co-supervisor Prof. Tondi, Benedetta
Keywords
  • Adversarial Machine Learning
  • License Plate Detection
  • Physical-Domain Adversarial Attacks
  • Print-and-Scan Simulation
  • Printer Source Attribution
  • Robustness Evaluation
Defense session date
05/12/2025
Availability
Full
Abstract
The objective of this thesis is to investigate the feasibility of adversarial attacks in the physical domain, where inputs undergo real-world transformations such as printing, scanning, and photographic acquisition. While Adversarial Machine Learning has been extensively studied in the digital setting, its physical-domain implications remain less explored. The present study focuses on two representative applications of high societal relevance: Printer Source Attribution (PSA) and License Plate Detection (LPD). Both tasks require perturbations that survive complex physical and environmental distortions while remaining visually unobtrusive.

The thesis makes several contributions toward this objective. First, it introduces a realistic Print-and-Scan (P&S) simulation framework capable of modeling degradations such as color shifts, texture variations, ink dispersion, and scanning noise. By embedding this simulator into adversarial optimization, perturbations can be crafted that remain effective after physical reproduction. The proposed simulators also prove valuable beyond attack crafting: they enable reproducible, controlled robustness evaluations of multimedia-security systems under realistic conditions. Building on this foundation, the research proposes novel adversarial strategies for PSA, demonstrating that printer-specific forensic signatures can be suppressed to mislead attribution systems even after documents are reprinted. For LPD, it designs localized perturbations confined to the license plate region, which are shown to effectively disrupt state-of-the-art object detectors both in simulation and under real-world acquisition conditions.
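The simulation-in-the-loop idea described above can be illustrated with a minimal toy sketch. This is not the thesis's actual simulator: the degradation model (per-channel color gain, a box blur standing in for ink dispersion, additive scan noise), all parameter values, and the linear "detector" are hypothetical stand-ins used only to show the Expectation-over-Transformation structure of evaluating a perturbed image through many random physical reproductions.

```python
import numpy as np

rng = np.random.default_rng(0)


def print_and_scan(img, rng):
    """Toy print-and-scan model (illustrative parameters only):
    per-channel color gain, 3x3 box blur approximating ink
    dispersion, and additive scan/sensor noise."""
    gain = rng.uniform(0.9, 1.1, size=(1, 1, img.shape[2]))
    out = np.clip(img * gain, 0.0, 1.0)
    h, w_, _ = out.shape
    blurred = np.empty_like(out)
    for c in range(out.shape[2]):
        p = np.pad(out[:, :, c], 1, mode="edge")
        # average the 9 shifted copies -> box blur
        blurred[:, :, c] = sum(
            p[i:i + h, j:j + w_] for i in range(3) for j in range(3)
        ) / 9.0
    return np.clip(blurred + rng.normal(0.0, 0.02, out.shape), 0.0, 1.0)


# Hypothetical linear "detector": a stand-in for the target model.
w = rng.normal(size=8 * 8 * 3)


def expected_score(img, n=16):
    """EOT-style objective: average detector score over n random
    print-and-scan realizations of the (possibly perturbed) image."""
    return float(np.mean(
        [w @ print_and_scan(img, rng).ravel() for _ in range(n)]
    ))
```

An attack optimized against `expected_score` (rather than the clean digital score) is encouraged to survive the simulated physical channel, which is the mechanism the abstract refers to.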

Beyond these specific tasks, the framework establishes a unified methodology for generating physically robust adversarial examples and enables reproducible robustness evaluations under realistic operational constraints. Extensive experimental validation confirms that conventional digital-only attacks typically fail once subjected to physical distortions, while simulation-guided perturbations maintain significant adversarial power. By combining methodological innovations with comprehensive experimental evidence, the thesis delivers a systematic understanding of physical adversarial threats and provides practical tools for their assessment. Overall, it contributes to the development of more resilient, reliable, and socially responsible machine learning systems, reinforcing the trustworthiness of technologies deployed in critical infrastructures.