Figure: Adversarial images created with EvoSeed can deceive a range of classifiers tailored to various tasks.
Note that the generated natural adversarial images differ from non-adversarial ones, reflecting their unrestricted nature.
Key Contributions:
We propose a black-box algorithmic framework based on an Evolutionary Strategy titled EvoSeed to
generate natural adversarial samples in an unrestricted setting.
Our results show that adversarial samples created using EvoSeed are photo-realistic and do not change human
perception of the generated image, yet they can be misclassified by various robust and non-robust classifiers.
Abstract
Deep neural networks can be exploited using natural adversarial samples, which do not impact human perception.
Current approaches often rely on the white-box nature of deep neural networks to generate these adversarial samples or
synthetically shift the distribution of adversarial samples away from the training distribution.
In contrast, we propose EvoSeed, a novel evolutionary strategy-based algorithmic framework for generating
photo-realistic natural adversarial samples.
Our EvoSeed framework uses auxiliary Conditional Diffusion and Classifier models to operate in a black-box
setting.
We employ CMA-ES to optimize the search for an initial seed vector, which, when processed by the Conditional
Diffusion Model, results in a natural adversarial sample misclassified by the Classifier Model.
Experiments show that the generated adversarial images are of high quality, raising concerns about the generation
of harmful content that bypasses safety classifiers.
Our research opens new avenues to understanding the limitations of current safety mechanisms and the risk of
plausible attacks against classifier systems using image generation.
EvoSeed Framework
Figure:
Illustration of the EvoSeed framework to optimize initial seed vector \( z \)
to generate a natural adversarial sample. The Covariance Matrix Adaptation Evolution Strategy (CMA-ES)
iteratively refines the initial seed vector \( z \) and finds an adversarial initial seed vector \( z' \). This
adversarial seed vector \( z' \) can then be utilized by the Conditional Diffusion Model \( G \) to generate a
natural adversarial sample \( x \) capable of deceiving the Classifier Model \( F \).
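As a rough illustration of this loop, the sketch below shows how such a seed search could be wired up with the off-the-shelf cma package. The G, F, fitness, and evoseed_search names, the latent-shape handling, and the choice of fitness (the classifier's confidence in the conditioned class \( c \)) are illustrative assumptions, not the paper's exact implementation.

```python
import cma          # pip install cma -- reference CMA-ES implementation
import numpy as np
import torch


def fitness(z_flat, G, F, c, latent_shape):
    """Lower is better: the classifier's confidence in the conditioned class c.

    G and F are assumed to be callables wrapping the Conditional Diffusion Model
    and the (black-box) Classifier Model; c is used both as the diffusion
    conditioning and as the class index whose confidence the search tries to reduce.
    """
    z = torch.from_numpy(z_flat.astype(np.float32)).reshape(latent_shape)
    with torch.no_grad():
        image = G(z, c)                        # diffusion sampling from the seed z
        probs = F(image).softmax(dim=-1)       # classifier probabilities (no gradients needed)
    return probs[0, c].item()


def evoseed_search(G, F, c, latent_shape, sigma=0.1, budget=100):
    """Search for an adversarial seed z' starting from a random seed z."""
    z0 = np.random.randn(int(np.prod(latent_shape)))
    es = cma.CMAEvolutionStrategy(z0, sigma, {"maxiter": budget})
    while not es.stop():
        candidates = es.ask()                  # sample perturbed seed vectors
        es.tell(candidates, [fitness(z, G, F, c, latent_shape) for z in candidates])
    return np.asarray(es.result.xbest).reshape(latent_shape)   # adversarial seed z'
```

In this setup only forward calls to \( G \) and \( F \) are required, which is what makes the search black-box.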
Adversarial Images for Object Classification Task
Figure: Exemplar adversarial images generated for the Object Classification Task.
We show that images aligned with the conditioning can still be misclassified.
Adversarial Images bypass Safety Checkers
Figure: We demonstrate a malicious use of EvoSeed to generate harmful content that bypasses safety mechanisms.
These adversarial images are misclassified as appropriate, highlighting the need for better post-generation
checking of such images.
Adversarial Images for Ethnicity Classification Task
Figure: We demonstrate an application of EvoSeed that causes the ethnicity of the individual in the generated
image to be misclassified. This raises concerns about misrepresentation of demographic groups in estimates
produced by such classifiers.
Adversarial Images exploiting Misalignment
Figure: Exemplar adversarial images generated by EvoSeed in which the gender of the person in the generated
image was changed. This example also shows the brittleness of current diffusion models, which can generate
images that are not aligned with the conditioning.
Evolution of an Adversarial Image
Figure: Demonstration of the classifier's degrading confidence in the conditioned object \( c \) for the
generated images.
Note that the right-most image is the adversarial image misclassified by the classifier model, and the left-most
image is the initial non-adversarial image with the highest confidence.
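A confidence trajectory like the one pictured could be logged by recording the best fitness value of each CMA-ES generation. The snippet below is a hedged variation of the framework sketch above and reuses its hypothetical fitness helper and the illustrative G, F, c, and latent_shape arguments.

```python
import cma
import numpy as np

# Reuses the hypothetical fitness() helper from the framework sketch above.
def evoseed_search_with_trace(G, F, c, latent_shape, sigma=0.1, budget=100):
    z0 = np.random.randn(int(np.prod(latent_shape)))
    es = cma.CMAEvolutionStrategy(z0, sigma, {"maxiter": budget})
    confidence_trace = []                      # best confidence in class c per generation
    while not es.stop():
        candidates = es.ask()
        scores = [fitness(z, G, F, c, latent_shape) for z in candidates]
        es.tell(candidates, scores)
        confidence_trace.append(min(scores))   # intermediate images correspond to these points
    return np.asarray(es.result.xbest).reshape(latent_shape), confidence_trace
```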
Bibtex
@article{kotyan2024EvoSeed,
  title     = {Breaking Free: How to Hack Safety Guardrails in Black-Box Diffusion Models!},
  author    = {Kotyan, Shashank and Mao, Po-Yuan and Chen, Pin-Yu and Vargas, Danilo Vasconcellos},
  year      = {2024},
  month     = may,
  number    = {arXiv:2402.04699},
  eprint    = {2402.04699},
  publisher = {{arXiv}},
  doi       = {10.48550/arXiv.2402.04699},
}