Breaking Free: How to Hack Safety Guardrails in Black-Box Diffusion Models!

Shashank Kotyan 1* Po-Yuan Mao 1* Pin-Yu Chen 2 Danilo Vasconcellos Vargas 1

1 Kyushu University 2 IBM Research * Equal Contribution



Is it a Volcano? Is it a Seashore? It's a generated image that can fool AI.




Figure: Adversarial images created with EvoSeed are prime examples of how to deceive a range of classifiers tailored for various tasks. Note that the generated natural adversarial images differ from non-adversarial ones, suggesting the adversarial images' unrestricted nature.



Key Contributions




Abstract

Deep neural networks can be exploited using natural adversarial samples, which do not impact human perception. Current approaches often rely on the white-box nature of deep neural networks to generate these adversarial samples, or they synthetically shift the distribution of adversarial samples away from the training distribution. In contrast, we propose EvoSeed, a novel evolutionary strategy-based algorithmic framework for generating photo-realistic natural adversarial samples. Our EvoSeed framework uses auxiliary Conditional Diffusion and Classifier models to operate in a black-box setting. We employ CMA-ES to optimize the search for an initial seed vector which, when processed by the Conditional Diffusion Model, results in a natural adversarial sample misclassified by the Classifier Model. Experiments show that the generated adversarial images are of high image quality, raising concerns about the generation of harmful content that bypasses safety classifiers. Our research opens new avenues to understanding the limitations of current safety mechanisms and the risk of plausible attacks against classifier systems using image generation.




EvoSeed Framework

Figure: Illustration of the EvoSeed framework to optimize initial seed vector \( z \) to generate a natural adversarial sample. The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) iteratively refines the initial seed vector \( z \) and finds an adversarial initial seed vector \( z' \). This adversarial seed vector \( z' \) can then be utilized by the Conditional Diffusion Model \( G \) to generate a natural adversarial sample \( x \) capable of deceiving the Classifier Model \( F \).
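The search loop above can be sketched in code. The following is a minimal, self-contained illustration only: `toy_generator` and `toy_classifier` are hypothetical stand-ins for the diffusion model \( G \) and classifier \( F \), and a simple population-based evolution strategy replaces the full CMA-ES used in the paper. The only signals the search uses are the classifier's output scores, matching the black-box setting.

```python
import numpy as np

def toy_generator(z):
    # Stand-in for the Conditional Diffusion Model G: maps a seed vector
    # to a generated "image" (here, just a bounded nonlinear transform).
    return np.tanh(z)

def toy_classifier(x):
    # Stand-in for the Classifier Model F: returns the confidence assigned
    # to the conditioned class c (here, a sigmoid of the pixel sum).
    return 1.0 / (1.0 + np.exp(-x.sum()))

def evoseed_search(z0, generator, classifier, sigma=0.1, pop=8,
                   iters=50, eps=0.5, seed=0):
    """Black-box search for an adversarial seed z' near z0.

    Fitness is the classifier's confidence in the conditioned class:
    lower fitness means closer to misclassification. Candidates are
    clipped to an eps-ball around z0 so z' stays near the original seed.
    """
    rng = np.random.default_rng(seed)
    z = z0.copy()
    for _ in range(iters):
        # Sample a population of perturbed seeds around the current best.
        candidates = [
            np.clip(z + sigma * rng.standard_normal(z.shape),
                    z0 - eps, z0 + eps)
            for _ in range(pop)
        ]
        # Evaluate each candidate end-to-end: seed -> image -> confidence.
        fitness = [classifier(generator(c)) for c in candidates]
        best = candidates[int(np.argmin(fitness))]
        # Greedy acceptance: keep the candidate only if it lowers confidence.
        if classifier(generator(best)) < classifier(generator(z)):
            z = best
    return z

z0 = np.ones(8)                                   # initial seed vector z
z_adv = evoseed_search(z0, toy_generator, toy_classifier)
conf0 = toy_classifier(toy_generator(z0))         # confidence for z
conf_adv = toy_classifier(toy_generator(z_adv))   # confidence for z'
```

In the actual framework, the expensive step is each fitness evaluation (a full diffusion sampling pass followed by classification); CMA-ES is a natural choice there because it adapts its sampling distribution from few, noisy evaluations.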



Adversarial Images for the Object Classification Task

Figure: Exemplar adversarial images generated for the Object Classification Task. We show that images aligned with the conditioning can still be misclassified.



Adversarial Images Bypass Safety Checkers

Figure: We demonstrate a malicious use of EvoSeed to generate harmful content that bypasses safety mechanisms. These adversarial images are misclassified as appropriate, highlighting the need for better post-generation checks on such images.



Adversarial Images for the Ethnicity Classification Task

Figure: We demonstrate an application of EvoSeed to misclassify an individual's ethnicity in the generated image. This raises concerns about skewing the demographic estimates produced by such classifiers.



Adversarial Images Exploiting Misalignment

Figure: Exemplar adversarial images generated by EvoSeed in which the gender of the person in the generated image was changed. This example also exposes brittleness in current diffusion models, which can generate images misaligned with the conditioning.



Evolution of an Adversarial Image

Figure: Demonstration of degrading confidence on the conditioned object \( c \) by the classifier for generated images. Note that the right-most image is the adversarial image misclassified by the classifier model, and the left-most is the initial non-adversarial image with the highest confidence.



Bibtex

@article{kotyan2024EvoSeed,
	title = {Breaking Free: How to Hack Safety Guardrails in Black-Box Diffusion Models!},
	author = {Kotyan, Shashank and Mao, Po-Yuan and Chen, Pin-Yu and Vargas, Danilo Vasconcellos},
	year = {2024},
	month = may,
	number = {arXiv:2402.04699},
	eprint = {2402.04699},
	publisher = {{arXiv}},
	doi = {10.48550/arXiv.2402.04699},
}