SNOOPI: Supercharged One-step Diffusion Distillation with Proper Guidance

[arXiv Paper] · [🤗 HuggingFace Paper] · [Code (coming soon)] · [BibTeX]

Abstract

Recent approaches have yielded promising results in distilling multi-step text-to-image diffusion models into one-step ones. The state-of-the-art efficient distillation technique, i.e., SwiftBrushv2 (SBv2), even surpasses the teacher model's performance with limited resources. However, our study reveals its instability when handling different diffusion model backbones due to using a fixed guidance scale within the Variational Score Distillation (VSD) loss. Another weakness of existing one-step diffusion models is their lack of support for negative prompt guidance, which is crucial in practical image generation. This paper presents SNOOPI, a novel framework designed to address these limitations by enhancing the guidance in one-step diffusion models during both training and inference. First, we effectively enhance training stability through Proper Guidance - SwiftBrush (PG-SB), which employs a random-scale classifier-free guidance approach. By varying the guidance scale of both teacher models, we broaden their output distributions, resulting in a more robust VSD loss that enables SB to perform effectively across diverse backbones while maintaining competitive performance. Second, we propose a training-free method called Negative-Away Steer Attention (NASA), which integrates negative prompts into one-step diffusion models via cross-attention to suppress undesired elements in generated images. Our experimental results show that our proposed methods significantly improve baseline models across various metrics. Remarkably, we achieve an HPSv2 score of 31.08, setting a new state-of-the-art benchmark for one-step diffusion models.
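The core idea of PG-SB can be illustrated in a few lines: instead of a fixed classifier-free guidance (CFG) scale inside the VSD loss, a scale is drawn at random each training step, which broadens the output distribution of both teacher models. The sketch below is a minimal, hypothetical illustration with NumPy arrays standing in for noise predictions; the function names and the scale range are our assumptions, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def cfg_output(uncond, cond, scale):
    # Classifier-free guidance: push the unconditional prediction
    # toward the text-conditional one by `scale`.
    return uncond + scale * (cond - uncond)

def pg_sb_vsd_step(teacher_uncond, teacher_cond, lora_uncond, lora_cond,
                   scale_range=(2.0, 7.0)):
    """One VSD gradient direction with a randomly sampled CFG scale.

    PG-SB replaces the fixed guidance scale in the VSD loss with a scale
    drawn uniformly per step (the range here is an assumption), applied
    to both the frozen teacher and the LoRA teacher.
    """
    scale = rng.uniform(*scale_range)
    eps_teacher = cfg_output(teacher_uncond, teacher_cond, scale)
    eps_lora = cfg_output(lora_uncond, lora_cond, scale)
    # VSD direction: difference between the two guided score predictions.
    return eps_teacher - eps_lora, scale
```

With a fixed scale, the teacher's guided output distribution is narrow and backbone-sensitive; randomizing the scale per step is what the abstract credits for the improved cross-backbone stability.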
[Animated demo: negative-prompt guidance with NASA]

Quantitative Results

HPSv2 comparisons between our method and previous works. † denotes reported numbers, ‡ denotes our rerun based on the publicly available model checkpoints.
| Method | Anime | Photo | Concept Art | Paintings | Average |
|---|---|---|---|---|---|
| **Stable Diffusion 1.5-based backbone** | | | | | |
| SDv1.5‡ | 26.51 | 27.19 | 26.06 | 26.12 | 26.47 |
| InstaFlow-0.9B‡ | 26.10 | 26.62 | 25.92 | 25.95 | 26.15 |
| DMD2‡ | 26.39 | 27.00 | 25.80 | 25.83 | 26.26 |
| PG-SB | 27.18 | 27.58 | 26.69 | 26.62 | 27.02 |
| PG-SB + NASA | 27.19 | 27.59 | 26.71 | 26.63 | 27.03 |
| **Stable Diffusion 2.1-based backbone** | | | | | |
| SDv2.1† | 27.48 | 26.89 | 26.86 | 27.46 | 27.17 |
| SB† | 26.91 | 27.21 | 26.32 | 26.37 | 26.70 |
| SBv2† | 27.25 | 27.62 | 26.86 | 26.77 | 27.13 |
| PG-SB | 27.56 | 27.84 | 26.97 | 27.03 | 27.35 |
| PG-SB + NASA | 27.71 | 27.99 | 27.14 | 27.27 | 27.53 |
| **PixArt-α-based backbone** | | | | | |
| PixArt-α‡ | 29.62 | 29.17 | 28.79 | 28.69 | 29.07 |
| YOSO‡ | 28.79 | 28.09 | 28.57 | 28.55 | 28.50 |
| DMD‡ | 29.31 | 28.67 | 28.46 | 28.41 | 28.71 |
| PG-SB | 32.19 | 29.09 | 30.39 | 29.69 | 30.34 |
| PG-SB + NASA | 32.56 | 29.55 | 31.24 | 30.96 | 31.08 |

Qualitative Results: Negative Guidance in Inference

NASA Method Demonstration
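NASA is training-free: at inference, each cross-attention layer attends to the negative prompt's text embeddings as well as the positive ones, and the negative-prompt features are subtracted from the attention output to steer the image away from undesired content. The sketch below is a simplified, hypothetical NumPy version; the steering weight `alpha` and the exact placement of the subtraction are our assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q, k, v):
    # Standard scaled dot-product cross-attention:
    # image queries attend to text-embedding keys/values.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def nasa_cross_attention(q, k_pos, v_pos, k_neg, v_neg, alpha=0.5):
    """Negative-Away Steer Attention (sketch).

    Compute attention features for both the positive and the negative
    prompt, then steer the output away from the negative features.
    `alpha` is a hypothetical steering weight.
    """
    out_pos = cross_attention(q, k_pos, v_pos)
    out_neg = cross_attention(q, k_neg, v_neg)
    return out_pos - alpha * out_neg
```

Because this intervenes only in cross-attention at inference time, it can be dropped into an already-distilled one-step model without retraining, which is what makes negative-prompt support practical for these models.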

Qualitative Results: PG-SB for PixArt-α backbone

Qualitative Results: PG-SB for SDv2.1 backbone

Qualitative Results: PG-SB for SDv1.5 backbone

Acknowledgement

We would like to thank the authors of the Latent Consistency Model for providing such an excellent webpage, and we are grateful to the HuggingFace team for their diffusers framework.