GAS: Improving Discretization of Diffusion ODEs via Generalized Adversarial Solver
Abstract
The Generalized Adversarial Solver improves diffusion model sampling efficiency and quality by combining a simple ODE solver parameterization with adversarial training.
While diffusion models achieve state-of-the-art generation quality, they still suffer from computationally expensive sampling. Recent works address this issue with gradient-based optimization methods that distill a few-step ODE diffusion solver from the full sampling process, reducing the number of function evaluations from dozens to just a few. However, these approaches often rely on intricate training techniques and do not explicitly focus on preserving fine-grained details. In this paper, we introduce the Generalized Solver: a simple parameterization of the ODE sampler that does not require additional training tricks and improves quality over existing approaches. We further combine the original distillation loss with adversarial training, which mitigates artifacts and enhances detail fidelity. We call the resulting method the Generalized Adversarial Solver and demonstrate its superior performance compared to existing solver training methods under similar resource constraints. Code is available at https://github.com/3145tttt/GAS.
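To make the idea concrete, here is a minimal sketch of what a few-step solver with trainable per-step coefficients might look like, assuming an x0-predicting denoiser and a fixed noise schedule; the class name, the single scalar coefficient per step, and the Euler-style update are illustrative assumptions, not the paper's exact parameterization.

```python
# Hypothetical sketch of a "generalized" few-step ODE solver: an Euler-style
# update whose per-step coefficients are trainable. Not the paper's exact
# parameterization; names and conventions here are assumptions.
import torch
import torch.nn as nn


class GeneralizedSolver(nn.Module):
    def __init__(self, sigmas: torch.Tensor):
        super().__init__()
        # Fixed noise-level schedule for the few-step sampler (e.g. 6 sigmas -> 5 steps).
        self.register_buffer("sigmas", sigmas)
        # Trainable per-step scaling of the Euler step size.
        self.step_scale = nn.Parameter(torch.ones(len(sigmas) - 1))

    @torch.no_grad()
    def init_noise(self, shape):
        return torch.randn(shape, device=self.sigmas.device) * self.sigmas[0]

    def sample(self, denoiser, x):
        # denoiser(x, sigma) is assumed to predict the clean sample (x0-prediction).
        for i in range(len(self.sigmas) - 1):
            sigma, sigma_next = self.sigmas[i], self.sigmas[i + 1]
            d = (x - denoiser(x, sigma)) / sigma                # ODE drift estimate
            x = x + self.step_scale[i] * (sigma_next - sigma) * d
        return x
```

Because the update reduces to a standard Euler step when every `step_scale` entry is 1, such a parameterization can start from the baseline sampler and only learn corrections during training.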
Community
We introduce the Generalized Adversarial Solver (GAS): a simple yet powerful approach that greatly accelerates diffusion model sampling (up to 5x) without sacrificing generation quality or fine-grained detail
Our method is based on a novel trainable parameterization of the solver that adapts to the diffusion model used in your generation pipeline and thus solves the underlying ODE more precisely
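For intuition on how such a solver could be trained, the following is a rough sketch (not the paper's actual procedure) that combines a distillation loss against a many-step teacher sampler with a non-saturating adversarial loss from a discriminator; the interfaces match the `GeneralizedSolver` sketch above, and the loss weighting and optimizer setup are assumptions.

```python
# Rough training-step sketch: distillation toward a many-step teacher sampler
# plus an adversarial term. All names, the logistic GAN loss, and adv_weight
# are illustrative assumptions, not the paper's exact objective.
import torch
import torch.nn.functional as F


def training_step(solver, teacher_sampler, denoiser, discriminator,
                  opt_solver, opt_disc, batch_shape, adv_weight=0.1):
    noise = solver.init_noise(batch_shape)

    # Teacher target: the same noise integrated with a fine-grained solver.
    with torch.no_grad():
        target = teacher_sampler(denoiser, noise)

    # Discriminator update on detached few-step samples vs. teacher samples.
    fake = solver.sample(denoiser, noise).detach()
    d_loss = (F.softplus(discriminator(fake)).mean()
              + F.softplus(-discriminator(target)).mean())
    opt_disc.zero_grad()
    d_loss.backward()
    opt_disc.step()

    # Solver update: distillation loss plus adversarial loss.
    fake = solver.sample(denoiser, noise)
    loss = (F.mse_loss(fake, target)
            + adv_weight * F.softplus(-discriminator(fake)).mean())
    opt_solver.zero_grad()
    loss.backward()
    opt_solver.step()
    return d_loss.item(), loss.item()
```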
More details about the method's implementation, experiments, and ablation studies can be found in our preprint
Preprint: https://arxiv.org/abs/2510.17699
Code: https://github.com/3145tttt/GAS
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Soft-Di[M]O: Improving One-Step Discrete Image Generation with Soft Embeddings (2025)
- POSE: Phased One-Step Adversarial Equilibrium for Video Diffusion Models (2025)
- Universal Inverse Distillation for Matching Models with Real-Data Supervision (No GANs) (2025)
- Large Scale Diffusion Distillation via Score-Regularized Continuous-Time Consistency (2025)
- Score Distillation of Flow Matching Models (2025)
- SSDD: Single-Step Diffusion Decoder for Efficient Image Tokenization (2025)
- Score-based Idempotent Distillation of Diffusion Models (2025)