Manifold Preserving Guided Diffusion

1Carnegie Mellon University, 2Sony AI, 3Sony Group Corporation, 4Stanford University


We propose MPGD, a training-free sampling method for both pre-trained pixel-space diffusion models and latent diffusion models across a variety of conditional generation applications. MPGD applies to a broad range of tasks with minimal sampling time and high sample quality.


Despite recent advancements, conditional image generation still faces challenges of cost, generalizability, and the need for task-specific training. In this paper, we propose Manifold Preserving Guided Diffusion (MPGD), a training-free conditional generation framework that leverages pre-trained diffusion models and off-the-shelf neural networks with minimal additional inference cost for a broad range of tasks. Specifically, we leverage the manifold hypothesis to refine the guided diffusion steps and introduce a shortcut algorithm in the process. We then propose two methods for on-manifold training-free guidance using pre-trained autoencoders and demonstrate that our shortcut inherently preserves the manifolds when applied to latent diffusion models. Our experiments show that MPGD is efficient and effective for solving a variety of conditional generation problems in low-compute settings, and it consistently offers up to 3.8× speed-ups with the same number of diffusion steps while maintaining high sample quality compared to the baselines.

Proposed Method

A schematic overview of our proposed MPGD and an illustrative comparison with DDIM and DPS.
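To make the comparison with DDIM and DPS concrete, below is a minimal, illustrative sketch of one manifold-preserving guided sampling step in PyTorch. It follows the idea described in the abstract: guide the clean-sample estimate rather than the noisy iterate, optionally project it back onto the data manifold with a pre-trained autoencoder, then take a deterministic DDIM-style transition. All names (`eps_model`, `guidance_loss`, `autoencoder`, `lr`) are assumptions for illustration, not the authors' exact implementation.

```python
import torch

def mpgd_step(x_t, t, t_prev, eps_model, alpha_bar, guidance_loss,
              lr=1.0, autoencoder=None):
    """One guided DDIM-style step in the spirit of MPGD (sketch).

    eps_model(x_t, t) predicts the noise; guidance_loss(x0) scores the
    clean-sample estimate; autoencoder, if given, projects the guided
    estimate back toward the data manifold (autoencoder variant).
    """
    a_t, a_prev = alpha_bar[t], alpha_bar[t_prev]
    eps = eps_model(x_t, t)
    # Tweedie-style estimate of the clean sample from the noisy one.
    x0 = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()

    # Guide the clean estimate instead of x_t, so the update stays
    # close to the (locally linear) data manifold.
    x0 = x0.detach().requires_grad_(True)
    grad = torch.autograd.grad(guidance_loss(x0), x0)[0]
    x0 = (x0 - lr * grad).detach()

    # Optional manifold projection via a pre-trained autoencoder.
    if autoencoder is not None:
        x0 = autoencoder.decode(autoencoder.encode(x0))

    # Deterministic DDIM transition to the previous timestep.
    return a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps
```

In contrast, a DPS-style step would backpropagate the guidance gradient through the noise predictor with respect to x_t; avoiding that backward pass through the diffusion network is the source of the reported speed-ups.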


Comparison of baselines and our proposed MPGD on noisy linear inverse problems.

Comparison of facial recognition (FaceID) guidance generation with our MPGD and baselines.

Comparison of style guidance Stable Diffusion generation with our MPGD and baselines.

Examples of MPGD FaceID guidance Stable Diffusion generation combined with different optimization algorithms.