New Stable Diffusion Models Accelerated with NVIDIA TensorRT

- At CES, NVIDIA announced that popular Stable Diffusion models, including SDXL Turbo, LCM-LoRA, and Stable Video Diffusion, have been accelerated with NVIDIA TensorRT.
- The acceleration lets owners of GeForce RTX GPUs generate images in real time and significantly cuts video-generation time, speeding up creative workflows.
- Powered by Tensor Cores and TensorRT optimizations, GeForce RTX GPUs can produce up to four images per second, enabling real-time SDXL image generation.
- The SDXL Turbo model, available for download on Hugging Face, accelerates image generation by reducing denoising to just four steps from the traditional 50, at some cost to image quality.
- The LCM-LoRA model, also available on Hugging Face, combines a latent consistency model with LoRA and runs roughly 9x faster thanks to TensorRT optimizations.
- Stability AI's Stable Video Diffusion model, built on its Stable Diffusion image model, will be available for download soon and serves as a foundation model for generative video.
- Developers can access the Stable Diffusion Web UI TensorRT extension from the NVIDIA/Stable-Diffusion-WebUI-TensorRT GitHub repository, which includes TensorRT acceleration for SDXL, SDXL Turbo, and LCM-LoRA models.
- The NVIDIA Generative AI on RTX PCs Developer Contest encourages developers to create generative AI-powered Windows apps or plugins, with a chance to win prizes including a GeForce RTX 4090 GPU and a conference pass to GTC.
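The speedups above come from two compounding factors: fewer denoising steps (SDXL Turbo's four steps versus the traditional 50) and faster execution of each step under TensorRT. A back-of-envelope sketch makes the arithmetic concrete; all per-step latencies below are illustrative assumptions, not published NVIDIA benchmarks:

```python
# Back-of-envelope model of diffusion throughput: total time is roughly
# (number of denoising steps) x (latency per step). Cutting 50 steps to 4
# and speeding up each step with TensorRT multiply together.
# NOTE: the latency figures here are illustrative assumptions only.

def images_per_second(steps: int, step_latency_s: float) -> float:
    """Throughput for a pipeline dominated by per-step denoising cost."""
    return 1.0 / (steps * step_latency_s)

# Traditional SDXL schedule: 50 denoising steps (assumed 125 ms/step).
baseline = images_per_second(steps=50, step_latency_s=0.125)

# SDXL Turbo: 4 steps at the same assumed per-step cost.
turbo = images_per_second(steps=4, step_latency_s=0.125)

# SDXL Turbo plus an assumed ~2x per-step gain from a TensorRT engine.
turbo_trt = images_per_second(steps=4, step_latency_s=0.0625)

print(f"baseline SDXL:     {baseline:.2f} img/s")   # 0.16 img/s
print(f"SDXL Turbo:        {turbo:.2f} img/s")      # 2.00 img/s
print(f"SDXL Turbo + TRT:  {turbo_trt:.2f} img/s")  # 4.00 img/s
```

Under these assumed numbers the combined effect lands near the "up to four images per second" figure NVIDIA cites; the point is that step reduction and per-step optimization multiply rather than add.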