diffusion

[CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers
https://arxiv.org/abs/2406.17343, CVPR 2025 (Accepted)
"Recent advancements in diffusion models, particularly the architectural transformation from UNet-based models to Diffusion Transformers (DiTs), significantly improve the quality and scalability of image and video generation. However, despite their impressi..."

[ECCV 2024] Post-training Quantization for Text-to-Image Diffusion Models with Progressive Calibration and Activation Relaxing
https://arxiv.org/abs/2311.06322, ECCV 2024
"High computational overhead is a troublesome problem for diffusion models. Recent studies have leveraged post-training quantization (PTQ) to compress diffusion models. However, most of them only focus on unconditional models, leaving the q..."