Computer Vision

[ECCV2024] Post-training Quantization for Text-to-Image Diffusion Models with Progressive Calibration and Activation Relaxing
https://arxiv.org/abs/2311.06322 (ECCV 2024)
High computational overhead is a troublesome problem for diffusion models. Recent studies have leveraged post-training quantization (PTQ) to compress diffusion models. However, most of them only focus on unconditional models, leaving the q..

[AAAI2021] Cross-Layer Distillation with Semantic Calibration
Chen, Defang, et al. "Cross-layer distillation with semantic calibration." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35, No. 8. 2021.
https://ojs.aaai.org/index.php/AAAI/article/view/16865
Abstract: Conventional feature distillation, which transfers knowledge based on feature maps, is an effective way to train the student model. However, semantic information is distributed across diverse layers, which can act as negative regularization..

[CVPR2023] Hard Sample Matters a Lot in Zero-Shot Quantization
https://openaccess.thecvf.com/content/CVPR2023/html/Li_Hard_Sample_Matters_a_Lot_in_Zero-Shot_Quantization_CVPR_2023_paper.html
Huantong Li, Xiangmiao Wu, Fanbing Lv, Daihai Liao, Thomas H. Li, Yonggang Zhang, Bo Han, Mingkui Tan; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (..
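Two of the posts above concern post-training quantization. As background, a minimal sketch of uniform affine min-max quantization of a weight tensor, the basic operation PTQ methods build on (this is the generic textbook scheme, not the calibration methods proposed in the papers above):

```python
# Minimal sketch of uniform affine quantization: map float weights to b-bit
# integer codes and dequantize them back. Generic min-max scheme for
# illustration only, not the papers' specific methods.

def quantize(weights, bits=8):
    """Return (integer codes in [0, 2**bits - 1], dequantized floats)."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (2 ** bits - 1) or 1.0  # guard against a constant tensor
    zero_point = round(-lo / scale)             # integer code that represents 0.0
    q = [max(0, min(2 ** bits - 1, round(w / scale) + zero_point)) for w in weights]
    dq = [(v - zero_point) * scale for v in q]  # reconstruction error <= scale / 2
    return q, dq

w = [-0.51, -0.02, 0.33, 1.27]
codes, recon = quantize(w, bits=8)
print(codes)                                          # integer codes in [0, 255]
print(max(abs(a - b) for a, b in zip(w, recon)))      # error bounded by scale / 2
```

The PTQ literature these posts cover refines this baseline, e.g. by choosing the clipping range and calibration data carefully rather than using the raw min-max statistics.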