Kwon, Se Jung

13 publications

NeurIPS 2025. CodeGEMM: A Codebook-Centric Approach to Efficient GEMM in Quantized LLMs. Gunho Park, Jeongin Bae, Byeongwook Kim, Baeseong Park, Jiwon Ryu, Hoseung Kim, Se Jung Kwon, Dongsoo Lee.
NeurIPS 2025. Diffusion Adaptive Text Embedding for Text-to-Image Diffusion Models. Byeonghu Na, Minsang Park, Gyuwon Sim, Donghyeok Shin, HeeSun Bae, Mina Kang, Se Jung Kwon, Wanmo Kang, Il-chul Moon.
NeurIPS 2024. DropBP: Accelerating Fine-Tuning of Large Language Models by Dropping Backward Propagation. Sunghyeon Woo, Baeseong Park, Byeongwook Kim, Minjung Jo, Se Jung Kwon, Dongsuk Jeon, Dongsoo Lee.
ICLR 2024. LUT-GEMM: Quantized Matrix Multiplication Based on LUTs for Efficient Inference in Large-Scale Generative Language Models. Gunho Park, Baeseong Park, Minsub Kim, Sungjae Lee, Jeonghoon Kim, Beomseok Kwon, Se Jung Kwon, Byeongwook Kim, Youngjoo Lee, Dongsoo Lee.
ICLR 2024. Label-Noise Robust Diffusion Models. Byeonghu Na, Yeongmin Kim, HeeSun Bae, Jung Hyun Lee, Se Jung Kwon, Wanmo Kang, Il-chul Moon.
ICLR 2024. Rethinking Channel Dimensions to Isolate Outliers for Low-Bit Weight Quantization of Large Language Models. Jung Hwan Heo, Jeonghoon Kim, Beomseok Kwon, Byeongwook Kim, Se Jung Kwon, Dongsoo Lee.
ICML 2023. FlexRound: Learnable Rounding Based on Element-Wise Division for Post-Training Quantization. Jung Hyun Lee, Jeonghoon Kim, Se Jung Kwon, Dongsoo Lee.
NeurIPS 2023. Memory-Efficient Fine-Tuning of Compressed Large Language Models via Sub-4-Bit Integer Quantization. Jeonghoon Kim, Jung Hyun Lee, Sungdong Kim, Joonsuk Park, Kang Min Yoo, Se Jung Kwon, Dongsoo Lee.
ICML 2023. Refining Generative Process with Discriminator Guidance in Score-Based Diffusion Models. Dongjun Kim, Yeongmin Kim, Se Jung Kwon, Wanmo Kang, Il-chul Moon.
ICLR 2023. Winning Both the Accuracy of Floating Point Activation and the Simplicity of Integer Arithmetic. Yulhwa Kim, Jaeyong Jang, Jehun Lee, Jihoon Park, Jeonghoon Kim, Byeongwook Kim, Baeseong Park, Se Jung Kwon, Dongsoo Lee, Jae-Joon Kim.
ICLR 2022. Encoding Weights of Irregular Sparsity for Fixed-to-Fixed Model Compression. Baeseong Park, Se Jung Kwon, Daehwan Oh, Byeongwook Kim, Dongsoo Lee.
NeurIPS 2022. Maximum Likelihood Training of Implicit Nonlinear Diffusion Model. Dongjun Kim, Byeonghu Na, Se Jung Kwon, Dongsoo Lee, Wanmo Kang, Il-chul Moon.
NeurIPS 2020. FleXOR: Trainable Fractional Quantization. Dongsoo Lee, Se Jung Kwon, Byeongwook Kim, Yongkweon Jeon, Baeseong Park, Jeongin Yun.