Kim, Byeongwook

7 publications

NeurIPS 2025. CodeGEMM: A Codebook-Centric Approach to Efficient GEMM in Quantized LLMs. Gunho Park, Jeongin Bae, Byeongwook Kim, Baeseong Park, Jiwon Ryu, Hoseung Kim, Se Jung Kwon, Dongsoo Lee
NeurIPS 2024. DropBP: Accelerating Fine-Tuning of Large Language Models by Dropping Backward Propagation. Sunghyeon Woo, Baeseong Park, Byeongwook Kim, Minjung Jo, Se Jung Kwon, Dongsuk Jeon, Dongsoo Lee
ICLR 2024. LUT-GEMM: Quantized Matrix Multiplication Based on LUTs for Efficient Inference in Large-Scale Generative Language Models. Gunho Park, Baeseong Park, Minsub Kim, Sungjae Lee, Jeonghoon Kim, Beomseok Kwon, Se Jung Kwon, Byeongwook Kim, Youngjoo Lee, Dongsoo Lee
ICLR 2024. Rethinking Channel Dimensions to Isolate Outliers for Low-Bit Weight Quantization of Large Language Models. Jung Hwan Heo, Jeonghoon Kim, Beomseok Kwon, Byeongwook Kim, Se Jung Kwon, Dongsoo Lee
ICLR 2023. Winning Both the Accuracy of Floating Point Activation and the Simplicity of Integer Arithmetic. Yulhwa Kim, Jaeyong Jang, Jehun Lee, Jihoon Park, Jeonghoon Kim, Byeongwook Kim, Baeseong Park, Se Jung Kwon, Dongsoo Lee, Jae-Joon Kim
ICLR 2022. Encoding Weights of Irregular Sparsity for Fixed-to-Fixed Model Compression. Baeseong Park, Se Jung Kwon, Daehwan Oh, Byeongwook Kim, Dongsoo Lee
NeurIPS 2020. FleXOR: Trainable Fractional Quantization. Dongsoo Lee, Se Jung Kwon, Byeongwook Kim, Yongkweon Jeon, Baeseong Park, Jeongin Yun