Cho, Jaewoong

21 publications

NeurIPS 2025. Delving into Large Language Models for Effective Time-Series Anomaly Detection. Junwoo Park, Kyudan Jung, Dohyun Lee, Hyuck Lee, Daehoon Gwak, ChaeHun Park, Jaegul Choo, Jaewoong Cho.
ICLR 2025. DiTTo-TTS: Diffusion Transformers for Scalable Text-to-Speech Without Domain-Specific Factors. Keon Lee, Dong Won Kim, Jaehyeon Kim, Seungjun Chung, Jaewoong Cho.
NeurIPS 2025. Distilling LLM Agent into Small Models with Retrieval and Code Tools. Minki Kang, Jongwon Jeong, Seanie Lee, Jaewoong Cho, Sung Ju Hwang.
ICML 2025. Efficient Generative Modeling with Residual Vector Quantization-Based Tokens. Jaehyeon Kim, Taehong Moon, Keon Lee, Jaewoong Cho.
ICML 2025. Lexico: Extreme KV Cache Compression via Sparse Coding over Universal Dictionaries. Junhyuck Kim, Jongho Park, Jaewoong Cho, Dimitris Papailiopoulos.
ICLRW 2025. Lexico: Extreme KV Cache Compression via Sparse Coding over Universal Dictionaries. Junhyuck Kim, Jongho Park, Jaewoong Cho, Dimitris Papailiopoulos.
ICLR 2025. Rare-to-Frequent: Unlocking Compositional Generation Power of Diffusion Models on Rare Concepts with LLM Guidance. Dongmin Park, Sebin Kim, Taehong Moon, Minkyu Kim, Kangwook Lee, Jaewoong Cho.
TMLR 2025. Task Diversity Shortens the In-Context Learning Plateau. Jaeyeon Kim, Sehyun Kwon, Joo Young Choi, Jongho Park, Jaewoong Cho, Jason D. Lee, Ernest K. Ryu.
ICML 2024. A Simple Early Exiting Framework for Accelerated Sampling in Diffusion Models. Taehong Moon, Moonseok Choi, Eunggu Yun, Jongmin Yoon, Gayoung Lee, Jaewoong Cho, Juho Lee.
ICLR 2024. CLaM-TTS: Improving Neural Codec Language Model for Zero-Shot Text-to-Speech. Jaehyeon Kim, Keon Lee, Seungjun Chung, Jaewoong Cho.
ICML 2024. Can Mamba Learn How to Learn? A Comparative Study on In-Context Learning Tasks. Jongho Park, Jaeseung Park, Zheyang Xiong, Nayoung Lee, Jaewoong Cho, Samet Oymak, Kangwook Lee, Dimitris Papailiopoulos.
ICLR 2024. Image Clustering Conditioned on Text Criteria. Sehyun Kwon, Jaeseung Park, Minkyu Kim, Jaewoong Cho, Ernest K. Ryu, Kangwook Lee.
NeurIPS 2024. Latent Paraphrasing: Perturbation on Layers Improves Knowledge Injection in Language Models. Minki Kang, Sung Ju Hwang, Gibbeum Lee, Jaewoong Cho.
TMLR 2024. Mini-Batch Optimization of Contrastive Loss. Jaewoong Cho, Kartik Sreenivasan, Keon Lee, Kyunghoo Mun, Soheun Yi, Jeong-Gwan Lee, Anna Lee, Jy-yong Sohn, Dimitris Papailiopoulos, Kangwook Lee.
TMLR 2024. Predictive Pipelined Decoding: A Compute-Latency Trade-Off for Exact LLM Decoding. Seongjun Yang, Gibbeum Lee, Jaewoong Cho, Dimitris Papailiopoulos, Kangwook Lee.
TMLR 2024. Simple Drop-in LoRA Conditioning on Attention Layers Will Improve Your Diffusion Model. Joo Young Choi, Jaesung R. Park, Inkyu Park, Jaewoong Cho, Albert No, Ernest K. Ryu.
NeurIPS 2023. Censored Sampling of Diffusion Models Using 3 Minutes of Human Feedback. TaeHo Yoon, Kibeom Myoung, Keon Lee, Jaewoong Cho, Albert No, Ernest K. Ryu.
NeurIPSW 2023. Image Clustering Conditioned on Text Criteria. Sehyun Kwon, Jaeseung Park, Minkyu Kim, Jaewoong Cho, Ernest K. Ryu, Kangwook Lee.
ICLRW 2023. Mini-Batch Optimization of Contrastive Loss. Kartik Sreenivasan, Keon Lee, Jeong-Gwan Lee, Anna Lee, Jaewoong Cho, Jy-yong Sohn, Dimitris Papailiopoulos, Kangwook Lee.
ICMLW 2023. Predictive Pipelined Decoding: A Compute-Latency Trade-Off for Exact LLM Decoding. Seongjun Yang, Gibbeum Lee, Jaewoong Cho, Dimitris Papailiopoulos, Kangwook Lee.
NeurIPS 2020. A Fair Classifier Using Kernel Density Estimation. Jaewoong Cho, Gyeongjo Hwang, Changho Suh.