Kim Sung-Bin

7 publications

ICLR 2025. AVHBench: A Cross-Modal Hallucination Benchmark for Audio-Visual Large Language Models. Kim Sung-Bin, Oh Hyun-Bin, JungMok Lee, Arda Senocak, Joon Son Chung, Tae-Hyun Oh.
CVPR 2025. Perceptually Accurate 3D Talking Head Generation: New Definitions, Speech-Mesh Representation, and Evaluation Metrics. Lee Chae-Yeon, Oh Hyun-Bin, Han EunGi, Kim Sung-Bin, Suekyeong Nam, Tae-Hyun Oh.
AAAI 2025. SoundBrush: Sound as a Brush for Visual Scene Editing. Kim Sung-Bin, Kim Jun-Seong, Junseok Ko, Yewon Kim, Tae-Hyun Oh.
ICCV 2025. VoiceCraft-Dub: Automated Video Dubbing with Neural Codec Language Models. Kim Sung-Bin, Jeongsoo Choi, Puyuan Peng, Joon Son Chung, Tae-Hyun Oh, David Harwath.
TMLR 2024. A Large-Scale 3D Face Mesh Video Dataset via Neural Re-Parameterized Optimization. Kim Youwang, Lee Hyun, Kim Sung-Bin, Suekyeong Nam, Janghoon Ju, Tae-Hyun Oh.
WACV 2024. LaughTalk: Expressive 3D Talking Head Generation with Laughter. Kim Sung-Bin, Lee Hyun, Da Hye Hong, Suekyeong Nam, Janghoon Ju, Tae-Hyun Oh.
CVPR 2023. Sound to Visual Scene Generation by Audio-to-Visual Latent Alignment. Kim Sung-Bin, Arda Senocak, Hyunwoo Ha, Andrew Owens, Tae-Hyun Oh.