Senocak, Arda

10 publications

ICLR 2025 · AVHBench: A Cross-Modal Hallucination Benchmark for Audio-Visual Large Language Models · Kim Sung-Bin, Oh Hyun-Bin, JungMok Lee, Arda Senocak, Joon Son Chung
NeurIPS 2025 · Model-Guided Dual-Role Alignment for High-Fidelity Open-Domain Video-to-Audio Generation · Kang Zhang, Trung X. Pham, Suyeon Lee, Axi Niu, Arda Senocak, Joon Son Chung
CVPR 2025 · Seeing Speech and Sound: Distinguishing and Locating Audio Sources in Visual Scenes · Hyeonggon Ryu, Seongyu Kim, Joon Son Chung, Arda Senocak
WACV 2024 · Can CLIP Help Sound Source Localization? · Sooyoung Park, Arda Senocak, Joon Son Chung
WACV 2023 · Event-Specific Audio-Visual Fusion Layers: A Simple and New Perspective on Video Understanding · Arda Senocak, Junsik Kim, Tae-Hyun Oh, Dingzeyu Li, In So Kweon
ICCV 2023 · Sound Source Localization Is All About Cross-Modal Alignment · Arda Senocak, Hyeonggon Ryu, Junsik Kim, Tae-Hyun Oh, Hanspeter Pfister, Joon Son Chung
CVPR 2023 · Sound to Visual Scene Generation by Audio-to-Visual Latent Alignment · Kim Sung-Bin, Arda Senocak, Hyunwoo Ha, Andrew Owens, Tae-Hyun Oh
WACV 2022 · Less Can Be More: Sound Source Localization with a Classification Model · Arda Senocak, Hyeonggon Ryu, Junsik Kim, In So Kweon
CVPRW 2018 · On Learning Association of Sound Source and Visual Scenes · Arda Senocak, Tae-Hyun Oh, Junsik Kim, Ming-Hsuan Yang, In So Kweon
CVPRW 2018 · Part-Based Player Identification Using Deep Convolutional Representation and Multi-Scale Pooling · Arda Senocak, Tae-Hyun Oh, Junsik Kim, In So Kweon