Kant, Yash

11 publications

CVPR 2025. Pippo: High-Resolution Multi-View Humans from a Single Image. Yash Kant, Ethan Weber, Jin Kyu Kim, Rawal Khirodkar, Su Zhaoen, Julieta Martinez, Igor Gilitschenski, Shunsuke Saito, Timur Bagautdinov.

ICLR 2025. SG-I2V: Self-Guided Trajectory Control in Image-to-Video Generation. Koichi Namekata, Sherwin Bahmani, Ziyi Wu, Yash Kant, Igor Gilitschenski, David B. Lindell.

CVPR 2025. Vid2Avatar-Pro: Authentic Avatar from Videos in the Wild via Universal Prior. Chen Guo, Junxuan Li, Yash Kant, Yaser Sheikh, Shunsuke Saito, Chen Cao.

WACV 2024. AvatarOne: Monocular 3D Human Animation. Akash Karthikeyan, Robert Ren, Yash Kant, Igor Gilitschenski.

CVPR 2024. SPAD: Spatially Aware Multi-View Diffusers. Yash Kant, Aliaksandr Siarohin, Ziyi Wu, Michael Vasilkovsky, Guocheng Qian, Jian Ren, Riza Alp Guler, Bernard Ghanem, Sergey Tulyakov, Igor Gilitschenski.

CVPRW 2023. CAMM: Building Category-Agnostic and Animatable 3D Models from Monocular Videos. Tianshu Kuai, Akash Karthikeyan, Yash Kant, Ashkan Mirzaei, Igor Gilitschenski.

CVPR 2023. Invertible Neural Skinning. Yash Kant, Aliaksandr Siarohin, Riza Alp Guler, Menglei Chai, Jian Ren, Sergey Tulyakov, Igor Gilitschenski.

ECCV 2022. Housekeep: Tidying Virtual Households Using Commonsense Reasoning. Yash Kant, Arun Ramachandran, Sriram Yenamandra, Igor Gilitschenski, Dhruv Batra, Andrew Szot, Harsh Agrawal.

ECCV 2022. LaTeRF: Label and Text Driven Object Radiance Fields. Ashkan Mirzaei, Yash Kant, Jonathan Kelly, Igor Gilitschenski.

ICCV 2021. Contrast and Classify: Training Robust VQA Models. Yash Kant, Abhinav Moudgil, Dhruv Batra, Devi Parikh, Harsh Agrawal.

ECCV 2020. Spatially Aware Multimodal Transformers for TextVQA. Yash Kant, Dhruv Batra, Peter Anderson, Alexander Schwing, Devi Parikh, Jiasen Lu, Harsh Agrawal.