Thomason, Jesse

22 publications

CoRL 2025. Efficient Evaluation of Multi-Task Robot Policies with Active Experiment Selection. Abrar Anwar, Rohan Gupta, Zain Merchant, Sayan Ghosh, Willie Neiswanger, Jesse Thomason.
CoRL 2025. ReWiND: Language-Guided Rewards Teach Robot Policies Without New Demonstrations. Jiahui Zhang, Yusen Luo, Abrar Anwar, Sumedh Anand Sontakke, Joseph J. Lim, Jesse Thomason, Erdem Biyik, Jesse Zhang.
CoRL 2024. Contrast Sets for Evaluating Language-Guided Robot Policies. Abrar Anwar, Rohan Gupta, Jesse Thomason.
NeurIPSW 2024. Language Models and Symbolic Planners Can Infer Action Semantics Through Environment Feedback. Wang Bill Zhu, Ishika Singh, Robin Jia, Jesse Thomason.
ICLRW 2024. WinoViz: Probing Visual Properties of Objects Under Different States. Woojeong Jin, Tejas Srinivasan, Jesse Thomason, Xiang Ren.
CVPRW 2023. Curriculum Learning for Data-Efficient Vision-Language Alignment. Tejas Srinivasan, Xiang Ren, Jesse Thomason.
CoLLAs 2023. I2I: Initializing Adapters with Improvised Knowledge. Tejas Srinivasan, Furong Jia, Mohammad Rostami, Jesse Thomason.
CVPR 2023. Iterative Vision-and-Language Navigation. Jacob Krantz, Shurjo Banerjee, Wang Zhu, Jason Corso, Peter Anderson, Stefan Lee, Jesse Thomason.
NeurIPS 2022. CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks. Tejas Srinivasan, Ting-Yun Chang, Leticia Pinto Alva, Georgios Chochlakis, Mohammad Rostami, Jesse Thomason.
NeurIPSW 2022. ProgPrompt: Generating Situated Robot Task Plans Using Large Language Models. Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, Animesh Garg.
AAAI 2022. TEACh: Task-Driven Embodied Agents That Chat. Aishwarya Padmakumar, Jesse Thomason, Ayush Shrivastava, Patrick Lange, Anjali Narayan-Chen, Spandana Gella, Robinson Piramuthu, Gökhan Tür, Dilek Hakkani-Tür.
CoRL 2021. Language Grounding with 3D Objects. Jesse Thomason, Mohit Shridhar, Yonatan Bisk, Chris Paxton, Luke Zettlemoyer.
JAIR 2020. Jointly Improving Parsing and Perception for Natural Language Commands Through Human-Robot Dialog. Jesse Thomason, Aishwarya Padmakumar, Jivko Sinapov, Nick Walker, Yuqian Jiang, Harel Yedidsion, Justin W. Hart, Peter Stone, Raymond J. Mooney.
CoRL 2020. The RobotSlang Benchmark: Dialog-Guided Robot Localization and Navigation. Shurjo Banerjee, Jesse Thomason, Jason Corso.
CoRL 2019. Vision-and-Dialog Navigation. Jesse Thomason, Michael Murray, Maya Cakmak, Luke Zettlemoyer.
AAAI 2018. Guiding Exploratory Behaviors for Multi-Modal Grounding of Linguistic Descriptions. Jesse Thomason, Jivko Sinapov, Raymond J. Mooney, Peter Stone.
AAAI 2018. Maximum-Variance Total Variation Denoising for Interpretable Spatial Smoothing. Wesley Tansey, Jesse Thomason, James G. Scott.
IJCAI 2018. Multi-Modal Predicate Identification Using Dynamically Learned Robot Controllers. Saeid Amiri, Suhua Wei, Shiqi Zhang, Jivko Sinapov, Jesse Thomason, Peter Stone.
IJCAI 2017. Multi-Modal Word Synset Induction. Jesse Thomason, Raymond J. Mooney.
CoRL 2017. Opportunistic Active Learning for Grounding Natural Language Descriptions. Jesse Thomason, Aishwarya Padmakumar, Jivko Sinapov, Justin W. Hart, Peter Stone, Raymond J. Mooney.
IJCAI 2016. Learning Multi-Modal Grounded Linguistic Semantics by Playing "I Spy". Jesse Thomason, Jivko Sinapov, Maxwell Svetlik, Peter Stone, Raymond J. Mooney.
IJCAI 2015. Learning to Interpret Natural Language Commands Through Human-Robot Dialog. Jesse Thomason, Shiqi Zhang, Raymond J. Mooney, Peter Stone.