OpenAD: Open-World Autonomous Driving Benchmark for 3D Object Detection
Abstract
Open-world perception aims to develop models that adapt to novel domains and diverse sensor configurations and that can understand uncommon objects and corner cases. However, current research lacks sufficiently comprehensive open-world 3D perception benchmarks and robust, generalizable methodologies. This paper introduces OpenAD, the first real open-world autonomous driving benchmark for 3D object detection. OpenAD is built upon a corner-case discovery and annotation pipeline that integrates a multimodal large language model (MLLM). The proposed pipeline annotates corner-case objects in a unified format across five autonomous driving perception datasets comprising 2000 scenarios. In addition, we devise evaluation methodologies and evaluate various open-world and specialized 2D and 3D models. Moreover, we propose a vision-centric 3D open-world object detection baseline and further introduce an ensemble method that fuses general and specialized models to address the lower precision of existing open-world methods on the OpenAD benchmark. We host an online challenge on EvalAI. Data, toolkit code, and evaluation code are available at https://github.com/VDIGPKU/OpenAD.
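The abstract leaves the ensemble details to the paper and the released code, but the core idea (combining a high-precision specialized detector with a high-recall open-world detector) can be sketched in a few lines. The following is a minimal illustration under assumed interfaces; the function name, detection fields, and fusion rule are hypothetical and are not the OpenAD toolkit API.

```python
"""Minimal sketch of late fusion between a specialized (closed-set) 3D
detector and an open-world detector. All names here are hypothetical
illustrations, not the OpenAD toolkit interface."""
import numpy as np

def fuse_detections(specialized, open_world, dist_thresh=2.0):
    """Greedy fusion: trust the high-precision specialized detector first,
    then add open-world boxes that do not duplicate an already-kept box.

    Each detection is a dict with:
      center: np.ndarray of shape (3,), box center in meters
      score:  float confidence
      label:  str category name (open vocabulary for the open-world model)
    """
    # Keep all specialized detections (higher precision on known classes).
    fused = sorted(specialized, key=lambda d: d["score"], reverse=True)
    for det in sorted(open_world, key=lambda d: d["score"], reverse=True):
        if fused:
            centers = np.stack([f["center"] for f in fused])
            # Suppress the open-world box if its center is near a kept box.
            if np.linalg.norm(centers - det["center"], axis=1).min() <= dist_thresh:
                continue
        # Otherwise it is likely a corner-case object the closed-set model missed.
        fused.append(det)
    return fused

# Toy usage with hypothetical outputs from both models.
specialized = [{"center": np.array([10.0, 2.0, 0.5]), "score": 0.9, "label": "car"}]
open_world = [
    {"center": np.array([10.3, 2.1, 0.5]), "score": 0.6, "label": "car"},        # duplicate, dropped
    {"center": np.array([25.0, -4.0, 0.4]), "score": 0.5, "label": "stroller"},  # novel object, kept
]
print([d["label"] for d in fuse_detections(specialized, open_world)])  # ['car', 'stroller']
```

A center-distance test stands in here for the 3D IoU matching a real system would use; the design point is only that the closed-set detector resolves duplicates while the open-world detector contributes the long-tail categories.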
Cite
Text
Xia et al. "OpenAD: Open-World Autonomous Driving Benchmark for 3D Object Detection." Advances in Neural Information Processing Systems, 2025.
Markdown
[Xia et al. "OpenAD: Open-World Autonomous Driving Benchmark for 3D Object Detection." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/xia2025neurips-openad/)
BibTeX
@inproceedings{xia2025neurips-openad,
  title = {{OpenAD: Open-World Autonomous Driving Benchmark for 3D Object Detection}},
  author = {Xia, Zhongyu and Li, Jishuo and Lin, Zhiwei and Wang, Xinhao and Wang, Yongtao and Yang, Ming-Hsuan},
  booktitle = {Advances in Neural Information Processing Systems},
  year = {2025},
  url = {https://mlanthology.org/neurips/2025/xia2025neurips-openad/}
}