AnyPlace: Learning Generalizable Object Placement for Robot Manipulation

Abstract

Object placement in robotic tasks is inherently challenging due to the diversity of object geometries and placement configurations. We address this with AnyPlace, a two-stage method trained entirely on synthetic data, capable of predicting a wide range of feasible placement poses for real-world tasks. Our key insight is that by leveraging a Vision-Language Model (VLM) to identify approximate placement locations, we can focus only on the relevant regions for precise local placement, which enables us to train the low-level placement-pose-prediction model to capture multimodal placements efficiently. For training, we generate a fully synthetic dataset comprising 13 categories of randomly generated objects in 5370 different placement poses across three configurations (insertion, stacking, hanging) and train local placement-prediction models. We extensively evaluate our method in high-fidelity simulation and show that it consistently outperforms baseline approaches across all three tasks in terms of success rate, coverage of placement modes, and precision. In real-world experiments, our method achieves an average success and coverage rate of 76% across three tasks, where most baseline methods fail completely. We further validate the generalization of our approach on 16 real-world placement tasks, demonstrating that models trained purely on synthetic data can be directly transferred to the real world in a zero-shot setting. More at: https://anyplace-pnp.github.io.

Cite

Text

Zhao et al. "AnyPlace: Learning Generalizable Object Placement for Robot Manipulation." Proceedings of The 9th Conference on Robot Learning, 2025.

Markdown

[Zhao et al. "AnyPlace: Learning Generalizable Object Placement for Robot Manipulation." Proceedings of The 9th Conference on Robot Learning, 2025.](https://mlanthology.org/corl/2025/zhao2025corl-anyplace/)

BibTeX

@inproceedings{zhao2025corl-anyplace,
  title     = {{AnyPlace: Learning Generalizable Object Placement for Robot Manipulation}},
  author    = {Zhao, Yuchi and Bogdanovic, Miroslav and Luo, Chengyuan and Tohme, Steven and Darvish, Kourosh and Aspuru-Guzik, Alan and Shkurti, Florian and Garg, Animesh},
  booktitle = {Proceedings of The 9th Conference on Robot Learning},
  year      = {2025},
  pages     = {4038--4057},
  volume    = {305},
  url       = {https://mlanthology.org/corl/2025/zhao2025corl-anyplace/}
}