Learning to Coordinate Without Sharing Information

Abstract

Researchers in the field of Distributed Artificial Intelligence (DAI) have been developing efficient mechanisms to coordinate the activities of multiple autonomous agents. The need for coordination arises because agents have to share resources and expertise required to achieve their goals. Previous work in the area includes using sophisticated information exchange protocols, investigating heuristics for negotiation, and developing formal models of possibilities of conflict and cooperation among agent interests. In order to handle the changing requirements of continuous and dynamic environments, we propose learning as a means to provide additional possibilities for effective coordination. We use reinforcement learning techniques on a block pushing problem to show that agents can learn complementary policies to follow a desired path without any knowledge about each other. We theoretically analyze and experimentally verify the effects of learning rate on system convergence, and demonstrat...
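The abstract's core idea can be illustrated with a minimal sketch: two independent tabular Q-learners jointly push a block along a one-dimensional track toward a goal. This is our own toy reconstruction, not the paper's exact block pushing setup; the track length, reward values, and learning parameters below are illustrative assumptions. Each agent observes only the block's position and its own reward, and never sees the other agent's actions, mirroring the "without sharing information" setting.

```python
import random

# Toy illustration (not the paper's exact experiment): two agents
# independently learn to push a block along a 1-D track to a goal cell.
ACTIONS = [-1, 0, 1]      # push left, no push, push right
TRACK_LEN = 10
GOAL = TRACK_LEN - 1      # desired path: move the block to the right end

def train(episodes=2000, alpha=0.2, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # One Q-table per agent, indexed only by (block position, own action):
    # neither agent models the other, so the policies must become
    # complementary purely through the shared environment dynamics.
    q = [[[0.0] * len(ACTIONS) for _ in range(TRACK_LEN)] for _ in range(2)]

    def choose(agent, pos):
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            return rng.randrange(len(ACTIONS))
        row = q[agent][pos]
        return row.index(max(row))

    for _ in range(episodes):
        pos = 0
        for _ in range(50):
            a = [choose(0, pos), choose(1, pos)]
            # The block moves by the agents' combined push, clipped to the track.
            nxt = min(max(pos + ACTIONS[a[0]] + ACTIONS[a[1]], 0), TRACK_LEN - 1)
            reward = 1.0 if nxt == GOAL else -0.01
            for i in (0, 1):  # each agent updates from its own view only
                best_next = max(q[i][nxt])
                q[i][pos][a[i]] += alpha * (reward + gamma * best_next
                                            - q[i][pos][a[i]])
            pos = nxt
            if pos == GOAL:
                break
    return q

def greedy_rollout(q, steps=20):
    # Run both learned greedy policies together from the start cell.
    pos = 0
    for _ in range(steps):
        a = [row.index(max(row)) for row in (q[0][pos], q[1][pos])]
        pos = min(max(pos + ACTIONS[a[0]] + ACTIONS[a[1]], 0), TRACK_LEN - 1)
        if pos == GOAL:
            break
    return pos

q = train()
final_pos = greedy_rollout(q)
```

The point of the sketch is that each Q-update treats the other agent as part of the environment, yet the two greedy policies that emerge still move the block along the desired path together.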

Cite

Text

Sen et al. "Learning to Coordinate Without Sharing Information." AAAI Conference on Artificial Intelligence, 1994.

Markdown

[Sen et al. "Learning to Coordinate Without Sharing Information." AAAI Conference on Artificial Intelligence, 1994.](https://mlanthology.org/aaai/1994/sen1994aaai-learning/)

BibTeX

@inproceedings{sen1994aaai-learning,
  title     = {{Learning to Coordinate Without Sharing Information}},
  author    = {Sen, Sandip and Sekaran, Mahendra and Hale, John},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {1994},
  pages     = {426--431},
  url       = {https://mlanthology.org/aaai/1994/sen1994aaai-learning/}
}