Deep Policies for Online Bipartite Matching: A Reinforcement Learning Approach
Abstract
The challenge in the widely applicable online matching problem lies in making irrevocable assignments while there is uncertainty about future inputs. Most theoretically grounded policies are myopic or greedy in nature. In real-world applications where the matching process is repeated on a regular basis, the underlying data distribution can be leveraged for better decision-making. We present an end-to-end Reinforcement Learning framework for deriving better matching policies based on trial-and-error on historical data. We devise a set of neural network architectures, design feature representations, and empirically evaluate them across two online matching problems: Edge-Weighted Online Bipartite Matching and Online Submodular Bipartite Matching. Most of the learning approaches perform consistently better than classical baseline algorithms on four synthetic and real-world datasets; on average, our proposed models improve the matching quality by 3-10%.
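To make the myopic baseline concrete, below is a minimal sketch (not the paper's method) of the classical greedy policy for edge-weighted online bipartite matching: each arriving online vertex is irrevocably matched to the free offline neighbor with the largest edge weight. The function name and input encoding are illustrative assumptions.

```python
def greedy_online_matching(arrivals):
    """Greedy baseline sketch (hypothetical encoding, not from the paper).

    arrivals: list of dicts, one per arriving online vertex,
              mapping offline-vertex id -> edge weight.
    Returns (total matched weight, list of (arrival index, offline vertex)).
    """
    matched = set()              # offline vertices already used up
    total, assignment = 0.0, []
    for t, weights in enumerate(arrivals):
        # Only offline neighbors that are still free may be chosen.
        free = {u: w for u, w in weights.items() if u not in matched}
        if not free:
            assignment.append((t, None))   # vertex goes unmatched
            continue
        best = max(free, key=free.get)     # irrevocable greedy choice
        matched.add(best)
        total += free[best]
        assignment.append((t, best))
    return total, assignment
```

On the instance `[{'a': 3, 'b': 1}, {'a': 5, 'b': 2}]` the greedy policy takes `'a'` (weight 3) for the first arrival and is left with `'b'` (weight 2) for the second, for a total of 5, whereas the offline optimum pairs the arrivals the other way for a total of 6. This myopia under input uncertainty is exactly what the learned policies aim to reduce.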
Cite
Text
Alomrani et al. "Deep Policies for Online Bipartite Matching: A Reinforcement Learning Approach." Transactions on Machine Learning Research, 2022.
Markdown
[Alomrani et al. "Deep Policies for Online Bipartite Matching: A Reinforcement Learning Approach." Transactions on Machine Learning Research, 2022.](https://mlanthology.org/tmlr/2022/alomrani2022tmlr-deep/)
BibTeX
@article{alomrani2022tmlr-deep,
title = {{Deep Policies for Online Bipartite Matching: A Reinforcement Learning Approach}},
author = {Alomrani, Mohammad Ali and Moravej, Reza and Khalil, Elias Boutros},
journal = {Transactions on Machine Learning Research},
year = {2022},
url = {https://mlanthology.org/tmlr/2022/alomrani2022tmlr-deep/}
}