Reinforcement Learning for Optimized Trade Execution
Abstract
We present the first large-scale empirical application of reinforcement learning to the important problem of optimized trade execution in modern financial markets. Our experiments are based on 1.5 years of millisecond time-scale limit order data from NASDAQ, and demonstrate the promise of reinforcement learning methods for market microstructure problems. Our learning algorithm introduces and exploits a natural low-impact factorization of the state space.
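The abstract's "low-impact factorization" refers to treating the agent's private variables (time remaining, inventory remaining) as the core state, on the assumption that the agent's own orders have little lasting market impact. The toy sketch below illustrates that idea with tabular Q-learning over a private state of (time remaining, inventory remaining); the fill simulator, horizon, action set, and terminal penalty are illustrative assumptions, not the paper's actual setup or data.

```python
import random

# Minimal tabular Q-learning sketch for trade execution, assuming the
# private-state factorization: state = (time remaining t, inventory
# remaining i); action = a limit-order price offset (higher = more
# aggressive). All constants here are illustrative, not from the paper.

T, I, ACTIONS = 4, 4, 3          # decision points, inventory units, price offsets
ALPHA, EPS = 0.1, 0.1            # learning rate, exploration rate

# Q[t][i][a]: estimated remaining execution cost (to be minimized)
Q = [[[0.0] * ACTIONS for _ in range(I + 1)] for _ in range(T + 1)]

def simulate_fill(action, rng):
    """Toy market: more aggressive offsets fill more often but cost more."""
    fill_prob = 0.3 + 0.2 * action        # actions 0..2
    cost = float(action)                  # per-unit cost of aggressiveness
    return (1 if rng.random() < fill_prob else 0), cost

def episode(rng):
    i = I
    for t in range(T, 0, -1):
        if i == 0:
            return
        # epsilon-greedy over price offsets (minimizing estimated cost)
        if rng.random() < EPS:
            a = rng.randrange(ACTIONS)
        else:
            a = min(range(ACTIONS), key=lambda x: Q[t][i][x])
        filled, cost = simulate_fill(a, rng)
        i2 = i - filled
        if t == 1:
            cost += 10.0 * i2             # penalty for unexecuted inventory
        future = 0.0 if t == 1 else min(Q[t - 1][i2])
        Q[t][i][a] += ALPHA * (cost + future - Q[t][i][a])
        i = i2

rng = random.Random(0)
for _ in range(20000):
    episode(rng)
```

Because the private state has only (T+1) x (I+1) entries, the table stays tiny no matter how rich the market data is; in the paper's full algorithm, market variables are layered on top of this private backbone rather than multiplied into it.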
Cite
Text
Nevmyvaka et al. "Reinforcement Learning for Optimized Trade Execution." International Conference on Machine Learning, 2006. doi:10.1145/1143844.1143929
Markdown
[Nevmyvaka et al. "Reinforcement Learning for Optimized Trade Execution." International Conference on Machine Learning, 2006.](https://mlanthology.org/icml/2006/nevmyvaka2006icml-reinforcement/) doi:10.1145/1143844.1143929
BibTeX
@inproceedings{nevmyvaka2006icml-reinforcement,
title = {{Reinforcement Learning for Optimized Trade Execution}},
author = {Nevmyvaka, Yuriy and Feng, Yi and Kearns, Michael J.},
booktitle = {International Conference on Machine Learning},
year = {2006},
pages = {673--680},
doi = {10.1145/1143844.1143929},
url = {https://mlanthology.org/icml/2006/nevmyvaka2006icml-reinforcement/}
}