Deep Learning for Continuous-Time Stochastic Control with Jumps
Abstract
In this paper, we introduce a model-based deep-learning approach to solve finite-horizon continuous-time stochastic control problems with jumps. We iteratively train two neural networks: one to represent the optimal policy and the other to approximate the value function. Leveraging a continuous-time version of the dynamic programming principle, we derive two different training objectives based on the Hamilton--Jacobi--Bellman equation, ensuring that the networks capture the underlying stochastic dynamics. Empirical evaluations on different problems illustrate the accuracy and scalability of our approach, demonstrating its effectiveness in solving complex high-dimensional stochastic control tasks.
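The abstract's iterative policy/value scheme can be illustrated with a minimal NumPy sketch on a toy one-dimensional jump-diffusion with quadratic costs. To keep it self-contained, the two neural networks are replaced by simple parametric stand-ins: a linear feedback policy `a = -k x` tuned over a grid (the "policy step"), and a quadratic value approximation fitted by least squares on simulated costs (the "value step"). The dynamics, cost functions, and parameter choices here are illustrative assumptions, not the paper's actual benchmark problems:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy controlled jump-diffusion on [0, T]:
#   dX_t = a_t dt + sigma dW_t + dJ_t,
# with J a compound Poisson process (rate lam, N(0, jump_std^2) marks).
# Running cost a^2 + x^2, terminal cost x^2 (hypothetical LQ-type example).
T, n_steps, sigma, lam, jump_std = 1.0, 50, 0.3, 1.0, 0.2
dt = T / n_steps

def simulate_cost(k, x0, n_paths=4000, rng=rng):
    """Monte Carlo estimate of the pathwise cost under the feedback policy a = -k x."""
    x = np.full(n_paths, x0, dtype=float)
    cost = np.zeros(n_paths)
    for _ in range(n_steps):
        a = -k * x
        cost += (a**2 + x**2) * dt          # accumulate running cost
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        n_jumps = rng.poisson(lam * dt, n_paths)
        dJ = rng.normal(0.0, jump_std, n_paths) * n_jumps
        x = x + a * dt + sigma * dW + dJ    # Euler step with jumps
    cost += x**2                             # terminal cost
    return cost

# "Policy step": pick the best feedback gain on a grid
# (a crude stand-in for the policy-network update).
gains = np.linspace(0.0, 2.0, 11)
costs = [simulate_cost(k, x0=1.0).mean() for k in gains]
k_star = gains[int(np.argmin(costs))]

# "Value step": fit V(0, x) ~ c0 + c1 * x^2 by least squares on simulated costs
# (a stand-in for the value-network regression).
x0s = rng.uniform(-2.0, 2.0, 200)
y = np.array([simulate_cost(k_star, x0, n_paths=200).mean() for x0 in x0s])
A = np.column_stack([np.ones_like(x0s), x0s**2])
c0, c1 = np.linalg.lstsq(A, y, rcond=None)[0]
```

In the paper's actual method, both steps would instead be gradient updates of neural networks against Hamilton--Jacobi--Bellman-based training objectives; the sketch only mirrors the alternation between improving the policy and re-estimating the value function.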
Cite
Text
Cheridito et al. "Deep Learning for Continuous-Time Stochastic Control with Jumps." Advances in Neural Information Processing Systems, 2025.
Markdown
[Cheridito et al. "Deep Learning for Continuous-Time Stochastic Control with Jumps." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/cheridito2025neurips-deep/)
BibTeX
@inproceedings{cheridito2025neurips-deep,
title = {{Deep Learning for Continuous-Time Stochastic Control with Jumps}},
author = {Cheridito, Patrick and Dupret, Jean-Loup and Hainaut, Donatien},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/cheridito2025neurips-deep/}
}