Learning to Solve Differential Equations Across Initial Conditions

Abstract

Recently, there has been significant interest in using neural networks to solve partial differential equations. A number of neural network-based partial differential equation solvers have been formulated that achieve performance comparable, and in some cases superior, to classical solvers. However, these neural solvers generally need to be retrained whenever the initial conditions or the domain of the partial differential equation changes. In this work, we pose the problem of approximating the solution of a fixed partial differential equation for arbitrary initial conditions as that of learning a conditional probability distribution. We demonstrate the utility of our method on Burgers' equation.
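The abstract frames solving a fixed PDE across initial conditions as learning a conditional distribution, so the solver takes the initial condition as an input rather than being retrained for each one. As a minimal, hypothetical sketch of that conditioning idea (not the authors' architecture; all names and hyperparameters below are illustrative), the PyTorch snippet encodes an initial condition sampled on a fixed grid into a latent vector, feeds it alongside (x, t), and scores the output against the viscous Burgers residual u_t + u u_x = ν u_xx.

```python
# Minimal, hypothetical sketch (not the paper's method): a solver network
# conditioned on the initial condition, scored on the viscous Burgers residual.
import math
import torch
import torch.nn as nn

class ConditionalSolver(nn.Module):
    """Maps (x, t) plus an encoding of the initial condition u0 to u(x, t)."""
    def __init__(self, n_ic_points=64, hidden=128):
        super().__init__()
        # Encode u0, sampled on a fixed spatial grid, into a latent vector.
        self.ic_encoder = nn.Sequential(
            nn.Linear(n_ic_points, hidden), nn.Tanh(), nn.Linear(hidden, hidden)
        )
        # Trunk network consumes (x, t) concatenated with the IC embedding.
        self.trunk = nn.Sequential(
            nn.Linear(2 + hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, t, u0_grid):
        z = self.ic_encoder(u0_grid)  # (batch, hidden)
        return self.trunk(torch.cat([x, t, z], dim=-1))

def burgers_residual(model, x, t, u0_grid, nu=0.01 / math.pi):
    """Residual u_t + u*u_x - nu*u_xx of the viscous Burgers equation."""
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = model(x, t, u0_grid)
    u_x, u_t = torch.autograd.grad(u.sum(), (x, t), create_graph=True)
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_t + u * u_x - nu * u_xx

# Example: collocation points for one sampled initial condition u0(x) = -sin(pi x).
model = ConditionalSolver()
x = torch.rand(32, 1) * 2 - 1                                   # x in [-1, 1]
t = torch.rand(32, 1)                                           # t in [0, 1]
u0 = -torch.sin(math.pi * torch.linspace(-1, 1, 64)).repeat(32, 1)
loss = burgers_residual(model, x, t, u0).pow(2).mean()          # PDE loss term
```

A full treatment along the lines of the abstract would learn a conditional distribution over solutions (e.g., with a conditional generative model) rather than this deterministic map, and would add initial- and boundary-condition loss terms alongside the PDE residual.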

Cite

Text

Malik et al. "Learning to Solve Differential Equations Across Initial Conditions." ICLR 2020 Workshops: DeepDiffEq, 2020.

Markdown

[Malik et al. "Learning to Solve Differential Equations Across Initial Conditions." ICLR 2020 Workshops: DeepDiffEq, 2020.](https://mlanthology.org/iclrw/2020/malik2020iclrw-learning/)

BibTeX

@inproceedings{malik2020iclrw-learning,
  title     = {{Learning to Solve Differential Equations Across Initial Conditions}},
  author    = {Malik, Shehryar and Anwar, Usman and Ahmed, Ali and Aghasi, Alireza},
  booktitle = {ICLR 2020 Workshops: DeepDiffEq},
  year      = {2020},
  url       = {https://mlanthology.org/iclrw/2020/malik2020iclrw-learning/}
}