Deep Ritz Revisited

Abstract

Recently, progress has been made in applying neural networks to the numerical analysis of stationary and instationary partial differential equations. For example, one can use the variational formulation of the Dirichlet problem to obtain an objective function – a penalised Dirichlet energy – for optimising the parameters of neural networks with a fixed architecture. Although this approach yields promising empirical results, especially in high dimensions, it lacks any convergence guarantees. We use the notion of $\Gamma$-convergence to show that ReLU networks of growing architecture, trained with respect to suitably penalised Dirichlet energies, converge to the solution of the Dirichlet problem. We discuss how our findings generalise to arbitrary variational problems under certain universality assumptions on the neural networks that are used. In particular, this covers nonlinear stationary PDEs such as the $p$-Laplace equation.
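For concreteness, here is a minimal sketch of the penalised Dirichlet-energy training the abstract describes, under the assumed model problem $-\Delta u = f$ on $\Omega = (0,1)^d$ with $u = 0$ on $\partial\Omega$, using the objective $E(u) = \int_\Omega \big(\tfrac12 |\nabla u|^2 - f u\big)\,dx + \lambda \int_{\partial\Omega} u^2 \,ds$ estimated by Monte Carlo sampling. All names, architecture sizes, and hyperparameters below are illustrative choices, not taken from the paper:

```python
import torch

torch.manual_seed(0)

d, lam = 2, 100.0                                  # dimension, penalty weight (illustrative)
f = lambda x: torch.ones(x.shape[0], 1)            # right-hand side f == 1 (illustrative)

model = torch.nn.Sequential(                       # small ReLU network u_theta: R^d -> R
    torch.nn.Linear(d, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def boundary_sample(n):
    """Uniform samples on the boundary of (0,1)^d: pick a random face, clamp that coordinate."""
    x = torch.rand(n, d)
    face = torch.randint(0, d, (n,))
    x[torch.arange(n), face] = torch.randint(0, 2, (n,)).float()
    return x

for step in range(5000):
    x_in = torch.rand(256, d, requires_grad=True)  # interior Monte Carlo points
    u = model(x_in)
    # grad of u w.r.t. inputs via autograd; only first derivatives are needed,
    # so a ReLU network (differentiable a.e.) suffices for the Dirichlet energy
    (grad_u,) = torch.autograd.grad(u.sum(), x_in, create_graph=True)
    energy = (0.5 * (grad_u ** 2).sum(dim=1, keepdim=True) - f(x_in) * u).mean()
    penalty = (model(boundary_sample(256)) ** 2).mean()  # (u - g)^2 with g == 0
    loss = energy + lam * penalty
    opt.zero_grad()
    loss.backward()
    opt.step()
```

A fixed network like this corresponds to a single element of the sequence the paper studies; the $\Gamma$-convergence result concerns what happens as the architecture grows and the penalisation is tightened along such a sequence.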

Cite

Text

Müller and Zeinhofer. "Deep Ritz Revisited." ICLR 2020 Workshops: DeepDiffEq, 2020.

Markdown

[Müller and Zeinhofer. "Deep Ritz Revisited." ICLR 2020 Workshops: DeepDiffEq, 2020.](https://mlanthology.org/iclrw/2020/muller2020iclrw-deep/)

BibTeX

@inproceedings{muller2020iclrw-deep,
  title     = {{Deep Ritz Revisited}},
  author    = {Müller, Johannes and Zeinhofer, Marius},
  booktitle = {ICLR 2020 Workshops: DeepDiffEq},
  year      = {2020},
  url       = {https://mlanthology.org/iclrw/2020/muller2020iclrw-deep/}
}