Scale-Consistent Learning with Neural Operators

Abstract

Data-driven models have emerged as a promising approach for solving partial differential equations (PDEs) in science and engineering. Previous machine learning (ML) models typically cover only a narrow distribution of PDE problems; for example, a trained ML model for the Navier-Stokes equations usually works only for a fixed Reynolds number and domain size. To overcome these limitations, we propose a data augmentation scheme based on scale-consistency properties of PDEs and design a scale-informed neural operator that can model a wide range of scales. Our formulation (i) leverages the fact that many PDEs possess a scale consistency under rescaling of the spatial domain, and (ii) is based on the discretization-convergent property of neural operators, which allows them to be applied across arbitrary resolutions. Our experiments on the 2D Darcy flow, Helmholtz equation, and Navier-Stokes equations show that the proposed scale-consistency loss helps the scale-informed neural operator generalize to Reynolds numbers ranging from 250 to 10,000. This approach has the potential to significantly improve the efficiency and generalizability of data-driven PDE solvers in various scientific and engineering applications.
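The scale-consistency idea in the abstract can be illustrated with a minimal sketch. The snippet below is not the authors' released code; it assumes a hypothetical `model(a, scale)` interface for a discretization-convergent neural operator that maps an input field on the unit square (plus a domain-scale parameter) to a solution field of the same shape. It crops a random subdomain, rescales it back to the unit domain, and penalizes disagreement between the model's prediction on the rescaled crop and the correspondingly cropped-and-rescaled full-domain prediction. How the PDE coefficients and parameters (e.g., the Reynolds number) transform under rescaling is equation-specific and only indicated by the `scale` argument here.

```python
# Hypothetical scale-consistency loss sketch (illustrative only, not the paper's code).
import torch
import torch.nn.functional as F


def scale_consistency_loss(model, a, scale=2, base_scale=1.0):
    """Penalize disagreement between (i) the model's full-domain prediction restricted
    to a random subdomain and (ii) its prediction made directly on that subdomain
    after rescaling it to the unit square.

    a          : input field, shape (B, C, H, W), assumed to live on the unit square
    scale      : integer zoom factor; a (1/scale x 1/scale) subdomain is rescaled up
    base_scale : domain-size parameter passed to the (assumed) scale-informed model
    """
    B, C, H, W = a.shape
    h, w = H // scale, W // scale

    # Pick a random subdomain of size (h, w).
    top = torch.randint(0, H - h + 1, (1,)).item()
    left = torch.randint(0, W - w + 1, (1,)).item()

    # Full-domain prediction, restricted to the subdomain and upsampled back to the
    # original resolution (meaningful because the operator is discretization-convergent).
    u_full = model(a, base_scale)
    u_restricted = F.interpolate(
        u_full[:, :, top:top + h, left:left + w], size=(H, W),
        mode="bilinear", align_corners=False)

    # Prediction made directly on the rescaled subdomain, with the adjusted scale.
    a_sub = F.interpolate(
        a[:, :, top:top + h, left:left + w], size=(H, W),
        mode="bilinear", align_corners=False)
    u_sub = model(a_sub, base_scale / scale)

    # Scale-consistency term: the two predictions should agree on the shared subdomain.
    return F.mse_loss(u_sub, u_restricted)
```

In training, a term like this would be added to the usual supervised data loss, acting as an augmentation that exposes the operator to multiple effective domain sizes from a single training sample.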

Cite

Text

Li et al. "Scale-Consistent Learning with Neural Operators." NeurIPS 2024 Workshops: FM4Science, 2024.

Markdown

[Li et al. "Scale-Consistent Learning with Neural Operators." NeurIPS 2024 Workshops: FM4Science, 2024.](https://mlanthology.org/neuripsw/2024/li2024neuripsw-scaleconsistent/)

BibTeX

@inproceedings{li2024neuripsw-scaleconsistent,
  title     = {{Scale-Consistent Learning with Neural Operators}},
  author    = {Li, Zongyi and Lanthaler, Samuel and Deng, Catherine and Wang, Yixuan and Azizzadenesheli, Kamyar and Anandkumar, Anima},
  booktitle = {NeurIPS 2024 Workshops: FM4Science},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/li2024neuripsw-scaleconsistent/}
}