Number Theoretic Accelerated Learning of Physics-Informed Neural Networks

Abstract

Physics-informed neural networks (PINNs) solve partial differential equations (PDEs) by training neural networks. Since this method approximates an infinite-dimensional PDE solution with a finite set of collocation points, minimizing the discretization error by selecting suitable points is essential for accelerating the learning process. Inspired by number theoretic methods for numerical analysis, we introduce good lattice training (GLT) and periodization tricks, which ensure the conditions required by the theory. Our experiments demonstrate that GLT requires 2-7 times fewer collocation points, resulting in lower computational cost, while achieving competitive performance compared to typical sampling methods.
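The two ingredients named in the abstract have standard quasi-Monte Carlo forms. Below is a minimal Python/NumPy sketch of how rank-1 good lattice points and a simple periodizing change of variables might be generated; the Fibonacci generating vector and the Sidi-type cosine map are textbook choices from the number-theoretic integration literature, not necessarily the exact constructions used in the paper.

```python
import numpy as np

def good_lattice_points(n: int, z) -> np.ndarray:
    """Rank-1 lattice: x_i = frac(i * z / n) in [0, 1)^d for i = 0, ..., n-1."""
    i = np.arange(n)[:, None]                  # (n, 1)
    z = np.asarray(z, dtype=np.int64)[None]    # (1, d) generating vector
    return (i * z % n) / n

def periodize(t: np.ndarray) -> np.ndarray:
    """Sidi-type map phi(t) = (1 - cos(pi t)) / 2: it fixes the endpoints
    and its derivative vanishes there, so a smooth integrand becomes
    periodic after the change of variables."""
    return (1.0 - np.cos(np.pi * t)) / 2.0

# 2D example: a Fibonacci lattice (n = F_k, z = (1, F_{k-1})) is a
# classic family of good lattice points.
pts = periodize(good_lattice_points(89, (1, 55)))  # F_11 = 89, F_10 = 55
```

The resulting `pts` array could serve as the collocation set at which a PINN's residual loss is evaluated, in place of uniform random or grid sampling.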

Cite

Text

Matsubara and Yaguchi. "Number Theoretic Accelerated Learning of Physics-Informed Neural Networks." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I1.32040

Markdown

[Matsubara and Yaguchi. "Number Theoretic Accelerated Learning of Physics-Informed Neural Networks." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/matsubara2025aaai-number/) doi:10.1609/AAAI.V39I1.32040

BibTeX

@inproceedings{matsubara2025aaai-number,
  title     = {{Number Theoretic Accelerated Learning of Physics-Informed Neural Networks}},
  author    = {Matsubara, Takashi and Yaguchi, Takaharu},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {595--603},
  doi       = {10.1609/AAAI.V39I1.32040},
  url       = {https://mlanthology.org/aaai/2025/matsubara2025aaai-number/}
}