Transformers to Predict the Applicability of Symbolic Integration Routines

Abstract

Symbolic integration is a fundamental problem in mathematics: we consider how machine learning may be used to optimise this task in a Computer Algebra System (CAS). We train transformers that predict whether a particular integration method will be successful, and compare against the existing human-made heuristics (called guards) that perform this task in a leading CAS. We find that the transformer can outperform these guards, gaining up to 30% in accuracy and 70% in precision. We further show that the inference time of the transformer is inconsequential, meaning it is well-suited for inclusion as a guard in a CAS. Furthermore, we use Layer Integrated Gradients to interpret the decisions that the transformer is making. When guided by a subject-matter expert, this technique can explain some of the predictions in terms of the input tokens, which can lead to further optimisations.

Cite

Text

Barket et al. "Transformers to Predict the Applicability of Symbolic Integration Routines." NeurIPS 2024 Workshops: MATH-AI, 2024.

Markdown

[Barket et al. "Transformers to Predict the Applicability of Symbolic Integration Routines." NeurIPS 2024 Workshops: MATH-AI, 2024.](https://mlanthology.org/neuripsw/2024/barket2024neuripsw-transformers/)

BibTeX

@inproceedings{barket2024neuripsw-transformers,
  title     = {{Transformers to Predict the Applicability of Symbolic Integration Routines}},
  author    = {Barket, Rashid and Shafiq, Uzma and England, Matthew and Gerhard, Juergen},
  booktitle = {NeurIPS 2024 Workshops: MATH-AI},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/barket2024neuripsw-transformers/}
}