Give Me a Hint: Can LLMs Take a Hint to Solve Math Problems?

Abstract

While state-of-the-art LLMs have shown poor logical and basic mathematical reasoning, recent works try to improve their problem-solving abilities using prompting techniques. We propose giving "hints" to improve the language model's performance on advanced mathematical problems, taking inspiration from how humans approach math pedagogically. We also test robustness to adversarial hints and demonstrate the models' sensitivity to them. We demonstrate the effectiveness of our approach by evaluating a diverse set of LLMs, presenting them with a broad range of problems of varying difficulties and topics from the MATH dataset and comparing against techniques such as one-shot, few-shot, and chain-of-thought prompting. Our code is available at https://github.com/vlgiitr/LLM-Math

Cite

Text

Agrawal et al. "Give Me a Hint: Can LLMs Take a Hint to Solve Math Problems?" NeurIPS 2024 Workshops: MATH-AI, 2024.

Markdown

[Agrawal et al. "Give Me a Hint: Can LLMs Take a Hint to Solve Math Problems?" NeurIPS 2024 Workshops: MATH-AI, 2024.](https://mlanthology.org/neuripsw/2024/agrawal2024neuripsw-give/)

BibTeX

@inproceedings{agrawal2024neuripsw-give,
  title     = {{Give Me a Hint: Can LLMs Take a Hint to Solve Math Problems?}},
  author    = {Agrawal, Vansh and Singla, Pratham and Miglani, Amitoj Singh and Garg, Shivank and Mangal, Ayush},
  booktitle = {NeurIPS 2024 Workshops: MATH-AI},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/agrawal2024neuripsw-give/}
}