Tailoring Self-Rationalizers with Multi-Reward Distillation
Abstract
Large language models (LMs) are capable of generating free-text rationales to aid question answering. However, prior work 1) suggests that useful self-rationalization is emergent only at significant scales (e.g., 175B-parameter GPT-3); and 2) focuses largely on downstream performance, ignoring the semantics of the rationales themselves, e.g., are they faithful, true, and helpful for humans? In this work, we enable small-scale LMs (∼200x smaller than GPT-3) to generate rationales that not only improve downstream task performance, but are also more plausible, consistent, and diverse, as assessed by both automatic and human evaluation. Our method, MaRio (Multi-rewArd RatIOnalization), is a multi-reward conditioned self-rationalization algorithm that optimizes multiple distinct properties such as plausibility, diversity, and consistency. Results on five difficult question-answering datasets show that MaRio not only improves task accuracy, but also improves the self-rationalization quality of small LMs across the aforementioned axes beyond a supervised fine-tuning (SFT) baseline. Extensive human evaluations confirm that MaRio rationales are preferred over SFT rationales and show qualitative improvements in plausibility and consistency.
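To illustrate the multi-reward idea in the abstract, here is a toy sketch of how several per-property reward signals (plausibility, diversity, consistency) might be combined into one score for ranking candidate rationales. This is an illustrative assumption, not the paper's actual MaRio algorithm; the reward values, weights, and function names are hypothetical placeholders.

```python
# Toy sketch (NOT the paper's MaRio implementation): rank candidate
# rationales by a weighted combination of per-property reward scores.
# All scores and weights below are illustrative placeholders.

def combined_reward(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of per-property reward scores, each assumed in [0, 1]."""
    return sum(weights[k] * scores[k] for k in weights)

# Hypothetical reward scores for two candidate rationales.
candidates = {
    "rationale_a": {"plausibility": 0.9, "diversity": 0.4, "consistency": 0.8},
    "rationale_b": {"plausibility": 0.6, "diversity": 0.9, "consistency": 0.5},
}
weights = {"plausibility": 0.5, "diversity": 0.2, "consistency": 0.3}

# Pick the candidate with the highest combined reward.
best = max(candidates, key=lambda c: combined_reward(candidates[c], weights))
print(best)
```

In a training loop, such a combined score could serve as a selection or weighting signal over sampled rationales; the paper's actual reward models and optimization procedure are described in the full text.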
Cite
Text
Ramnath et al. "Tailoring Self-Rationalizers with Multi-Reward Distillation." ICLR 2024 Workshops: SeT_LLM, 2024.
Markdown
[Ramnath et al. "Tailoring Self-Rationalizers with Multi-Reward Distillation." ICLR 2024 Workshops: SeT_LLM, 2024.](https://mlanthology.org/iclrw/2024/ramnath2024iclrw-tailoring/)
BibTeX
@inproceedings{ramnath2024iclrw-tailoring,
title = {{Tailoring Self-Rationalizers with Multi-Reward Distillation}},
author = {Ramnath, Sahana and Joshi, Brihi and Hallinan, Skyler and Lu, Ximing and Li, Liunian Harold and Chan, Aaron and Hessel, Jack and Choi, Yejin and Ren, Xiang},
booktitle = {ICLR 2024 Workshops: SeT_LLM},
year = {2024},
url = {https://mlanthology.org/iclrw/2024/ramnath2024iclrw-tailoring/}
}