Combining Induction and Transduction for Abstract Reasoning
Abstract
When learning an input-output mapping from very few examples, is it better to first infer a latent function that explains the examples, or is it better to directly predict new test outputs, e.g., using a neural network? We study this question on ARC by training neural models for *induction* (inferring latent functions) and *transduction* (directly predicting the test output for a given test input). We train on synthetically generated variations of Python programs that solve ARC training tasks. We find that inductive and transductive models solve different kinds of test problems, despite having the same training problems and sharing the same neural architecture: inductive program synthesis excels at precise computations and at composing multiple concepts, while transduction succeeds on fuzzier perceptual concepts. Ensembling them approaches human-level performance on ARC.
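The abstract describes two complementary solvers and an ensemble of them. Below is a minimal sketch of one natural way to combine induction and transduction, not the authors' actual pipeline: `sample_programs` and `predict_output` are hypothetical stand-ins for the two fine-tuned neural models, and the fallback order (induction first, since candidate programs can be checked exactly against the training pairs) is an assumption.

```python
# A minimal sketch (not the paper's code) of an induction/transduction ensemble.
# `sample_programs` and `predict_output` are hypothetical stand-ins for the two
# fine-tuned neural models described in the abstract.

from typing import Callable, List, Tuple

Grid = List[List[int]]
Example = Tuple[Grid, Grid]  # (input grid, output grid)

def sample_programs(train: List[Example], k: int) -> List[Callable[[Grid], Grid]]:
    """Induction model: sample k candidate Python programs (latent functions)."""
    raise NotImplementedError  # stand-in for the neural program synthesizer

def predict_output(train: List[Example], test_in: Grid) -> Grid:
    """Transduction model: directly predict the test output grid."""
    raise NotImplementedError  # stand-in for the neural transducer

def solve(train: List[Example], test_in: Grid, k: int = 64) -> Grid:
    # Try induction first: keep only programs that reproduce every training
    # pair, since program execution gives an exact, verifiable signal.
    for prog in sample_programs(train, k):
        try:
            if all(prog(x) == y for x, y in train):
                return prog(test_in)
        except Exception:
            continue  # sampled programs may crash on some grids
    # Fall back to transduction for fuzzier, perceptual tasks where no
    # sampled program fits the training examples.
    return predict_output(train, test_in)
```

The key design point this sketch illustrates is the asymmetry between the two modes: an induced program can be verified against the training examples before being trusted, whereas a transduced output cannot, which is one reason induction and transduction end up covering different kinds of tasks.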
Cite
Text
Li et al. "Combining Induction and Transduction for Abstract Reasoning." International Conference on Learning Representations, 2025.
Markdown
[Li et al. "Combining Induction and Transduction for Abstract Reasoning." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/li2025iclr-combining/)
BibTeX
@inproceedings{li2025iclr-combining,
  title = {{Combining Induction and Transduction for Abstract Reasoning}},
  author = {Li, Wen-Ding and Hu, Keya and Larsen, Carter and Wu, Yuqing and Alford, Simon and Woo, Caleb and Dunn, Spencer M. and Tang, Hao and Zheng, Wei-Long and Pu, Yewen and Ellis, Kevin},
  booktitle = {International Conference on Learning Representations},
  year = {2025},
  url = {https://mlanthology.org/iclr/2025/li2025iclr-combining/}
}