Does Deep Learning Learn to Abstract? A Systematic Probing Framework

Abstract

Abstraction is a desirable capability for deep learning models: the ability to induce abstract concepts from concrete instances and to flexibly apply them beyond the learning context. However, there is no clear understanding of whether deep learning models possess this capability, nor of its further characteristics. In this paper, we introduce a systematic probing framework to explore the abstraction capability of deep learning models from a transferability perspective. A set of controlled experiments conducted within this framework provides strong evidence that two probed pre-trained language models (PLMs), T5 and GPT2, have the abstraction capability. We also conduct an in-depth analysis that sheds further light on this capability: (1) the training phase exhibits a "memorize-then-abstract" two-stage process; (2) the learned abstract concepts are concentrated in a few middle-layer attention heads rather than being evenly distributed throughout the model; (3) the probed abstraction capability is robust against concept mutations, and is more robust to low-level/source-side mutations than to high-level/target-side ones; (4) generic pre-training is critical to the emergence of the abstraction capability, and PLMs exhibit better abstraction with larger model sizes and data scales.
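Finding (2) above concerns head-level localization, which requires inspecting attention maps per layer and per head. The sketch below is not the paper's probing framework; it only illustrates, using the HuggingFace transformers library, how per-layer, per-head attention maps can be extracted from GPT2 for such an inspection. The entropy statistic is an illustrative stand-in for a head-level probe, not the metric used in the paper.

```python
# Minimal sketch: extracting per-layer, per-head attention maps from GPT2.
# NOTE: illustrative only; the paper's actual probing framework and metrics
# are described in the full text and are not reproduced here.
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_attentions=True)
model.eval()

inputs = tokenizer("induce abstract concepts from concrete instances",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple of length num_layers; each entry has shape
# (batch, num_heads, seq_len, seq_len), with rows summing to 1.
for layer_idx, attn in enumerate(outputs.attentions):
    probs = attn[0].clamp_min(1e-12)          # (heads, seq, seq); avoid log(0)
    entropy = -(probs * probs.log()).sum(-1)  # (heads, seq): per-row entropy
    per_head = entropy.mean(-1)               # (heads,): one score per head
    scores = [round(h.item(), 3) for h in per_head]
    print(f"layer {layer_idx}: {scores}")
```

Any head-level statistic could be substituted for the entropy above; the point is only that attention behavior is addressable at the (layer, head) granularity at which the paper reports its localization finding.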

Cite

Text

An et al. "Does Deep Learning Learn to Abstract? A Systematic Probing Framework." International Conference on Learning Representations, 2023.

Markdown

[An et al. "Does Deep Learning Learn to Abstract? A Systematic Probing Framework." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/an2023iclr-deep/)

BibTeX

@inproceedings{an2023iclr-deep,
  title     = {{Does Deep Learning Learn to Abstract? A Systematic Probing Framework}},
  author    = {An, Shengnan and Lin, Zeqi and Chen, Bei and Fu, Qiang and Zheng, Nanning and Lou, Jian-Guang},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/an2023iclr-deep/}
}