Interpretability in Action: Exploratory Analysis of VPT, a Minecraft Agent

Abstract

Understanding the mechanisms behind decisions taken by large foundation models in sequential decision-making tasks is critical to ensuring that such systems operate transparently and safely. In this work, we perform exploratory analysis on the Video PreTraining (VPT) Minecraft-playing agent, one of the largest open-source vision-based agents. We aim to illuminate its reasoning mechanisms by applying various interpretability techniques. First, we analyze the attention mechanism while the agent solves its training task---crafting a diamond pickaxe. The agent attends to the last four frames and to several key frames further back in its six-second memory. This is a possible mechanism for maintaining coherence in a task that takes 3--10 minutes, despite the short memory span. Second, we perform various interventions, which help us uncover a worrying case of goal misgeneralization: VPT mistakenly identifies a villager wearing brown clothes as a tree trunk when the villager stands stationary under green tree leaves, and punches it to death.
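The attention analysis described above asks which past frames the current timestep attends to. The sketch below is a minimal, illustrative stand-in (not the paper's code and not VPT's actual architecture or API): a toy causal transformer over frame embeddings whose per-frame attention weights are read out for the most recent timestep. All module names, dimensions, and the memory length are assumptions made for the example.

```python
# Minimal sketch: probing which past frames the current step attends to.
# ToyFramePolicy, EMBED_DIM, N_HEADS, and MEMORY_LEN are illustrative
# placeholders, not VPT's real components or sizes.
import torch
import torch.nn as nn

EMBED_DIM, N_HEADS, MEMORY_LEN = 64, 4, 128  # assumed sizes

class ToyFramePolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.attn = nn.MultiheadAttention(EMBED_DIM, N_HEADS, batch_first=True)
        self.head = nn.Linear(EMBED_DIM, 10)  # dummy action logits

    def forward(self, frame_embeddings):
        # Causal mask: each timestep may attend only to itself and the past.
        T = frame_embeddings.size(1)
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        out, weights = self.attn(
            frame_embeddings, frame_embeddings, frame_embeddings,
            attn_mask=mask, need_weights=True, average_attn_weights=True,
        )
        return self.head(out), weights  # weights: (batch, T, T)

policy = ToyFramePolicy()
frames = torch.randn(1, MEMORY_LEN, EMBED_DIM)  # stand-in frame embeddings
_, attn = policy(frames)

# Attention paid by the most recent timestep to every frame in memory,
# averaged over heads; inspecting this profile is the kind of analysis
# the abstract refers to (e.g., recent frames vs. older key frames).
last_step_attention = attn[0, -1]            # shape: (MEMORY_LEN,)
top_frames = last_step_attention.topk(5)
print("Most-attended past frames:", top_frames.indices.tolist())
```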

Cite

Text

Jucys et al. "Interpretability in Action: Exploratory Analysis of VPT, a Minecraft Agent." ICML 2024 Workshops: MI, 2024.

Markdown

[Jucys et al. "Interpretability in Action: Exploratory Analysis of VPT, a Minecraft Agent." ICML 2024 Workshops: MI, 2024.](https://mlanthology.org/icmlw/2024/jucys2024icmlw-interpretability/)

BibTeX

@inproceedings{jucys2024icmlw-interpretability,
  title     = {{Interpretability in Action: Exploratory Analysis of VPT, a Minecraft Agent}},
  author    = {Jucys, Karolis and Adamopoulos, George and Hamidi, Mehrab and Milani, Stephanie and Samsami, Mohammad Reza and Zholus, Artem and Joseph, Sonia and Richards, Blake Aaron and Rish, Irina and Şimşek, Özgür},
  booktitle = {ICML 2024 Workshops: MI},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/jucys2024icmlw-interpretability/}
}