Finding Visual Task Vectors
Abstract
Visual Prompting is a technique for teaching models to perform a visual task via in-context examples, without any additional training. In this work, we analyze the activations of MAE-VQGAN, a recent Visual Prompting model (Bar et al., 2022), and find Task Vectors, activations that encode task-specific information. We then demonstrate that it is possible to identify the Task Vectors and use them to guide the network towards performing different tasks without having to provide any in-context input-output examples. To find Task Vectors, we compute the mean activations of the model's attention heads per task and use the REINFORCE (Williams, 1992) algorithm to select a subset of heads into which these means are patched when processing a new query image. The resulting Task Vectors guide the model to better performance than the original model.
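The head-selection step described above can be illustrated with a minimal sketch. This is not the paper's implementation: the reward function, head count, and hyperparameters below are all hypothetical placeholders. It only shows the core REINFORCE mechanic, in which a per-head Bernoulli policy samples a binary patching mask, a scalar reward scores the masked configuration (in the paper, task performance after patching mean activations), and the policy logits are updated with the score-function gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_HEADS = 64                  # hypothetical: total attention heads in the model
theta = np.zeros(NUM_HEADS)     # logits of the per-head Bernoulli patching policy

def evaluate(mask):
    """Placeholder reward. In the paper this would be task performance
    after patching per-task mean activations into the masked heads.
    Here, a synthetic reward that favors a fictitious useful subset
    (the first 8 heads) and mildly penalizes patching extra heads."""
    useful = np.zeros(NUM_HEADS)
    useful[:8] = 1.0
    return float(mask @ useful - 0.05 * mask.sum())

lr, baseline = 0.5, 0.0
for step in range(400):
    p = 1.0 / (1.0 + np.exp(-theta))            # current patch probabilities
    mask = (rng.random(NUM_HEADS) < p).astype(float)
    r = evaluate(mask)
    baseline = 0.9 * baseline + 0.1 * r         # running-mean variance reduction
    # REINFORCE: grad of log Bernoulli(mask | p) w.r.t. logits is (mask - p)
    theta += lr * (r - baseline) * (mask - p)

p = 1.0 / (1.0 + np.exp(-theta))
chosen = np.flatnonzero(p > 0.5)                # heads the policy learned to patch
```

Under this synthetic reward, the policy concentrates its probability mass on the heads that actually improve the score, which mirrors how the search isolates the small subset of heads that carry task-specific information.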
Cite
Text
Hojel et al. "Finding Visual Task Vectors." ICML 2024 Workshops: MI, 2024.
Markdown
[Hojel et al. "Finding Visual Task Vectors." ICML 2024 Workshops: MI, 2024.](https://mlanthology.org/icmlw/2024/hojel2024icmlw-finding/)
BibTeX
@inproceedings{hojel2024icmlw-finding,
title = {{Finding Visual Task Vectors}},
author = {Hojel, Alberto and Bai, Yutong and Darrell, Trevor and Globerson, Amir and Bar, Amir},
booktitle = {ICML 2024 Workshops: MI},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/hojel2024icmlw-finding/}
}