Efficient Queries Transformer Neural Processes
Abstract
Neural Processes (NPs) are popular meta-learning methods that estimate predictive uncertainty on target datapoints by conditioning on a context dataset. The previous state-of-the-art method, Transformer Neural Processes (TNPs), achieves strong performance but requires computation that is quadratic in the number of context datapoints per query, limiting its applications. Conversely, existing sub-quadratic NP variants perform significantly worse than TNPs. To tackle this issue, we propose Efficient Queries Transformer Neural Processes (EQTNPs), a more computationally efficient NP variant. The model encodes the context dataset into a set of vectors whose size is linear in the number of context datapoints. When making predictions, the model retrieves higher-order information from the context dataset via multiple cross-attention mechanisms over the context vectors. We empirically show that EQTNPs achieve results competitive with the state-of-the-art.
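The following is a minimal, self-contained sketch of the query-side cross-attention idea described above, written in PyTorch. All names, shapes, and hyperparameters (e.g. CrossAttentionQuery, dim=64) are illustrative assumptions and do not reproduce the paper's actual architecture; it only shows how target points can attend to a context encoding whose size is linear in the number of context datapoints.

import torch
import torch.nn as nn


class CrossAttentionQuery(nn.Module):
    """Target (query) points attend to a pre-computed context encoding.

    Illustrative assumption: the context set has already been encoded into
    one vector per context datapoint, so the encoding grows linearly with
    the number of context datapoints.
    """

    def __init__(self, dim: int = 64, num_heads: int = 4, num_layers: int = 2):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.MultiheadAttention(dim, num_heads, batch_first=True)
             for _ in range(num_layers)]
        )

    def forward(self, target_repr: torch.Tensor, context_repr: torch.Tensor) -> torch.Tensor:
        # target_repr:  (batch, num_targets, dim)  embedded target inputs
        # context_repr: (batch, num_context, dim)  encoded context set
        out = target_repr
        for attn in self.layers:
            # Each cross-attention layer lets every target point gather
            # information from the context vectors; the cost per target
            # is linear in the number of context datapoints, avoiding the
            # quadratic cost of full self-attention over the context.
            attended, _ = attn(query=out, key=context_repr, value=context_repr)
            out = out + attended  # residual connection
        return out


# Example usage with random data.
if __name__ == "__main__":
    context = torch.randn(8, 32, 64)   # 32 context points
    targets = torch.randn(8, 16, 64)   # 16 target (query) points
    print(CrossAttentionQuery()(targets, context).shape)  # torch.Size([8, 16, 64])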
Cite
Text
Feng et al. "Efficient Queries Transformer Neural Processes." NeurIPS 2022 Workshops: MetaLearn, 2022.
Markdown
[Feng et al. "Efficient Queries Transformer Neural Processes." NeurIPS 2022 Workshops: MetaLearn, 2022.](https://mlanthology.org/neuripsw/2022/feng2022neuripsw-efficient/)
BibTeX
@inproceedings{feng2022neuripsw-efficient,
  title = {{Efficient Queries Transformer Neural Processes}},
  author = {Feng, Leo and Hajimirsadeghi, Hossein and Bengio, Yoshua and Ahmed, Mohamed Osama},
  booktitle = {NeurIPS 2022 Workshops: MetaLearn},
  year = {2022},
  url = {https://mlanthology.org/neuripsw/2022/feng2022neuripsw-efficient/}
}