Learning Aggregation Functions
Abstract
Learning on sets is increasingly gaining attention in the machine learning community, due to its widespread applicability. Typically, representations over sets are computed using fixed aggregation functions such as sum or maximum. However, recent results have shown that universal function representation by sum- (or max-) decomposition requires either highly discontinuous (and thus poorly learnable) mappings, or a latent dimension equal to the maximum number of elements in the set. To mitigate this problem, we introduce LAF (Learning Aggregation Function), a learnable aggregator for sets of arbitrary cardinality. LAF can approximate several extensively used aggregators (such as average, sum, maximum) as well as more complex functions (e.g. variance and skewness). We report experiments on semi-synthetic and real data showing that LAF outperforms state-of-the-art sum- (max-) decomposition architectures such as DeepSets and library-based architectures like Principal Neighborhood Aggregation, and can be effectively combined with attention-based architectures.
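The abstract's claim that a single learnable aggregator can recover mean, sum, and maximum is easiest to see with a parameterized power mean. The sketch below is an illustrative stand-in, not the paper's exact LAF parameterization (which combines several learnable generalized-mean components): a single exponent `p` already interpolates between the arithmetic mean (`p = 1`) and the maximum (`p → ∞`), and a cardinality rescaling recovers the sum.

```python
# Illustrative sketch only: this is NOT the exact LAF formulation from the
# paper, just a minimal power-mean family showing how one learnable
# parameter can span several standard aggregators.

def power_mean(xs, p):
    """Generalized (power) mean of a set of non-negative numbers."""
    n = len(xs)
    return (sum(x ** p for x in xs) / n) ** (1.0 / p)

xs = [1.0, 2.0, 3.0]
mean_like = power_mean(xs, 1)           # arithmetic mean: 2.0
max_like = power_mean(xs, 50)           # approaches max(xs) = 3.0 as p grows
sum_like = len(xs) * power_mean(xs, 1)  # rescaling by |xs| recovers the sum: 6.0
```

In LAF-style architectures, exponents like `p` (and the weights combining several such terms) are learned end-to-end, so the network itself selects which aggregator, or mixture of aggregators, fits the task.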
Cite

Text
Pellegrini et al. "Learning Aggregation Functions." International Joint Conference on Artificial Intelligence, 2021. doi:10.24963/IJCAI.2021/398

Markdown
[Pellegrini et al. "Learning Aggregation Functions." International Joint Conference on Artificial Intelligence, 2021.](https://mlanthology.org/ijcai/2021/pellegrini2021ijcai-learning/) doi:10.24963/IJCAI.2021/398

BibTeX
@inproceedings{pellegrini2021ijcai-learning,
title = {{Learning Aggregation Functions}},
author = {Pellegrini, Giovanni and Tibo, Alessandro and Frasconi, Paolo and Passerini, Andrea and Jaeger, Manfred},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2021},
pages = {2892-2898},
doi = {10.24963/IJCAI.2021/398},
url = {https://mlanthology.org/ijcai/2021/pellegrini2021ijcai-learning/}
}