Autoencoder Trees
Abstract
We discuss an autoencoder model in which the encoding and decoding functions are implemented by decision trees. We use the soft decision tree, where internal nodes realize soft multivariate splits given by a gating function and the overall output is the average of all leaves weighted by the gating values on their path. The encoder tree takes the input and generates a lower-dimensional representation in its leaves, and the decoder tree takes this representation and reconstructs the original input. Exploiting the continuity of the trees, autoencoder trees are trained with stochastic gradient descent. On handwritten digit and news data, we see that autoencoder trees yield good reconstruction error compared with traditional autoencoder perceptrons. We also see that the autoencoder tree captures hierarchical representations of the data at different granularities on its different levels, and that the leaves capture localities in the input space.
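To make the mechanism in the abstract concrete, below is a minimal sketch of a soft decision tree forward pass: each internal node produces a gating value via a sigmoid over a multivariate split, each leaf holds a response vector, and the output is the sum of leaf responses weighted by the product of gating values along each leaf's path. An encoder tree would use a low-dimensional leaf response and a decoder tree an input-dimensional one. All names, shapes, and the sigmoid gating choice here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SoftTree:
    """Full binary soft decision tree (illustrative sketch).

    Internal nodes are stored in breadth-first (heap) order. Node m
    computes a gating value g_m = sigmoid(w_m . x + b_m); the output is
    the sum of leaf responses weighted by the product of gating values
    on the path from root to leaf.
    """

    def __init__(self, depth, in_dim, out_dim, rng=None):
        rng = rng if rng is not None else np.random.default_rng(0)
        self.depth = depth
        n_internal = 2 ** depth - 1
        n_leaves = 2 ** depth
        self.W = rng.normal(scale=0.1, size=(n_internal, in_dim))
        self.b = np.zeros(n_internal)
        self.leaves = rng.normal(scale=0.1, size=(n_leaves, out_dim))

    def forward(self, x):
        # Path probabilities of reaching each node, refined level by level.
        probs = np.ones(1)
        for level in range(self.depth):
            start = 2 ** level - 1              # first node index at this level
            idx = slice(start, start + 2 ** level)
            g = sigmoid(self.W[idx] @ x + self.b[idx])
            # Left child inherits p * g, right child p * (1 - g);
            # interleaving matches breadth-first child ordering.
            probs = np.stack([probs * g, probs * (1 - g)], axis=1).reshape(-1)
        # Weighted average over all leaves (path probabilities sum to 1).
        return probs @ self.leaves

# A toy "autoencoder" pairing: encoder maps to 2-D codes, decoder maps back.
encoder = SoftTree(depth=3, in_dim=8, out_dim=2)
decoder = SoftTree(depth=3, in_dim=2, out_dim=8)
x = np.ones(8)
x_hat = decoder.forward(encoder.forward(x))
```

Because every operation above is differentiable in the split parameters and leaf responses, the reconstruction error can be minimized end to end with stochastic gradient descent, which is what makes this tree architecture trainable like a network.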
Cite
Text
İrsoy and Alpaydin. "Autoencoder Trees." Proceedings of The 7th Asian Conference on Machine Learning, 2015.

Markdown
[İrsoy and Alpaydin. "Autoencoder Trees." Proceedings of The 7th Asian Conference on Machine Learning, 2015.](https://mlanthology.org/acml/2015/irsoy2015acml-autoencoder/)

BibTeX
@inproceedings{irsoy2015acml-autoencoder,
title = {{Autoencoder Trees}},
author = {İrsoy, Ozan and Alpaydin, Ethem},
booktitle = {Proceedings of The 7th Asian Conference on Machine Learning},
year = {2015},
pages = {378-390},
volume = {45},
url = {https://mlanthology.org/acml/2015/irsoy2015acml-autoencoder/}
}