Fast Recursive Low-Rank Tensor Learning for Regression
Abstract
In this work, we develop a fast sequential low-rank tensor regression framework, namely recursive higher-order partial least squares (RHOPLS). It addresses the challenges posed by the limited storage and fast processing time that dynamic environments demand when dealing with large-scale, high-speed, general tensor sequences. By integrating a low-rank Tucker modification strategy into a PLS-based framework, we efficiently update the regression coefficients, merging the new data into the previous low-rank approximation of the model at the small-scale factor (feature) level rather than at the large raw-data (observation) level. Unlike batch models, which require access to the entire dataset, RHOPLS uses a blockwise recursive calculation scheme, so only a small set of factors needs to be stored. Our approach is orders of magnitude faster than all other methods while maintaining predictability highly comparable to that of cutting-edge batch methods, as verified on challenging real-life tasks.
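To illustrate the factor-level update idea described above, here is a minimal NumPy sketch. It is not the authors' RHOPLS algorithm (which operates on Tucker factors within a PLS framework); it is a generic recursive low-rank update in the same spirit: each new data block is merged into the existing compressed factors, so the factorization work stays at the small factor scale instead of refitting all raw observations. All names (`update_low_rank`, the toy stream) are illustrative assumptions.

```python
import numpy as np

def update_low_rank(U, S, X_new, rank):
    """Merge a new data block into an existing rank-`rank` factor basis.

    Instead of refactorizing all raw observations seen so far, we only
    factorize the small matrix [U @ diag(S), X_new.T] -- factor-level work.
    """
    # Stack compressed old factors with the new block: (features, rank + n_new)
    M = np.hstack([U * S, X_new.T])
    Un, Sn, _ = np.linalg.svd(M, full_matrices=False)
    return Un[:, :rank], Sn[:rank]

# Toy stream: 3 blocks of 20 observations, 8 features, true rank 2.
rng = np.random.default_rng(0)
basis = rng.standard_normal((8, 2))          # ground-truth column space
U, S = np.zeros((8, 2)), np.zeros(2)         # empty initial factors
for _ in range(3):
    X = rng.standard_normal((20, 2)) @ basis.T   # new rank-2 block
    U, S = update_low_rank(U, S, X, rank=2)

# The recursively updated basis spans the true 2-dimensional column space.
proj = U @ (U.T @ basis)
print(np.allclose(proj, basis))  # True: basis lies in span(U)
```

The key point mirrored from the abstract: each update touches a matrix of width `rank + block_size`, independent of how many observations have been seen, which is what makes recursive schemes suitable for high-speed streams under limited storage.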
Cite
Text
Hou and Chaib-draa. "Fast Recursive Low-Rank Tensor Learning for Regression." International Joint Conference on Artificial Intelligence, 2017. doi:10.24963/IJCAI.2017/257
Markdown
[Hou and Chaib-draa. "Fast Recursive Low-Rank Tensor Learning for Regression." International Joint Conference on Artificial Intelligence, 2017.](https://mlanthology.org/ijcai/2017/hou2017ijcai-fast/) doi:10.24963/IJCAI.2017/257
BibTeX
@inproceedings{hou2017ijcai-fast,
title = {{Fast Recursive Low-Rank Tensor Learning for Regression}},
author = {Hou, Ming and Chaib-draa, Brahim},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2017},
  pages = {1851--1857},
doi = {10.24963/IJCAI.2017/257},
url = {https://mlanthology.org/ijcai/2017/hou2017ijcai-fast/}
}