Iterative Splits of Quadratic Bounds for Scalable Binary Tensor Factorization

Abstract

Binary matrices and tensors are popular data structures that need to be efficiently approximated by low-rank representations. A standard approach is to minimize the logistic loss, well suited for binary data. In many cases, the number m of non-zero elements in the tensor is much smaller than the total number n of possible entries in the tensor. This creates a problem for large tensors because the computation of the logistic loss has a linear time complexity with n. In this work, we show that an alternative approach is to minimize the quadratic loss (root mean square error), which leads to algorithms with a training time complexity that is reduced from O(n) to O(m), as proposed earlier in the restricted case of alternating least-squares algorithms. In addition, we propose and study a greedy algorithm that partitions the tensor into smaller tensors, each approximated by a quadratic upper bound. This technique provides a time-accuracy trade-off between a fast but approximate algorithm and an accurate but slow algorithm. We show that this technique leads to a considerable speedup when learning real-world tensors.

Cite

Text

Ermis and Bouchard. "Iterative Splits of Quadratic Bounds for Scalable Binary Tensor Factorization." Conference on Uncertainty in Artificial Intelligence, 2014.

Markdown

[Ermis and Bouchard. "Iterative Splits of Quadratic Bounds for Scalable Binary Tensor Factorization." Conference on Uncertainty in Artificial Intelligence, 2014.](https://mlanthology.org/uai/2014/ermis2014uai-iterative/)

BibTeX

@inproceedings{ermis2014uai-iterative,
  title     = {{Iterative Splits of Quadratic Bounds for Scalable Binary Tensor Factorization}},
  author    = {Ermis, Beyza and Bouchard, Guillaume},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {2014},
  pages     = {192-199},
  url       = {https://mlanthology.org/uai/2014/ermis2014uai-iterative/}
}