Accelerating Sinkhorn Algorithm with Sparse Newton Iterations
Abstract
Computing the optimal transport distance between statistical distributions is a fundamental task in machine learning. One remarkable recent advancement is entropic regularization and the Sinkhorn algorithm, which uses only matrix scaling and guarantees an approximate solution with near-linear runtime. Despite the success of the Sinkhorn algorithm, its runtime may still be slow due to the potentially large number of iterations needed for convergence. To achieve possibly super-exponential convergence, we introduce Sinkhorn-Newton-Sparse (SNS), an extension of the Sinkhorn algorithm that adds early stopping for the matrix scaling steps and a second stage featuring a Newton-type subroutine. Adopting the variational viewpoint that the Sinkhorn algorithm maximizes a concave Lyapunov potential, we offer the insight that the Hessian matrix of the potential function is approximately sparse. Sparsification of the Hessian results in a fast $O(n^2)$ per-iteration complexity, the same as the Sinkhorn algorithm. In terms of total iteration count, we observe that the SNS algorithm converges orders of magnitude faster across a wide range of practical cases, including optimal transport between empirical distributions and the computation of the Wasserstein $W_1$ and $W_2$ distances of discretized continuous densities. The empirical performance is corroborated by a rigorous bound on the approximate sparsity of the Hessian matrix.
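The abstract describes a two-stage scheme: Sinkhorn matrix scaling stopped early, followed by Newton-type steps on the concave Lyapunov potential with a sparsified Hessian. The following sketch illustrates that general idea in NumPy/SciPy under simplifying assumptions (dense cost matrix, fixed iteration counts, a simple thresholding rule, a CG linear solve); the names `sns_sketch` and `sparsify_tol` are illustrative, not the authors' implementation.

```python
# A minimal sketch of the two-stage SNS idea, assuming a dense cost matrix
# C (n x m), marginals mu, nu, and entropic regularization eta. Iteration
# counts, the thresholding rule, and the CG solve are assumptions made for
# illustration, not the paper's exact algorithm.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def sns_sketch(C, mu, nu, eta=0.1, sinkhorn_iters=50,
               newton_iters=20, sparsify_tol=1e-8):
    n, m = C.shape
    f, g = np.zeros(n), np.zeros(m)

    def plan(f, g):
        # Transport plan induced by the dual variables (f, g).
        return np.exp((f[:, None] + g[None, :] - C) / eta)

    # Stage 1: plain Sinkhorn matrix scaling, stopped early.
    for _ in range(sinkhorn_iters):
        f += eta * (np.log(mu) - np.log(plan(f, g).sum(axis=1)))
        g += eta * (np.log(nu) - np.log(plan(f, g).sum(axis=0)))

    # Stage 2: Newton-type iterations on the concave Lyapunov potential
    # L(f, g) = <f, mu> + <g, nu> - eta * sum_ij exp((f_i + g_j - C_ij) / eta).
    for _ in range(newton_iters):
        P = plan(f, g)
        grad = np.concatenate([mu - P.sum(axis=1), nu - P.sum(axis=0)])
        # Sparsify: near optimality the plan concentrates on few entries,
        # so thresholding keeps the Hessian cheap to store and solve.
        P_s = sp.csr_matrix(np.where(P > sparsify_tol, P, 0.0))
        r = np.asarray(P_s.sum(axis=1)).ravel()
        c = np.asarray(P_s.sum(axis=0)).ravel()
        # Negative Hessian of L in block form: (1/eta) [diag(r), P; P^T, diag(c)].
        H = sp.bmat([[sp.diags(r), P_s], [P_s.T, sp.diags(c)]],
                    format="csr") / eta
        # Newton ascent step; a tiny shift handles the translation null space.
        delta, _ = spla.cg(H + 1e-10 * sp.eye(n + m), grad)
        f, g = f + delta[:n], g + delta[n:]

    return f, g, plan(f, g)
```

Forming the dense plan keeps each iteration at $O(n^2)$, matching the complexity claim in the abstract; solving the sparsified Newton system iteratively with CG, rather than by a direct factorization, is one plausible way to exploit the sparsity, and the threshold trades accuracy of the Newton direction against per-iteration cost.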
Cite
Text
Tang et al. "Accelerating Sinkhorn Algorithm with Sparse Newton Iterations." International Conference on Learning Representations, 2024.
Markdown
[Tang et al. "Accelerating Sinkhorn Algorithm with Sparse Newton Iterations." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/tang2024iclr-accelerating/)
BibTeX
@inproceedings{tang2024iclr-accelerating,
  title = {{Accelerating Sinkhorn Algorithm with Sparse Newton Iterations}},
  author = {Tang, Xun and Shavlovsky, Michael and Rahmanian, Holakou and Tardini, Elisa and Thekumparampil, Kiran Koshy and Xiao, Tesi and Ying, Lexing},
  booktitle = {International Conference on Learning Representations},
  year = {2024},
  url = {https://mlanthology.org/iclr/2024/tang2024iclr-accelerating/}
}