Krylov Subspace Recycling for Fast Iterative Least-Squares in Machine Learning
2017
Article
Solving symmetric positive definite linear problems is a fundamental computational task in machine learning. The exact solution, famously, is cubically expensive in the size of the matrix. To alleviate this problem, several linear-time approximations, such as spectral and inducing-point methods, have been suggested and are now in wide use. These are low-rank approximations that choose the low-rank space a priori and do not refine it over time. While this allows linear cost in the dataset size, it also causes a finite, uncorrected approximation error. Researchers in numerical linear algebra have explored ways to iteratively refine such low-rank approximations, at the cost of a small number of matrix-vector multiplications. This idea is particularly interesting in the many situations in machine learning where one has to solve a sequence of related symmetric positive definite linear problems. From the machine learning perspective, such deflation methods can be interpreted as transfer learning of a low-rank approximation across a time-series of numerical tasks. We study the use of such methods for our field. Our empirical results show that, on regression and classification problems of intermediate size, this approach can interpolate between low computational cost and numerical precision.
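To make the idea of recycling a low-rank subspace across related solves concrete, here is a minimal sketch of a deflated conjugate-gradient solver. This is not the paper's exact algorithm; it follows the standard deflated-CG construction, assumes dense NumPy arrays, and assumes a recycled basis W (e.g. approximate eigenvectors or search directions retained from an earlier, related solve). The names deflated_cg and W are illustrative, not from the paper.

```python
import numpy as np

def deflated_cg(A, b, W, tol=1e-8, maxiter=500):
    """Conjugate gradients deflated by the columns of W (a recycled low-rank subspace).

    A : (n, n) symmetric positive definite matrix (dense, for simplicity)
    b : (n,) right-hand side
    W : (n, k) basis of the subspace recycled from a previous, related solve
    """
    n = A.shape[0]
    AW = A @ W                      # k matrix-vector products with A
    M = W.T @ AW                    # small k x k Galerkin matrix W^T A W
    Minv = np.linalg.inv(M)

    def project_out(v):
        # Make v A-orthogonal to range(W): v - W (W^T A W)^{-1} (A W)^T v
        return v - W @ (Minv @ (AW.T @ v))

    # Galerkin step: solve exactly within the recycled subspace first
    x = W @ (Minv @ (W.T @ b))
    r = b - A @ x
    p = project_out(r)
    rs = r @ r
    for _ in range(maxiter):
        if np.sqrt(rs) < tol:
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        # New direction stays A-orthogonal to the recycled subspace
        p = project_out(r) + (rs_new / rs) * p
        rs = rs_new
    return x

if __name__ == "__main__":
    # Synthetic illustration: W spans the eigenvectors with the smallest
    # eigenvalues, which is the kind of subspace deflation aims to remove.
    rng = np.random.default_rng(0)
    n, k = 500, 20
    Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
    A = Q @ np.diag(np.linspace(1e-2, 1.0, n)) @ Q.T   # SPD test matrix
    b = rng.standard_normal(n)
    W = Q[:, :k]                                       # stand-in for a recycled basis
    x = deflated_cg(A, b, W)
    print("residual norm:", np.linalg.norm(A @ x - b))
```

In a sequence of related systems, W would instead be assembled from quantities produced by the previous solve (for instance, Lanczos or CG search directions), so each new system starts from, and is deflated by, the subspace learned on the last one.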
Author(s): Filip de Roos and Philipp Hennig
Journal: arXiv preprint arXiv:1706.00241
Year: 2017
Department(s): Probabilistic Numerics
Research Project(s): Probabilistic Methods for Linear Algebra
Bibtex Type: Article (article)
URL: https://arxiv.org/abs/1706.00241
BibTeX:
@article{deroos2017krylov,
  title   = {Krylov Subspace Recycling for Fast Iterative Least-Squares in Machine Learning},
  author  = {de Roos, Filip and Hennig, Philipp},
  journal = {arXiv preprint arXiv:1706.00241},
  year    = {2017},
  url     = {https://arxiv.org/abs/1706.00241}
}