Structured low-rank matrix learning

A saddle point approach to structured low-rank matrix learning

Authors

P. Jawanpuria and B. Mishra

Abstract

We propose a novel optimization approach for learning a low-rank matrix that is also constrained to lie in a linear subspace. Exploiting a particular variational characterization of the squared trace-norm regularizer, we formulate the structured low-rank matrix learning problem as a rank-constrained saddle point minimax problem. The proposed modeling decouples the low-rank and structural constraints onto separate factors. The optimization problem is formulated on the Riemannian spectrahedron manifold, where the Riemannian framework allows us to develop computationally efficient conjugate gradient and trust-region algorithms. Our approach readily accommodates popular non-smooth loss functions, e.g., the l1-loss, and our algorithms scale to large problem instances. Numerical comparisons show that the proposed algorithms outperform state-of-the-art algorithms on standard and robust matrix completion, stochastic realization, and multi-task feature learning problems across various benchmarks.
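To illustrate the Riemannian ingredient above, here is a minimal Matlab sketch (not the authors' released code) using the Manopt toolbox's spectrahedronfactory, whose points Y (n-by-k, unit Frobenius norm) represent rank-k positive semidefinite matrices X = Y*Y' with unit trace. The toy cost, the target matrix C, and the sizes n and k are illustrative assumptions; in the paper, the cost on the spectrahedron arises from the saddle-point reformulation with dual variables.

    % Hypothetical toy problem (illustration only, not the authors' code):
    % find the nearest rank-k PSD matrix with unit trace to a symmetric C,
    % by Riemannian optimization over the spectrahedron with Manopt.
    n = 100; k = 5;
    C = randn(n); C = (C + C')/2;          % symmetric target matrix

    % Points Y are n-by-k with ||Y||_F = 1, encoding X = Y*Y'
    % (PSD, trace(X) = 1, rank(X) <= k).
    problem.M = spectrahedronfactory(n, k);

    % Smooth cost f(Y) = ||Y*Y' - C||_F^2 and its Euclidean gradient;
    % Manopt converts the Euclidean gradient into the Riemannian one.
    problem.cost  = @(Y) norm(Y*Y' - C, 'fro')^2;
    problem.egrad = @(Y) 4*(Y*Y' - C)*Y;

    % checkgradient(problem);              % optional numerical check
    Y = trustregions(problem);             % Riemannian trust-region solver
    % Y = conjugategradient(problem);      % or Riemannian conjugate gradient
    X = Y*Y';                              % recovered low-rank PSD matrix

In the proposed framework, the same spectrahedron geometry carries the low-rank factor U, while the structural constraints are handled through the dual variables {z, s} of the saddle-point problem; see the downloads below for the full implementations.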

Downloads

  • Status: Technical report, 2017.
  • Paper: arXiv:1704.07352 [Presentation].
  • Matlab code: Structured_matrix_learning_19May2017.zip
    • 19 May 2017: added better-optimized code for matrix completion.
    • 06 May 2017: added robust matrix completion code.
    • 27 April 2017: the code is online; it contains matrix completion and Hankel matrix learning implementations.

Examples

Optimization framework that decouples the low-rank and structural constraints onto U and {z,s}, respectively.
Robust matrix completion on the Netflix dataset.
Multi-task feature learning on the School dataset.
Stochastic system realization with low-rank Hankel matrix learning.