Structured low-rank matrix learning

A saddle point approach to structured low-rank matrix learning


P. Jawanpuria and B. Mishra


We propose a novel optimization approach for learning a low-rank matrix that is also constrained to lie in a linear subspace. Exploiting a particular variational characterization of the squared trace norm regularizer, we formulate the structured low-rank matrix learning problem as a rank-constrained min-max (saddle point) problem. The proposed formulation decouples the low-rank and structural constraints onto separate factors. The optimization problem is cast on the Riemannian spectrahedron manifold, where the Riemannian framework allows us to propose computationally efficient conjugate gradient and trust-region algorithms. Our approach easily accommodates popular non-smooth loss functions, e.g., the l1-loss, and our algorithms scale to large problem instances. Numerical comparisons show that the proposed algorithms outperform state-of-the-art methods on standard and robust matrix completion, stochastic realization, and multi-task feature learning problems on various benchmarks.
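The variational characterization behind the formulation can be stated as ||X||_*^2 = min over positive semidefinite theta with tr(theta) <= 1 of tr(X' theta^† X), with the minimum attained at theta* = (XX')^{1/2} / tr((XX')^{1/2}). The snippet below is an illustrative numerical check of this identity in NumPy (an assumption-laden sketch, not the paper's Matlab code):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 4))

# Left-hand side: squared trace (nuclear) norm of X.
nuc_sq = np.linalg.norm(X, "nuc") ** 2

# Right-hand side, evaluated at the known minimizer
# theta* = (X X')^{1/2} / tr((X X')^{1/2}).
S = X @ X.T
w, V = np.linalg.eigh(S)                       # eigendecomposition of PSD matrix
sqrtS = (V * np.sqrt(np.clip(w, 0, None))) @ V.T  # matrix square root (X X')^{1/2}
theta_star = sqrtS / np.trace(sqrtS)           # lies on the spectrahedron
val = np.trace(X.T @ np.linalg.pinv(theta_star) @ X)

print(np.allclose(nuc_sq, val))
```

The optimizer theta* lives on the spectrahedron {theta ⪰ 0, tr(theta) = 1}, which is why the problem can be posed on the Riemannian spectrahedron manifold.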


  • Status: Technical report, 2017. A shorter version was accepted at the 10th NIPS Workshop on Optimization for Machine Learning, 2017.
  • Paper: [arXiv:1704.07352], [Presentation].
  • Matlab code:
    • 18 Feb 2018: added non-negative matrix completion code.
    • 19 May 2017: added a better optimized code for matrix completion.
    • 06 May 2017: added robust matrix completion code.
    • 27 April 2017: code is online. It contains the matrix completion and Hankel matrix learning implementations.


Optimization framework that decouples the low-rank and structural constraints onto U and {z, s}, respectively.
Robust matrix completion on the Netflix dataset.
Multitask learning on the School dataset.
Stochastic systems realization with low-rank Hankel matrix learning.