A saddle point approach to structured low-rank matrix learning in large-scale applications
Authors
P. Jawanpuria and B. Mishra
Abstract
We propose a novel optimization approach for learning a low-rank matrix that is also constrained to lie in a given linear subspace. Low-rank constraints are regularly employed in applications such as recommender systems and multi-task learning. In addition, several system identification problems require learning a matrix with both low-rank and linear subspace constraints. We model the classical nuclear-norm regularized formulation as an equivalent saddle-point minimax problem. This decouples the low-rank and the linear subspace constraints onto separate factors. Motivated by large-scale problems, we reformulate the minimax problem via a rank-constrained non-convex surrogate. This translates into an optimization problem on the Riemannian spectrahedron manifold. We exploit the Riemannian structure to propose efficient first- and second-order algorithms. The duality theory allows us to compute the duality gap for a candidate solution, and our approach easily accommodates popular non-smooth loss functions, e.g., the absolute-value loss. Our approach scales effortlessly to the Netflix data set on both matrix completion and robust matrix completion problems, obtaining state-of-the-art generalization performance. Additionally, we demonstrate the efficacy of our approach on Hankel matrix learning and multi-task learning problems.
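The duality-gap certificate mentioned in the abstract can be sketched for the special case of nuclear-norm regularized matrix completion with the squared loss. The construction below follows generic Fenchel duality and is an illustration only, not the authors' implementation; the function name and the regularization parameter `lam` are ours.

```python
import numpy as np

def duality_gap(X, Y, mask, lam):
    """Upper bound on the suboptimality of a candidate X for
        min_X  0.5 * ||P_Omega(X - Y)||_F^2 + lam * ||X||_*
    where P_Omega keeps the observed entries (mask == True).
    A generic Fenchel-duality sketch, not the paper's algorithm."""
    r = (X - Y) * mask                       # primal residual on observed entries
    primal = 0.5 * np.sum(r ** 2) + lam * np.linalg.norm(X, 'nuc')
    # Rescale the residual so the dual candidate is feasible
    # (spectral norm of the adjoint image must not exceed lam).
    sigma = np.linalg.norm(r, 2)             # largest singular value of r
    u = r * min(1.0, lam / sigma) if sigma > 0 else r
    dual = -0.5 * np.sum(u ** 2) - np.sum(u * Y * mask)
    return primal - dual                     # >= 0 by weak duality; 0 at the optimum
```

By weak duality the returned gap is nonnegative for any candidate `X`, and it vanishes at the optimum, so it can serve as a stopping criterion.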
Downloads
 Status: Technical report, 2017.
 Paper: [Presentation].
 Matlab code: Coming soon.
Examples
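A minimal sketch of the matrix completion setting the paper addresses, using plain gradient descent on a fixed-rank factorization X = U Vᵀ. This is a generic low-rank baseline for illustration, not the Riemannian spectrahedron algorithm of the paper; all names and parameter values are ours.

```python
import numpy as np

def complete(M, mask, rank=2, lr=0.02, iters=5000, seed=0):
    """Fill in the missing entries of M (observed where mask is True) by
    gradient descent on a rank-`rank` factorization X = U @ V.T.
    A generic baseline for illustration, not the paper's method."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(iters):
        R = (U @ V.T - M) * mask             # residual on observed entries only
        U, V = U - lr * (R @ V), V - lr * (R.T @ U)
    return U @ V.T

# Recover a random rank-2 matrix from roughly 70% of its entries.
rng = np.random.default_rng(1)
A, B = rng.standard_normal((10, 2)), rng.standard_normal((8, 2))
M_true = A @ B.T
mask = rng.random(M_true.shape) < 0.7
X = complete(M_true * mask, mask)
```

The paper's contribution replaces this naive factorized descent with first- and second-order algorithms on the Riemannian spectrahedron manifold, which additionally handle linear subspace constraints and non-smooth losses.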