Sparse plus low-rank autoregressive identification in neuroimaging time series


R. Liégeois, B. Mishra, M. Zorzi, and R. Sepulchre


This paper considers the problem of identifying sparse plus low-rank multivariate autoregressive (AR) graphical models. Building on a recently proposed formulation of this problem, we use the alternating direction method of multipliers (ADMM) to solve it efficiently and to scale it to the sizes encountered in neuroimaging applications. We apply this decomposition to synthetic and real neuroimaging datasets, with a specific focus on the information encoded in the low-rank structure of our model. In particular, we illustrate that this information captures the spatio-temporal structure of the original data, generalizing classical component analysis approaches.
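The core computational idea is the alternating proximal structure of ADMM: the sparse term is handled by elementwise soft-thresholding and the low-rank term by singular value thresholding. As a minimal sketch, the snippet below applies this splitting to the surrogate problem of decomposing a plain matrix M into sparse plus low-rank parts; the paper applies the same alternating idea to an AR spectral formulation, so the objective, regularizer weights, and function names here are illustrative assumptions rather than the authors' exact algorithm.

```python
import numpy as np

def soft_threshold(X, tau):
    # Proximal operator of tau * ||.||_1 (elementwise shrinkage)
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    # Proximal operator of tau * ||.||_* (singular value shrinkage)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def sparse_plus_low_rank(M, lam, rho=1.0, n_iter=500):
    """ADMM for  min ||L||_* + lam * ||S||_1  s.t.  L + S = M.

    Illustrative surrogate for the sparse + low-rank splitting;
    not the paper's AR spectral-density formulation.
    """
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    U = np.zeros_like(M)  # scaled dual variable for the constraint L + S = M
    for _ in range(n_iter):
        L = svd_threshold(M - S - U, 1.0 / rho)   # low-rank update
        S = soft_threshold(M - L - U, lam / rho)  # sparse update
        U += L + S - M                            # dual ascent step
    return L, S
```

With `lam` on the order of `1 / sqrt(max(M.shape))`, this recovers classical robust-PCA-style decompositions; in the AR setting the same two proximal maps act on the model parameters instead of on the data matrix.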



Linear data model: ten observed variables xi, i ∈ {1, …, 10}, with few interactions among them (sparsity) and one latent variable x11 (low-rank structure).
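A toy dataset with this structure can be simulated as an AR(1) process in which a single latent series drives all observed series while direct interactions among the observed series stay sparse. The sketch below is a minimal illustration under assumed coefficient values (the paper does not specify them); only the ten observed series are returned, as in the experiment.

```python
import numpy as np

def simulate_latent_ar(n_obs=10, T=1000, seed=0):
    """Simulate x(t) = A x(t-1) + noise with n_obs observed variables
    and one latent driver; coefficients are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    n = n_obs + 1
    A = np.zeros((n, n))
    np.fill_diagonal(A[:n_obs, :n_obs], 0.3)  # own-lag terms
    # Sparse interactions among observed variables: a few weak links
    A[0, 1] = 0.2
    A[3, 4] = 0.2
    A[6, 7] = 0.2
    # The latent variable drives every observed variable (low-rank effect)
    A[:n_obs, n_obs] = 0.4
    A[n_obs, n_obs] = 0.5  # latent autocorrelation
    # Stability check: spectral radius of A must be below 1
    assert np.max(np.abs(np.linalg.eigvals(A))) < 1.0
    X = np.zeros((T, n))
    for t in range(1, T):
        X[t] = A @ X[t - 1] + rng.standard_normal(n)
    return X[:, :n_obs]  # only the observed series are kept
```

Fitting a sparse plus low-rank AR model to such data should recover the few direct links in the sparse part and the shared latent influence in the low-rank part.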
Application to the linear data model. (A) Interaction graphs of the estimated models for different values of the sparsity and low-rank regularizers, using a first-order AR model. (B) True interaction graph. (C) Optimal interaction graph of the model estimated in the static case.
Application to a real neuroimaging dataset. Three resting-state neuronal networks are commonly recovered using component analysis: (A) the visual network, (B) the default mode network (DMN), and (C) the executive control network (ECN).
Application to the neuroimaging dataset. Fitting a first-order AR model to this dataset identifies latent components corresponding to these networks. (A) The visual network is recovered in 14 of 17 subjects. (B) The default mode and executive control networks are coupled into a single latent component in 12 of 17 subjects.