SLRA 2015
Structured low-rank approximation
Grenoble, 1-2 June 2015

Program

 

Monday, 1 June 2015
08:45 - 09:15 Registration
09:15 - 09:30 Laurent Condat and Konstantin Usevich
Opening remarks
09:30 - 10:30 Ivan Markovsky
Low-rank approximation and its applications (slides)
10:30 - 11:00 Coffee break
11:00 - 11:45 Stewart Trickett
Rank-reduction filtering in seismic exploration (slides)
11:45 - 12:30 Nina Golyandina
Singular Spectrum Analysis and low-rank approximation (slides)
12:30 - 13:45 Lunch break (room Mont-Blanc)
13:45 - 14:30 Marcus Carlsson
Complex frequency estimation (slides)
14:30 - 15:15 Pierre-Jean Spaenlehauer
Exact solutions in structured low-rank approximation (slides)
15:15 - 15:45 Coffee break
15:45 - 16:30 Nicolas Gillis
Preconditionings for more robust separable nonnegative matrix factorization (slides)
16:30 - 18:00 Poster session
Tuesday, 2 June 2015
09:00 - 09:45 Jérôme Bolte
A user's guide for KL inequality (slides)
09:45 - 10:30 Jean-Baptiste Hiriart-Urruty
A variational approach to the rank function (slides)
10:30 - 11:00 Coffee break
11:00 - 11:45 Bernard Mourrain
Tensor decomposition, structured low rank matrix completion and applications (slides)
11:45 - 12:30 Bart De Moor
Back to the roots: rooting multivariate polynomials with numerical linear algebra and nD system theory (slides)
12:30 - 13:45 Lunch break (room Mont-Blanc)
13:45 - 14:30 Anatoly Zhigljavsky
Structured low-rank approximation: choice of a norm and computational issues (slides)
14:30 - 15:15 André Uschmajew
A greedy approach to the low-rank basis problem
15:15 - 16:00 Farewell coffee break

 

Monday, 1 June

 

Ivan Markovsky
Low-rank approximation and its applications
  Mathematical engineering continuously addresses new applications and solves new problems. The expansion of existing methods and applications makes it difficult to maintain a common theoretical framework. This talk shows the potential of the structured low-rank approximation setting to unify data modeling problems from diverse application areas. An example treated in more detail in the presentation is the identification of a linear time-invariant system from observed trajectories of the system. We present an optimization method based on the variable projection principle. The method can deal with exact as well as missing (unknown) values in the data. Problems with exact and missing values occur in data-driven simulation and control, a new trend of model-free methods for system analysis and control.
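As a minimal illustration of the low-rank setting (not the talk's variable-projection method), the following numpy sketch shows the basic fact behind LTI system identification: exact trajectories of a low-order system produce rank-deficient Hankel matrices. The system and sizes are illustrative.

    import numpy as np

    def hankel_matrix(w, L):
        # L x (N - L + 1) Hankel matrix built from the trajectory w
        N = len(w)
        return np.array([w[i:i + N - L + 1] for i in range(L)])

    # Trajectory of a 2nd-order autonomous LTI system: w(t) = 0.9^t + (-0.5)^t
    t = np.arange(50)
    w = 0.9**t + (-0.5)**t

    H = hankel_matrix(w, 6)
    print(np.linalg.matrix_rank(H))   # 2: the order of the system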
Stewart Trickett
Rank-reduction filtering in seismic exploration
  Seismic exploration uses generated sound waves to image the earth's interior in order to locate petroleum or other minerals. Before such data is useful, however, it must undergo extensive statistical processing. Over the last 15 years, rank-reduction methods on both structured and unstructured matrices have been developed for noise removal and interpolation of seismic data. These methods have primarily been applied to constant-frequency slices, as a key theorem states that the signal should have low rank in this domain. Over the years these filters have been extended in many directions: multiple dimensions, robust filtering for erratic noise, source deblending, de-aliasing, tensor rank reduction, coherent noise removal, automatic rank determination, and computational speedups. I will briefly describe each of these developments and show many examples. I will also list some open questions and avenues for future research.
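The constant-frequency-slice idea admits a bare-bones numpy sketch (SSA/Cadzow-style f-x filtering): transform to the frequency domain along time, rank-reduce a Hankel matrix formed along the traces at each frequency, and transform back. The window length and rank are illustrative assumptions; production seismic filters add the robustness and speed extensions listed above.

    import numpy as np

    def hankelize(v, L):
        return np.array([v[i:i + len(v) - L + 1] for i in range(L)])

    def dehankelize(H):
        # Average anti-diagonals back into a sequence (SSA-style reconstruction)
        L, K = H.shape
        v = np.zeros(L + K - 1, dtype=H.dtype)
        cnt = np.zeros(L + K - 1)
        for i in range(L):
            v[i:i + K] += H[i]
            cnt[i:i + K] += 1
        return v / cnt

    def fx_rank_filter(data, rank, L=10):
        # data: time samples x traces; rank-reduce each constant-frequency slice
        D = np.fft.rfft(data, axis=0)
        for k in range(D.shape[0]):
            H = hankelize(D[k], L)
            U, s, Vt = np.linalg.svd(H, full_matrices=False)
            D[k] = dehankelize((U[:, :rank] * s[:rank]) @ Vt[:rank])
        return np.fft.irfft(D, n=data.shape[0], axis=0)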
Nina Golyandina
Singular Spectrum Analysis and low-rank approximation
  Singular spectrum analysis (SSA) is a method originally developed for time series analysis, which was later extended to handle multidimensional data. As a model-free method, SSA is able to extract trends, detect periodicities, and perform smoothing and noise reduction. On the other hand, the method inherits approximation features from the singular value decomposition. In particular, Cadzow's iterative algorithm for Hankel low-rank approximation can be viewed as an iterative application of SSA. The following questions will be emphasized in the talk: singular spectrum analysis vs low-rank approximation; low-rank approximation vs signal estimation; unweighted time series approximation and its features; convergence vs accuracy.
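A minimal numpy/scipy sketch of the Cadzow iteration mentioned above: alternate a rank-r truncated SVD with anti-diagonal averaging, so that each sweep is one basic SSA reconstruction. Window length, rank, and iteration count are illustrative choices.

    import numpy as np
    from scipy.linalg import hankel

    def cadzow(w, r, L, iters=50):
        # Alternating projections: rank-r truncation, then Hankel averaging
        N = len(w)
        for _ in range(iters):
            H = hankel(w[:L], w[L - 1:])                  # L x (N - L + 1)
            U, s, Vt = np.linalg.svd(H, full_matrices=False)
            H = (U[:, :r] * s[:r]) @ Vt[:r]               # project to rank r
            w = np.array([H[::-1, :].diagonal(k).mean()   # average anti-diagonals
                          for k in range(-(L - 1), N - L + 1)])
        return w

    # Noisy sum of two sinusoids: the clean signal has Hankel rank 4
    t = np.arange(100)
    x = np.sin(0.3 * t) + 0.5 * np.sin(1.1 * t)
    y = x + 0.2 * np.random.default_rng(2).standard_normal(100)
    print(np.linalg.norm(cadzow(y, 4, 50) - x) / np.linalg.norm(x))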
Marcus Carlsson
Complex frequency estimation
  I will present a recent algorithm for complex frequency estimation, based on low-rank Hankel matrices. The algorithm has several benefits, such as the ability to deal with missing data and to approximate in weighted spaces. The theoretical foundation of this method is a version of Kronecker's classical theorem on the structure of finite-rank Hankel matrices. In the second part of the talk I will present new theoretical results extending this statement (as well as the Carathéodory-Fejér theorem, used in Pisarenko's method) to the multivariable setting. In particular, it applies to block Hankel and Toeplitz matrices.
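For orientation, here is not Carlsson's algorithm but a classical sketch in the same spirit: a Hankel-SVD (shift-invariance) estimator showing how low-rank Hankel structure encodes complex frequencies. All signal parameters are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(64)
    z_true = [0.98 * np.exp(2j * np.pi * 0.11), 0.95 * np.exp(2j * np.pi * 0.27)]
    f = sum(c * z**t for c, z in zip([1.0, 0.7], z_true))
    f = f + 0.01 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))

    L = 32
    H = np.array([f[i:i + 64 - L + 1] for i in range(L)])  # low-rank Hankel matrix
    U = np.linalg.svd(H, full_matrices=False)[0][:, :2]    # signal subspace
    # Shift invariance of the signal subspace: U[:-1] @ Psi ~ U[1:]
    Psi = np.linalg.lstsq(U[:-1], U[1:], rcond=None)[0]
    print(np.linalg.eigvals(Psi))  # estimates of the complex frequencies z_k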
Pierre-Jean Spaenlehauer
Exact solutions in structured low-rank approximation
  Structured low-rank approximation is the problem of minimizing a weighted Frobenius distance to a given matrix among all matrices of fixed rank in a linear space of matrices. We study the critical points of this optimization problem using algebraic geometry. A particular focus lies on Hankel matrices, Sylvester matrices, and generic linear spaces. Joint work with G. Ottaviani and B. Sturmfels.
Nicolas Gillis
Preconditionings for more robust separable nonnegative matrix factorization
  In this talk, we discuss a recently introduced subclass of nonnegative matrix factorization (NMF) problems, referred to as separable NMF. This problem is equivalent to hyperspectral unmixing, that is, the problem of identifying the constitutive materials (the endmembers) along with their abundances in each pixel of a hyperspectral image, in the presence of pure pixels. Hence separable NMF can be solved using pure-pixel search algorithms (such as N-FINDR or VCA). However, in noisy settings, the performance of a pure-pixel search algorithm usually depends on the conditioning of the endmember matrix. We describe and analyze three data preconditioning methods for mitigating this issue: (1) an approach based on semidefinite programming, (2) pre-whitening, and (3) an approach based on a pure-pixel search algorithm itself. The developments are based on a pure-pixel search algorithm called the successive projection algorithm (SPA). Simulations on synthetic data sets show that preconditioning makes SPA much more robust against noise.
This is joint work with S. Vavasis (U. of Waterloo) and Wing-Kin Ma (The Chinese U. of Hong Kong).
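For reference, a minimal numpy sketch of the (unpreconditioned) successive projection algorithm on synthetic separable data; the preconditionings discussed in the talk are not shown, and the data sizes are illustrative.

    import numpy as np

    def spa(X, r):
        # Successive projection algorithm: indices of r 'pure' columns
        R = X.astype(float).copy()
        idx = []
        for _ in range(r):
            j = int(np.argmax(np.sum(R**2, axis=0)))  # column of largest norm
            idx.append(j)
            u = R[:, j] / np.linalg.norm(R[:, j])
            R = R - np.outer(u, u @ R)                # deflate that direction
        return idx

    # Separable data: the pure columns of X are the columns of W
    rng = np.random.default_rng(1)
    W = rng.random((30, 3))
    H = np.hstack([np.eye(3), rng.dirichlet(np.ones(3), size=20).T])
    X = W @ H
    print(spa(X, 3))  # recovers indices 0, 1, 2 (in some order)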

 

Tuesday, 2 June

 

Jérôme Bolte
A user's guide for KL inequality
  We shall explain the necessity and interest of tame/semi-algebraic assumptions in the context of large-scale nonsmooth optimization. We will show in particular the prominent role of KL (Kurdyka-Łojasiewicz) inequalities as a means for obtaining theoretical guarantees of convergence. Several algorithms will be discussed during the talk, showing the flexibility and the scope of our approach. If time allows, some specific illustrations on low-rank problems will be considered.
Jean-Baptiste Hiriart-Urruty
A variational approach to the rank function
  We survey several useful properties of the rank function of a matrix and add some new ones. We adopt the viewpoint of variational analysis, that is, we consider the properties useful for optimizing, approximating, or regularizing the rank function. Joint work with Hai Yen Le.
Bernard Mourrain
Tensor decomposition, structured low rank matrix completion and applications
  The tensor decomposition problem can be interpreted as the reconstruction of sums of exponential functions from truncated moment sequences. Prony proposed a method for solving this problem in the univariate case. We describe an extension to the multivariate case and show how it reduces to a structured low-rank matrix completion problem, based on a flat extension property. Algebraic characterizations of this property are given, providing a decomposition algorithm. For the special case of positive decomposition, we relate the decomposition problem to the computation of extremal points on some sets of positive semidefinite moment matrices. We analyze the properties of the corresponding convex cones and their duals, and show how semidefinite programming techniques can be used to find positive decompositions. We illustrate the approach on examples coming from signal processing, magnetic resonance imaging, sparse interpolation, and the construction of cubature formulas.
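A minimal numpy sketch of Prony's method in the univariate case the abstract starts from (the multivariate extension via flat extensions is what the talk develops); the sample values are illustrative.

    import numpy as np

    # Moments m_k = sum_i c_i * z_i^k of a sum of r = 2 exponentials
    z_true, c_true = np.array([0.8, -0.5]), np.array([2.0, 1.0])
    k = np.arange(8)
    m = (c_true[:, None] * z_true[:, None]**k).sum(axis=0)

    r = 2
    H = np.array([m[i:i + r] for i in range(r)])  # r x r Hankel of moments
    p = np.linalg.solve(H, -m[r:2 * r])           # annihilating filter coefficients
    z = np.roots(np.r_[1.0, p[::-1]])             # its roots are the exponents z_i
    V = np.vander(z, 2 * r, increasing=True).T    # V[k, i] = z_i^k
    c = np.linalg.lstsq(V, m, rcond=None)[0]      # weights from a Vandermonde system
    print(z, c)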
Bart De Moor
Back to the roots: rooting multivariate polynomials with numerical linear algebra and nD system theory
  In this presentation, we will develop a new approach based on numerical linear algebra to compute all the roots of a set of multivariate real polynomials.
We explore several properties of the Macaulay matrix constructed from the coefficients of the monomials of these polynomials, including the determination of the number of solutions via a rank calculation, and the computation of the roots via generalized eigenvalue problems derived from an nD-realization problem in the null space of the Macaulay matrix.
We then show how polynomial optimization problems (such as prediction error methods in system identification) boil down to solving a (large) eigenvalue problem.
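The univariate special case already shows the "rooting by linear algebra" principle: the eigenvalues of the companion matrix of a monic polynomial are its roots. A minimal numpy sketch (the Macaulay-matrix machinery generalizes this to the multivariate case):

    import numpy as np

    # p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3), coefficients high to low
    a = np.array([1.0, -6.0, 11.0, -6.0])

    n = len(a) - 1
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)   # subdiagonal of ones (shift structure)
    C[:, -1] = -a[:0:-1]         # last column: -[a_0, a_1, ..., a_{n-1}]
    print(np.linalg.eigvals(C))  # the roots: 1, 2, 3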
Anatoly Zhigljavsky
Structured low-rank approximation: choice of a norm and computational issues
  In this talk, we investigate the computational complexity of the Hankel structured low-rank approximation (HSLRA) problem and develop a family of algorithms to solve it. Briefly, HSLRA is the problem of finding the closest (in some pre-defined norm) rank-r approximation of a given Hankel matrix that is itself of Hankel structure. Unlike many other methods described in the literature, the family of algorithms we propose has the property of guaranteed convergence.
André Uschmajew
A greedy approach to the low-rank basis problem
  We consider the problem of finding a low-rank basis in a subspace of matrices. In the case of a rank-one basis, the problem can be formulated as a tensor decomposition problem. It follows from matroid theory that a solution can in principle be found by a greedy algorithm. To mimic this abstract algorithm, one needs a procedure for finding single low-rank matrices in the subspace. While the nuclear norm convex relaxation is well studied for affine linear constraints (e.g. matrix completion), it does not help in the case of a linear subspace, as it leads to the zero solution. Our method for finding a nonzero low-rank matrix in the subspace is based on two different nonconvex variable splittings, which are solved by alternating between singular value thresholding and projection onto the subspace, as sketched below. The resulting heuristic greedy algorithm finds low-rank bases quite reliably in many experimental setups, without requiring the ranks as input.
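A toy numpy sketch of the alternating step described above: singular value thresholding pulls towards low rank, projection returns to the subspace, and normalization avoids the zero solution. The relative thresholding and all parameters are illustrative assumptions, not the paper's exact splitting.

    import numpy as np

    def svt(X, tau):
        # Soft-threshold the singular values (relative threshold, to stay nonzero)
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return (U * np.maximum(s - tau * s.max(), 0.0)) @ Vt

    def low_rank_in_subspace(basis, tau=0.5, iters=200, seed=0):
        # Alternate SVT with orthogonal projection onto span(basis)
        B = np.array([b.ravel() for b in basis])
        Q = np.linalg.qr(B.T)[0]                 # orthonormal basis of the span
        shape = basis[0].shape
        x = np.random.default_rng(seed).standard_normal(Q.shape[0])
        x = Q @ (Q.T @ x)
        for _ in range(iters):
            y = svt(x.reshape(shape), tau).ravel()   # pull towards low rank
            x = Q @ (Q.T @ y)                        # back into the subspace
            x /= np.linalg.norm(x)                   # avoid the zero solution
        return x.reshape(shape)

    # The subspace span{b1 + b2, b1 - b2} contains the rank-1 matrices b1, b2
    rng = np.random.default_rng(3)
    b1 = np.outer(rng.standard_normal(6), rng.standard_normal(6))
    b2 = np.outer(rng.standard_normal(6), rng.standard_normal(6))
    X = low_rank_in_subspace([b1 + b2, b1 - b2])
    print(np.linalg.svd(X, compute_uv=False).round(3))  # often close to rank 1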

 

Poster session

 

Authors Dana Lahat and Christian Jutten
Title Multi-Set Data Analysis and Simultaneous Matrix Block Diagonalization
  Joint independent subspace analysis (JISA) is a new statistical signal processing approach to multi-set data analysis. From an algebraic perspective, JISA may be viewed as a set of coupled block diagonalization problems on covariance and cross-covariance matrices. We highlight links to recent results on independent subspace analysis (ISA) of non-stationary sources, which can be reformulated as a symmetric joint block diagonalization (JBD) of covariance matrices.
 
Authors Jérémy E. Cohen, Rodrigo Cabral Farias, and Pierre Comon
Title Fast Decomposition of Large Nonnegative Tensors
  In signal processing, tensor decompositions have gained popularity over the last decade. Meanwhile, the volume of data to be processed has drastically increased. This calls for novel methods to handle Big Data tensors. Since most of these huge data sets come from physical measurements, which are intrinsically real and nonnegative, the ability to compress nonnegative tensors has become essential. Following recent work on HOSVD compression for Big Data, we detail solutions to decompose a nonnegative tensor into decomposable terms in a compressed domain.
 
Authors Jörg Liesen and Robert Luce
Title Fast Recovery and Approximation of Hidden Cauchy Structure
  We study two approximation problems in the context of Cauchy matrices. For the first problem, we derive a linear-time algorithm for recovering the Cauchy parameters from the entries of a given Cauchy matrix. We also show that these parameters are essentially uniquely defined. The second problem under consideration is the fast computation of approximations by Cauchy matrices. We show that such approximations can be computed in time linear in the size of the input. We prove approximation bounds and put special emphasis on the approximation of noisy Cauchy matrices.
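The first problem has a transparent core under the convention C[i, j] = 1/(s[i] + t[j]) (an assumption here; conventions vary) together with the shift gauge t[0] = 0, which matches the "essentially unique" statement. A hedged numpy sketch using only the first row and column, hence linear time:

    import numpy as np

    def recover_cauchy_params(C):
        # Gauge t[0] = 0, so 1/C[i, 0] = s[i] and 1/C[0, j] = s[0] + t[j]
        s = 1.0 / C[:, 0]
        t = 1.0 / C[0, :] - s[0]
        return s, t

    # Round trip on a synthetic Cauchy matrix
    s_true = np.array([1.0, 2.5, 4.0])
    t_true = np.array([0.3, 1.1, 2.2, 3.5])
    C = 1.0 / (s_true[:, None] + t_true[None, :])
    s, t = recover_cauchy_params(C)
    print(np.allclose(s[:, None] + t[None, :], s_true[:, None] + t_true[None, :]))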
 
Authors Jinane Harmouche, Dominique Fourer, Patrick Flandrin, François Auger, Pierre Borgnat
Title One or Two Components? The Singular Spectrum Analysis answers
  Conventionally, singular spectrum analysis (SSA) is seen as a technique for decomposing a signal into periodic components, trend, and noise. We are interested in the case where the components may be non-stationary, and we aim to study and characterize their separability through the spectral interpretation of the singular spectrum.
A theoretical curve, depending on the tuning parameter of the SSA algorithm (the embedding dimension L) and on the parameters of the considered signal model, defines the region of good separation between a chirp and a sinusoid. The extraction of these components is performed first using prior knowledge of them, and then using an unsupervised clustering algorithm proposed for this purpose. The unsupervised approach achieves the desired separation performance.
The presented methodology allows the evaluation of SSA's ability to analyze non-stationary signals. It could be applied to types of non-stationarity other than the chirps studied here.
 
Authors Konstantin Usevich and Pierre Comon
Title Quasi-Hankel low-rank matrix completion: a convex relaxation
  The completion of matrices with missing values under a rank constraint is a non-convex optimization problem. A popular convex relaxation is based on minimization of the nuclear norm (the sum of singular values) of the matrix. For this relaxation, an important question is when the two optimization problems lead to the same solution. This question has been addressed in the literature mostly for random positions of missing elements and random known elements. In this contribution, we analyze the case of structured matrices with a fixed pattern of missing values, in particular Hankel and quasi-Hankel matrix completion, which appears as a subproblem in the computation of the symmetric tensor canonical polyadic decomposition. We extend existing results on completion of rank-one real Hankel matrices to completion of rank-r complex Hankel and quasi-Hankel matrices.
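A hedged cvxpy sketch of the relaxation in question, for plain Hankel completion (the sizes, signal, and observation pattern are illustrative assumptions). Whether the relaxed solution coincides with the low-rank completion is precisely the kind of question the poster analyzes.

    import numpy as np
    import cvxpy as cp

    # A signal whose exact Hankel matrix has rank 2, observed at every other index
    N, L = 20, 10
    t = np.arange(N)
    w_true = 0.9**t + 0.6**t
    known = range(0, N, 2)

    x = cp.Variable(N)
    H = cp.vstack([x[i:i + N - L + 1] for i in range(L)])  # Hankel(x)
    prob = cp.Problem(cp.Minimize(cp.normNuc(H)),
                      [x[k] == w_true[k] for k in known])
    prob.solve()
    print(np.abs(x.value - w_true).max())  # small iff the relaxation is tight here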
 
Authors Miguel Angel Veganzones, M. Simões, G. Licciardi, N. Yokoya, José M. Bioucas-Dias, J. Chanussot
Title Hyperspectral super-resolution of locally low rank images from complementary multisource data
  Remote sensing hyperspectral images (HSI) are quite often low rank, in the sense that the data belong to a low-dimensional subspace/manifold. This has recently been exploited for the fusion of low-spatial-resolution HSI with high-spatial-resolution multispectral images (MSI) in order to obtain super-resolution HSI, that is, hyperspectral images with high spectral and spatial resolution.
Most approaches adopt an unmixing or matrix factorization perspective. The derived methods have led to state-of-the-art results when the spectral information lies in a low-dimensional subspace/manifold. However, if the subspace/manifold dimensionality spanned by the complete data set is large, i.e., larger than the number of multispectral bands, the performance of these methods decreases, mainly because the underlying sparse regression problem is severely ill-posed.
We propose a local approach to cope with this difficulty. Fundamentally, we exploit the fact that real-world HSI are locally low rank, that is, pixels acquired from a given spatial neighbourhood span a subspace/manifold of very low dimension, lower than or equal to the number of multispectral bands. Thus, we propose to partition the image into patches and solve the data fusion problem independently for each patch. This way, the subspace/manifold dimensionality in each patch is low enough that the problem is no longer ill-posed.
 
Authors Nicolas Courty, Xing Gong, Jimmy Vandel, Thomas Burger
Title SAGA: Sparse And Geometry-Aware non-negative matrix factorization through non-linear local embedding
  This paper presents a new non-negative matrix factorization technique which (1) decomposes the original data over multiple latent factors that account for the geometrical structure of the manifold embedding the data; (2) provides an optimal representation with a controllable level of sparsity; and (3) has overall linear complexity, allowing large, high-dimensional datasets to be handled in tractable time. It operates by coding the data with respect to local neighbours with non-linear weights. This locality is obtained as a consequence of the simultaneous sparsity and convexity constraints. Our method is demonstrated in several experiments, including a feature extraction and classification task, where it achieves better performance than state-of-the-art factorization methods, with a shorter computation time.
 
Authors Laurent Condat
Title A heuristic iterative algorithm for structured low-rank approximation; application to super-resolution of spike trains
  A new heuristic iterative algorithm is proposed for structured low-rank approximation (SLRA), based on a recently proposed splitting method for convex nonsmooth optimization. It is applied to the super-resolution reconstruction of spike trains from noisy low-frequency measurements; this problem amounts to spectral estimation, which in turn can be recast as Toeplitz-SLRA. Although, in the absence of convexity, the algorithm comes with no convergence proof, in practice it converges to the global solution of the problem when the noise level is not too high. It thus improves upon Cadzow denoising, another heuristic iterative method, at the same ease of implementation and speed. This work has been published in L. Condat and A. Hirabayashi, "Cadzow denoising upgraded: A new projection method for the recovery of Dirac pulses from noisy linear measurements", Sampling Theory in Signal and Image Processing, vol. 14, no. 1, 2015.
