Ivan Markovsky 
Low-rank approximation and its applications 
Mathematical engineering continuously addresses new applications and solves new problems. The expansion of existing methods and applications makes it difficult to maintain a common theoretical framework. This talk shows the potential of the structured low-rank approximation setting to unify data modeling problems from diverse application areas. An example treated in more detail in the presentation is the identification of a linear time-invariant system from observed trajectories of the system. We present an optimization method based on the variable projection principle. The method can deal with data with exact as well as missing (unknown) values. Problems with exact and missing values occur in data-driven simulation and control, a new trend of model-free methods for system analysis and control.
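A minimal numpy sketch (not the variable projection method of the talk; the signal and sizes are made up) of the structure that links LTI trajectories to low-rank Hankel matrices: a trajectory of an autonomous LTI system of order n fills a Hankel matrix of rank n.

```python
import numpy as np

def hankel(y, L):
    """L x (len(y) - L + 1) Hankel matrix of the sequence y."""
    return np.array([y[i:i + len(y) - L + 1] for i in range(L)])

# Trajectory of an autonomous LTI system of order 2:
# y(t) = cos(0.5 t) satisfies y(t + 2) = 2 cos(0.5) y(t + 1) - y(t).
t = np.arange(50)
y = np.cos(0.5 * t)

H = hankel(y, 10)
print(np.linalg.matrix_rank(H, tol=1e-8))   # 2: the system order
```

The rank of the Hankel matrix reveals the system order; identification from noisy or incomplete trajectories then becomes a structured low-rank approximation problem.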

Stewart Trickett 
Rank-reduction filtering in seismic exploration 
Seismic exploration uses generated sound waves to image the earth's interior
in order to locate petroleum or other minerals. Before such data is useful,
however, it must undergo extensive statistical processing. Over the last 15
years, rank reduction methods on both structured and unstructured matrices
have been developed for noise removal and interpolation of seismic data.
These methods have primarily been applied on constant-frequency slices, as
a key theorem states that the signal should have low rank in this domain.
Over the years these filters have been extended to multiple dimensions,
robust filtering for erratic noise, source deblending, dealiasing, tensor rank
reduction, coherent noise removal, automatic rank determination, and
computational speedups. I will briefly describe each of these developments
and show many examples. I will also list some open questions and avenues
for future research.
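The constant-frequency-slice idea can be sketched as follows. This is a toy illustration, not one of the production filters described in the talk: the data cube, rank, and noise level are all made up, and the "signal" is constructed so that every frequency slice is low rank.

```python
import numpy as np

rng = np.random.default_rng(0)
nt, nx, ny, rank = 64, 16, 16, 2
nf = nt // 2 + 1

# Synthetic t-x-y cube whose constant-frequency slices have rank 2 by
# construction (a stand-in for a section containing a few coherent events).
A = rng.standard_normal((nf, nx, rank)) + 1j * rng.standard_normal((nf, nx, rank))
B = rng.standard_normal((nf, rank, ny)) + 1j * rng.standard_normal((nf, rank, ny))
A[[0, -1]] = A[[0, -1]].real          # DC and Nyquist bins must be real
B[[0, -1]] = B[[0, -1]].real
signal = np.fft.irfft(A @ B, n=nt, axis=0)
noisy = signal + 0.1 * signal.std() * rng.standard_normal(signal.shape)

# Rank-reduction filter: truncated SVD of every constant-frequency slice.
D = np.fft.rfft(noisy, axis=0)
for f in range(nf):
    U, s, Vt = np.linalg.svd(D[f], full_matrices=False)
    D[f] = (U[:, :rank] * s[:rank]) @ Vt[:rank]
denoised = np.fft.irfft(D, n=nt, axis=0)

err_before = np.linalg.norm(noisy - signal)
err_after = np.linalg.norm(denoised - signal)
print(err_after < err_before)          # True: most noise energy is removed
```

The truncation keeps the low-rank signal in each slice while discarding most of the full-rank noise energy, which is the essence of the f-x-y eigenimage approach.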

Nina Golyandina 
Singular Spectrum Analysis and low-rank approximation 
Singular spectrum analysis (SSA) is a method originally developed for time series analysis, which was later extended to handle multidimensional data. As a model-free method, SSA can extract trends, detect periodicities, and perform smoothing and noise reduction. On the other hand, the method inherits approximation features from the singular value decomposition. In particular, Cadzow's iterative algorithm for Hankel low-rank approximation can be viewed as an iterative application of SSA. The following questions will be emphasized in the talk: singular spectrum analysis vs low-rank approximation; low-rank approximation vs signal estimation; unweighted time series approximation and its features; convergence vs accuracy.
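Cadzow's iterative algorithm mentioned above alternates rank truncation of the Hankel matrix with anti-diagonal averaging (the SSA reconstruction step). A minimal sketch, with a toy signal and made-up parameters:

```python
import numpy as np

def cadzow(y, rank, L, n_iter=50):
    """Alternate rank truncation of the Hankel matrix of y with
    anti-diagonal averaging (the SSA reconstruction step)."""
    K = len(y) - L + 1
    x = y.copy()
    for _ in range(n_iter):
        H = np.array([x[i:i + K] for i in range(L)])    # L x K Hankel matrix
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        H = (U[:, :rank] * s[:rank]) @ Vt[:rank]        # truncate to `rank`
        x = np.array([H[::-1].diagonal(k).mean()        # average anti-diagonals
                      for k in range(-L + 1, K)])
    return x

rng = np.random.default_rng(1)
t = np.arange(100)
signal = np.sin(0.3 * t)                 # rank 2 as a Hankel matrix
noisy = signal + 0.2 * rng.standard_normal(100)
denoised = cadzow(noisy, rank=2, L=40)
print(np.linalg.norm(denoised - signal) < np.linalg.norm(noisy - signal))  # True
```

A single pass of the loop body is exactly one application of basic SSA; iterating drives the sequence toward one whose Hankel matrix has the prescribed rank.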

Marcus Carlsson 
Complex frequency estimation 
I will present a recent algorithm for complex frequency estimation, based on low-rank Hankel matrices. The algorithm has several benefits, such as the ability to deal with missing data and approximation in weighted spaces. The theoretical foundation of this method is a version of Kronecker's classical theorem on the structure of finite-rank Hankel matrices. In the second part of the talk I will present new theoretical results, extending this statement (as well as the Caratheodory-Fejer theorem, used in Pisarenko's method) to the multivariable setting. In particular, it applies to block Hankel and Toeplitz matrices.
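A small numerical check of the Kronecker statement (with made-up frequencies and amplitudes): samples of a sum of r complex exponentials yield a Hankel matrix of rank exactly r.

```python
import numpy as np

# Samples of a sum of r = 3 complex exponentials f(k) = sum_j c_j z_j**k.
z = np.array([0.9 * np.exp(1j * 0.4), 0.7 * np.exp(-1j * 1.1), 0.95])
c = np.array([1.0, 2.0, -0.5])
k = np.arange(40)
f = (c * z ** k[:, None]).sum(axis=1)

# Kronecker: the Hankel matrix of f has rank equal to the number of terms.
H = np.array([f[i:i + 20] for i in range(21)])
print(np.linalg.matrix_rank(H, tol=1e-8))   # 3
```

This is the converse direction used in frequency estimation: forcing a Hankel matrix of noisy samples to low rank corresponds to fitting a sum of a few complex exponentials.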

PierreJean Spaenlehauer 
Exact solutions in structured low-rank approximation 
Structured lowrank approximation is the problem
of minimizing a weighted Frobenius distance to a
given matrix among all matrices of fixed rank in a
linear space of matrices. We study the critical points of this optimization problem using algebraic geometry. A particular focus
lies on Hankel matrices, Sylvester matrices, and generic linear
spaces. Joint work with G. Ottaviani and B. Sturmfels.

Nicolas Gillis 
Preconditionings for more robust separable nonnegative matrix factorization 
In this talk, we discuss a recently introduced subclass of nonnegative matrix factorization (NMF) problems, referred to as separable NMF. This problem is equivalent to hyperspectral unmixing, that is, the problem of identifying the constitutive materials (the endmembers) along with their abundances in each pixel of a hyperspectral image, in the presence of pure pixels. Hence, separable NMF can be solved using pure-pixel search algorithms (such as N-FINDR or VCA). However, in noisy settings, the performance of a pure-pixel search algorithm usually depends on the conditioning of the endmember matrix. We describe and analyze three data preconditioning methods for mitigating this issue: (1) an approach based on semidefinite programming, (2) prewhitening, and (3) an approach based on a pure-pixel search algorithm itself. The developments are based on a pure-pixel search algorithm called the successive projection algorithm (SPA). Simulations on synthetic data sets show that preconditioning makes SPA much more robust against noise. This is joint work with S. Vavasis (U. of Waterloo) and Wing-Kin Ma (The Chinese U. of Hong Kong).
Jérôme Bolte 
A user's guide to the KL inequality 
We shall explain the necessity/interest of
tame/semialgebraic assumptions in the context of
large nonsmooth optimization. We will show in
particular the prominent role of KL (Kurdyka-Łojasiewicz) inequalities as a means for obtaining theoretical guarantees of convergence. Several algorithms will be discussed during the talk, showing the flexibility and scope of our approach. If time allows, some specific illustrations on low-rank problems will be considered.

Jean-Baptiste Hiriart-Urruty 
A variational approach to the rank function 
We survey several useful properties of the rank function of a matrix and add some new ones. We adopt the viewpoint of variational analysis, that is, the one considering all the properties useful for optimizing, approximating or regularizing the rank function. Joint work with Hai Yen Le.

Bernard Mourrain 
Tensor decomposition, structured low rank matrix completion and applications 
The tensor decomposition problem can be interpreted as the reconstruction of sums of exponential functions from truncated moment sequences.
Prony proposed a method for solving this problem in the univariate case. We describe an extension to the multivariate case and show how it reduces to a structured low rank matrix completion problem, based on a flat extension property. Algebraic characterizations of this property are given, providing a decomposition algorithm.
For the special case of positive decomposition, we relate the decomposition problem to the computation of extremal points on certain sets of positive semidefinite moment matrices. We analyze the properties of the corresponding convex cones and their duals and show how semidefinite programming techniques can be used to find positive decompositions.
We illustrate the approach on examples coming from signal processing, magnetic resonance imaging, sparse interpolation and the construction of cubature formulas.
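Prony's univariate method mentioned above can be sketched in a few lines (with made-up nodes and weights): a null vector of a Hankel matrix of the samples gives a polynomial whose roots are the exponential nodes.

```python
import numpy as np

# f(k) = sum_j c_j z_j**k with r = 3 nodes, from 2r = 6 samples.
z_true = np.array([-0.5, 0.8, 1.1])
c = np.array([2.0, 1.0, 0.5])
k = np.arange(6)
f = (c * z_true ** k[:, None]).sum(axis=1)

r = 3
H = np.array([f[i:i + r + 1] for i in range(r)])   # r x (r+1) Hankel matrix
_, _, Vt = np.linalg.svd(H)
p = Vt[-1]                                         # null vector = Prony polynomial
z_est = np.sort(np.roots(p[::-1]).real)
print(z_est)                                       # approximately [-0.5  0.8  1.1]
```

The multivariate extension discussed in the talk replaces this single Hankel matrix by (quasi-)Hankel moment matrices, and the null-space condition by the flat extension property.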

Bart De Moor 
Back to the roots: rooting multivariate polynomials with numerical linear algebra and nD system theory 
In this presentation, we will develop a new approach based on numerical linear algebra to compute all the roots of a set of multivariate real polynomials. We explore several properties of the Macaulay matrix that is constructed with the coefficients of the monomials of these real polynomials, including the determination of the number of solutions via the calculation of a rank, and the computation of the roots via generalized eigenvalue problems, which derive from an nD realization problem in the null space of the Macaulay matrix. We then show how polynomial optimization problems (such as prediction error methods in system identification) boil down to solving a (large) eigenvalue problem.
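As a univariate analogue of the roots-as-eigenvalues idea (the talk's Macaulay construction is multivariate and considerably more involved), the companion matrix of a polynomial has the polynomial's roots as its eigenvalues:

```python
import numpy as np

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3), monic, degree 3.
p = [1.0, -6.0, 11.0, -6.0]            # coefficients, highest degree first

# Companion matrix: ones on the subdiagonal, last column -[c0, c1, c2]
# for p(x) = x^3 + c2 x^2 + c1 x + c0.
C = np.diag(np.ones(2), -1)
C[:, -1] = -np.array(p[:0:-1])         # last column becomes [6, -11, 6]
roots = np.sort(np.linalg.eigvals(C).real)
print(roots)                           # approximately [1. 2. 3.]
```

The multivariate generalization builds shift-invariant structure in the null space of the Macaulay matrix, so the roots again emerge from (generalized) eigenvalue problems.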

Anatoly Zhigljavsky 
Structured low-rank approximation: choice of a norm and computational issues 
In this talk, we investigate the complexity of numerically solving the Hankel structured low-rank approximation (HSLRA) problem, and develop a family of algorithms to solve it. Briefly, HSLRA is the problem of finding the closest (in some predefined norm) rank-r approximation of a given Hankel matrix which is also of Hankel structure. Unlike many other methods described in the literature, the family of algorithms we propose has the property of guaranteed convergence.

André Uschmajew 
A greedy approach to the low-rank basis problem 
We consider the problem of finding a low-rank basis in a subspace of matrices. In the case of a rank-one basis, the problem can be formulated as a tensor decomposition problem. It follows from matroid theory that a solution can in principle be found by a greedy algorithm. To mimic this abstract algorithm, one needs a procedure to find single low-rank matrices in the subspace. While the nuclear norm convex relaxation is well studied for affine linear constraints (e.g. matrix completion), it does not help in the case of a linear subspace, as it leads to the zero solution. Our method for finding a nonzero low-rank matrix in the subspace is based on two different nonconvex variable splittings, which are solved by alternating between singular value thresholding and projection onto the subspace. The resulting heuristic greedy algorithm finds low-rank bases quite reliably in many experimental setups without requiring the ranks as an input.
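A minimal sketch of the alternation described above, on a toy subspace. For simplicity, rank truncation stands in for singular value thresholding, and the hidden rank-one basis is built with orthogonal factors, which makes the iteration settle almost immediately; the general method needs neither assumption.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
# A subspace of matrices spanned by two hidden rank-one matrices.
U0, _ = np.linalg.qr(rng.standard_normal((n, 2)))
V0, _ = np.linalg.qr(rng.standard_normal((n, 2)))
B = np.stack([np.outer(U0[:, 0], V0[:, 0]), np.outer(U0[:, 1], V0[:, 1])])

def project(X):
    """Orthogonal projection of X onto span(B)."""
    A = B.reshape(2, -1).T
    coef, *_ = np.linalg.lstsq(A, X.ravel(), rcond=None)
    return (coef[:, None, None] * B).sum(axis=0)

X = project(rng.standard_normal((n, n)))           # random start in the span
for _ in range(50):
    U, s, Vt = np.linalg.svd(X)
    X = project(s[0] * np.outer(U[:, 0], Vt[0]))   # rank-one step, then project
    X /= np.linalg.norm(X)                         # keep the iterate nonzero
print(np.linalg.matrix_rank(X, tol=1e-8))          # 1: a rank-one member found
```

The normalization step is what avoids the zero solution that defeats the plain nuclear norm relaxation on a linear subspace; the greedy outer loop would then deflate the found matrix from the subspace and repeat.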
Authors: Dana Lahat and Christian Jutten 
Title: Multi-Set Data Analysis and Simultaneous Matrix Block Diagonalization 
Joint independent subspace analysis (JISA) is a new statistical signal processing approach to multi-set data analysis. From an algebraic perspective, JISA may be viewed as a set of coupled block diagonalization problems of covariance and cross-covariance matrices. We point out links to recent results on ISA of nonstationary sources, which can be reformulated as a symmetric joint block diagonalization (JBD) of covariance matrices.
Authors: Jérémy E. Cohen, Rodrigo Cabral Farias, and Pierre Comon 
Title: Fast Decomposition of Large Nonnegative Tensors 
In signal processing, tensor decompositions have gained in popularity over the last decade. In the meantime, the volume of data to be processed has drastically increased. This calls for novel methods to handle Big Data tensors. Since most of these huge data sets come from physical measurements, which are intrinsically real and nonnegative, being able to compress nonnegative tensors has become mandatory. Following recent works on HOSVD compression for Big Data, we detail solutions to decompose a nonnegative tensor into decomposable terms in a compressed domain.
Authors: Jörg Liesen and Robert Luce 
Title: Fast Recovery and Approximation of Hidden Cauchy Structure 
We study two approximation problems in the context of Cauchy matrices. For the first problem, we derive a linear-time algorithm for recovering the Cauchy parameters from the entries of a given Cauchy matrix. We also show that these parameters are essentially uniquely defined. The second problem under consideration is the fast computation of approximations with Cauchy matrices. We show that such approximations can be computed in time linear in the size of the input. We prove approximation bounds, and put special emphasis on the approximation of noisy Cauchy matrices.
Authors: Jinane Harmouche, Dominique Fourer, Patrick Flandrin, François Auger, Pierre Borgnat 
Title: One or Two Components? The Singular Spectrum Analysis Answers 
Conventionally, singular spectrum analysis (SSA) is seen as a technique for decomposing a signal into periodic components, trend and noise. We are interested in the case where the components may be nonstationary, intending to study and characterize their separability through the spectral interpretation of the singular spectrum. A theoretical curve, depending on the tuning parameter of the SSA algorithm (namely the embedding dimension L) and on the parameters of the considered signal model, defines the region of good separation between a chirp and a sinusoid. The extraction of these components is performed first using prior knowledge of the components, and then using an unsupervised clustering algorithm proposed for this purpose. The unsupervised approach achieves the desired separation performance. The presented methodology allows the evaluation of SSA's ability to analyze nonstationary signals. It could be applied to nonstationarity types other than the chirps studied here. 

Authors: Konstantin Usevich and Pierre Comon 
Title: Quasi-Hankel low-rank matrix completion: a convex relaxation 
The completion of matrices with missing values under a rank constraint is a nonconvex optimization problem. A popular convex relaxation is based on minimization of the nuclear norm (the sum of singular values) of the matrix. For this relaxation, an important question is when the two optimization problems lead to the same solution. This question was addressed in the literature mostly in the case of random positions of missing elements and random known elements. In this contribution, we analyze the case of structured matrices with a fixed pattern of missing values, in particular, the case of Hankel and quasi-Hankel matrix completion, which appears as a subproblem in the computation of the symmetric tensor canonical polyadic decomposition. We extend existing results on completion of rank-one real Hankel matrices to completion of rank-r complex Hankel and quasi-Hankel matrices.
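A toy check of the nuclear-norm relaxation for Hankel completion (a brute-force scan over one missing entry, not the paper's analysis; the sequence and sizes are made up): the fill value minimizing the nuclear norm recovers the rank-one (geometric) completion.

```python
import numpy as np

# Rank-one Hankel matrices correspond to geometric sequences f(k) = c z**k.
# Treat one entry of such a sequence as missing and fill it by scanning
# for the value that minimizes the nuclear norm of the Hankel matrix.
f = 2.0 * 0.8 ** np.arange(7)
missing = 3                              # pretend f[3] = 1.024 is unknown

def nuclear_norm(x):
    g = f.copy()
    g[missing] = x
    H = np.array([g[i:i + 4] for i in range(4)])      # 4x4 Hankel matrix
    return np.linalg.svd(H, compute_uv=False).sum()   # sum of singular values

grid = np.linspace(0.0, 2.0, 2001)
best = grid[np.argmin([nuclear_norm(x) for x in grid])]
print(round(best, 3))   # 1.024: the rank-one completion minimizes the norm
```

For general patterns of missing values and for quasi-Hankel structure, whether the convex relaxation agrees with the rank-constrained completion is exactly the question the abstract addresses.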
Authors: Miguel Angel Veganzones, M. Simões, G. Licciardi, N. Yokoya, José M. Bioucas-Dias, J. Chanussot 
Title: Hyperspectral super-resolution of locally low-rank images from complementary multisource data 
Remote sensing hyperspectral images (HSI) are quite often low rank, in the sense that the data belong to a low-dimensional subspace/manifold. This has recently been exploited for the fusion of low spatial resolution HSI with high spatial resolution multispectral images (MSI) in order to obtain super-resolution HSI, that is, hyperspectral images with high spectral and spatial resolutions. Most approaches adopt an unmixing or a matrix factorization perspective. The derived methods have led to state-of-the-art results when the spectral information lies in a low-dimensional subspace/manifold. However, if the subspace/manifold dimensionality spanned by the complete data set is large, i.e., larger than the number of multispectral bands, the performance of these methods decreases, mainly because the underlying sparse regression problem is severely ill-posed. We propose a local approach to cope with this difficulty. Fundamentally, we exploit the fact that real-world HSI are locally low rank, that is, pixels acquired from a given spatial neighbourhood span a subspace/manifold of very low dimension, at most the number of multispectral bands. Thus, we propose to partition the image into patches and solve the data fusion problem independently for each patch. This way, in each patch the subspace/manifold dimensionality is low enough that the problem is no longer ill-posed. 

Authors: Nicolas Courty, Xing Gong, Jimmy Vandel, Thomas Burger 
Title: SAGA: Sparse And Geometry-Aware nonnegative matrix factorization through nonlinear local embedding 
This paper presents a new nonnegative matrix factorization technique which (1) allows the decomposition of the original data on multiple latent factors accounting for the geometrical structure of the manifold embedding the data; (2) provides an optimal representation with a controllable level of sparsity; (3) has an overall linear complexity, allowing large and high-dimensional datasets to be handled in tractable time. It operates by coding the data with respect to local neighbors with nonlinear weights. This locality is obtained as a consequence of the simultaneous sparsity and convexity constraints. Our method is demonstrated over several experiments, including a feature extraction and classification task, where it achieves better performance than state-of-the-art factorization methods, with a shorter computational time.
Authors: Laurent Condat 
Title: A heuristic iterative algorithm for structured low-rank approximation; application to super-resolution of spike trains 
A new heuristic iterative algorithm is proposed for structured low-rank approximation (SLRA), based on a recently proposed splitting method for convex nonsmooth optimization. It is applied to super-resolution reconstruction of spike trains from noisy low-frequency measurements; this problem amounts to spectral estimation, which in turn can be recast as Toeplitz SLRA. Although, in the absence of convexity, the algorithm comes with no convergence proof, it converges in practice to the global solution of the problem when the noise level is not too high. Thus, it improves upon Cadzow denoising, another heuristic iterative method, for the same ease of implementation and speed. This work has been published in L. Condat and A. Hirabayashi, "Cadzow denoising upgraded: A new projection method for the recovery of Dirac pulses from noisy linear measurements", Sampling Theory in Signal and Image Processing, vol. 14, no. 1, 2015. 