Sparse Regularization

Regularization with sparsity constraints is an impressively effective method for solving linear operator equations on Hilbert spaces when the solution is assumed to have a sparse representation with respect to a given Hilbert space basis. Classical (linear) regularization methods apply a quadratic regularization term, which leads to a large number of insignificant coefficients in the expansion of the solution. To put more emphasis on the significant terms, one penalizes small coefficients more strongly by applying a regularization term with subquadratic growth near zero. If the growth is at least linear, one can show that the minimizer of the Tikhonov functional, that is, the regularized solution, is indeed sparse in the sense that at most finitely many of its coefficients are non-zero.
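As a concrete illustration, the Tikhonov functional with a linearly growing (that is, $\ell^1$) regularization term can be minimized by iterative soft thresholding (ISTA). The following is a minimal sketch on a hypothetical toy problem; the matrix, dimensions, and parameter values are illustrative assumptions, not part of the original text.

```python
import numpy as np

def soft_threshold(x, tau):
    # Componentwise soft thresholding: the proximal map of tau * ||.||_1.
    # Coefficients smaller than tau in magnitude are set exactly to zero.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, y, alpha, n_iter=500):
    # Iterative soft thresholding for min_x ||A x - y||^2 + 2*alpha*||x||_1.
    t = 1.0 / np.linalg.norm(A, 2) ** 2   # step size bounded by 1 / ||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - t * A.T @ (A @ x - y), t * alpha)
    return x

# Hypothetical toy problem: recover a sparse vector from noisy measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100)) / np.sqrt(60)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(60)

x_rec = ista(A, y, alpha=0.05)
# The regularized solution is sparse: all but a few coefficients vanish.
```

The soft-thresholding step is what enforces sparsity: in each iteration, coefficients below the threshold are mapped exactly to zero rather than merely shrunk, in contrast to the quadratic penalty.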

From a theoretical point of view, one of the most interesting properties of sparse regularization is the absence of the usual saturation of convergence rates. While the convergence rates of quadratic regularization are known never to exceed the order $\delta^{2/3}$ (apart from trivial cases), regularization terms with linear or sublinear growth near zero can yield rates of order $\delta$. In other words, the error of the approximate solution is of the same order as the data error.
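The linear rate can be illustrated in the simplest setting, where the operator is the identity: there the Tikhonov minimizer with $\ell^1$ penalty is explicit soft thresholding of the data, and with the parameter choice $\alpha = \delta$ every recovered coefficient deviates from the true one by at most $2\delta$. The following sketch uses a deterministic perturbation of size $\delta$ as a stand-in for data noise; the specific vector and perturbation are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    # Minimizer of ||x - y||^2 + 2*tau*||x||_1 for the identity operator.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

x_true = np.zeros(50)
x_true[[3, 17, 29]] = [1.0, -0.7, 0.4]   # sparse true solution (assumed)

for delta in [1e-1, 1e-2, 1e-3]:
    noise = delta * np.cos(np.arange(50))  # deterministic perturbation, |noise| <= delta
    y = x_true + noise
    x_rec = soft_threshold(y, delta)       # parameter choice alpha = delta
    err = np.max(np.abs(x_rec - x_true))
    # Off the support, |y_i| <= delta is thresholded exactly to zero;
    # on the support, the shift plus the noise contribute at most 2*delta.
    assert err <= 2 * delta
```

Halving the data error $\delta$ thus halves the reconstruction error, whereas quadratic regularization would only reduce it by a factor of about $2^{2/3}$.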

Figure 1. Application of sparse regularization in ground penetrating radar. Left: original data. Middle: reconstruction with Kirchhoff migration. Right: reconstruction with sparse regularization.


Markus Grasmair, Markus Haltmeier, Otmar Scherzer



Computational Science Center
University of Vienna

Oskar-Morgenstern-Platz 1
1090 Wien
T: +43-1-4277-23701