# Sparse Approximation

Approximation theory is a well-established and extensively studied area. Its main objective is to approximate complicated functions by simpler ones while controlling the resulting error. Of particular importance is the so-called best N-term approximation error, which measures the non-linear approximation error incurred when approximating by a linear combination of N freely chosen simpler functions from a given system. A criterion for the suitability of such a system with respect to a class of functions to be approximated is the decay rate of the best N-term approximation error as N tends to infinity, i.e., as one is allowed to use more and more of the simpler functions for the approximation.
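To make the notion concrete, here is a minimal numerical sketch (not taken from this page; the orthonormal Haar wavelet basis and the piecewise-smooth test function are standard illustrative choices). In an orthonormal basis, the best N-term approximation simply keeps the N largest-magnitude coefficients, and by Parseval the L2 error equals the norm of the discarded coefficients:

```python
import numpy as np

def haar_transform(x):
    """Orthonormal discrete Haar wavelet transform (len(x) must be a power of 2)."""
    c = np.asarray(x, dtype=float)
    bands = []
    while len(c) > 1:
        avg = (c[0::2] + c[1::2]) / np.sqrt(2)   # coarse approximation
        det = (c[0::2] - c[1::2]) / np.sqrt(2)   # detail coefficients
        bands.append(det)
        c = avg
    bands.append(c)
    return np.concatenate(bands[::-1])

def best_n_term_error(coeffs, n):
    """Best N-term L2 error in an orthonormal basis: keep the n largest-
    magnitude coefficients; by Parseval the error is the norm of the rest."""
    order = np.argsort(np.abs(coeffs))[::-1]
    return np.linalg.norm(coeffs[order[n:]])

# Piecewise-smooth test function: a sine with a jump at t = 0.5.
t = np.linspace(0, 1, 1024, endpoint=False)
f = np.sin(2 * np.pi * t) + (t > 0.5)
coeffs = haar_transform(f)
for n in (8, 32, 128):
    print(n, best_n_term_error(coeffs, n))
```

Running this shows the error shrinking as N grows; the decay rate of that sequence is exactly the quantity the paragraph above uses to compare representation systems.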

Given a class of functions, one main goal is to design a representation system whose best N-term approximation error decays at the best possible rate among all systems. This is typically termed "optimal sparse approximation". For instance, in the case of wavelets and also shearlets, this is already very well understood, and those systems have been proven to behave optimally for specific function classes; in the case of shearlets, for so-called cartoon-like functions.

The importance of the existence of sparse approximations for a given function class stems in particular from the methodology of compressed sensing, which also promoted the novel paradigm of sparsity. In fact, to date it is commonly assumed that for any class of data an appropriate system yielding sparse approximations exists, either prespecified from the area of applied harmonic analysis or learnt via dictionary learning.

## Some of our Research Topics

- Analyzing sparse approximation properties of systems such as shearlets and extensions thereof.
- Solving structured dictionary learning problems with sparse approximation as objective.
- Designing systems for given data with optimal sparse approximation properties.
- Application of sparse approximation properties to diverse real-world problems.
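For the dictionary-based viewpoint mentioned above, a common building block is a sparse coding routine that approximates a signal by few atoms of a given dictionary. The following is an illustrative sketch of one standard such algorithm, orthogonal matching pursuit, on synthetic data (this is a generic textbook method, not a description of the group's own algorithms; the dictionary and signal here are invented for the demo):

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Greedy orthogonal matching pursuit: at each step select the dictionary
    atom most correlated with the current residual, then re-fit all selected
    coefficients by least squares on the chosen support."""
    residual = np.asarray(y, dtype=float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Demo: recover a synthetic 3-sparse signal from a random unit-norm dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((100, 200))
D /= np.linalg.norm(D, axis=0)            # normalize atoms
x_true = np.zeros(200)
x_true[[3, 50, 120]] = [1.0, -2.0, 1.5]   # sparse ground truth
y = D @ x_true
x_hat = omp(D, y, n_nonzero=3)
print("residual norm:", np.linalg.norm(y - D @ x_hat))
```

Because the coefficients are re-fit by least squares on a growing support, the residual norm is non-increasing in the number of selected atoms, which mirrors the decay of the best N-term error discussed above (here for a redundant dictionary rather than a basis).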