TU Berlin


Colloquium of the working group Modellierung • Numerik • Differentialgleichungen

Winter semester 2019/20
Responsible lecturers:
All professors of the
working group Modellierung • Numerik • Differentialgleichungen
Coordination:
Dr. Janusz Ginster
Dates:
Tuesdays 16:00-18:00 in room MA 313, and by arrangement
Content:
Talks by guests and staff members on current research topics
Contact:

Schedule

Schedule (abstracts below)

Date        Time   Room    Speaker                                             Invited by
15.10.2019  16:15  MA 313
22.10.2019  16:15  MA 313  Stefan Heinrich (Uni Kaiserslautern)                G. Bärwolff
29.10.2019  16:15  MA 313
05.11.2019  16:15  MA 313  Federico Poloni (University of Pisa)                V. Mehrmann
12.11.2019  16:15  MA 313
19.11.2019  16:15  MA 313
26.11.2019  16:15  MA 313  Thomas Strohmer (UC Davis)                          G. Kutyniok
03.12.2019  16:15  MA 313
10.12.2019  16:15  MA 313  Matthias Chung (Virginia Tech)                      V. Mehrmann
17.12.2019  16:15  MA 313  Roland Duduchava (University of Georgia and Ivane Javakhishvili Tbilisi State University)  R. Schneider
07.01.2020  16:15  MA 313
14.01.2020  16:15  MA 313
21.01.2020  16:15  MA 313
28.01.2020  16:15  MA 313  Evelyn Buckwar (Johannes Kepler University Linz)    G. Bärwolff
04.02.2020  16:15  MA 313
11.02.2020  16:15  MA 313  Julianne Chung (Virginia Tech)                      G. Kutyniok

Abstracts of the talks

Stefan Heinrich (Universität Kaiserslautern): Stochastic integration in various function classes – algorithms and complexity

Federico Poloni (University of Pisa): Inverses of quasidefinite matrices in block-factored form, with an application to control theory (joint work with P. Benner):
We describe an algorithm to compute the explicit inverse of a dense quasi-definite matrix, i.e., a symmetric matrix of the form [-B*B^T, A;A^T, C^T*C], with the (1,1) block negative semidefinite and the (2,2) block positive semidefinite. The algorithm is a variant of Gauss-Jordan elimination that works on the low-rank factors B and C directly without ever forming those blocks. The individual elimination steps amount to a transformation called principal pivot transform; it was shown in [Poloni, Strabic 2016] how to perform it by working only on A, B, C, and we rely on that procedure here.
We discuss the stability of the resulting method, and show how the algorithm (and in particular the produced low-rank factors) can be of use in control theory, in the context of the matrix sign iteration, a method used to solve algebraic Riccati equations.
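The matrix sign iteration mentioned above is, in its classical dense form, the Newton iteration X_{k+1} = (X_k + X_k^{-1})/2. The following NumPy sketch shows only this standard dense iteration, not the block-factored variant the talk describes; the matrix and tolerances are illustrative choices.

```python
import numpy as np

def matrix_sign(A, tol=1e-12, max_iter=100):
    """Newton iteration X_{k+1} = (X_k + X_k^{-1}) / 2.

    Converges to sign(A) for matrices with no purely imaginary
    eigenvalues; the limit S satisfies S @ S = I and commutes with A.
    """
    X = np.array(A, dtype=float)
    for _ in range(max_iter):
        X_new = 0.5 * (X + np.linalg.inv(X))
        if np.linalg.norm(X_new - X, 1) <= tol * np.linalg.norm(X_new, 1):
            return X_new
        X = X_new
    return X

# Example: eigenvalues 2 and -0.5, so sign(A) has eigenvalues +1 and -1
A = np.array([[2.0, 1.0],
              [0.0, -0.5]])
S = matrix_sign(A)   # S @ S equals the identity
```

In Riccati solvers the iteration is applied to a Hamiltonian-type matrix; the point of the talk is to keep the iterates in low-rank-factored quasi-definite form rather than forming them densely as above.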

Thomas Strohmer (UC Davis): 

Matthias Chung (Virginia Tech):  Sampled Limited Memory Optimization Methods for Least Squares Problems with Massive Data
Emerging fields such as data analytics, machine learning, and uncertainty quantification rely heavily on efficient computational methods for solving inverse problems. With growing model complexity and ever-increasing data volumes, state-of-the-art inference methods have reached the limits of their applicability, and novel methods are urgently needed. New inference methods must therefore focus on scalability to large dimensions and on handling the model complexities that arise.
In this talk, we discuss massive least squares problems where the size of the forward model matrix exceeds the storage capacity of computer memory, or where the data is simply not available all at once. We introduce sampled limited memory optimization methods for least squares problems, in which an approximation of the global curvature of the underlying least squares problem is used to speed up the initial convergence and to improve the accuracy of the iterates. Our proposed methods can be applied to ill-posed inverse problems, for which we establish sampled regularization parameter selection methods. Numerical experiments on very large superresolution and tomographic reconstruction examples demonstrate the efficiency of these sampled limited memory row-action methods. This is joint work with Julianne Chung, Tanner Slagel, and Luis Tenorio.
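The simplest member of the row-action family the abstract refers to is a sampled block Kaczmarz iteration: each step reads only a small sample of rows of the forward matrix. The sketch below is this generic textbook method, not the authors' sampled limited-memory algorithm (which additionally builds a curvature approximation); all sizes and parameters are illustrative.

```python
import numpy as np

def sampled_block_kaczmarz(A, b, block_size=20, n_iter=200, seed=0):
    """Row-action solver for consistent least squares problems.

    Each iteration samples `block_size` rows and projects the current
    iterate onto the solution set of that small subsystem, so only a
    few rows of A need to be held in memory at any time.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iter):
        idx = rng.choice(m, size=block_size, replace=False)
        A_s = A[idx]
        r_s = b[idx] - A_s @ x
        # min-norm solution of A_s d = r_s: the orthogonal projection step
        x += np.linalg.lstsq(A_s, r_s, rcond=None)[0]
    return x

# Consistent overdetermined system: the iterates converge to the solution
rng = np.random.default_rng(1)
A = rng.standard_normal((1000, 30))
x_true = rng.standard_normal(30)
b = A @ x_true
x = sampled_block_kaczmarz(A, b)
```

For inconsistent (noisy) systems this plain iteration exhibits the semi-convergence behavior mentioned in the abstracts below, which is one motivation for the sampled regularization parameter selection the talk discusses.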

Roland Duduchava (University of Georgia and Ivane Javakhishvili Tbilisi State University): 

Evelyn Buckwar (Johannes Kepler University Linz): Splitting methods in Approximate Bayesian Computation for partially observed diffusion processes:
Approximate Bayesian Computation (ABC) has become one of the major tools of likelihood-free statistical inference in complex mathematical models. Simultaneously, stochastic differential equations (SDEs) have developed as an established tool for modelling time dependent, real world phenomena with underlying random effects. When applying ABC to stochastic models, two major difficulties arise. First, the derivation of effective summary statistics and proper distances is particularly challenging, since simulations from the stochastic process under the same parameter configuration result in different trajectories. Second, exact simulation schemes to generate trajectories from the stochastic model are rarely available, requiring the derivation of suitable numerical methods for the synthetic data generation. In this talk we consider SDEs having an invariant density and apply measure-preserving splitting schemes for the synthetic data generation. We illustrate the results of the parameter estimation with the corresponding ABC algorithm with simulated data.
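The basic ABC loop underlying this approach can be illustrated with a rejection sampler. The sketch below is deliberately simplified: it uses an Ornstein-Uhlenbeck process, an Euler-Maruyama scheme rather than the measure-preserving splitting scheme the talk advocates, and the empirical variance as a crude summary statistic; the prior range and tolerance are illustrative choices.

```python
import numpy as np

def simulate_ou(theta, x0=0.0, T=5.0, n=200, rng=None):
    """Euler-Maruyama path of the OU process dX = -theta * X dt + dW."""
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = x[i] - theta * x[i] * dt + np.sqrt(dt) * rng.standard_normal()
    return x

def abc_rejection(x_obs, n_samples=2000, eps=0.05, seed=0):
    """Keep prior draws whose synthetic summary lands within eps of the data's."""
    rng = np.random.default_rng(seed)
    s_obs = np.var(x_obs)                 # summary statistic: empirical variance
    accepted = []
    for _ in range(n_samples):
        theta = rng.uniform(0.1, 3.0)     # uniform prior on the drift parameter
        if abs(np.var(simulate_ou(theta, rng=rng)) - s_obs) < eps:
            accepted.append(theta)
    return np.array(accepted)

x_obs = simulate_ou(1.0, rng=np.random.default_rng(42))   # "data", true theta = 1
post = abc_rejection(x_obs)                               # approximate posterior draws
```

Replacing `simulate_ou` by a scheme that preserves the invariant density of the SDE is exactly the modification the talk proposes, since the acceptance step compares distributional summaries of the trajectories.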

Julianne Chung (Virginia Tech): Advancements in Hybrid Iterative Methods for Inverse Problems:
In many physical systems, measurements can only be obtained on the exterior of an object (e.g., the human body or the earth's crust), and the goal is to estimate the internal structures. In other systems, signals measured from machines (e.g., cameras) are distorted, and the aim is to recover the original input signal. These are natural examples of inverse problems that arise in fields such as medical imaging, astronomy, geophysics, and molecular biology.
Hybrid iterative methods are increasingly being used to solve large, ill-posed inverse problems, due to their desirable properties of (1) avoiding semi-convergence, whereby later reconstructions are no longer dominated by noise, and (2) enabling adaptive and automatic regularization parameter selection. In this talk, we describe some recent advancements in hybrid iterative methods for computing solutions to large-scale inverse problems. First, we consider a hybrid approach based on the generalized Golub-Kahan bidiagonalization for computing Tikhonov regularized solutions to problems where explicit computation of the square root and inverse of the covariance kernel for the prior covariance matrix is not feasible. This is useful for large-scale problems where covariance kernels are defined on irregular grids or are only available via matrix-vector multiplication. Second, we describe flexible hybrid methods for solving l_p regularized inverse problems, where we approximate the p-norm penalization term as a sequence of 2-norm penalization terms using adaptive regularization matrices, and we exploit flexible preconditioning techniques to efficiently incorporate the weight updates. We introduce a flexible Golub-Kahan approach within a Krylov-Tikhonov hybrid framework, such that our approaches extend to general (non-square) l_p regularized problems. Numerical examples from dynamic photoacoustic tomography, space-time deblurring, and passive seismic tomography demonstrate the range of applicability and effectiveness of these approaches.
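The core hybrid idea, projecting with Golub-Kahan bidiagonalization and regularizing the small projected problem, can be sketched in a few lines. This is a minimal standard-Tikhonov version with a fixed regularization parameter, not the generalized or flexible variants the talk describes, and it omits reorthogonalization and automatic parameter selection; the test problem is an illustrative choice.

```python
import numpy as np

def hybrid_gk_tikhonov(A, b, k=20, lam=1e-2):
    """k steps of Golub-Kahan bidiagonalization (A V_k = U_{k+1} B_k),
    then Tikhonov regularization on the small projected problem
        min_y ||B_k y - beta e_1||^2 + lam^2 ||y||^2,   x = V_k y.
    No reorthogonalization -- a sketch, adequate for small k."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    B = np.zeros((k + 1, k))
    beta = np.linalg.norm(b)
    U[:, 0] = b / beta
    for j in range(k):
        v = A.T @ U[:, j] - (B[j, j - 1] * V[:, j - 1] if j > 0 else 0.0)
        alpha = np.linalg.norm(v)
        V[:, j] = v / alpha
        B[j, j] = alpha
        u = A @ V[:, j] - alpha * U[:, j]
        B[j + 1, j] = np.linalg.norm(u)
        U[:, j + 1] = u / B[j + 1, j]
    rhs = np.zeros(k + 1)
    rhs[0] = beta
    y = np.linalg.solve(B.T @ B + lam**2 * np.eye(k), B.T @ rhs)
    return V @ y

# Small well-conditioned test problem with mild noise
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 40))
x_true = rng.standard_normal(40)
b = A @ x_true + 0.01 * rng.standard_normal(60)
x = hybrid_gk_tikhonov(A, b, k=20, lam=1e-2)
```

Because the projected problem is tiny, `lam` can be re-chosen cheaply at every iteration, which is what makes the adaptive, automatic parameter selection mentioned in the abstract practical.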
