Note: Simon Bartels has left the institute (alumni).
I have moved; you can find my most recent information at: https://simonbartels.github.io
Together with my supervisor Philipp Hennig, I am working on probabilistic numerical methods for solving large systems of linear equations. We believe that numerical algorithms are actually Bayesian inference algorithms: they gather data and propose an estimate based on these observations. Different algorithms usually produce different estimates even when given the same data (e.g. FOM and GMRES), because they rest on different underlying assumptions. One of our goals is to make these assumptions explicit using the language of probability, in the form of a prior belief over the solution.
In addition to the residual or a worst-case error bound, such a view provides an uncertainty estimate for the quality of the approximation. This uncertainty allows us to reason about the average error, and it can be propagated to the application that called for the solution. Furthermore, a Bayesian view makes it possible (at least in theory) to incorporate additional prior knowledge, leading to algorithms tailored to specific applications.
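As a toy illustration of this view, the sketch below places a Gaussian prior belief over the solution x of A x = b and conditions it on a few linear projections of b, in the spirit of probabilistic linear solvers. The choice of search directions (plain unit vectors), the prior, and all names here are illustrative assumptions, not the specific method developed in this work.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 8  # problem dimension; m = n observations give exact recovery

# A random symmetric positive definite system A x* = b.
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)
x_true = rng.standard_normal(n)
b = A @ x_true

# Prior belief over the solution: x ~ N(mu0, Sigma0).
mu0 = np.zeros(n)
Sigma0 = np.eye(n)

# Observations: projections y = S^T A x = S^T b along search directions S.
S = np.eye(n)[:, :m]
M = S.T @ A                     # linear observation operator
G = M @ Sigma0 @ M.T            # Gram matrix of the observations
gain = Sigma0 @ M.T @ np.linalg.inv(G)

# Standard linear-Gaussian conditioning yields a posterior over x:
# the mean is the estimate, the covariance quantifies remaining uncertainty.
mu_post = mu0 + gain @ (S.T @ b - M @ mu0)
Sigma_post = Sigma0 - gain @ M @ Sigma0

print(np.allclose(mu_post, x_true))  # True once all n directions are observed
```

With fewer observations (m < n), the posterior covariance stays nonzero in the unexplored directions, which is exactly the uncertainty estimate described above.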
A particular application we are interested in is linear systems whose coefficient matrix stems from a distribution. For example, in Gaussian process regression it is common to assume that the underlying dataset is independently and identically distributed. This implies that the rows of the kernel matrix are correlated, i.e. a structure that we aim to exploit to solve such linear systems faster.
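To make the Gaussian-process setting concrete, here is a minimal sketch of the linear system that arises there, assuming a squared-exponential kernel on synthetic i.i.d. inputs; the kernel choice, jitter value, and data are illustrative, not specific to this project.

```python
import numpy as np

rng = np.random.default_rng(1)

# i.i.d. inputs and noisy targets, as commonly assumed in GP regression.
X = rng.standard_normal((50, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)

def rbf_kernel(X, lengthscale=1.0):
    """Squared-exponential kernel K[i, j] = exp(-||x_i - x_j||^2 / (2 l^2))."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2 * lengthscale**2))

# The coefficient matrix of the linear system; a small jitter on the
# diagonal keeps it numerically positive definite.
K = rbf_kernel(X) + 1e-6 * np.eye(len(X))

# Computing the GP posterior mean reduces to solving K @ alpha = y.
alpha = np.linalg.solve(K, y)
print(np.allclose(K @ alpha, y))  # True
```

Because the rows of K are built from random draws of the same data distribution, they share statistical structure, which is the kind of structure a distribution-aware solver could exploit.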