Bath Numerical Analysis Seminar

2019/20 - Semester 2

Fridays at 12.15 in 4W1.7 (also known as the Wolfson Lecture Theatre). Campus maps can be found here.

Everyone is welcome at these talks and don't forget to join us for lunch after the seminar.

Date Speaker Title
7 Feb 2020 Tony Lelièvre (Paris, France)
Foundation of first order kinetics: the quasi-stationary distribution approach

We are interested in the connection between a metastable continuous state space Markov process (satisfying e.g. the Langevin or overdamped Langevin equation) and a jump Markov process in a discrete state space. More precisely, we use the notion of quasi-stationary distribution within a metastable state for the continuous state space Markov process to parametrize the exit event from the state. This approach is useful to analyze and justify methods which use the jump Markov process underlying a metastable dynamics to efficiently sample the state-to-state dynamics (accelerated dynamics techniques à la Arthur Voter). Moreover, this approach makes it possible to quantify the error on the exit event when the parametrization of the jump Markov model is based on the Eyring-Kramers formula. This therefore provides a mathematical framework to justify the use of transition state theory and the Eyring-Kramers formula to build kinetic Monte Carlo or Markov state models. References:

  • G. Di Gesu, T. Lelièvre, D. Le Peutrec and B. Nectoux, Jump Markov models and transition state theory: the Quasi-Stationary Distribution approach, Faraday Discussions, 195, 469-495 (2016).
  • G. Di Gesu, T. Lelièvre, D. Le Peutrec and B. Nectoux, Sharp asymptotics of the first exit point density, Annals of PDE, 5(1) (2019).
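
For context, the Eyring-Kramers formula mentioned above takes the following standard form for the overdamped Langevin dynamics $dX_t = -\nabla V(X_t)\,dt + \sqrt{2\varepsilon}\,dW_t$ (this is the textbook statement, included here only for orientation, not a result specific to the papers above): the rate of exit from a local minimum $x_0$ of $V$ through a saddle point $z$ is

$$ k \approx \frac{|\lambda_1(z)|}{2\pi}\,\sqrt{\frac{\det \nabla^2 V(x_0)}{|\det \nabla^2 V(z)|}}\; e^{-(V(z)-V(x_0))/\varepsilon}, $$

where $\lambda_1(z)$ is the unique negative eigenvalue of the Hessian $\nabla^2 V(z)$.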

14 Feb 2020 Jakob Jorgensen (Manchester)
Hyperspectral tomography and reconstruction with the CCPi Core Imaging Library

X-ray computed tomography (CT) is one of the most successful and established imaging techniques, yet new opportunities and challenges continue to appear. One particularly exciting avenue is the emergence of novel photon-counting detectors. Traditionally CT imaging has been in black and white, but new detectors with high energy resolution make “colour imaging” possible, so-called spectral tomography. Just like colour images allow features to be distinguished more easily than in grey images, hyperspectral tomography provides information about characteristic absorption peaks, which may allow one to identify individual materials that conventional detectors cannot distinguish. However, due to low counts, the tomography data in each energy channel can be extremely noisy, which prevents simple channel-wise reconstruction by conventional methods such as filtered back-projection (FBP). In this talk we present X-ray and neutron time-of-flight hyperspectral tomography and discuss regularization methods to help improve the reconstruction quality. We also give an overview of the Python-based Core Imaging Library (CIL) for tomographic image processing. CIL is designed for a wide range of tomographic data, including parallel- and cone-beam, 2D and 3D cases as well as 4D dynamic and hyperspectral tomography setups. The modular design of CIL allows a variety of customised algorithms to be constructed by the user, in addition to pre-defined commonly used algorithms. In particular, a modular optimisation framework allows simple and fast prototyping of optimisation-based reconstruction algorithms through “mix&match” of different existing or user-defined data fidelities and regularization penalties.
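
To illustrate the "mix & match" idea in the abstract, the following sketch assembles a reconstruction algorithm from an interchangeable data fidelity and regulariser. It is a generic NumPy illustration only, not CIL's actual API; all class and function names below are hypothetical.

```python
# Minimal sketch of a modular "fidelity + regulariser" optimisation framework.
# NOT CIL's real API; names are made up for illustration.
import numpy as np

class LeastSquares:
    """Data fidelity 0.5*||A x - b||^2 with its gradient."""
    def __init__(self, A, b):
        self.A, self.b = A, b
    def grad(self, x):
        return self.A.T @ (self.A @ x - self.b)
    def lipschitz(self):
        return np.linalg.norm(self.A, 2) ** 2  # largest singular value squared

class L1Norm:
    """Regulariser alpha*||x||_1 with its proximal operator (soft thresholding)."""
    def __init__(self, alpha):
        self.alpha = alpha
    def prox(self, x, step):
        return np.sign(x) * np.maximum(np.abs(x) - step * self.alpha, 0.0)

def ista(fidelity, regulariser, x0, n_iter=200):
    """Proximal gradient (ISTA): any fidelity exposing a gradient can be
    combined with any regulariser exposing a prox."""
    x, step = x0.copy(), 1.0 / fidelity.lipschitz()
    for _ in range(n_iter):
        x = regulariser.prox(x - step * fidelity.grad(x), step)
    return x

# Toy usage: recover a sparse signal from a noisy underdetermined system.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200); x_true[rng.choice(200, 5, replace=False)] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_rec = ista(LeastSquares(A, b), L1Norm(alpha=0.1), np.zeros(200))
```

The point of the design is that the solver only sees a gradient and a prox, so fidelities and penalties can be swapped without touching the algorithm.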

21 Feb 2020 Malena Sabate Landman (Bath)
Iteratively Reweighted Flexible Krylov methods for Sparse Reconstruction

Krylov subspace methods are powerful iterative solvers for large-scale linear inverse problems, such as those arising in image deblurring and computed tomography. In this talk I will present two new algorithms to efficiently solve $\ell_2$-$\ell_1$ regularized problems that enforce sparsity in the solution. The proposed approach is based on building a sequence of quadratic problems approximating $\ell_2$-$\ell_1$ regularization and partially solving them using flexible Krylov-Tikhonov methods. These algorithms are built upon a solid theoretical justification of convergence, and have the advantage of building a single flexible (Krylov) approximation subspace that encodes regularization through variable “preconditioning”. The performance of the algorithms will be shown through a variety of numerical examples. This is joint work with Silvia Gazzola (University of Bath) and James Nagy (Emory University).
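
As a rough illustration of the reweighting idea behind such quadratic approximations (and not of the flexible Krylov machinery itself), each $\ell_1$ term can be replaced by a weighted $\ell_2$ term and the resulting quadratic problems solved in sequence. The sketch below, with illustrative names and tolerances, solves each subproblem directly rather than in a Krylov subspace.

```python
# Minimal sketch of iteratively reweighted least squares (IRLS) for
#   min_x 0.5*||A x - b||^2 + lam*||x||_1,
# using the quadratic majorisation |x_i| ~ x_i^2 / (2*max(|x_i^(k)|, eps)).
import numpy as np

def irls_l1(A, b, lam, n_sweeps=30, eps=1e-6):
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_sweeps):
        w = 1.0 / np.maximum(np.abs(x), eps)          # weights from current iterate
        # Quadratic subproblem: (A^T A + lam * diag(w)) x = A^T b
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)
    return x
```

In the methods described in the talk, each such quadratic subproblem is only solved approximately, in a single flexible Krylov subspace, rather than by a direct solve as above.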

28 Feb 2020 Ben Ashby (Reading)
An adaptive finite element method for a variational inequality arising from variably saturated subsurface flow

Flow in a variably saturated porous medium is governed by a nonlinear generalisation of Darcy’s law. At boundaries between a porous medium and the atmosphere, physical constraints on the pressure of the fluid lead to a variational inequality similar in character to contact problems from elasticity. The nonlinearities in the PDE operator, the a priori unknown boundary conditions, and the multiscale variation in the permeability of the soil as its water content varies lead to a challenging problem for which spatially adaptive methods are well suited. A finite element method is introduced, and the nonlinear solution procedure for the finite element system is combined with an iterative method to solve the variational inequality. An error estimate of dual-weighted residual type is introduced and used as a criterion for goal-based mesh refinement. Numerical results are presented that demonstrate the effectiveness of the error estimate in rapidly reducing the error in the goal quantity, as well as providing a sharp upper bound in difficult and realistic problems.
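
As a point of reference, a typical model of this kind (not necessarily the exact formulation used in the talk) is Richards' equation for the pressure head $\psi$, together with a seepage-face condition on the part $\Gamma_s$ of the boundary in contact with the atmosphere:

$$ \partial_t \theta(\psi) - \nabla\cdot\big(K(\theta(\psi))\,\nabla(\psi + z)\big) = 0 \quad \text{in } \Omega, $$

$$ \psi \le 0, \qquad q\cdot n \ge 0, \qquad \psi\,(q\cdot n) = 0 \quad \text{on } \Gamma_s, \qquad \text{where } q := -K(\theta(\psi))\,\nabla(\psi + z), $$

with $\theta$ the water content, $K$ the hydraulic conductivity and $z$ the vertical coordinate; the complementarity conditions on $\Gamma_s$ are what give rise to the variational inequality.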

6 Mar 2020 Claire Delplancke (Bath)
A scalable online algorithm for passive seismic tomography in underground mines

Monitoring the seismic response of rock masses to massive mining makes it possible to detect stress variations and hazardous instabilities inside underground mines, and requires accurate estimates of the non-homogeneous propagation velocity of microseismic waves. Passive seismic tomography using the first-arrival travel times of mining-induced microseisms (of unknown hypocenters) constitutes a promising tool. However, available methods for solving this high-dimensional statistical inverse problem do not scale well with the dataset size. I will present a novel passive seismic tomography method able to dynamically learn the non-homogeneous velocity field from a stream of noisy first-arrival times, either on-line (in real time) or from catalogs. Our method introduces a new Bayesian approach that avoids linearizing the forward problem and allows for general 3D velocity models. This is combined with the use of Stochastic Gradient Descent (SGD) with an adaptive stepsize based on ray-path density, which significantly improves the speed of the algorithm. Joint work with Joaquín Fontbona and Jorge Prado.
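
Purely to convey the streaming structure hinted at above, here is a heavily simplified sketch of SGD with a per-cell stepsize that shrinks where ray coverage is dense. Everything in it (the loss interface, the coverage rule, the names) is hypothetical and not the method of the talk, which is Bayesian and uses a more sophisticated stepsize rule.

```python
# Schematic streaming SGD on a cell-wise slowness model; hypothetical interfaces.
import numpy as np

def streaming_sgd(grad_loss, ray_cells, data_stream, n_cells, base_step=1e-2):
    """grad_loss(m, datum) -> gradient (length n_cells) of the per-datum misfit;
    ray_cells(datum) -> indices of cells crossed by that datum's ray path."""
    m = np.ones(n_cells)                 # initial slowness model
    coverage = np.zeros(n_cells)         # accumulated ray-path density per cell
    for datum in data_stream:            # arrival times processed one at a time
        coverage[ray_cells(datum)] += 1.0
        step = base_step / np.sqrt(1.0 + coverage)   # adaptive per-cell stepsize
        m -= step * grad_loss(m, datum)
        m = np.maximum(m, 1e-6)          # keep slowness positive
    return m
```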

11 Mar 2020 Tristan Pryer (Bath)
Post-processing in automated error control

Post-processing techniques are often used in numerical simulations for a variety of reasons, from visualisation to the design of superconvergent approximations, through to serving as fundamental building blocks in the construction of numerical schemes. These operators are also a very useful component in automated error control for approximations of partial differential equations (PDEs). The talk is roughly divided into two parts. The first is concerned with finite difference type discretisations and how one can construct appropriate post-processors to allow for the automated error control of well-known, widely used schemes. In the second part we move on to discontinuous Galerkin schemes. We introduce a class of post-processing operators that “tweaks” a wide variety of existing post-processing techniques to enable efficient and reliable a posteriori bounds to be proven. This ultimately results in optimal error control for all manner of reconstruction operators, including those that superconverge.

16 Mar 2020 Linda Stals (Canberra, Australia)
Extra seminar (6 West 1.1, 10.15) Multilevel Solvers for the Thin-Plate Spline Saddle Point Problem

Data fitting is an integral part of a number of applications, including data mining, 3D reconstruction of geometric models, image warping and medical image analysis. A commonly used method for fitting functions to data is the thin-plate spline method. This method is popular because it is not sensitive to noise in the data. We have developed a discrete thin-plate spline approximation technique that uses local basis functions. With this approach the system of equations is sparse and its size depends only on the number of points in the discrete grid, not the number of data points. Nevertheless, the resulting system is a saddle point problem that can be ill-conditioned for certain choices of parameters. In this talk I will present an efficient and robust solver based on a multilevel approach.
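
As a reminder of the underlying problem (in its standard continuous form, stated here only for orientation), the thin-plate spline fit to data $(x_i, y_i)$, $i = 1,\dots,n$, in two dimensions minimises

$$ \frac{1}{n}\sum_{i=1}^n \big(f(x_i)-y_i\big)^2 \;+\; \alpha \int_\Omega \left( \Big(\frac{\partial^2 f}{\partial x_1^2}\Big)^2 + 2\Big(\frac{\partial^2 f}{\partial x_1 \partial x_2}\Big)^2 + \Big(\frac{\partial^2 f}{\partial x_2^2}\Big)^2 \right) dx, $$

where the smoothing parameter $\alpha > 0$ balances data fidelity against bending energy. In discrete formulations of this type, constraints linking the fitted function to auxiliary gradient variables are enforced with Lagrange multipliers, which leads to a linear system with the familiar saddle point block structure $\begin{pmatrix} A & B^T \\ B & 0 \end{pmatrix}$.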

20 Mar 2020 Chupeng Ma (Heidelberg, Germany)
Cancelled
Efficient multiscale algorithms for simulating nonlocal optical response of metallic nanostructure arrays

27 Mar 2020 Andrew Stuart (Caltech)
Cancelled

4 Apr 2020 Gabriele Ciaramella (Konstanz, Germany) Teams (first ever virtual Bath NA seminar)
On the Bank-Jimack domain decomposition method

In 2001, Randolph E. Bank and Peter K. Jimack introduced in [1] a new domain decomposition method for the adaptive solution of elliptic partial differential equations. The novel feature of this algorithm is that each of the subproblems is defined over the entire domain, but on a partially coarse mesh. The method was used only as a preconditioner, and a first analysis was proposed in [2], where a discrete error analysis is performed in some special cases. In this talk, we show that the method of Bank and Jimack is not convergent as a stationary method. To correct this behavior we introduce an appropriate choice of partition of unity, which leads to a convergent iteration [3]. In this corrected setting, we establish equivalence relations between the Bank-Jimack method and overlapping optimized Schwarz methods, namely Schwarz methods with Robin transmission conditions. These equivalences allow us to obtain two important results. For one-dimensional problems, the Bank-Jimack method is equivalent to an optimal Schwarz method, independently of the choice of the coarse mesh. For two-dimensional problems, the Bank-Jimack method is equivalent to an optimized Schwarz method [4]. In this second case the convergence behavior depends on the coarse parts of the subproblem meshes, in particular on the corresponding number of nodes, on their locations, and on the size of the overlap. By carefully studying this dependence we formulate a conjecture [4], whose proof will probably keep us busy in the near future.

References:

  • [1] R.E. Bank and P.K. Jimack, A new parallel domain decomposition method for the adaptive finite element solution of elliptic partial differential equations, Concurrency and Computation: Practice and Experience (2001).
  • [2] R.E. Bank and P.S. Vassilevski, Convergence analysis of a domain decomposition paradigm, Computing and Visualization in Science (2008).
  • [3] G. Ciaramella, M.J. Gander and P. Mamooler, The domain decomposition method of Bank and Jimack as an optimized Schwarz method, Proceedings of the DD25 Conference (2019).
  • [4] G. Ciaramella, M.J. Gander and P. Mamooler, The domain decomposition method of Bank and Jimack as an optimized Schwarz method in 2D, in preparation (2020).
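
To fix ideas, an optimized Schwarz method of the kind referred to above, written for the model problem $-\Delta u = f$ on $\Omega = \Omega_1 \cup \Omega_2$ with interfaces $\Gamma_1 = \partial\Omega_1 \cap \Omega_2$ and $\Gamma_2 = \partial\Omega_2 \cap \Omega_1$, iterates as follows (this is the standard form of such methods, not a statement about the Bank-Jimack results themselves):

$$ -\Delta u_1^{n+1} = f \ \text{in } \Omega_1, \qquad \big(\partial_{n_1} + p\big)\, u_1^{n+1} = \big(\partial_{n_1} + p\big)\, u_2^{n} \ \text{on } \Gamma_1, $$

$$ -\Delta u_2^{n+1} = f \ \text{in } \Omega_2, \qquad \big(\partial_{n_2} + p\big)\, u_2^{n+1} = \big(\partial_{n_2} + p\big)\, u_1^{n} \ \text{on } \Gamma_2, $$

where $p > 0$ is the Robin parameter, chosen to optimize the convergence factor of the iteration.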

10 Apr 2020 Easter vacation

17 Apr 2020 Easter vacation

24 Apr 2020 Pranav Singh (Bath) Teams
Convergence of Magnus based methods for Schrödinger equations

Magnus expansion based methods are an efficient class of integrators for solving Schrödinger equations that feature time-dependent potentials such as lasers. These methods have been found to be highly effective in computational quantum chemistry since the pioneering work of Tal-Ezer, Kosloff and Cerjan in the early 90s. The convergence of the Magnus expansion, however, is understood only for ODEs, and traditional analysis suggests a much poorer performance of these methods than is observed experimentally. It was not until the work of Hochbruck and Lubich in 2003 that a rigorous analysis justifying the application to PDEs with unbounded operators, such as the Schrödinger equation, was presented. In this talk I will extend this analysis to the semiclassical regime, where the highly oscillatory solution conventionally suggests large errors and a requirement for very small time steps.
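
For reference, for a linear equation $\partial_t u = A(t)\,u$, $u(0) = u_0$ (with $A(t) = -\mathrm{i} H(t)$ for a Schrödinger equation with Hamiltonian $H(t)$), the Magnus expansion writes the solution as $u(t) = \exp\big(\Omega(t)\big)\,u_0$ with

$$ \Omega(t) = \int_0^t A(s)\,\mathrm{d}s \;+\; \frac{1}{2}\int_0^t\!\!\int_0^{s} \big[A(s), A(\sigma)\big]\,\mathrm{d}\sigma\,\mathrm{d}s \;+\; \cdots, $$

and Magnus based integrators are obtained by truncating this series and approximating the nested integrals by quadrature.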

1 May 2020 Andreas Dedner (Warwick) Teams
Generic Construction of Virtual Element Spaces with a focus on fourth order problems

The construction of finite element (FE) spaces, and the generic implementation of Galerkin type methods based on these spaces, has been studied extensively over the last decades. Many software packages are available that provide the user with many different FE spaces and make it fairly straightforward to solve a wide range of PDEs given in variational form. But many of these packages are restricted to lowest order spaces when tasked with solving fourth order problems, or, as often as not, do not provide any suitable spaces for this type of problem at all. Typically, the only available space is the lowest order non-conforming H^2 space (the Morley finite element), since this space is fairly easy to implement. Higher order spaces, or fully conforming spaces of any order, are hardly available.

One reason for this is that the construction of FE spaces requires two ingredients that have to fit together: (i) a set of degrees of freedom (functionals) suitable for constructing a space with the desired properties, i.e., H^2 conformity, and (ii) a matching piecewise polynomial function space with the correct dimension and approximation properties. For example, it is common to use the full polynomial space P_l on triangles. This restricts the number of degrees of freedom per triangle one can define to be exactly dim P_l. Alternatively, more complex polynomial subspaces containing P_l can be used to match a desired set of degrees of freedom, making the construction and implementation more complicated. Switching to other element types, or from 2D to 3D, often requires going through the process of finding suitable degrees of freedom and polynomial spaces from scratch.

In this talk we present an alternative approach, based on the Virtual Element Method (VEM). This method was first developed to construct conforming spaces for second order problems on general polygonal meshes. Since its appearance ten years ago, the method has been shown to be very flexible, not only with respect to the underlying mesh elements but also with respect to incorporating additional structure. An issue is that implementing the method is not straightforward, and so, to my knowledge, none of the major PDE software frameworks provide access to VEM spaces. Here we focus on a formulation that lends itself to generic implementation within a standard software package. The approach is based on ideas developed for solving non-linear second order problems. We extend those to fourth order non-linear problems and discuss error estimates for different versions of the spaces. We demonstrate the flexibility of the approach by showing results for perturbation problems and non-linear fourth order PDEs.
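
As a concrete instance of the fourth order problems in question (a standard model problem, stated here only for orientation), consider the clamped-plate biharmonic equation and its weak form, whose bilinear form involves second derivatives and therefore calls for H^2-conforming, or suitably non-conforming, discrete spaces:

$$ \Delta^2 u = f \ \text{in } \Omega, \qquad u = \partial_n u = 0 \ \text{on } \partial\Omega, $$

$$ \text{find } u \in H^2_0(\Omega): \quad \int_\Omega D^2 u : D^2 v \,\mathrm{d}x = \int_\Omega f\,v\,\mathrm{d}x \quad \text{for all } v \in H^2_0(\Omega). $$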

8 May 2020 None

15 May 2020 Jeremy Budd (Delft, Netherlands) Teams
Allen-Cahn and MBO on graphs

An emerging technique in clustering, segmentation and classification problems is to consider the dynamics of flows defined on finite graphs. In particular, Bertozzi and co-authors considered dynamics related to Allen-Cahn flow (Bertozzi, Flenner, 2012) and the MBO algorithm (Merkurjev, Kostic, Bertozzi, 2013) for this purpose. This talk will exhibit our recent work showing rigorous links between these two flows, explaining why MBO can be used as an alternative to Allen-Cahn.
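
In brief (these are the standard formulations, stated here for orientation): with $\Delta$ the graph Laplacian and $W$ a double-well potential such as $W(s) = \tfrac14 (s^2-1)^2$, graph Allen-Cahn flow evolves a node function $u$ by

$$ \frac{\mathrm{d}u}{\mathrm{d}t} = -\Delta u - \frac{1}{\varepsilon}\,W'(u), $$

while one step of the MBO scheme diffuses under the graph heat semigroup for a short time $\tau$ and then thresholds:

$$ v = e^{-\tau \Delta} u^{n}, \qquad u^{n+1}_i = \begin{cases} 1, & v_i \ge 0, \\ -1, & v_i < 0. \end{cases} $$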

How to get to Bath

See here for instructions on how to get to Bath. Please email () and () if you intend to come by car and require a parking permit for Bath University Campus for the day.

Tips for new students on giving talks

Since the audience of the NA seminar contains both PhD students and staff with quite wide interests and backgrounds, the following are some guidelines/hints to make sure people don't give you evil looks at lunch afterwards.

Before too much time passes in your talk, ideally the audience should know the answers to the following 4 questions:

  • What is the problem you're considering?
  • Why do you find this interesting?
  • What has been done before on this problem/what's the background?
  • What is your approach/what are you going to talk about?

There are lots of different ways to communicate this information. One way, if you're doing a slide show, could be for the first 4 slides to cover these 4 questions, although in this case you may want to revisit these points later on in the talk (e.g. to give more detail).

Remember:

  • "vertebrate style" (structure hidden inside - like the skeleton of a vertebrate) = good for detective stories, bad for maths talks.
  • "crustacean style" (structure visible from outside - like the skeleton of a crustacean) = bad for detective stories, good for maths talks.
