Fridays at 12:15 (online)
Everyone is welcome at these talks.
5 Feb 2021 | Alberto Paganini (Leicester) | Teams
Automated shape optimization with finite elements
Shape optimization studies how to design a domain such that a shape function is minimized. Ubiquitous in industrial applications, shape optimization is often constrained to partial differential equations (PDEs). In such instances, deriving shape derivative formulas by hand and coupling domain updates with PDE-solvers can be challenging, if not discouraging. In this talk, we give a gentle introduction to shape optimization and illustrate how finite element techniques allow automating shape optimization and hence tackling challenging PDE-constrained problems with ease.
12 Feb 2021 | Ronald Morgan (Baylor University, US) | Teams
Different time: 2:15 pm
Solving Large Systems of Linear Equations, or Monster Matrices and How to Care for Them
We look at Krylov methods for solving large systems of linear equations. Convergence theory will be presented in a way that is hopefully easy to understand, at least for people who like surfing. Then deflating eigenvalues is discussed, and deflation is used in a two-grid BiCGStab method. A new, stable polynomial preconditioning approach will also be given. Next, Krylov methods are developed for rank-one updated systems, and perhaps there will be time to apply these to singular matrices. Bears will be mentioned along the way, including a certain English bear named Winnie. However, no animals will be harmed by this talk, except for the feelings of a certain giraffe.
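For readers new to Krylov methods, the most familiar member of the family is the conjugate gradient method for symmetric positive definite systems. The sketch below, in plain Python with a tiny hand-picked matrix, is a background illustration only, not the methods of the talk (and, regrettably, contains no bears).

```python
# Conjugate gradient: a basic Krylov method for SPD systems A x = b.
def cg(A, b, tol=1e-10, maxit=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                     # initial residual r = b - A*0
    p = r[:]                     # first search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(maxit):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:  # converged
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]     # small SPD example
b = [1.0, 2.0]
x = cg(A, b)                     # exact solution is (1/11, 7/11)
```

In exact arithmetic CG terminates in at most n iterations; the methods of the talk address what happens for much larger, harder matrices.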
19 Feb 2021 | Leon Bungert (Erlangen, Germany) | Teams
Continuum Limit for Lipschitz Learning on Graphs
In semi-supervised learning one is confronted with a large set of data points, only very few of which are labelled. The task is to find a labelling function which extends these labels to the whole data set. In order to find useful labelling functions, in graph-based semi-supervised learning one represents the data set as a weighted graph and poses a smoothness constraint on the labelling function. In this context p-Laplacian learning has become very popular and consists of finding a p-harmonic function which coincides with the labels on the labelled set. However, this method is asymptotically ill-posed if p is smaller than the dimension of the data space, and is therefore not feasible for most applications. In this work, I will therefore speak about Lipschitz learning, which aims to find a Lipschitz extension of the labels and is well-posed in arbitrary dimension. The main result is a discrete-to-continuum limit of Lipschitz extensions as the data set grows to a continuum. Our theory uses Gamma-convergence and Hausdorff convergence of the data set. As a by-product we obtain a continuum limit for a nonlinear eigenvalue problem related to geodesic distance functions.
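As background to the p-harmonic functions mentioned above: for the special case p = 2, the harmonic extension of the labels can be computed on a small graph by simple Jacobi iteration, replacing each unlabelled node by the average of its neighbours. The toy path graph below is my own illustrative example, not from the talk.

```python
# Harmonic (p = 2) label extension on a path graph with 5 nodes:
# nodes 0 and 4 are labelled; nodes 1-3 are filled in by averaging neighbours.
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
labels = {0: 0.0, 4: 1.0}                        # data on the labelled set
u = {i: labels.get(i, 0.5) for i in neighbours}  # initial guess

for _ in range(1000):                            # Jacobi iteration
    u_new = dict(u)
    for i in neighbours:
        if i not in labels:                      # only update unlabelled nodes
            u_new[i] = sum(u[j] for j in neighbours[i]) / len(neighbours[i])
    u = u_new

# On a path the harmonic extension interpolates linearly: 0, 0.25, 0.5, 0.75, 1.
```

Lipschitz learning replaces this averaging principle by an infinity-Laplacian-type extension, which behaves well even when the graph samples a high-dimensional space.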
26 Feb 2021 | Rob Scheichl (Heidelberg, Germany) | Teams
Optimal local approximation spaces for generalized finite element methods
In this talk, I present new optimal local approximation spaces for the generalized finite element method for solving second order elliptic equations with rough coefficients, which are based on local eigenvalue problems involving the partition of unity. In addition to a nearly exponential decay rate of the local approximation error with respect to the dimension of the local spaces, I also give the rate of convergence with respect to the size of the oversampling region. To reduce the computational cost of the method, an efficient and easy-to-implement technique for generating the necessary discrete A-harmonic spaces is proposed. Numerical experiments are presented to support the theoretical analysis and confirm the effectiveness of the method. This is joint work with Chupeng Ma (Heidelberg).
5 Mar 2021 | Dante Kalise (Nottingham) | Teams
Synthetic data-driven methods for approximating high-dimensional Hamilton-Jacobi PDEs
Hamilton-Jacobi PDEs are central in control and differential games, enabling the computation of optimal control laws expressed in feedback form. High-dimensional HJ PDEs naturally arise in the feedback synthesis for high-dimensional control systems, and their numerical solution must be sought outside the framework provided by standard grid-based discretization methods. In this talk, I will discuss the construction of causality-free, data-driven methods for the approximation of high-dimensional HJ PDEs. I will address the generation of a synthetic dataset based on the use of representation formulas (such as Lax-Hopf or Pontryagin’s Maximum Principle), which is then fed into a high-dimensional sparse polynomial/ANN model for training. The use of representation formulas providing gradient information is fundamental to increasing the data efficiency of the method. I will present applications in nonlinear dynamics, control of PDEs, and agent-based models.
12 Mar 2021 | Eike Mueller (Bath) | Teams
Multilevel Monte Carlo for quantum mechanics on a lattice
Monte Carlo simulations of quantum field theories on a lattice become increasingly expensive as the continuum limit is approached since the cost per independent sample grows with a high power of the inverse lattice spacing. Simulations on fine lattices suffer from critical slowdown, the rapid growth of autocorrelations in the Markov chain with decreasing lattice spacing a. This causes a strong increase in the number of lattice configurations that have to be generated to obtain statistically significant results. We discuss hierarchical sampling methods to tame this growth in autocorrelations; combined with multilevel variance reduction techniques, this significantly reduces the computational cost of simulations. We recently demonstrated the efficiency of this approach for two non-trivial model systems in quantum mechanics in https://arxiv.org/abs/2008.03090. This includes a topological oscillator, which is badly affected by critical slowdown due to freezing of the topological charge. On fine lattices our methods are several orders of magnitude faster than standard, single level sampling based on Hybrid Monte Carlo. For very high resolutions, multilevel Monte Carlo can be used to accelerate even the cluster algorithm, which is known to be highly efficient for the topological oscillator. Performance is further improved through perturbative matching. This guarantees efficient coupling of theories on the multilevel lattice hierarchy, which have a natural interpretation in terms of effective theories obtained by renormalisation group transformations. At the end I will also present some very recent results on how the methods can be extended to a two dimensional lattice gauge theory, namely the 2d Schwinger model, a simplified toy model for quantum electrodynamics.
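The multilevel idea behind the variance reduction above rests on the telescoping identity E[P_L] = E[P_0] + sum over l of E[P_l - P_{l-1}]: most samples are drawn on coarse, cheap levels, and only a few coupled fine/coarse pairs correct the bias. The sketch below applies it to a toy geometric Brownian motion, which is my own choice of illustration rather than the lattice systems of the talk.

```python
import random
random.seed(0)

def euler_path(h, n, mu=0.05, sigma=0.2, x0=1.0, dws=None):
    # Euler-Maruyama for dX = mu*X dt + sigma*X dW over n steps of size h.
    x = x0
    for k in range(n):
        dw = dws[k] if dws is not None else random.gauss(0.0, h ** 0.5)
        x += mu * x * h + sigma * x * dw
    return x

def mlmc_estimate(levels=4, samples=2000, T=1.0):
    est = 0.0
    for l in range(levels):
        n_f = 2 ** (l + 2)              # fine grid on level l
        h_f = T / n_f
        acc = 0.0
        for _ in range(samples):
            dws = [random.gauss(0.0, h_f ** 0.5) for _ in range(n_f)]
            p_f = euler_path(h_f, n_f, dws=dws)
            if l == 0:
                acc += p_f              # base level: plain estimate of E[P_0]
            else:
                # coarse path driven by the SAME Brownian increments (coupling)
                dwc = [dws[2 * k] + dws[2 * k + 1] for k in range(n_f // 2)]
                p_c = euler_path(2 * h_f, n_f // 2, dws=dwc)
                acc += p_f - p_c        # correction term E[P_l - P_{l-1}]
        est += acc / samples
    return est

est = mlmc_estimate()   # E[X_1] = exp(0.05) ~ 1.0513 for this GBM
```

Because the coupled fine and coarse paths share Brownian increments, the corrections have small variance and need few samples; the talk combines this idea with hierarchical sampling to tame autocorrelations as well.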
19 Mar 2021 | Cancelled
26 Mar 2021 | Tiangang Cui (Monash University, Australia) | Teams
Intrinsic subspaces of high-dimensional inverse problems and where to find them
High dimensionality is a central challenge faced by many numerical methods for solving large-scale Bayesian inverse problems. In this talk, we will present some old and new developments in the identification of low-dimensional subspaces that offer a viable path to alleviating this dimensionality barrier. Utilising concentration inequalities, we are able to identify the intrinsic subspaces from the solutions of certain eigenvalue problems and derive corresponding dimension-truncation error bounds. The resulting low-dimensional subspace enables the design of inference algorithms that can scale sub-linearly with the apparent dimensionality of the problem.
16 Apr 2021 | James Foster (Oxford) | Teams
High order numerical simulation of the underdamped Langevin diffusion
The underdamped Langevin diffusion (ULD) is an important model in statistical mechanics and has recently seen applications in data science for sampling from high-dimensional distributions. In this talk, I will present a new approach to the numerical approximation of ULD. Our strategy is to first reduce the underdamped Langevin SDE to a comparable ODE, before then applying an appropriate ODE solver. For strongly convex potentials with Lipschitz continuous derivatives, we show that this ODE approximation is ergodic and obtain non-asymptotic estimates for the 2-Wasserstein error of its stationary distribution. Moreover, by discretizing this ODE using a third order Runge-Kutta method, we obtain a practical method that uses just two additional gradient evaluations per step. When applied to a logistic regression problem, this method empirically shows a third order convergence rate and outperforms other ULD-based sampling algorithms.
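For context on what is being discretized: the underdamped Langevin diffusion couples a position with a velocity, dx = v dt and dv = -grad U(x) dt - gamma*v dt + sqrt(2*gamma) dW. The sketch below uses a plain Euler-Maruyama step on a toy Gaussian target U(x) = x^2/2; this is background only and deliberately not the third-order ODE-based method of the talk.

```python
import random
random.seed(1)

# Euler-Maruyama for ULD with quadratic potential U(x) = x**2 / 2,
# whose x-marginal in the stationary law is the standard normal N(0, 1).
gamma, h, n_steps, burn_in = 1.0, 0.01, 200_000, 20_000
x, v = 0.0, 0.0
samples = []
for k in range(n_steps):
    grad_u = x                                   # grad U(x) = x here
    x += v * h                                   # position update
    v += (-grad_u - gamma * v) * h \
         + (2 * gamma * h) ** 0.5 * random.gauss(0.0, 1.0)  # velocity update
    if k >= burn_in:                             # discard burn-in phase
        samples.append(x)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# mean ~ 0 and var ~ 1, up to discretization bias and Monte Carlo error
```

The Euler scheme above is only first-order accurate; the point of the talk is that reducing the SDE to an ODE and applying a third-order Runge-Kutta method gives far better accuracy per gradient evaluation.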
23 Apr 2021 | Sergey Dolgov (Bath) | Teams
Deep tensor decompositions for sampling from high-dimensional distributions
Characterising intractable high-dimensional random variables is one of the fundamental challenges in stochastic computation, for example, in the solution of Bayesian inverse problems. The recent surge of transport maps offers a mathematical foundation and new insights for tackling this challenge by coupling intractable random variables with tractable reference random variables. In this talk I will present a nested coordinate transformation framework inspired by deep neural networks but driven by functional tensor-train approximation of tempered probability density functions instead. This bypasses slow gradient descent optimisation by a direct inverse Rosenblatt transformation. The resulting deep inverse Rosenblatt transport significantly expands the capability of tensor approximations and transport maps to random variables with complicated nonlinear interactions and concentrated density functions. We demonstrate the efficiency of the proposed approach on a range of applications in uncertainty quantification, including parameter estimation for dynamical systems and inverse problems constrained by partial differential equations.
30 Apr 2021 | Yuya Suzuki (NTNU, Norway) | Teams
Rank-1 and rank-r lattices combined with operator splitting for time-dependent Schrödinger equations
In this talk, we use lattice points combined with operator splitting for numerically solving time-dependent Schrödinger equations. Here “lattice points” refers to a specific point set coming from the context of quasi-Monte Carlo methods. We propose a Fourier pseudospectral method on lattice points combined with operator splitting, and we prove that our method converges with the desired order of the splitting method, given that the potential function is in a Korobov space with a certain smoothness which is independent of the dimension of the problem. We conduct numerical experiments in various settings. Compared with the sparse grid method, our method is shown to be more efficient. Even in higher dimensions and with higher-order splitting schemes, our method shows stable and higher-order convergence numerically. One of the essential tasks here is to compute the Fourier transform and the inverse transform repeatedly in a higher-dimensional space for simulating the time-stepping operator of the time-dependent Schrödinger equation in a stable manner. Our proposed method solves this task efficiently.
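The split-step idea referred to above can be seen in one dimension: alternate half-steps with the potential (pointwise phases) with a full kinetic step applied in Fourier space. The sketch below uses Strang splitting on an ordinary 1-d periodic grid with a small hand-rolled FFT; the harmonic potential and grid sizes are my own toy choices, and this is the regular-grid special case rather than the rank-1/rank-r lattice rules of the talk.

```python
import cmath

def fft(a, inverse=False):
    # Radix-2 Cooley-Tukey FFT; len(a) must be a power of two.
    n = len(a)
    if n == 1:
        return a[:]
    sign = 1 if inverse else -1
    even = fft(a[0::2], inverse)
    odd = fft(a[1::2], inverse)
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

def ifft(a):
    return [y / len(a) for y in fft(a, inverse=True)]

# Strang splitting for i psi_t = -psi_xx/2 + V(x) psi on a periodic grid:
# half a potential step, a full kinetic step in Fourier space, half a potential step.
n, L, dt = 64, 20.0, 0.01
dx = L / n
xs = [-L / 2 + j * dx for j in range(n)]
ks = [2 * cmath.pi * (j if j < n // 2 else j - n) / L for j in range(n)]
V = [0.5 * x * x for x in xs]                    # harmonic potential
psi = [cmath.exp(-x * x / 2) for x in xs]        # Gaussian initial state
norm0 = sum(abs(p) ** 2 for p in psi) * dx
psi = [p / norm0 ** 0.5 for p in psi]            # normalise to unit L2 norm

for _ in range(100):
    psi = [p * cmath.exp(-0.5j * dt * v) for p, v in zip(psi, V)]
    phat = fft(psi)
    phat = [c * cmath.exp(-0.5j * dt * k * k) for c, k in zip(phat, ks)]
    psi = ifft(phat)
    psi = [p * cmath.exp(-0.5j * dt * v) for p, v in zip(psi, V)]

norm = sum(abs(p) ** 2 for p in psi) * dx        # each substep is unitary
```

Since both the potential and kinetic substeps are phase multiplications and the FFT pair preserves the discrete norm, the scheme conserves probability exactly; the challenge addressed in the talk is doing this stably in many dimensions, where lattice points replace the tensor-product grid.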
7 May 2021 | Jarrod Williams (Bath) | Teams
You can subscribe to the NA calendar directly from your calendar client, including Outlook, Apple’s iCalendar or Google Calendar. The web address of the calendar is this ICS link, which you will need to copy.
To subscribe to a calendar in Outlook:
- In Calendar view, select “Add Calendar” (large green +)
- Select “From Internet”
- Copy and paste the ICS link, click OK, and click Yes to subscribe.
To subscribe to a calendar in Google Calendar:
- Go to Google Calendar.
- On the left side go to "Other Calendars" and click on the dropdown.
- Choose "Add by URL".
- Copy and paste the ICS link into the URL field.
- Click on "Add Calendar" and wait for Google to import your events. This creates a calendar with a somewhat unreadable name.
- To give the calendar a readable name, click on the three vertical dots next to the newly created calendar and select Settings.
- Choose a name for the calendar, e.g. Numerical Analysis @ Bath, and click the back button at the top left.
How to get to Bath
See here for instructions on how to get to Bath. Please email Matthias Ehrhardt (email@example.com) if you intend to come by car and require a parking permit for the Bath University Campus for the day.
Tips for giving talks
Tips for new students on giving talks
Since the audience of the NA seminar includes both PhD students and staff with quite wide interests and backgrounds, here are some guidelines/hints to make sure people don't give you evil looks at lunch afterwards.
Before too much time passes in your talk, ideally the audience should know the answers to the following 4 questions:
- What is the problem you're considering?
- Why do you find this interesting?
- What has been done before on this problem/what's the background?
- What is your approach/what are you going to talk about?
There are lots of different ways to communicate this information. One way, if you're doing a slide show, could be for the first 4 slides to cover these 4 questions; although in this case you may want to revisit these points later on in the talk (e.g. to give more detail).
- "vertebrate style" (structure hidden inside - like the skeleton of a vertebrate) = good for detective stories, bad for maths talks.
- "crustacean style" (structure visible from outside - like the skeleton of a crustacean) = bad for detective stories, good for maths talks.