Bath Numerical Analysis seminar - upcoming

Fridays at 12.15 in Wolfson 4W 1.7. All talks will also be broadcast on Zoom.

Link to Zoom meeting

Everyone is welcome at these talks.

Date Speaker Title
7 Feb 2025 Andreas Kyprianou (University of Warwick)
Proton beam de-energisation and the Bragg peak for cancer therapy via jump diffusion stochastic differential equations

Proton beam therapy is a relatively modern way of treating cancer. In short, protons are accelerated to around 2/3 the speed of light and projected into the body in the direction of cancerous tissues. Subatomic interactions slow the protons down, causing energy deposition into tissue. The less energy protons have, the greater the number of subatomic interactions and the greater the rate at which energy is deposited. This results in the proton beam having an approximate “end point” where the majority of the initial energy in the beam is deposited. Rather obviously, this needs to be positioned within cancerous tissues, and accuracy is essential. In this talk we discuss a mathematical model of the proton beam, grounded in the subatomic physics, taking the form of a (7+1)-dimensional SDE which has both diffusive and jump components. A fundamental modelling question for this class of SDEs pertains to constructing a well-defined meaning of “rate of energy deposition” in the three dimensions of space and how it relates to Monte Carlo simulation. Ultimately this boils down to the existence of an occupation density for the SDE, which, in turn, requires us to work with fundamental ideas from Malliavin calculus. This work is part of a larger body of research that is currently being undertaken in collaboration with colleagues in the MaThRad programme grant.
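
For context, a generic jump-diffusion SDE (a much simpler illustration than the (7+1)-dimensional model of the talk; the coefficients b, \sigma, c below are placeholders) has the form

    dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t + \int c(X_{t-}, z)\,N(dt, dz),

where W is a Brownian motion and N is a Poisson random measure driving the jumps. An occupation density on a horizon [0, T] is a function \rho satisfying

    \mathbb{E}\Big[\int_0^T f(X_s)\,ds\Big] = \int f(x)\,\rho(x)\,dx

for suitable test functions f; its existence is what gives a pointwise meaning to the rate of energy deposition in space.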

14 Feb 2025 Colin Cotter (Imperial College London)
Parallel-in-time methods for atmosphere simulation using time diagonalisation

The goal of parallel-in-time methods is to employ parallelism in the time direction in addition to the space direction, in the hope of obtaining further parallel speedups beyond the limits of what is possible with spatial domain decomposition alone. Recently, diagonalisation techniques have emerged as a way of solving the coupled system for the solution of a differential equation at several timesteps simultaneously. One approach, sometimes referred to as “ParaDiag II”, involves preconditioning this “all-at-once” system, obtained from time discretisation of a linear constant-coefficient ODE (perhaps itself the space discretisation of a time-dependent PDE), with a nearby system that can be diagonalised in time, allowing the solution of independent blocks in parallel. For nonlinear PDEs this approach can form the basis of a preconditioner within a Newton-Krylov method for the all-at-once system after time averaging the (now generally time-dependent) Jacobian system. After some preliminary description of the ParaDiag II approach, I will present results from our investigation of ParaDiag II applied to some test cases from the hierarchy of models used in the development of dry dynamical cores for atmosphere models, including performance benchmarks. Using these results I will identify the key challenges in obtaining further speedups and suggest some directions to address them.
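
As a rough illustration of the idea (a minimal sketch using backward Euler; the integrator considered in the talk may differ), discretising u' = A u over n timesteps yields the all-at-once system

    (B \otimes I - \Delta t\, I_n \otimes A)\,U = b, \qquad B = \begin{pmatrix} 1 & & & \\ -1 & 1 & & \\ & \ddots & \ddots & \\ & & -1 & 1 \end{pmatrix},

where U stacks the unknowns at all n timesteps. ParaDiag II preconditions this with the same system built from an \alpha-circulant modification B_\alpha of B (obtained by placing -\alpha in the top-right corner of B). Because B_\alpha is diagonalised by a weighted discrete Fourier transform, applying the preconditioner decouples into n independent spatial solves, one per Fourier mode, which can be carried out in parallel across the timesteps.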

21 Feb 2025 Eike Mueller (University of Bath)
High-order IMEX-HDG timestepping methods for the incompressible Euler equations

Discretisation methods that are of high order in space and time are highly desirable when solving the equations of fluid dynamics, since they show superlinear convergence and make efficient use of modern computer hardware. We develop an efficient timestepping method for the incompressible Euler equations based on combining implicit-explicit (IMEX) time integrators, high-order Hybridised Discontinuous Galerkin (HDG) discretisations and splitting methods. The two computational bottlenecks are the calculation of a tentative velocity and the solution of a velocity-pressure system to enforce incompressibility. The second solve is preconditioned with a non-nested hybridised multigrid preconditioner, as proposed by Cockburn, Dubois, Gopalakrishnan and Tan, for the Schur-complement system that arises from static condensation to eliminate the pressure/velocity unknowns in favour of variables on the mesh facets. The tentative velocity solve is preconditioned with ILU0. With this choice of preconditioners, the number of solver iterations depends only weakly on the polynomial degree and the grid spacing; in other words, empirically the method is approximately h- and p-robust. As a consequence, the cost of a single timestep is roughly proportional to the total number of unknowns. At high order this allows the computation of highly accurate solutions at a low computational cost. The code has been implemented in Firedrake, using static condensation technology and PETSc to construct a fairly complex solver/preconditioner. Numerical results are presented for the simulation of a two-dimensional Taylor-Green vortex.
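
For orientation, a classical pressure-correction splitting for the incompressible Euler equations \partial_t u + (u \cdot \nabla) u + \nabla p = 0, \nabla \cdot u = 0 (a generic sketch, written first order and fully explicit in the advection for simplicity; the IMEX-HDG formulation of the talk differs in detail) advances from u^n to u^{n+1} in two stages:

    u^* = u^n - \Delta t\, (u^n \cdot \nabla) u^n, \qquad
    \nabla^2 p^{n+1} = \frac{1}{\Delta t} \nabla \cdot u^*, \qquad
    u^{n+1} = u^* - \Delta t\, \nabla p^{n+1},

so that \nabla \cdot u^{n+1} = 0. The two bottlenecks described above correspond to these two stages: in the IMEX scheme part of the momentum update is implicit, which turns the tentative velocity computation into a solve, and the projection becomes the coupled velocity-pressure system.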

21 Feb 2025 Sam McCallum (University of Bath)
Efficient gradients for Neural ODEs

Training Neural ODEs requires backpropagating through an ODE solve. The state-of-the-art backpropagation algorithm is recursive checkpointing, which balances recomputation against memory cost. In this talk, I will introduce a class of algebraically reversible ODE solvers that significantly improve upon both the time and memory cost of recursive checkpointing.
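
To illustrate what “algebraically reversible” means, here is a minimal sketch based on the reversible Heun method of Kidger, Foster, Li and Lyons (one example of such a solver; the class of solvers in the talk may differ). The step map can be inverted exactly in closed form, so backpropagation can reconstruct earlier states on the fly instead of checkpointing or recomputing them:

    # Algebraically reversible ODE solver (reversible Heun): the state is the
    # pair (y, y_hat), and backward_step is the exact algebraic inverse of
    # forward_step, so no intermediate states need to be stored when
    # backpropagating through the solve.

    def forward_step(f, t, dt, y, y_hat):
        y_hat_next = 2.0 * y - y_hat + dt * f(t, y_hat)
        y_next = y + 0.5 * dt * (f(t, y_hat) + f(t + dt, y_hat_next))
        return y_next, y_hat_next

    def backward_step(f, t, dt, y_next, y_hat_next):
        y_hat = 2.0 * y_next - y_hat_next - dt * f(t + dt, y_hat_next)
        y = y_next - 0.5 * dt * (f(t, y_hat) + f(t + dt, y_hat_next))
        return y, y_hat

    # Example: one step of dy/dt = -y forwards and back recovers the initial
    # state up to floating-point round-off.
    f = lambda t, y: -y
    y1, y_hat1 = forward_step(f, 0.0, 0.1, 1.0, 1.0)
    y0, y_hat0 = backward_step(f, 0.0, 0.1, y1, y_hat1)   # y0 == 1.0, y_hat0 == 1.0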

28 Feb 2025 Henry Lockyer (University of Bath)
Learning splitting and composition methods

Splitting and composition methods are ubiquitous in the solution of ordinary and partial differential equations, since it is often easier to solve the differential equation under separate flows. This may be due to the availability of exact solutions for the sub-flows, or due to efficient approximations that exploit, for instance, the favourable structure, sparsity or smaller spectral radius of the resulting numerical linear algebra problems. Traditionally, splitting and composition methods have been derived using analytic and algebraic techniques, normally truncated Taylor series or their Lie-algebraic analogue, the Baker-Campbell-Hausdorff formula. Very high order splitting and composition methods have been derived in this way, which achieve high accuracy for small time-steps and exhibit exact or near conservation of physical properties such as mass, unitarity and energy. However, in many applications of practical interest a moderate accuracy is acceptable, while limited computational resources necessitate the use of larger time-steps. For these large time-steps, the asymptotically derived high-order splittings typically fail to achieve adequate accuracy. In contrast, machine learning approaches for solving differential equations are often efficient, but come with no convergence guarantees and often do not conserve physical properties such as energy. In this work we outline a hybrid approach for designing machine-learned splitting and composition methods that work effectively for large time-steps and have provable convergence and conservation guarantees.
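
As a reminder of the classical setting (standard material, not specific to this talk): if the vector field splits as \dot{y} = A(y) + B(y) with exactly (or cheaply) computable flows \varphi^A_t and \varphi^B_t, then the Lie and Strang splittings

    \Psi_h^{\mathrm{Lie}} = \varphi^B_h \circ \varphi^A_h, \qquad \Psi_h^{\mathrm{Strang}} = \varphi^A_{h/2} \circ \varphi^B_h \circ \varphi^A_{h/2}

are first- and second-order accurate respectively, and higher-order methods are longer compositions \varphi^A_{a_1 h} \circ \varphi^B_{b_1 h} \circ \varphi^A_{a_2 h} \circ \cdots whose coefficients a_i, b_i are traditionally determined from order conditions via the Baker-Campbell-Hausdorff formula. The learned methods described above can be thought of as choosing such coefficients to perform well at the large time-steps of practical interest rather than asymptotically as h \to 0.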

28 Feb 2025 Sergey Dolgov (University of Bath)
Smoothed risk-averse optimisation

We develop an algorithm to solve high-dimensional risk-averse optimisation problems governed by differential equations (ODEs and/or PDEs) under uncertainty. As an example, we focus on the so-called Conditional Value at Risk (CVaR), but the approach is equally applicable to other coherent risk measures. To avoid the non-smoothness of the objective function underpinning the CVaR, we propose an adaptive strategy to select the width parameter of the smoothed CVaR to balance the smoothing and approximation errors. This enables both Newton’s method, for asymptotically quadratic convergence in the number of iterations, and higher-order (such as spectral tensor) approximation of the high-dimensional random fields, for exponential convergence in the number of degrees of freedom.
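
For reference, the Rockafellar-Uryasev formulation of CVaR at level \beta for a random loss X is

    \mathrm{CVaR}_\beta(X) = \min_{t \in \mathbb{R}} \Big\{\, t + \tfrac{1}{1-\beta}\, \mathbb{E}\big[\max(X - t,\, 0)\big] \,\Big\},

and the non-smoothness of the objective comes from the \max(\cdot, 0) term. A smoothed CVaR replaces \max(s, 0) by a smooth approximation such as the softplus g_\varepsilon(s) = \varepsilon \log(1 + e^{s/\varepsilon}) (one common choice; the specific smoothing used in the talk may differ), where \varepsilon is the width parameter whose adaptive selection is described above.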

7 Mar 2025 Eric Hester (University of Bath)
Towards optimal complexity solvers for multiphase systems: A Cahn-Hilliard case study

In this talk I’ll tie together various themes of my research with a new take on a classic problem: how to efficiently solve the Cahn-Hilliard equation. It’s a perfect archetype for the complexities of multiphase problems:

1. The system naturally develops interesting geometries separated by narrow boundary layers.
2. The regions evolve dynamically based on coupled nonlinear bulk-boundary physics.
3. Realistic scenarios involve extreme separation of time and length scales.
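
For reference, a standard form of the Cahn-Hilliard equation for a phase field c, with double-well potential f(c) = \tfrac{1}{4}(c^2 - 1)^2, is

    \partial_t c = \nabla \cdot \big( M \nabla \mu \big), \qquad \mu = f'(c) - \varepsilon^2 \nabla^2 c,

where M is a mobility and \varepsilon sets the interface width responsible for the narrow boundary layers in point 1 above.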

The first method of attack will use sparse spectral methods implemented in the Dedalus framework. We discretise using tensor products of Fourier series and Chebyshev polynomials, and iterate using standard implicit-explicit time steppers. By careful choice of weighted inner product and test functions, informed by the algebra of orthogonal Jacobi polynomials, we can implement most PDE operators using sparse banded matrices and fast Fourier transforms. Boundary conditions are subtle but can be implemented efficiently via the tau method. This parallelisable approach achieves optimal scaling at early times. But realistic regimes develop stiff spatial and temporal scales that this approach struggles to resolve. Better performance needs more detailed understanding of the system behaviour. This calls for some classic applied mathematics!
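
To give a flavour of the Fourier/IMEX idea in the simplest possible setting, here is a 1D periodic toy with first-order implicit-explicit Euler (the Dedalus implementation described above uses Fourier-Chebyshev tensor products, tau boundary conditions and higher-order IMEX steppers, none of which are shown; parameter values are illustrative only):

    import numpy as np

    # Toy 1D periodic Cahn-Hilliard solve, c_t = (c^3 - c - eps^2 c_xx)_xx,
    # treating the stiff biharmonic term implicitly and the nonlinearity explicitly.
    N, L, eps, dt = 256, 2 * np.pi, 0.1, 1e-4
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)     # angular wavenumbers
    k2, k4 = k**2, k**4

    rng = np.random.default_rng(0)
    c = 0.05 * rng.standard_normal(N)              # small random initial data

    for step in range(2000):
        nonlin_hat = np.fft.fft(c**3 - c)          # explicit part of the chemical potential
        c_hat = np.fft.fft(c)
        c_hat = (c_hat - dt * k2 * nonlin_hat) / (1.0 + dt * eps**2 * k4)
        c = np.real(np.fft.ifft(c_hat))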

We’ll start with a better coordinate system to understand the boundary layer, based on the elegant differential geometry of the signed distance function. A matched asymptotic expansion then reveals the leading order behaviour of the system. Shifting into coordinates based on the leading order solution allows further progress, diagonalising our linear operators (again using Jacobi polynomials!), and allowing formal solutions to arbitrary order. We implement these calculations using Mathematica and reproduce the Mullins-Sekerka problem as the leading order behaviour, with harmonic behaviour in the bulk domains, coupled by curvature dependent Dirichlet conditions, and velocity dependent Neumann conditions. But we can then calculate higher order corrections to these laws! This knowledge equips us to design better solvers. The key is to move to a spectral element discretisation divided into bulk and boundary regions that use smart coordinate systems informed by our asymptotic analysis. I’ll wrap up with an outline of these coordinates and appropriate preconditioners to use for these solvers. It’s probably too much for 20 minutes, but there should be some interesting sights along the way!
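
For readers who have not met it, the Mullins-Sekerka problem mentioned above takes the form (up to normalisation constants, which are omitted here)

    \nabla^2 \mu = 0 \ \text{in } \Omega^{\pm}, \qquad \mu = \sigma \kappa \ \text{on } \Gamma, \qquad V = \big[\partial_n \mu\big]_\Gamma,

i.e. a harmonic chemical potential in the bulk phases \Omega^{\pm}, a curvature-dependent Dirichlet condition on the interface \Gamma (with \kappa the curvature and \sigma a surface-tension constant), and an interface normal velocity V given by the jump in the normal derivative of \mu across \Gamma.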

7 Mar 2025 Michael Murray (University of Bath)
How does the sample-to-feature ratio affect overfitting outcomes: benign, tempered, or harmful?

Traditional machine learning wisdom suggests that training a highly flexible model on noisy data without explicit regularization will likely result in harmful overfitting. However, experimentally it has been observed that neural networks trained in this context can generalize well despite achieving near perfect training loss. This unexpected behaviour is referred to as benign or tempered overfitting, depending on whether the overfitting causes a marginal or proportional (to the noise level) drop in test accuracy. In this talk I’ll provide a high-level overview of this topic before presenting a couple of recent results pertaining to the role of the sample-to-feature ratio in driving different training outcomes.

14 Mar 2025 Sam Flynn (National Physical Laboratory)
Advanced Numerical Techniques in Primary Standard Calorimetry at the UK's National Standards Institute

Achieving optimal patient outcomes in radiation therapy depends on precise radiation dose measurements. The most accurate form of radiation detector is the calorimeter, which in the UK serves as the primary standard for calibrating clinical dosimetry systems. However, accurate dose determination requires a series of correction factors and conversions, derived from a variety of numerical techniques. This seminar will explore the computational methods underpinning primary standard calorimetry, including thermal modelling and Monte Carlo simulations, used to reduce the uncertainty in radiation dose measurements and maximise patient quality of life.

21 Mar 2025 Jemma Shipton (University of Exeter)
Time-parallel spectral deferred correction schemes for numerical weather prediction

In numerical weather prediction and climate modelling, semi-implicit time-stepping methods have long been favoured for their ability to take large time steps without excessively damping slow-moving waves. Fast-Wave Slow-Wave Spectral Deferred Correction (FWSW-SDC) methods offer an attractive alternative, achieving arbitrary-order accuracy by iteratively solving a collocation problem, akin to implicit Runge-Kutta methods. Similar to semi-implicit methods, FWSW-SDC improves stability compared to fully explicit schemes, enabling large time steps while retaining accuracy. Additionally, the rich literature on parallel-in-time SDC methods presents opportunities for parallelisation both within the correction process and across time steps.
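
In outline (a generic sketch of SDC; the FWSW details follow in the talk), the collocation problem for \dot{u} = f(u) on a single time step reads U = u_0 + \Delta t\, Q F(U), where Q contains the collocation quadrature weights and U, F(U) collect the solution and right-hand side at the collocation nodes. SDC solves it with the preconditioned iteration

    U^{k+1} = u_0 + \Delta t\, Q_\Delta F(U^{k+1}) + \Delta t\, (Q - Q_\Delta) F(U^{k}),

where Q_\Delta is a cheap lower-triangular approximation of Q; in the fast-wave slow-wave variant, Q_\Delta is applied implicitly to the fast-wave part of f and explicitly to the slow-wave part. Each sweep raises the formal order of accuracy by one, up to the order of the underlying collocation rule.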

In this talk I will present recent work in collaboration with Alex Brown, Joscha Fregin and Daniel Ruprecht that extends prior work with FWSW-SDC from linear systems to the nonlinear compressible Euler equations, evaluating its potential for numerical weather prediction and climate modelling applications. We apply SDC methods to standard dynamical core test cases, including the non-hydrostatic gravity wave, moist rising bubble, and baroclinic wave tests, to assess stability, accuracy, and computational performance. Finally, we explore the implementation of parallelisable SDC preconditioners in the FWSW framework.

28 Mar 2025 Poppy Nikou (UCL Hospitals NHS Foundation Trust)
Modelling anatomical changes of head and neck cancer patients during radiotherapy treatment

Progressive anatomical changes are often observed in head and neck cancer patients over the course of radiotherapy. To create proton therapy treatment plans that are robust against these changes, the potential anatomical changes must be known in advance. In this work we present a novel methodology for developing statistical population models that capture progressive anatomical changes and their variability. The models are continuous in time and can be built from a population of patients with a varying number and frequency of images over treatment. The statistical population models are based on principal component analysis of the progressive anatomical changes, and can generate scenarios of many potential anatomical changes. In this work, two principal components were used to generate nine scenarios of potential anatomical change. The evaluation found that, on average, one of these scenarios matched a patient's progressive anatomical changes better than simply assuming the average change.
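
As a rough sketch of the scenario-generation step (an illustration only: the representation of anatomical change as one flattened vector per patient, and the choice of a plus/minus one standard deviation grid, are assumptions rather than details from the abstract), principal component analysis and scenario sampling could look like:

    import numpy as np

    # X: one row per patient, each row a flattened parameterisation of that
    # patient's anatomical change (an assumed data layout, with placeholder data).
    rng = np.random.default_rng(0)
    X = rng.standard_normal((40, 500))

    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    components = Vt[:2]                            # first two principal components
    std = s[:2] / np.sqrt(X.shape[0] - 1)          # standard deviation of each mode

    # Nine scenarios: mean change shifted by -1, 0 or +1 standard deviations
    # along each of the two principal components.
    scenarios = [mean + a * std[0] * components[0] + b * std[1] * components[1]
                 for a in (-1.0, 0.0, 1.0) for b in (-1.0, 0.0, 1.0)]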

28 Mar 2025 Pablo Arratia López (University of Bath)
Dynamic CT image reconstruction from highly undersampled measurements using neural fields

Dynamic inverse problems present unique challenges: as the quantity to be recovered evolves over time, the motion of the object during the acquisition of measurements, together with hardware limitations, results in highly undersampled measurements in space. Not accounting for the dynamics of the process leads to poor reconstructions with unrealistic motion and a lack of temporal coherence. Traditional variational approaches, which employ grid-based discretisations and penalise the time evolution, have been proposed to address these issues.

Recently, neural fields – such as neural radiance fields (NeRF) for view synthesis and physics-informed neural networks (PINNs) for solving PDEs – have emerged as a novel way to parameterise the quantity of interest with a neural network with low-dimensional input, benefiting from being lightweight, continuous, and biased towards smooth representations. This inductive bias has been exploited to enforce time regularity for dynamic inverse problems, resulting in neural fields optimised by minimising a data-fidelity term only. In this talk I investigate and show the benefits of introducing explicit PDE-based motion regularisers, namely the optical flow equation, for the optimisation of neural fields. We centre our study on 2D+time computed tomography with only one projection per time instance. We also compare neural fields against a grid-based solver and show that the former outperforms the latter in capturing time-evolving structures.
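
Schematically (my paraphrase of the kind of objective described; the exact parameterisation and weighting used in the talk may differ), a neural field u_\theta(x, t) and a motion field v are fitted by minimising a data-fidelity term plus the optical-flow residual used as a motion regulariser:

    \min_{\theta,\, v} \; \sum_t \big\| A_t\, u_\theta(\cdot, t) - y_t \big\|_2^2 \;+\; \lambda \int \big| \partial_t u_\theta + v \cdot \nabla u_\theta \big|^2 \, dx \, dt,

where A_t is the (single-projection) forward operator at time t, y_t the corresponding measurement, and \lambda a regularisation weight.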

4 Apr 2025 Cristopher Salvi (Imperial College London)
TBC

TBC

4 Apr 2025 Maud Lemercier (University of Oxford)
TBC

TBC

25 Apr 2025 Ben Tapley (SINTEF, AI and Analytics group)
TBC

TBC

2 May 2025 Patrick Fahy (University of Bath)
TBC

TBC

2 May 2025 Max Scott (University of Bath)
TBC

TBC

Subscribe to seminar calendar

You can subscribe to the NA calendar directly from your calendar client, including Outlook, Apple’s iCalendar or Google Calendar. The web address of the calendar is this ICS link, which you will need to copy.

To subscribe to a calendar in Outlook:

  1. In Calendar view, select “Add Calendar” (large green +)
  2. Select “From Internet”
  3. Copy and paste the ICS link, click OK, and click Yes to subscribe.

To subscribe to a calendar in iCalendar, please follow these instructions. Copy and paste the ICS link into “web address”.

To subscribe to a calendar in Google Calendar:

  1. Go to Google Calendar.
  2. On the left side, go to "Other Calendars" and click on the dropdown.
  3. Choose "Add by URL".
  4. Copy and paste the ICS link into the URL field.
  5. Click on "Add Calendar" and wait for Google to import your events. This creates a calendar with a somewhat unreadable name.
  6. To give the calendar a readable name, click on the three vertical dots next to the newly created calendar and select "Settings".
  7. Choose a name for the calendar, e.g. Numerical Analysis @ Bath, and click the back button in the top left.

How to get to Bath

See here for instructions on how to get to Bath. Please email James Foster (jmf68@bath.ac.uk) and Aaron Pim (arp46@bath.ac.uk) if you intend to come by car and require a parking permit for the University of Bath campus for the day.

Tips for giving talks

Tips for new students on giving talks

Since the audience of the NA seminar contains both PhD students and staff with quite wide interests and backgrounds, the following are some guidelines/hints to make sure people don't give you evil looks at lunch afterwards.

Before too much time passes in your talk, ideally the audience should know the answers to the following 4 questions:

  • What is the problem you're considering?
  • Why do you find this interesting?
  • What has been done before on this problem/what's the background?
  • What is your approach/what are you going to talk about?

There are lots of different ways to communicate this information. One way, if you're doing a slide show, could be for the first 4 slides to cover these 4 questions, although in this case you may want to revisit these points later on in the talk (e.g. to give more detail).

Remember:

  • "vertebrate style" (structure hidden inside - like the skeleton of a vertebrate) = good for detective stories, bad for maths talks.
  • "crustacean style" (structure visible from outside - like the skeleton of a crustacean) = bad for detective stories, good for maths talks.