Bath Numerical Analysis Seminar
Fridays at 12:15 in Wolfson 4W 1.7. All talks will also be broadcast on Zoom.
Everyone is welcome at these talks.
Date | Speaker | Title |
27 Sep 2024 | Katie MacKenzie (Strathclyde) | Using Firedrake to develop Machine Learning models for atmospheric fluid dynamics
Recently, there has been a lot of interest in machine-learning-based surrogate models for atmospheric fluid dynamics. Successful approaches such as GraphCast [Lam et al., Science, 382(6677), 2023] use an encoder-processor-decoder architecture in which the processor describes the propagation of information on a Graph Neural Network (GNN) via message passing. Since the dynamics are approximated in a low-dimensional space, these models are much faster than traditional methods that solve the underlying PDE numerically on a fine mesh. We explore a variation of this architecture in which the processor is replaced by the solution of a time-dependent differential equation in a low-dimensional latent space. Compared to GNNs, this allows the application of standard techniques from numerical analysis, potentially improving stability and interpretability, for example by exactly enforcing conservation laws. To implement this, we use the recent Firedrake/PyTorch interface [Bouziani & Ham, 2023] to train and solve time-dependent PDE surrogate models on the sphere. The encoder in our model combines the interpolation of the initial condition to the latent space on a vertex-only mesh with a learnable embedding; the decoder has a similar structure based on the adjoint of the interpolation. Our model is trained on numerical solutions of the PDE computed a priori using Firedrake. The aim of our work is to combine the reliability of finite elements with the efficiency of neural network surrogates to produce a competitive model with the potential to be applied to time-critical applications in weather forecasting.
27 Sep 2024 | James Foster (Bath) | On the convergence of adaptive approximations for SDEs
When using ordinary differential equations (ODEs), numerical solutions are often approximated and propagated in time via discrete step sizes. For a large variety of ODE problems, performance can be improved by making these step sizes "adaptive", that is, adaptively changed based on the state of the system. However, for stochastic differential equations (SDEs), adaptive numerical methods can be difficult to study and can even fail to converge due to the rough nature of Brownian motion. In this talk, we will show that convergence does indeed occur, provided the underlying Brownian motion is discretized in an adaptive but "martingale-like" fashion. Whilst this prevents adaptive steps from skipping over time points (which we show can prevent convergence), we believe our convergence theory is the first that is applicable to standard SDE solvers. We will discuss the key ingredients in this analysis, including martingale convergence, rough path theory and the approximation of Brownian motion by polynomials. Based on our theory, we also modify an adaptive "Proportional-Integral" (PI) step size controller for use in the SDE setting. Unlike those used for ODEs, this new PI controller is designed to revisit time points where the Brownian motion was previously sampled. Finally, we conclude with a numerical experiment showing that SDE solvers can achieve an order of magnitude more accuracy with adaptive step sizes than with constant step sizes. (Joint work with Andraž Jelinčič.)
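One ingredient behind "revisiting time points where the Brownian motion was previously sampled" is conditional (Brownian bridge) sampling: when an adaptive solver halves a step, the midpoint value of the Brownian path can be sampled consistently with the increments already generated. The sketch below illustrates this standard technique; it is my own illustration, not the speaker's implementation.

```python
import numpy as np

def brownian_bridge_midpoint(w_left, w_right, h, rng):
    """Sample W(t + h/2) given W(t) = w_left and W(t + h) = w_right.

    The conditional distribution is N((w_left + w_right)/2, h/4), so an
    adaptive solver that rejects and halves a step can refine its Brownian
    path without discarding previously sampled increments.
    """
    mean = 0.5 * (w_left + w_right)
    return mean + np.sqrt(h / 4.0) * rng.standard_normal()

# Refine one step of length h = 1 with endpoints W(t) = 0, W(t + 1) = 2.
rng = np.random.default_rng(42)
w_mid = brownian_bridge_midpoint(0.0, 2.0, 1.0, rng)
```

Repeating the refinement recursively lets the solver place Brownian samples on any adaptively chosen grid while remaining consistent with a single underlying path.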
4 Oct 2024 | Malena Sabaté Landman (Oxford) | Inner product free Krylov methods for large-scale inverse problems
Inverse problems focus on reconstructing hidden objects from indirect, often noisy measurements, and are prevalent in numerous scientific and engineering disciplines. These reconstructions are typically highly sensitive to perturbations such as measurement errors, making regularization essential. In this presentation, I will discuss Krylov subspace methods that avoid inner-product computations and are specifically designed to efficiently address large-scale linear inverse problems. In particular, I will highlight their regularization capabilities and present computational results that demonstrate the effectiveness of these methods in different scenarios.
11 Oct 2024 | Subhayan Roy Moulik (Cambridge) | Quantum algorithms for sampling eigenstates with Spectral Sieve
I would like to illustrate a method for sampling eigenstates of a class of Hermitian matrices, called k-local Hamiltonians, using a quantum computer. The algorithm will be presented in a framework utilising numerical integral transformations to implement a spectral projector. A sequence of parameterised spectral projectors will then be shown to drive any state vector to some desired eigenspace. Numerical quantum computational examples and complexity-theoretic consequences of the algorithm will be discussed in conclusion.
18 Oct 2024 | Johannes Hertrich (UCL) | Fast Kernel Summation via Slicing and Fourier Transforms
The fast computation of large kernel sums is a challenging task which arises as a subproblem in any kernel method. Naively, this problem has complexity O(N^2), where N is the number of considered data points. In this talk, we propose an approximation algorithm which reduces this complexity to O(N). Our approach is based on two ideas. First, we prove that under mild assumptions radial kernels can be represented as a sliced version of some one-dimensional kernel, and we derive an analytic formula for the one-dimensional counterpart. Hence, we can reduce the d-dimensional kernel summation to a one-dimensional setting. Second, to solve these one-dimensional problems efficiently, we apply fast Fourier summations on non-equispaced data or a sorting algorithm. We prove bounds for the slicing error, employ quasi-Monte Carlo methods for improved error rates, and demonstrate the advantages of our approach with numerical examples. Finally, we present an application in generative modelling.
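To illustrate the one-dimensional sorting idea in its simplest form: for the kernel k(x, y) = |x − y|, all N kernel sums can be computed with one sort and a prefix sum instead of a quadratic double loop. This is a minimal sketch of that general principle, not the authors' code, and it omits the slicing step that reduces the d-dimensional problem to 1D.

```python
import numpy as np

def abs_kernel_sums(x):
    """Compute S_i = sum_j |x_i - x_j| for all i in O(N log N).

    After sorting, each sum splits into contributions from points to the
    left and right of x_i, both of which are prefix-sum expressions:
        S_i = i*xs_i - sum(left) + sum(right) - (n-1-i)*xs_i.
    """
    order = np.argsort(x)
    xs = x[order]                    # sorted copy of the data
    n = len(xs)
    P = np.cumsum(xs)                # P[i] = xs[0] + ... + xs[i]
    i = np.arange(n)
    S_sorted = i * xs - (P - xs) + (P[-1] - P) - (n - 1 - i) * xs
    out = np.empty(n)
    out[order] = S_sorted            # undo the sort
    return out

# Sanity check against the naive O(N^2) computation.
x = np.array([3.0, -1.0, 2.0, 0.5])
naive = np.abs(x[:, None] - x[None, :]).sum(axis=1)
assert np.allclose(abs_kernel_sums(x), naive)
```

The same sort-and-prefix-sum structure extends to other piecewise-smooth one-dimensional kernels, which is what makes the reduction from d dimensions to 1D so effective.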
25 Oct 2024 | Xiaocheng Shang (Birmingham) | Accurate and Efficient Numerical Methods for Molecular Dynamics and Data Science Using Adaptive Thermostats
I will discuss the design of state-of-the-art numerical methods for sampling probability measures in high dimension where the underlying model is only approximately identified with a gradient system. Extended stochastic dynamical methods, known as adaptive thermostats, that automatically correct thermodynamic averages using a negative feedback loop are discussed; these have applications to molecular dynamics and to Bayesian sampling techniques arising in emerging machine learning applications. I will also discuss the characteristics of different algorithms, including the convergence of averages and the accuracy of numerical discretizations.
1 Nov 2024 | Francesco Tudisco (Edinburgh) | Exploiting low-rank geometry in deep learning for training and fine-tuning
As models and datasets grow, modern AI faces significant challenges related to timing, costs, energy consumption, and accessibility. To address these, there has been a surge of interest in network compression and parameter-efficient fine-tuning (PEFT) techniques to reduce computational overhead while maintaining model performance. In terms of compression, the majority of existing methods focus on post-training pruning to reduce inference costs. However, an important subset tackles the reduction of training overhead, with layer factorization emerging as a key approach for both training and fine-tuning. In fact, recent empirical and theoretical findings indicate that deep networks exhibit an implicit low-rank bias, suggesting the existence of highly effective low-rank subnetworks. In this talk, I will present our recent work on analyzing and leveraging this low-rank bias for efficient model compression and fine-tuning. By exploiting the Riemannian geometry of low-rank spaces, we propose a geometry-aware variant of stochastic gradient descent that trains small, factorized layers while dynamically adjusting their rank. We provide theoretical guarantees on convergence and approximation, alongside experimental results demonstrating competitive performance across various state-of-the-art network architectures, both in pre-training and fine-tuning.
8 Nov 2024 | SAMBa personal research project (PRP) talks | TBC
TBC
15 Nov 2024 | Megan Griffin-Pickering (UCL) | Kinetic-Type Mean Field Games
Mean Field Games describe the Nash equilibria of large many-player differential games. Much of the literature has previously focused on games in which players have full control over the derivative of their state variable. However, this assumption may not be appropriate in applications, for example if players control their acceleration rather than their velocity. Another key factor is the properties of the Hamiltonian: whether the dependence on the measure variable is local or non-local, and whether the Hamiltonian is additively separable. Such conditions are mathematically useful but unrealistic in many applications. I will discuss recent work in which we obtain well-posedness results for certain classes of kinetic Mean Field Games with local Hamiltonians, including deterministic games with variational structure, and games with degenerate diffusion and non-separable Hamiltonians. Based on joint works with David Ambrose (Drexel University) and Alpár Mészáros (Durham University).
22 Nov 2024 | Abdalaziz Hamdan (Bath) | Mixed finite-element methods for smectic A liquid crystals
In recent years, energy-minimization finite-element methods have been proposed for the computational modelling of equilibrium states of several types of liquid crystals. Here, we present a four-field formulation for models of smectic A liquid crystals, based on the free-energy functionals proposed by Pevnyi, Selinger, and Sluckin, and by Xia et al. The Euler-Lagrange equations for these models include fourth-order terms acting on the smectic order parameter (or density variation of the liquid crystal). While H^2-conforming or C^0 interior penalty methods can be used to discretize such terms, we investigate introducing the gradient of the smectic order parameter as an explicit variable and constraining its value using a Lagrange multiplier. Numerical results are obtained using Firedrake for the finite-element discretization and PETSc for the nonlinear and linear solvers.
22 Nov 2024 | Jenny Power (Bath) | TBC
TBC
29 Nov 2024 | Veronika Chronholm (Bath) | TBC
TBC
29 Nov 2024 | William Warren (Bath) | Recombination for efficient cubature formulae
TBC
6 Dec 2024 | MMath Year Long Project (YLP) talks | TBC
TBC
13 Dec 2024 | Allen Paul (Bath) | TBC
TBC
13 Dec 2024 | Hok Shing Wong (Bath) | TBC
TBC
Subscribe to seminar calendar
You can subscribe to the NA calendar directly from your calendar client, including Outlook, Apple Calendar or Google Calendar. The web address of the calendar is this ICS link, which you will need to copy.
To subscribe to a calendar in Outlook:
- In Calendar view, select “Add Calendar” (large green +)
- Select “From Internet”
- Paste the ICS link, click OK, and click Yes to subscribe.
To subscribe to a calendar in Apple Calendar, please follow these instructions and paste the ICS link into "web address".
To subscribe to a calendar in Google Calendar:
- Go to link.
- On the left side, go to "Other calendars" and click on the dropdown.
- Choose "Add by URL".
- Paste the ICS link into the URL field.
- Click on "Add Calendar" and wait for Google to import your events. This creates a calendar with a somewhat unreadable name.
- To give the calendar a readable name, click on the three vertical dots next to the newly created calendar and select Settings.
- Choose a name for the calendar, e.g. Numerical Analysis @ Bath, and click the back button at the top left.
How to get to Bath
See here for instructions on how to get to Bath. Please email James Foster (jmf68@bath.ac.uk) and Aaron Pim (arp46@bath.ac.uk) if you intend to come by car and require a parking permit for the University of Bath campus for the day.
Tips for new students on giving talks
Since the audience of the NA seminar contains both PhD students and staff with quite wide interests and backgrounds, the following are some guidelines/hints to make sure people don't give you evil looks at lunch afterwards.
Before too much time passes in your talk, the audience should ideally know the answers to the following four questions:
- What is the problem you're considering?
- Why do you find this interesting?
- What has been done before on this problem/what's the background?
- What is your approach/what are you going to talk about?
There are lots of different ways to communicate this information. One way, if you're giving a slide show, could be for the first four slides to cover these four questions, although in this case you may want to revisit these points later in the talk (e.g. to give more detail).
Remember:
- "vertebrate style" (structure hidden inside - like the skeleton of a vertebrate) = good for detective stories, bad for maths talks.
- "crustacean style" (structure visible from outside - like the skeleton of a crustacean) = bad for detective stories, good for maths talks.