Fridays at 12:15 (online)
Everyone is welcome at these talks.
|8 Oct 2021||Tobias Hartung (Bath)||
Dimensional Expressivity Analysis for Parametric Quantum Circuits
A standard tool in quantum computing is the Variational Quantum Simulation (VQS), a class of hybrid quantum-classical algorithms for solving optimization problems. For example, the objective may be to find the ground state of a Hamiltonian by minimizing the energy. To this end, VQS use parametric quantum circuit designs to generate a family of quantum states (e.g., states obeying physical symmetries) and efficiently evaluate a cost function for a given set of variational parameters (e.g., the energy of the current quantum state) on a quantum device. The optimization is then performed using a classical feedback loop based on the measurement outcomes of the quantum device. In the case of energy minimization, the optimal parameter set therefore encodes the ground state of the given Hamiltonian, provided that the parametric quantum circuit is able to represent it. Hence, the design of parametric quantum circuits is subject to two competing drivers. On the one hand, the set of states that can be generated by the parametric quantum circuit has to be large enough to contain the ground state. On the other hand, the circuit should contain as few parametric quantum gates as possible to minimize noise from the quantum device. In other words, when designing a parametric quantum circuit we want to ensure that there are no redundant parameters. In this talk, I will therefore introduce dimensional expressivity analysis as a means of analyzing a given parametric design in order to remove redundant parameters as well as any unwanted symmetries. Time permitting, we may also discuss best-approximation errors for non-maximally expressive parametric quantum circuits, and how to custom-design parametric quantum circuits for specific physical applications in which the physical states are restricted by a class of symmetries.
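As a toy illustration of the classical feedback loop described in the abstract, the following sketch minimises the energy of a one-qubit Hamiltonian over a one-parameter ansatz. This is a classical numpy simulation, not the quantum-hardware loop or the speaker's circuits; the Hamiltonian, ansatz, and optimiser are all illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Pauli-Z Hamiltonian for one qubit; its ground state is |1> with energy -1.
H = np.array([[1.0, 0.0], [0.0, -1.0]])

def ansatz(theta):
    """Parametric state R_y(theta)|0> = (cos(theta/2), sin(theta/2))."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    """Cost function <psi(theta)|H|psi(theta)>, here evaluated classically."""
    psi = ansatz(theta)
    return psi @ H @ psi

# Classical outer loop: minimise the energy over the variational parameter.
result = minimize_scalar(energy, bounds=(0.0, 2 * np.pi), method="bounded")
# result.x is close to pi, i.e. the ansatz reaches the ground state |1>.
```

Redundant parameters in a larger ansatz would leave the generated family of states unchanged while adding noise on hardware, which is what the dimensional expressivity analysis is designed to detect.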
|15 Oct 2021||Kristian Bredies (University of Graz, Austria)||
Dynamic inverse problems in spaces of measures with optimal-transport regularization
We discuss the solution of dynamic inverse problems in which, for each time point, a time-dependent linear forward operator mapping the space of measures to a time-dependent Hilbert space has to be inverted. These problems are regularized with dynamic optimal-transport energies that are based on the continuity equation as well as convex functionals of Benamou-Brenier type. Well-posedness of the respective Tikhonov minimization is discussed in detail. Further, for the purpose of deriving properties of the solutions as well as numerical algorithms, we present sparsity results for general inverse problems that are connected with the extremal points of the Benamou-Brenier energy subject to the continuity equation. For the latter, it is proven that the extremal points are realized by point masses moving along curves with Sobolev regularity. This result is employed in numerical optimization algorithms of generalized conditional gradient type. We present instances of this algorithm that are tailored towards dynamic inverse problems associated with point tracking. Finally, the application and numerical performance of the method is demonstrated for sparse dynamic superresolution. This is joint work with Marcello Carioni, Silvio Fanzon and Francisco Romero.
References:
- Kristian Bredies, Silvio Fanzon. An optimal transport approach for solving dynamic inverse problems in spaces of measures. ESAIM: Mathematical Modelling and Numerical Analysis, 54(6):2351-2382, 2020.
- Kristian Bredies, Marcello Carioni. Sparsity of solutions for variational inverse problems with finite-dimensional data. Calculus of Variations and Partial Differential Equations, 59:14, 2020.
- Kristian Bredies, Marcello Carioni, Silvio Fanzon, Francisco Romero. On the extremal points of the ball of the Benamou-Brenier energy. Bulletin of the London Mathematical Society, 2021.
- Kristian Bredies, Marcello Carioni, Silvio Fanzon, Francisco Romero. A generalized conditional gradient method for dynamic inverse problems with optimal transport regularization. arXiv:2012.11706, 2020.
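For context, the continuity equation and the Benamou-Brenier energy referred to in the abstract take the following standard forms (up to normalisation conventions), coupling a curve of measures ρ_t to a velocity field v_t:

```latex
% Continuity equation constraining the pair (\rho, v):
\partial_t \rho_t + \nabla \cdot (\rho_t v_t) = 0
% Benamou--Brenier energy (dynamic formulation of optimal transport):
B(\rho, v) = \int_0^1 \int_{\mathbb{R}^d} |v_t(x)|^2 \, \mathrm{d}\rho_t(x) \, \mathrm{d}t
```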
|22 Oct 2021||Jingwei Liang (Shanghai Jiao Tong University, China)||
A framework for analyzing variance reduced stochastic gradient methods and a new one
Over the past years, variance-reduced stochastic gradient methods have become increasingly popular, not only in the machine learning community but also in other areas, including inverse problems and mathematical imaging. However, despite the variety of variance-reduced stochastic gradient descent methods, their analyses differ from one another. In this talk, I will first present a unified framework under which different variance-reduced stochastic gradient methods can be abstracted into one. Then I will introduce a new stochastic method for composite optimization problems and illustrate its performance on several imaging problems.
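To make the class of methods concrete (this is one well-known member of that class, SVRG, not the speaker's framework or new method), here is a minimal sketch on a toy least-squares problem; all names and parameter values are illustrative:

```python
import numpy as np

def svrg(grad_i, full_grad, w0, n, step, n_epochs=30, m=None, rng=None):
    """Minimal SVRG sketch (stochastic variance-reduced gradient).

    grad_i(w, i): gradient of the i-th summand f_i at w.
    full_grad(w): gradient of the full objective (1/n) sum_i f_i at w.
    """
    rng = np.random.default_rng(rng)
    m = m or n                      # inner-loop length, here one pass per epoch
    w_snap = np.asarray(w0, dtype=float).copy()
    for _ in range(n_epochs):
        mu = full_grad(w_snap)      # full gradient at the snapshot point
        w = w_snap.copy()
        for _ in range(m):
            i = rng.integers(n)
            # Control variate: unbiased estimate whose variance vanishes
            # as both w and w_snap approach a minimiser.
            g = grad_i(w, i) - grad_i(w_snap, i) + mu
            w = w - step * g
        w_snap = w
    return w_snap

# Toy problem: (1/n) sum_i (a_i^T w - b_i)^2 with a known solution.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
b = A @ w_true

gi = lambda w, i: 2 * A[i] * (A[i] @ w - b[i])
fg = lambda w: 2 * A.T @ (A @ w - b) / len(b)
w_hat = svrg(gi, fg, np.zeros(3), n=len(b), step=0.02)
```

The point of the control-variate step is that the stochastic gradient remains unbiased while its variance shrinks near the solution, allowing a constant step size where plain SGD would need a decaying one.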
|29 Oct 2021||Sebastian Banert (Lund, Sweden)||
Deep Learning for convex optimisation (and beyond)
We present ideas on how to create neural networks that are especially adapted to solving convex optimisation problems. The idea is that (contrary to classical approaches) we are less interested in worst-case performance than in adapting an algorithm to a specific class of problems which will be solved many times, each instance with a limited computational budget. Such problems typically appear, e.g., in the field of inverse problems. The first part of the presentation will deal with how standard first-order algorithms can be interpreted as neural networks and how deep learning can help in finding a robust choice of the parameters for the method. The second part will present an approach to enlarging the space of parameters by introducing deviations, with the intention of obtaining first-order information at points where it is as useful as possible. Both approaches have in common that we restrict the parameter space so that the resulting algorithm is convergent regardless of the outcome of the deep learning. This aims to give robustness in case the training data does not exactly match the real data and to avoid overfitting. The last part will be an outlook to some unpublished work on monotone inclusions (which generalise convex optimisation problems) and on the combination of deterministically accelerated methods in the spirit of Nesterov with the aforementioned deep learning techniques. The presentation includes joint work with Jonas Adler, Pontus Giselsson, Martin Morin, Ozan Öktem, Jevgenija Rudzusika, and Hamed Sadeghi.
|5 Nov 2021||Jemma Shipton (Exeter)||
Compatible finite element methods and parallel-in-time schemes for numerical weather prediction
I will describe Gusto, a dynamical core toolkit built on top of the Firedrake finite element library; present recent results from a range of test cases; and outline our plans for the development of time-parallel algorithms. Gusto uses compatible finite element methods, a form of mixed finite element method (meaning that different finite element spaces are used for different fields) that allows the exact representation of the standard vector calculus identities div curl = 0 and curl grad = 0. The popularity of these methods for numerical weather prediction is due to the flexibility to run on non-orthogonal grids, thus avoiding the communication bottleneck at the poles, while retaining the convergence and wave propagation properties required for accuracy. Although the flexibility of the compatible finite element spatial discretisation improves the parallel scalability of the model, it does not solve the scalability problem inherent in sequential timestepping: we need to find a way to perform parallel computations in the time domain. Exponential integrators, approximated by a near-optimal rational expansion, offer a way to take large timesteps and form the basis for parallel timestepping schemes based on wave averaging. I will describe the progress we have made towards implementing these schemes in Gusto.
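In standard notation, the two vector calculus identities that compatible finite element spaces reproduce exactly at the discrete level are:

```latex
\nabla \cdot (\nabla \times \mathbf{u}) = 0,
\qquad
\nabla \times (\nabla \phi) = \mathbf{0}
```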
|12 Nov 2021||Tony Shardlow (Bath)||
in-person (4 West Wolfson 1.7) Contaminant dispersal, numerical simulation, and stochastic PDEs
Atmospheric dispersal of ash and other contaminants is modelled by stochastic differential equations coupled to a large-scale weather model. We develop this model, as used by the UK Met Office, and discuss its numerical approximation. We show how a splitting method can be used to substantially improve the numerical simulation and justify this approach with a backward error analysis. We conclude the talk with a stochastic PDE description of the large-scale behaviour of such particle models, known as the Dean-Kawasaki model, and show how this stochastic PDE can be approximated numerically.
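The Met Office model itself is not given in the abstract; as a generic illustration of the particle-SDE idea, the following sketch integrates dX_t = u(X_t) dt + σ dW_t for an ensemble of particles with the basic Euler-Maruyama scheme (the constant wind and diffusion coefficient are illustrative assumptions, and the splitting method of the talk is not shown):

```python
import numpy as np

def euler_maruyama(u, sigma, x0, T, n_steps, n_particles, rng=None):
    """Euler-Maruyama simulation of dX_t = u(X_t) dt + sigma dW_t
    for an ensemble of independent particles in one dimension."""
    rng = np.random.default_rng(rng)
    dt = T / n_steps
    x = np.full(n_particles, x0, dtype=float)
    for _ in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt), size=n_particles)
        x += u(x) * dt + sigma * dw
    return x

# Particles advected by a constant wind with turbulent diffusion.
x = euler_maruyama(u=lambda x: 1.0, sigma=0.5, x0=0.0, T=1.0,
                   n_steps=200, n_particles=10_000, rng=0)
# The ensemble mean drifts to about u*T = 1, with spread sigma*sqrt(T) = 0.5.
```

The empirical density of such an ensemble is exactly the object whose large-scale fluctuations the Dean-Kawasaki stochastic PDE describes.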
|19 Nov 2021||Alex Bespalov (Birmingham)||
Hierarchical a posteriori error estimators and adaptive stochastic finite element methods
We present a framework for a posteriori error estimation in FEM-based approximations of partial differential equations (PDEs) with parametric or uncertain inputs. The underlying hierarchical a posteriori error estimates are useful not only for assessing approximation errors; they also provide practical error indicators for guiding adaptive refinement strategies. I will start by introducing the framework and the associated adaptive algorithm in the context of multilevel stochastic Galerkin finite element methods, where approximations are represented as finite (sparse) generalised polynomial chaos expansions with spatial coefficients residing in finite element spaces (this representation allows for independent local refinement of the finite element approximations for different spatial coefficients). We will discuss the convergence and rate optimality properties of the proposed adaptive algorithm and demonstrate its performance in numerical experiments for PDE problems with affine-parametric coefficients. I will then show how this framework of error estimation and adaptivity can be applied in the non-Galerkin setting of the stochastic collocation FEM (a sampling technique based on multivariable interpolation of discrete solutions sampled at the nodes of a sparse grid), in particular, for PDE problems with non-affine parametric coefficient dependence.
|26 Nov 2021||Lisa Maria Kreusser (Bath)||
Wasserstein GANs Work Because They Fail (to Approximate the Wasserstein Distance)
Wasserstein GANs are based on the idea of minimising the Wasserstein distance between a real and a generated distribution. After an introduction to the Wasserstein distance and Wasserstein GANs, I will present both theoretical and empirical evidence that the Wasserstein GAN loss is not a meaningful approximation of the Wasserstein distance. Moreover, the Wasserstein distance may not even be a desirable loss function for deep generative models, and the success of Wasserstein GANs can be attributed to a failure to approximate the Wasserstein distance. This is joint work with Jan Stanczuk, Christian Etmann and Carola-Bibiane Schönlieb.
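For context, Wasserstein GANs estimate the Wasserstein-1 distance through its Kantorovich-Rubinstein dual formulation, with the critic network standing in for the 1-Lipschitz test function f:

```latex
W_1(\mu, \nu)
  = \sup_{\operatorname{Lip}(f) \le 1}
    \mathbb{E}_{x \sim \mu}[f(x)] - \mathbb{E}_{y \sim \nu}[f(y)]
```

The talk's claim concerns how far the trained critic actually is from attaining this supremum over 1-Lipschitz functions.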
|3 Dec 2021||Matthew Griffith (Bath)||
hybrid, in-person (4 West Wolfson 1.7) and Zoom Accelerating climate and weather forecasts with faster multigrid solvers
Successful operational weather forecasting at the Met Office relies on obtaining an accurate solution to a very large system of equations in a timely manner. It is therefore crucial that the solver algorithm is fast and efficient, as it can account for 25-50% of model runtime. For its next-generation forecast model – codenamed LFRic – the Met Office is investigating a so-called “hybridised” solver algorithm, which shows its full potential when combined with multigrid techniques. I will introduce both the hybridisation and multigrid techniques on simplified problems, comparing and contrasting them with the current solver algorithm used in the Met Office model. I will talk about how this is generalised to the full model and present results comparing several solver algorithm configurations. Finally, I will discuss possible optimisations which can be made to decrease the time to solution for the hybridised multigrid solver algorithm.
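To illustrate the multigrid idea on a simplified problem (this is a generic two-grid cycle for the 1D Poisson equation, not the LFRic hybridised solver; all parameter choices are illustrative):

```python
import numpy as np

def two_grid_poisson(b, n_cycles=10, nu=3, omega=2/3):
    """Two-grid cycle for -u'' = f on (0,1) with zero Dirichlet data,
    discretised on n interior points (n odd), weighted-Jacobi smoothing."""
    n = len(b)
    h = 1.0 / (n + 1)

    def apply_A(v):                   # tridiagonal stencil (-1, 2, -1)/h^2
        av = 2.0 * v.copy()
        av[:-1] -= v[1:]
        av[1:] -= v[:-1]
        return av / h**2

    u = np.zeros(n)
    for _ in range(n_cycles):
        for _ in range(nu):           # pre-smoothing (weighted Jacobi)
            u += omega * (h**2 / 2) * (b - apply_A(u))
        r = b - apply_A(u)
        rc = 0.25 * (r[0:-2:2] + 2 * r[1:-1:2] + r[2::2])   # full weighting
        nc, hc = len(rc), 2 * h       # coarse grid: every other fine point
        Ac = (2 * np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1)) / hc**2
        ec = np.linalg.solve(Ac, rc)  # exact coarse-grid correction
        e = np.zeros(n)               # linear interpolation back to fine grid
        e[1::2] = ec
        e[0::2] = 0.5 * (np.r_[0.0, ec] + np.r_[ec, 0.0])
        u += e
        for _ in range(nu):           # post-smoothing
            u += omega * (h**2 / 2) * (b - apply_A(u))
    return u

# Manufactured problem: f = pi^2 sin(pi x), so u is close to sin(pi x).
n = 63
x = np.linspace(0, 1, n + 2)[1:-1]
b = np.pi**2 * np.sin(np.pi * x)
u = two_grid_poisson(b)
```

The smoother damps the high-frequency error components and the coarse-grid solve removes the smooth ones; recursing on the coarse solve instead of solving it directly gives the full multigrid V-cycle.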
|10 Dec 2021||Jacob Byrne, Sam Cook, Alexandros Gonos, Danny Goodacre, Tom Ryan||
in-person (4 West Wolfson 1.7) Year-Long-Project student presentations
|17 Dec 2021||
Subscribe to seminar calendar
You can subscribe to the NA calendar directly from your calendar client, including Outlook, Apple Calendar or Google Calendar. The web address of the calendar is this ICS link, which you will need to copy.
To subscribe to a calendar in Outlook:
- In Calendar view, select “Add Calendar” (large green +)
- Select “From Internet”
- Paste the ICS link, click OK, and click Yes to subscribe.
To subscribe to a calendar in Google Calendar:
- Go to link.
- On the left side go to "Other Calendars" and click on the dropdown.
- Choose "Add by URL".
- Paste the ICS link into the URL field of the calendar.
- Click on "Add Calendar" and wait for Google to import your events. This creates a calendar with a somewhat unreadable name.
- To give the calendar a readable name, click on the three vertical dots next to the newly created calendar and select Settings.
- Choose a name for the calendar, e.g. Numerical Analysis @ Bath, and click the back button at the top left.
How to get to Bath
See here for instructions on how to get to Bath. Please email Matthias Ehrhardt (email@example.com) if you intend to come by car and require a parking permit for the Bath University Campus for the day.
Tips for giving talks
Tips for new students on giving talks
Since the audience of the NA seminar contains both PhD students and staff with quite wide interests and backgrounds, the following are some guidelines/hints to make sure people don't give you evil looks at lunch afterwards.
Before too much time passes in your talk, ideally the audience should know the answers to the following 4 questions:
- What is the problem you're considering?
- Why do you find this interesting?
- What has been done before on this problem/what's the background?
- What is your approach/what are you going to talk about?
There are lots of different ways to communicate this information. One way, if you're doing a slide show, could be for the first 4 slides to cover these 4 questions; although in this case you may want to revisit these points later on in the talk (e.g. to give more detail).
- "vertebrate style" (structure hidden inside - like the skeleton of a vertebrate) = good for detective stories, bad for maths talks.
- "crustacean style" (structure visible from outside - like the skeleton of a crustacean) = bad for detective stories, good for maths talks.