Fridays at 12.15 (online)
Everyone is welcome at these talks.
2 Oct 2020 | Pan Kessel (TU Berlin, Germany) | Teams
Deep Generative Models in Lattice Field Theories
Recent work has taken promising first steps towards an application of deep generative models in lattice field theories. In these approaches, deep generative models are used to generate field configurations. These configurations can then be used to estimate physical observables. In this talk, I will give a (hopefully) pedagogical overview of the subject. In particular, I will outline how these methods can be used to estimate observables which cannot directly be measured by MCMC methods.
9 Oct 2020 | Luca Zanetti (Bath) | Teams
Spectral methods for graph clustering, old and new
Graph clustering is one of the fundamental problems of data science and network analysis. Informally, the aim of graph clustering is to partition a network into subsets (clusters) so that nodes belonging to the same cluster are better connected to one another than nodes belonging to different clusters. A popular approach to graph clustering is to use the eigenvectors of the graph Laplacian to embed the nodes of the network in Euclidean space, and then use a geometric clustering scheme, such as k-means, to partition these embedded nodes. This approach is called Spectral Clustering. In this talk I will survey my work on Spectral Clustering. In particular, I would like to highlight my contributions in bridging the gap between our theoretical and practical understanding of Spectral Clustering. Finally, I will introduce recent work aiming to generalise spectral techniques from undirected to directed networks.
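The spectral-clustering recipe described above can be sketched in a few lines. The snippet below is a deliberately minimal two-cluster variant that splits on the sign of the Fiedler vector instead of running k-means; the function name and the toy graph are illustrative choices, not the speaker's code.

```python
import numpy as np

def spectral_bipartition(adj):
    """Split a graph into two clusters via the sign of the Fiedler vector.

    adj: symmetric 0/1 adjacency matrix (numpy array).
    Returns an array of cluster labels in {0, 1}.
    """
    degrees = adj.sum(axis=1)
    laplacian = np.diag(degrees) - adj      # unnormalised graph Laplacian
    _, eigvecs = np.linalg.eigh(laplacian)  # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]                 # eigenvector of 2nd-smallest eigenvalue
    return (fiedler > 0).astype(int)

# Two 4-cliques joined by a single bridge edge: the sign pattern recovers the cliques.
A = np.zeros((8, 8))
A[:4, :4] = 1 - np.eye(4)   # clique on nodes 0..3
A[4:, 4:] = 1 - np.eye(4)   # clique on nodes 4..7
A[3, 4] = A[4, 3] = 1       # bridge edge
labels = spectral_bipartition(A)
```

In practice one embeds the nodes with several eigenvectors (often of a normalised Laplacian) and runs k-means on the embedded points, as the abstract describes.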
16 Oct 2020 | Yuji Nakatsukasa (Oxford) | Teams
Fast and stable randomized low-rank matrix approximation
Randomized SVD has become an extremely successful approach for efficiently computing a low-rank approximation of matrices. In particular the paper by Halko, Martinsson, and Tropp (SIREV 2011) contains extensive analysis, and has made it a very popular method. The typical complexity for a rank-r approximation of m×n matrices is O(mn log n + (m+n)r^2) for dense matrices. The classical Nyström method is much faster, but applicable only to positive semidefinite matrices. This work studies a generalization of Nyström’s method applicable to general matrices, and shows that (i) it has near-optimal approximation quality comparable to competing methods, (ii) the computational cost is the near-optimal O(mn log n + r^3) for dense matrices, with small hidden constants, and (iii) crucially, it can be implemented in a numerically stable fashion despite the presence of an ill-conditioned pseudoinverse. Numerical experiments illustrate that generalized Nyström can significantly outperform state-of-the-art methods, especially when r ≫ 1, achieving up to a 10-fold speedup. The method is also well suited to updating and downdating the matrix.
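The generalized Nyström idea admits a compact statement: sketch A from the right with a random X and from the left with an oversampled random Y, then form AX (YᵀAX)⁺ YᵀA. The following is a simplified dense-matrix illustration of that recipe, using a least-squares solve in place of an explicit pseudoinverse; it is a sketch of the idea, not the paper's optimized, provably stable implementation.

```python
import numpy as np

def generalized_nystrom(A, r, oversample=10, seed=0):
    """Rank-r approximation A ≈ AX @ B of a general matrix A."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    X = rng.standard_normal((n, r))               # right sketch, n x r
    Y = rng.standard_normal((m, r + oversample))  # oversampled left sketch
    AX = A @ X                                    # m x r
    YA = Y.T @ A                                  # (r+p) x n
    core = YA @ X                                 # (r+p) x r core matrix
    # Least-squares solve instead of forming an explicit, possibly
    # ill-conditioned pseudoinverse of the core matrix.
    B = np.linalg.lstsq(core, YA, rcond=None)[0]  # r x n
    return AX, B

# An exactly rank-5 matrix is recovered essentially exactly at r = 5.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))
AX, B = generalized_nystrom(A, r=5)
err = np.linalg.norm(A - AX @ B) / np.linalg.norm(A)
```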
23 Oct 2020 | Constantino Antonio García Martínez (University of Santiago de Compostela, Spain) | Teams
Applications of Stochastic Differential Equations in Machine Learning
Stochastic Differential Equations (SDEs, also known as Langevin equations) have been extensively used in many fields, including physics and finance, due to their ability to describe complex phenomena with physically interpretable equations. Intuitively, an SDE couples a deterministic trend with noisy fluctuations that introduce uncertainty in the evolution of the system, hence generalizing ordinary differential equations to the stochastic framework. Recently, the machine learning community has also recognized the modeling capabilities that SDEs offer, resulting in exciting new research. In this talk we will discuss two applications of SDEs to machine learning. First, we will discuss how SDEs can be applied to system identification problems where the observed number of variables is less than the degrees of freedom of the dynamics (which is linked to the physical problem of phase space reconstruction). Then, we will introduce approaches aiming to construct stochastic processes that can be used as distributions on functions.
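The "deterministic trend plus noisy fluctuations" picture can be made concrete with the standard Euler–Maruyama discretisation. The snippet simulates an Ornstein–Uhlenbeck process dX = -θX dt + σ dW, a textbook example chosen here for illustration rather than drawn from the talk:

```python
import numpy as np

def euler_maruyama_ou(theta, sigma, x0, t_end, n_steps, n_paths, seed=0):
    """Simulate n_paths trajectories of dX = -theta*X dt + sigma dW."""
    rng = np.random.default_rng(seed)
    dt = t_end / n_steps
    x = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        drift = -theta * x * dt                                 # deterministic trend
        noise = sigma * rng.normal(0.0, np.sqrt(dt), n_paths)   # noisy fluctuation
        x = x + drift + noise
    return x

# Long-time statistics approach the stationary law N(0, sigma^2 / (2*theta)).
samples = euler_maruyama_ou(theta=1.0, sigma=0.5, x0=5.0,
                            t_end=10.0, n_steps=2000, n_paths=20000)
```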
30 Oct 2020 | Christoph Brune (Twente, Netherlands) | Teams
Learned SVD - Solving Inverse Problems via Hybrid Autoencoding
Our world is full of model-driven data where effective mappings between data manifolds are desired. There is an increasing demand for understanding combined model- and data-driven methods. For example, in data assimilation, model discovery and learned model-order reduction gain importance. We propose a nonlinear, learned singular value decomposition (L-SVD), which combines autoencoders that simultaneously learn and connect latent codes for desired signals and given measurements. We provide a convergence analysis for a specifically structured L-SVD that acts as a regularisation method. In a more general setting, we investigate the topic of model reduction via data dimensionality reduction to obtain a regularised inversion. We present a promising direction for solving inverse problems in data-driven cases where the underlying models are not fully available or have a complex structure. We show that the building blocks of learned inversion maps can be obtained automatically, with improved performance over classical methods and better interpretability than black-box methods. Keywords: representation learning, manifold learning, inverse problems, regularization theory, model-order reduction, model learning, error estimation
6 Nov 2020 | Silvia Gazzola (Bath) | Teams
Krylov Methods for Low-Rank Regularisation
In this talk we introduce new solvers for the computation of low-rank approximate solutions to large-scale linear problems, with a particular focus on the regularisation of linear inverse problems. By adopting an iteratively reweighted norm approach, the nuclear norm regularised problem is reformulated as a sequence of quadratic problems, which can then be efficiently solved using Krylov methods, giving rise to an inner-outer iteration scheme. Numerical experiments show that our new solvers are competitive with other state-of-the-art solvers for low-rank problems, and deliver reconstructions of increased quality with respect to other classical Krylov methods. This is joint work with James Nagy and Chang Meng, both at Emory University, USA.
13 Nov 2020 | Andreas Langer (Coventry) | Teams
Domain Decomposition for Total Variation Minimization
Total variation regularisation is an important tool for solving inverse imaging problems. Over the last decades, many different approaches and algorithms for minimizing the total variation have been introduced in the literature. These standard techniques are formulated iteratively and sequentially, and are therefore unable to handle large-scale simulations in acceptable computational time. For such large problems we need methods that reduce the problem to a finite sequence of subproblems of a more manageable size, each of which can perhaps be solved by one of the standard techniques. With this aim, we introduce domain decomposition methods for total variation minimization. The main idea of domain decomposition is to split the space of the initial problem into several smaller subspaces. By restricting the function to be minimized to these subspaces, one obtains a sequence of local problems, which may be solved more easily and quickly than the original problem. The solution of the initial problem is then obtained from the solutions of the local subproblems by glueing them together. In the case of domain decomposition for the non-smooth and non-additive total variation, the crucial difficulty is the correct treatment of the interfaces between the domain decomposition patches. Due to the non-smoothness and non-additivity, one encounters additional difficulties in showing convergence of more general subspace correction strategies to global minimizers; in particular, there exist counterexamples indicating failure of splitting techniques. Nevertheless, in this talk we propose overlapping domain decomposition algorithms for the total variation minimization problem with a guarantee of convergence to a minimizer of the original functional. The analysis is based on the relation between the primal (original) total variation minimization problem and its (pre-)dual formulation.
To the best of our knowledge, this is the first successful domain decomposition strategy for total variation minimization with a rigorous convergence analysis in an infinite-dimensional setting. We provide numerical experiments showing the successful application of the algorithms.
20 Nov 2020 | Olivier Fercoq (Télécom Paris, France) | Teams
Random extrapolation for primal-dual coordinate descent
We introduce a randomly extrapolated primal-dual coordinate descent method that adapts to the sparsity of the data matrix and to favorable structures of the objective function. Our method updates only a subset of primal and dual variables when the data are sparse, and it uses large step sizes when the data are dense, retaining the benefits of the specific methods designed for each case. In addition to adapting to sparsity, our method attains fast convergence guarantees in favorable cases without any modifications. In particular, we prove linear convergence under metric subregularity, which applies to strongly convex-strongly concave problems and piecewise linear-quadratic functions. In the general convex-concave case, we show almost sure convergence of the sequence and optimal sublinear convergence rates for the primal-dual gap and objective values. Numerical evidence demonstrates the state-of-the-art empirical performance of our method in sparse and dense settings, matching and improving upon existing methods.
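For context, methods like this randomize over coordinates a deterministic primal-dual iteration with full updates and extrapolation (Chambolle-Pock style). The sketch below applies that baseline scheme, not the speaker's randomized method, to a small 1D total-variation denoising problem; the operator, parameters, and function name are illustrative choices.

```python
import numpy as np

def pdhg_tv1d(b, lam, n_iter=500):
    """Solve min_x 0.5*||x - b||^2 + lam*||D x||_1 (1D TV denoising)
    with the full (non-randomized) primal-dual hybrid gradient method."""
    n = len(b)
    D = np.diff(np.eye(n), axis=0)      # forward-difference operator, (n-1) x n
    L = np.linalg.norm(D, 2)            # operator norm ||D||
    tau = sigma = 0.99 / L              # step sizes satisfying tau*sigma*L^2 < 1
    x = b.copy()
    x_bar = x.copy()
    y = np.zeros(n - 1)
    for _ in range(n_iter):
        # dual step: prox of (lam*||.||_1)^* is projection onto [-lam, lam]
        y = np.clip(y + sigma * (D @ x_bar), -lam, lam)
        # primal step: prox of the quadratic data term 0.5*||. - b||^2
        x_new = (x - tau * (D.T @ y) + tau * b) / (1.0 + tau)
        x_bar = 2.0 * x_new - x         # extrapolation
        x = x_new
    return x

# Denoising a noisy step signal reduces its total variation.
rng = np.random.default_rng(0)
b = np.concatenate([np.zeros(20), np.ones(20)]) + 0.1 * rng.standard_normal(40)
x = pdhg_tv1d(b, lam=0.3)
```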
27 Nov 2020 | Konrad Polthier (FU Berlin, Germany) | Teams
New Shapes, New Materials and New Processes
The classic geometric view of smooth surfaces hardly fits the complex and often multiscale physical shapes in nature and, nowadays, in industrial applications. In this talk we will highlight new classes of multi-layered surface shapes derived from recent algorithms in geometry processing and related to classic complex analysis. Multivalued functions and differential forms naturally lead to the concept of branched covering surfaces and, more generally, of branched covering manifolds in the spirit of Hermann Weyl’s book ‘The Idea of a Riemann Surface’ from 1913. We will illustrate and discretize basic concepts of branched (simplicial) covering surfaces, starting from complex analysis and surface theory up to their recent appearance in geometry processing algorithms and artistic mathematical designs. Applications will touch on discrete and differential surface modeling, image and geometry retargeting, optimal surfaces, and novel woven shape representations.
4 Dec 2020 | Tobias Weinzierl (Durham) | Teams
ExaHyPE enclaves explained - about the ideas, vision and some algorithms behind the ExaHyPE software
This talk orbits around the ExaHyPE software: originally funded by an EU H2020 grant, used in several H2020/ERC projects, and today driven mainly by EPSRC’s ExCALIBUR programme. ExaHyPE is an engine for writing solvers for hyperbolic differential equations given in first-order form. Similar to graphics engines in computer games offering particular graphics types, it requires users to buy into a certain compute-n-feel: adaptive Cartesian meshes, ADER-DG of arbitrarily high order, and particular algorithmic workflows. We call this the Hollywood principle - don’t call us, we call you - as the user delegates to the engine the ownership of what computations are done, when and where. In return, ExaHyPE offers a clear separation of concerns, and allows users who prefer to focus on numerical modelling or the application domain to quickly prototype massively parallel solvers with fancy computer science ingredients. This talk introduces the vision behind ExaHyPE and its key ingredients, before it introduces the idea of enclave tasking. Enclave tasking is a novel tasking workhorse which allows the engine - under the hood, i.e. totally hidden from the user - to obtain high single-node concurrency, to overlap expensive low-concurrency steps such as AMR or MPI data exchange with computations, to balance workload lightly between ranks, and to realise a resilient code setup. We close the discussion with an outlook on upcoming features, shortcomings of the present software, and a discussion of things we urgently miss within our core team and thus seek collaborators for.
11 Dec 2020 | Joseph Friar, Tom Sales, Roxani Sofia, Darrah Kirkpatrick, Patrick Lavelle
MMath YLP presentations
Joseph Friar: Topics in landslide prediction; Tom Sales: Topics in Geometric Numerical Integration; Roxani Sofia, Darrah Kirkpatrick, Patrick Lavelle: Machine Learning and Differential Equations
You can subscribe to the NA calendar directly from your calendar client, including Outlook, Apple’s Calendar or Google Calendar. The web address of the calendar is this ICS link, which you will need to copy.
To subscribe to a calendar in Outlook:
- In Calendar view, select “Add Calendar” (large green +)
- Select “From Internet”
- Copy and paste the ICS link, click OK, and click Yes to subscribe.
To subscribe to a calendar in Google Calendar:
- Open Google Calendar.
- On the left side go to "Other Calendars" and click on the dropdown.
- Choose "Add by URL".
- Copy and paste the ICS link into the URL field of the calendar.
- Click on "Add Calendar" and wait for Google to import your events. This creates a calendar with a somewhat unreadable name.
- To give the calendar a readable name, click on the three vertical dots next to the newly created calendar and select Settings.
- Choose a name for the calendar, e.g. Numerical Analysis @ Bath, and click the back button at the top left.
How to get to Bath
See here for instructions on how to get to Bath. Please email Matthias Ehrhardt (firstname.lastname@example.org) if you intend to come by car and require a parking permit for the Bath University Campus for the day.
Tips for giving talks
Tips for new students on giving talks
Since the audience of the NA seminar contains both PhD students and staff with quite wide interests and backgrounds, the following are some guidelines/hints to make sure people don't give you evil looks at lunch afterwards.
Before too much time passes in your talk, ideally the audience should know the answers to the following 4 questions:
- What is the problem you're considering?
- Why do you find this interesting?
- What has been done before on this problem/what's the background?
- What is your approach/what are you going to talk about?
There are lots of different ways to communicate this information. One way, if you're doing a slide show, could be for the first 4 slides to cover these 4 questions; although in this case you may want to revisit these points later on in the talk (e.g. to give more detail).
- "vertebrate style" (structure hidden inside - like the skeleton of a vertebrate) = good for detective stories, bad for maths talks.
- "crustacean style" (structure visible from outside - like the skeleton of a crustacean) = bad for detective stories, good for maths talks.