
Algorithms & Computationally Intensive Inference seminars

The seminars take place on Fridays at 1 pm (UK time) in room MB0.08, in a hybrid format.

2021-2022 Organisers: Alice Corbella and Lyudmila Grigoryeva

If you would like to speak, or you want to be included in any emails, please contact one of the organisers.


2021/22 Term 1

The list of confirmed speakers.

Date Speaker Title F2F Slides Video
Week 1 08/10 Lorenzo Pacchiardi

Score Matched Conditional Exponential Families for Likelihood-Free Inference


Abstract: To perform Bayesian inference for stochastic simulator models for which the likelihood is not accessible, Likelihood-Free Inference (LFI) relies on simulations from the model. Standard LFI methods can be split according to how these simulations are used: to build an explicit Surrogate Likelihood, or to accept/reject parameter values according to a measure of distance from the observations (Approximate Bayesian Computation, ABC). In both cases, simulations are adaptively tailored to the value of the observation. Here, we generate parameter-simulation pairs from the model independently of the observation, and use them to learn a conditional exponential family likelihood approximation; to parametrize it, we use Neural Networks whose weights are tuned with Score Matching. With our likelihood approximation, we can employ MCMC for doubly intractable distributions to draw samples from the posterior for any number of observations without additional model simulations, with performance competitive with comparable approaches. Further, the sufficient statistics of the exponential family can be used as summaries in ABC, outperforming the state-of-the-art method in five different models with known likelihood. Finally, we apply our method to a challenging model from meteorology.
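The score-matched approach above is not reproduced here, but the ABC baseline it is compared against can be sketched in a few lines. This is a minimal rejection-ABC sketch; the Gaussian toy model, the prior, the sample-mean summary, and the tolerance `eps` are all illustrative choices, not details from the talk:

```python
import numpy as np

def abc_rejection(observed, prior_sample, simulate, summary, eps,
                  n_draws=10_000, rng=None):
    """Vanilla ABC rejection: keep parameter draws whose simulated summary
    lies within Euclidean distance eps of the observed summary."""
    rng = np.random.default_rng(rng)
    s_obs = summary(observed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        s_sim = summary(simulate(theta, rng))
        if np.linalg.norm(s_sim - s_obs) < eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy example: infer the mean of a Gaussian with known unit variance.
rng = np.random.default_rng(0)
obs = rng.normal(2.0, 1.0, size=50)
post = abc_rejection(
    observed=obs,
    prior_sample=lambda r: r.normal(0.0, 5.0),
    simulate=lambda th, r: r.normal(th, 1.0, size=50),
    summary=lambda x: np.atleast_1d(x.mean()),
    eps=0.1,
)
```

The point of the talk is precisely what this sketch leaves open: how to choose good summary statistics (there, the learned sufficient statistics of the exponential family) rather than hand-picking them.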

Week 2 15/10 Internal jam session: 2–5 minute introductions to ongoing research projects or new ideas
Week 3 22/10 Petros Dellaportas Negligible-cost Variance Reduction for Metropolis-Hastings Chains    

Abstract: We provide a general methodology to construct control variates for any discrete-time random walk Metropolis or Metropolis-adjusted Langevin Markov chain that can achieve, in a post-processing manner and with negligible additional computational cost, impressive variance reduction when compared to standard MCMC ergodic averages. Our proposed estimators are based on an approximate solution of the Poisson equation for multivariate Gaussian target densities of any dimension.
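As a toy illustration of the post-processing idea (not the Poisson-equation construction of the talk), Stein-type control variates with known zero mean under the target can be regressed out of an ergodic average after the chain has been run. The random-walk Metropolis chain and the N(0,1) target below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def rwm(n, step=1.0):
    """Random-walk Metropolis targeting N(0, 1)."""
    x, out = 0.0, np.empty(n)
    for i in range(n):
        prop = x + step * rng.normal()
        if np.log(rng.random()) < 0.5 * (x * x - prop * prop):
            x = prop
        out[i] = x
    return out

xs = rwm(50_000)
f = xs**2                                  # estimate E[x^2] = 1
# Stein control variates: both have expectation 0 under N(0,1).
h = np.stack([-xs, 1.0 - xs**2], axis=1)
# Choose coefficients by least squares to minimise the variance of f + h @ c.
c = np.linalg.lstsq(h - h.mean(0), -(f - f.mean()), rcond=None)[0]
plain = f.mean()
cv = (f + h @ c).mean()
```

This example is deliberately degenerate: the second control variate reproduces f exactly, so the post-processed estimator has zero variance. In general one only obtains a (possibly large) variance reduction, which is the regime the talk's methodology addresses.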

Week 4 29/10 Sanmitra Ghosh Variational Inference for Nonlinear ODEs      

Abstract: Complex biological systems are often modelled using nonlinear ordinary differential equations (ODEs), which provide a rich framework for describing the dynamic behaviour of many interacting physical variables representing quantities of biological importance. Bayesian inference of unknown quantities in such models is carried out using MCMC. However, MCMC incurs a significant computational cost as it requires repeated evaluation of various iterative algorithms that seek the numerical solution of a nonlinear ODE. Variational inference, as an optimisation-based alternative to MCMC, has the potential to expedite Bayesian inference for ODEs. Despite its potential usefulness in ODE inference problems, variational inference in its classical formulation can only be applied to conjugate models. We thus apply the "reparameterisation trick", popularised recently in machine learning applications, to obtain a "black-box" reformulation of variational inference for ODEs. Our proposed variational formulation does not depend on any emulation of the ODE solution and only requires the extension of automatic differentiation to an ODE. We achieve this through a novel and holistic approach that uses both forward and adjoint sensitivity analysis techniques. Consequently, this approach can cater to both small and large ODE models efficiently. Furthermore, ODEs can be used to approximate diffusion processes and stochastic kinetic systems. We show how our variational formulation can be used to carry out inference for such stochastic dynamical systems. We empirically evaluate the proposed inference method on some widely used mechanistic models. The proposed inference method produced a reliable approximation to the posterior distribution, with a significant reduction in execution time, in comparison to MCMC.
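The "reparameterisation trick" at the core of this reformulation can be shown on a conjugate toy problem where the exact posterior is known. The sketch below fits a Gaussian variational family q(z) = N(mu, s²) to the posterior of a Gaussian mean by stochastic gradient ascent on the ELBO; in the talk's setting z would be ODE parameters and the gradient of the log joint would come from sensitivity analysis rather than a closed form. All tuning constants here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(1.0, 1.0, size=20)          # data, model x_i ~ N(z, 1)
n = len(x)

# Conjugate reference: with a N(0,1) prior, the posterior is
# N(sum(x)/(n+1), 1/(n+1)), so we can check the variational fit.
true_mean = x.sum() / (n + 1)
true_sd = (n + 1) ** -0.5

mu, log_s = 0.0, 0.0
lr, batch = 0.005, 64
for _ in range(4000):
    eps = rng.normal(size=batch)
    z = mu + np.exp(log_s) * eps           # reparameterisation z = mu + s*eps
    g = x.sum() - (n + 1) * z              # grad of log joint for this model
    mu += lr * g.mean()                    # unbiased ELBO gradient in mu
    # chain rule through z = mu + s*eps, plus the entropy term d(log s)/d(log s) = 1
    log_s += lr * ((g * eps).mean() * np.exp(log_s) + 1.0)
```

Because the target is Gaussian, the fitted q should essentially recover the exact posterior; the value of the black-box formulation is that the same update only needs gradients, however the log joint is computed.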

Week 5 05/11 Azadeh Khaleghi On Some Possibilities and Limitations of Restless Multi-Armed Bandits    

Abstract: The Multi-Armed Bandit (MAB) problem is one of the most central instances of sequential decision making under uncertainty, and plays a key role in online learning and optimization. MABs arise in a variety of modern real-world applications, such as online advertisement, Internet routing, and sequential portfolio selection, to name only a few. In this problem, a forecaster aims to maximize the expected sum of the rewards actively collected from unknown processes. Stochastic MABs are typically studied under the assumption that the rewards are i.i.d. However, this assumption does not necessarily hold in practice. In this talk I will discuss some possibilities and limitations of a more challenging, yet more realistic (restless) MAB setting, where the reward distributions may exhibit long-range dependencies.
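For context, the classical i.i.d. setting that the talk departs from is solved by index policies such as UCB1. The sketch below runs UCB1 on independent Bernoulli arms (the arm means and horizon are illustrative); the restless, long-range-dependent setting of the talk is precisely where guarantees like this break down:

```python
import numpy as np

def ucb1(means, horizon, rng):
    """UCB1 on independent Bernoulli arms: pull the arm maximising
    empirical mean + sqrt(2 log t / n_pulls)."""
    k = len(means)
    counts = np.ones(k)
    sums = np.array([float(rng.random() < m) for m in means])  # pull each once
    for t in range(k, horizon):
        ucb = sums / counts + np.sqrt(2.0 * np.log(t + 1) / counts)
        a = int(np.argmax(ucb))
        sums[a] += rng.random() < means[a]
        counts[a] += 1
    return counts

rng = np.random.default_rng(3)
counts = ucb1([0.3, 0.5, 0.7], horizon=5000, rng=rng)
```

Under i.i.d. rewards, the number of pulls of each suboptimal arm grows only logarithmically in the horizon, so the best arm dominates the pull counts.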


Week 6 12/11 Sam Power Accelerated Sampling on Discrete Spaces with Non-Reversible Markov Jump Processes    

Abstract: In Bayesian inference problems and elsewhere, Markov Chain Monte Carlo (MCMC) algorithms are an indispensable tool for sampling from complex probability distributions. On continuous state-spaces, there has been a great deal of successful work on how to construct efficient MCMC dynamics which can converge quickly, under very general circumstances. Much of this success has stemmed from identifying continuous-time dynamical processes (ODEs, SDEs, PDMPs) which admit the desired invariant measure, and then discretising those processes to form tractable discrete-time chains. This approach has apparently seen less use in the setting of discrete spaces.

In this work, we aim to bridge this gap by identifying 'canonical' Markov processes (both reversible and non-reversible) on structured discrete spaces which admit a given invariant measure, and then use them to derive new algorithms for efficient sampling on discrete spaces. The algorithms are parameter-free (no tuning is required) and can be simulated directly in continuous time, easily and without discretisation error. We provide theoretical results supporting the use of non-reversible dynamics, and a range of numerical experiments demonstrate the practical benefits of our algorithms.

This is joint work with Jacob Vorstrup Goldman (Cambridge).
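A minimal reversible example of such a continuous-time process on a discrete space can be simulated exactly with the Gillespie algorithm: jump from x to a neighbour y at rate min(1, π(y)/π(x)), so that π is invariant by detailed balance. The path-graph state space and target below are illustrative, and this is the reversible baseline, not one of the talk's non-reversible constructions:

```python
import numpy as np

rng = np.random.default_rng(4)
pi = np.array([1.0, 2.0, 3.0, 2.0, 1.0])   # unnormalised target on {0,...,4}
pi /= pi.sum()

# Gillespie simulation of a Markov jump process: exponential holding
# times, then jump to a neighbour chosen proportionally to its rate.
x, t_total = 0, 0.0
occupation = np.zeros(len(pi))
while t_total < 20_000.0:
    nbrs = [y for y in (x - 1, x + 1) if 0 <= y < len(pi)]
    rates = np.array([min(1.0, pi[y] / pi[x]) for y in nbrs])
    hold = rng.exponential(1.0 / rates.sum())
    occupation[x] += hold                   # time-average estimates pi
    t_total += hold
    x = nbrs[rng.choice(len(nbrs), p=rates / rates.sum())]

est = occupation / occupation.sum()
```

Note the two properties the abstract emphasises: there is no step size to tune, and the continuous-time trajectory is simulated without discretisation error.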

Week 7 19/11 Marta Catalano A Wasserstein Index of Dependence for Bayesian Nonparametric Modeling    

Abstract: Optimal transport (OT) methods and Wasserstein distances are flourishing in many scientific fields as an effective means for comparing and connecting different random structures. In this talk we describe the first use of an OT distance between Lévy measures with infinite mass to solve a statistical problem. Complex phenomena often yield data from different but related sources, which are ideally suited to Bayesian modeling because of its inherent borrowing of information. In a nonparametric setting, this is regulated by the dependence between random measures: we derive a general Wasserstein index for a principled quantification of the dependence, gaining insight into the models’ deep structure. It also allows for an informed prior elicitation and provides a fair ground for model comparison. Our analysis unravels many key properties of the OT distance between Lévy measures, whose interest goes beyond Bayesian statistics, extending to the theory of partial differential equations and of Lévy processes.
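The talk's OT distance is between Lévy measures with infinite mass, which is considerably more delicate than the textbook case; but the basic object is easy to show. In one dimension, the 1-Wasserstein distance between two equal-size empirical measures reduces to matching sorted samples (the samples below are illustrative):

```python
import numpy as np

def wasserstein_1d(x, y):
    """1-Wasserstein distance between two equal-size empirical measures:
    in one dimension the optimal coupling matches sorted samples."""
    assert len(x) == len(y)
    return float(np.mean(np.abs(np.sort(x) - np.sort(y))))

rng = np.random.default_rng(5)
a = rng.normal(0.0, 1.0, 10_000)
b = rng.normal(1.0, 1.0, 10_000)   # same shape, shifted by 1
```

For a pure location shift the distance equals the shift, which is what makes Wasserstein distances attractive as interpretable indices of discrepancy (and, in the talk, of dependence).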

Week 8 26/11 Edward Ionides Bagging and Blocking: Inference via Particle Filters for Interacting Dynamic Systems      

Abstract: Infectious disease transmission is a nonlinear partially observed stochastic dynamic system of topical interest. For low-dimensional systems, models can be fitted to time series data using Monte Carlo particle filter methods. As dimension increases, for example when analyzing epidemics among multiple spatially coupled populations, basic particle filter methods rapidly degenerate. A collection of independent Monte Carlo calculations can be combined to give a global filtering solution with favorable theoretical scaling properties. The independent Monte Carlo calculations are called bootstrap replicates, and their aggregation is called a bagged filter. Bagged filtering is effective at likelihood evaluation for a model of measles transmission within and between cities. A blocked particle filter also works well at this task. Bagged and blocked particle filters can both be coerced into carrying out likelihood maximization by iterative application to an extension of the model that has stochastically perturbed parameters. Numerical experiments are carried out using the R package spatPomp.
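The building block that bagging and blocking both start from is the bootstrap particle filter. The sketch below evaluates the likelihood of a linear-Gaussian state-space model (chosen so the exact answer is available from the Kalman filter as a check); the model and all constants are illustrative, and a bagged filter would average several independent replicates of `bootstrap_pf_loglik`:

```python
import numpy as np

rng = np.random.default_rng(6)
phi, q, r, T = 0.9, 1.0, 1.0, 50          # AR(1) state, Gaussian observations
P0 = q**2 / (1 - phi**2)                  # stationary state variance

# Simulate data: x_t = phi * x_{t-1} + N(0, q^2),  y_t = x_t + N(0, r^2).
x = rng.normal(0.0, np.sqrt(P0))
ys = []
for _ in range(T):
    ys.append(x + rng.normal(0.0, r))
    x = phi * x + rng.normal(0.0, q)

def kalman_loglik(ys):
    """Exact log-likelihood, for reference."""
    m, P, ll = 0.0, P0, 0.0
    for y in ys:
        S = P + r**2
        ll += -0.5 * (np.log(2 * np.pi * S) + (y - m) ** 2 / S)
        K = P / S
        m, P = m + K * (y - m), (1 - K) * P
        m, P = phi * m, phi**2 * P + q**2
    return ll

def bootstrap_pf_loglik(ys, n_part=2000):
    """Bootstrap particle filter: weight by the observation density,
    resample, propagate through the state dynamics."""
    xp = rng.normal(0.0, np.sqrt(P0), n_part)
    ll = 0.0
    for y in ys:
        logw = -0.5 * (np.log(2 * np.pi * r**2) + (y - xp) ** 2 / r**2)
        ll += np.log(np.mean(np.exp(logw - logw.max()))) + logw.max()
        w = np.exp(logw - logw.max()); w /= w.sum()
        xp = xp[rng.choice(n_part, n_part, p=w)]       # multinomial resampling
        xp = phi * xp + rng.normal(0.0, q, n_part)
    return ll

ll_kf = kalman_loglik(ys)
ll_pf = bootstrap_pf_loglik(ys)
```

In one dimension this works well; the degeneracy the abstract describes appears when the state is high-dimensional (many coupled cities), which is what motivates the bagged and blocked variants.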

Week 9 03/12 Xiaocheng Shang Accurate and Efficient Numerical Methods for Molecular Dynamics and Data Science Using Adaptive Thermostats    

Abstract: I will discuss the design of state-of-the-art numerical methods for sampling probability measures in high dimension where the underlying model is only approximately identified with a gradient system. Extended stochastic dynamical methods, known as adaptive thermostats, that automatically correct thermodynamic averages using a negative feedback loop are discussed; these have applications to molecular dynamics and to Bayesian sampling techniques arising in emerging machine learning applications. I will also discuss the characteristics of different algorithms, including the convergence of averages and the accuracy of numerical discretizations.
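A minimal sketch of the negative-feedback idea, assuming the simplest stochastic gradient Nosé–Hoover-type update and a 1-D Gaussian target (not the specific schemes of the talk): an auxiliary variable xi rescales friction so that the kinetic temperature E[p²] is driven towards its target value of 1, compensating for inaccuracies in the force:

```python
import numpy as np

rng = np.random.default_rng(7)
grad_U = lambda x: x                     # target N(0,1), U(x) = x^2 / 2

h, A, n_steps, n_chains = 0.05, 1.0, 10_000, 100
x = np.zeros(n_chains)
p = rng.normal(size=n_chains)
xi = np.full(n_chains, A)                # adaptive friction (thermostat) variable
samples = []
for i in range(n_steps):
    # Euler-type update with injected noise of amplitude sqrt(2*A*h)
    p += -h * xi * p - h * grad_U(x) + np.sqrt(2 * A * h) * rng.normal(size=n_chains)
    x += h * p
    xi += h * (p * p - 1.0)              # negative feedback on kinetic temperature
    if i >= n_steps // 2:                # discard burn-in
        samples.append(x.copy())
S = np.concatenate(samples)
```

The feedback law means the correct temperature is maintained even if the gradient were noisy or biased, which is what makes thermostats attractive for stochastic-gradient Bayesian sampling.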

Week 10 10/12 Lionel Riou-Durand Metropolis Adjusted Underdamped Langevin Trajectories (joint work with Jure Vogrinc, University of Warwick)


Abstract: Sampling approximations for high-dimensional statistical models often rely on so-called gradient-based MCMC algorithms. It is now well established that these samplers scale better with the dimension than other state-of-the-art MCMC samplers, but are also more sensitive to tuning [5]. Among these, Hamiltonian Monte Carlo is a widely used sampling method shown to achieve a gold-standard d^{1/4} scaling with respect to the dimension [1]. However it is also known that its efficiency is quite sensitive to the choice of integration time, see e.g. [4], [2]. This problem is related to periodicity in the autocorrelations induced by the deterministic trajectories of Hamiltonian dynamics. To tackle this issue, we develop a robust alternative to HMC built upon underdamped Langevin dynamics (namely Metropolis Adjusted Underdamped Langevin Trajectories, or MAULT), inducing randomness in the trajectories through a continuous refreshment of the velocities. We study the optimal scaling problem for MAULT and recover the d^{1/4} scaling of HMC proven in [1] without additional assumptions. Furthermore we highlight the fact that autocorrelations for MAULT can be controlled by a uniform and monotonic bound thanks to the randomness induced in the trajectories, so the method is robust to tuning. Finally, we compare our approach to Randomized HMC ([2], [3]) and establish quantitative contraction rates for the 2-Wasserstein distance that support the choice of underdamped Langevin dynamics.
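The underdamped Langevin dynamics underlying MAULT can be sketched with a standard OBABO splitting step: an exact Ornstein–Uhlenbeck velocity refreshment (O) around a leapfrog kick-drift-kick (BAB). The sketch below is the unadjusted dynamics on a 1-D Gaussian target; MAULT itself additionally applies a Metropolis accept/reject over whole trajectories of such steps, which is not shown, and the step size and friction are illustrative:

```python
import numpy as np

rng = np.random.default_rng(8)
grad_U = lambda x: x                      # target N(0,1), U(x) = x^2 / 2

def obabo(x, v, h, gamma):
    """One OBABO step: partial velocity refreshment, leapfrog, refreshment."""
    eta = np.exp(-gamma * h / 2)          # exact OU contraction over h/2
    v = eta * v + np.sqrt(1 - eta**2) * rng.normal()
    v -= (h / 2) * grad_U(x)
    x += h * v
    v -= (h / 2) * grad_U(x)
    v = eta * v + np.sqrt(1 - eta**2) * rng.normal()
    return x, v

x, v = 0.0, rng.normal()
xs = np.empty(50_000)
for i in range(len(xs)):
    x, v = obabo(x, v, h=0.2, gamma=1.0)
    xs[i] = x
```

The continuous velocity refreshment is what smooths out the periodic autocorrelations of deterministic Hamiltonian trajectories, which is the robustness-to-tuning property the abstract emphasises.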

2021/22 Term 2

The list of confirmed speakers.

Date Speaker Title F2F Slides Video

Week 1 Ryan Martin Data-driven Calibration of Generalized Posterior Distributions

Abstract: Bayesian inference based on a well-specified likelihood is (modulo regularity conditions) approximately calibrated in the sense that credible regions are approximate confidence regions. But well-specified likelihoods are the exception, not the norm, so Bayesian inference generally comes with no calibration guarantees. To overcome this, the statistician can consider a more flexible generalized posterior distribution and use data-driven methods to ensure that their corresponding generalized posterior inference is approximately calibrated. In this talk, I'll focus on cases where the generalized posterior involves a free learning rate parameter and present a bootstrap-based algorithm designed specifically to choose that learning rate such that the posterior inference is approximately calibrated. I'll also present an extension of this calibration strategy designed to deal directly with prediction under misspecification.
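A toy conjugate example shows why a learning rate can restore calibration, without reproducing the talk's bootstrap algorithm: fit a N(theta, 1) model to data whose true standard deviation is 2, and compare the coverage of 95% generalized-posterior credible intervals at learning rate eta = 1 versus a calibrated eta. All constants below (sample size, the variance mismatch, the value eta = 0.25) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(9)
n, reps = 50, 2000
z = 1.959964                                   # 95% normal quantile

def coverage(eta, true_sd):
    """Coverage of the 95% credible interval for the mean under the
    generalized posterior N(xbar, 1/(eta*n)) (flat prior, N(theta,1) model)."""
    hits = 0
    for _ in range(reps):
        x = rng.normal(0.0, true_sd, n)        # data-generating sd != model sd
        half = z / np.sqrt(eta * n)            # credible interval half-width
        hits += abs(x.mean() - 0.0) < half
    return hits / reps

cov_eta1 = coverage(1.0, true_sd=2.0)          # nominal eta = 1: undercovers
cov_eta_cal = coverage(0.25, true_sd=2.0)      # smaller eta widens intervals
```

Here eta = 0.25 exactly offsets the fourfold variance mismatch, so the calibrated intervals recover approximately 95% coverage; the talk's contribution is choosing such an eta from the data via the bootstrap rather than from knowledge of the misspecification.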

Week 2 No seminar

Week 3 Filippo Pagani Numerical Zig-Zag and Perturbation Bounds on Numerical Error

Abstract: This talk is about NuZZ (Numerical Zig-Zag), and some soon-to-be arXived improvements on the initial version.

Piecewise Deterministic Markov Processes (PDMPs) are a class of stochastic processes that can be used at the heart of MCMC algorithms to explore the state space. PDMPs are irreversible processes (roughly, they tend to go further than diffusions in the same time interval, have lower asymptotic variance, etc.) whose properties allow exact subsampling of the data (important for large datasets).

The Zig-Zag sampler is a promising new PDMP-based MCMC algorithm that combines these two properties to achieve interesting results. However, the Zig-Zag dynamics is difficult to simulate, as it requires certain CDFs to be invertible, or (hopefully tight) bounds on the gradient of the target distribution. The Numerical Zig-Zag inverts those CDFs numerically, which makes the Zig-Zag applicable to a vast class of models, at the cost of losing exactness.

The talk will introduce the Zig-Zag sampler and NuZZ, skim through some numerical results, and concentrate slightly more on the new results on perturbation theory, where we bound the discrepancy between ergodic averages computed from exact and approximate samples in terms of the numerical error tolerances.
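The analytically tractable case that NuZZ generalises can be sketched directly: for a 1-D standard Gaussian target the Zig-Zag switching rate along the ray x + theta*t is (theta*x + t)_+, and the integrated-rate CDF inverts in closed form, so event times are exact. The total simulation time below is an illustrative choice; NuZZ replaces the closed-form inversion with a numerical one:

```python
import numpy as np

rng = np.random.default_rng(10)

x, theta, T = 0.0, 1.0, 20_000.0
t, m1, m2 = 0.0, 0.0, 0.0               # time integrals of x and x^2
while t < T:
    a = theta * x
    # Invert the integrated rate of (a + s)_+ against an Exp(1) draw:
    tau = -a + np.sqrt(max(a, 0.0) ** 2 + 2.0 * rng.exponential())
    tau = min(tau, T - t)
    # exact integrals of x(s) = x + theta*s and its square over the segment
    m1 += x * tau + theta * tau**2 / 2
    m2 += x**2 * tau + x * theta * tau**2 + tau**3 / 3
    x += theta * tau
    t += tau
    theta = -theta                       # velocity flips at each event

mean, var = m1 / T, m2 / T - (m1 / T) ** 2
```

Ergodic averages are computed along the continuous piecewise-linear path rather than from discrete samples; the perturbation bounds in the talk quantify how numerical inversion error in `tau` propagates into these averages.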

Week 4 Emilia Pompe

Week 5 Benedict Leimkuhler

Week 6 Laura Guzmán Rincón

Week 7 Martyn Plummer

Week 8

Week 9 Ioannis Kosmidis

Week 10