This page will be updated as more details become available.
Alexandros Beskos, University College London, UK
Hamiltonian Monte Carlo in high dimensions
We discuss the scalability of HMC in high dimensions. We focus on simple target distributions and look at results characterizing the optimal acceptance probability in high dimensions. We also look at applications of HMC to target distributions defined as changes of measure from Gaussian laws in high dimensions, and re-define HMC to give mesh-free mixing times.
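As an illustrative companion to this abstract (my own sketch, not code from the talk), the snippet below implements plain HMC with a leapfrog integrator on a standard Gaussian product target, the kind of simple high-dimensional target for which optimal-acceptance results are derived. The step size eps=0.2, trajectory length L=10, and dimension d=50 are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def leapfrog(q, p, grad_U, eps, L):
    # Stoermer-Verlet (leapfrog) integration of Hamilton's equations
    p = p - 0.5 * eps * grad_U(q)
    for _ in range(L - 1):
        q = q + eps * p
        p = p - eps * grad_U(q)
    q = q + eps * p
    p = p - 0.5 * eps * grad_U(q)
    return q, -p  # momentum flip makes the proposal an involution

def hmc_step(q, U, grad_U, eps, L):
    p = rng.standard_normal(q.shape)
    q_new, p_new = leapfrog(q, p, grad_U, eps, L)
    dH = U(q_new) + 0.5 * p_new @ p_new - U(q) - 0.5 * p @ p
    if rng.random() < np.exp(-dH):  # Metropolis correction for integration error
        return q_new, True
    return q, False

# Standard Gaussian target in d dimensions: U(q) = |q|^2 / 2
d = 50
U = lambda q: 0.5 * q @ q
grad_U = lambda q: q

q = rng.standard_normal(d)  # start in equilibrium
accepts, second_moments = [], []
for _ in range(2000):
    q, acc = hmc_step(q, U, grad_U, eps=0.2, L=10)
    accepts.append(acc)
    second_moments.append(q @ q / d)

acc_rate = np.mean(accepts)
second_moment = np.mean(second_moments)
print(acc_rate, second_moment)
```

Tuning eps so that the empirical acceptance rate matches the theoretically optimal value as d grows is exactly the kind of scaling question the talk addresses.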
Michael Betancourt, University of Warwick, UK
Exploring Probability Distributions with Measure-Preserving Flows
By coherently exploring even high-dimensional target distributions, Hamiltonian Monte Carlo has proven an empirical success. Only recently, however, have we identified the source of the algorithm’s efficacy: measure-preserving flows. In this talk I’ll discuss how the non-reversible nature of measure-preserving flows ensures efficient exploration and how the structure of probabilistic systems allows us to utilize these flows in practice.
Chii-Ruey Hwang, Institute of Mathematics, Academia Sinica, Taiwan
Non-reversibility Accelerates Convergence
Non-reversible Markov processes/chains have better convergence properties under various comparison criteria, e.g. asymptotic variance, spectral gap, convergence exponent in variational norm, etc. Worst-case analysis, average-case analysis, uniform comparison, and antisymmetric perturbations are considered. This is a survey talk of our work.
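As a toy illustration of the asymptotic-variance criterion (my own sketch, not from the survey), the code below compares a reversible nearest-neighbour random walk on a ring with a non-reversible biased walk; both transition matrices are doubly stochastic, so both have the uniform stationary distribution. The asymptotic variance of a centred observable f is computed exactly via the fundamental matrix Z = (I - P + Pi)^{-1}, using sigma^2 = <f, (2Z - I) f>_pi.

```python
import numpy as np

def asymptotic_variance(P, pi, f):
    # sigma^2_f = <fbar, (2Z - I) fbar>_pi with Z = (I - P + Pi)^{-1},
    # valid for ergodic chains, reversible or not.
    n = len(pi)
    fbar = f - pi @ f                       # centre the observable
    Pi = np.outer(np.ones(n), pi)           # rank-one matrix with rows pi
    Z = np.linalg.inv(np.eye(n) - P + Pi)   # fundamental matrix
    return fbar @ np.diag(pi) @ (2 * Z - np.eye(n)) @ fbar

n, p = 8, 0.75
P_rev = np.zeros((n, n))
P_non = np.zeros((n, n))
for i in range(n):
    P_rev[i, (i + 1) % n] = P_rev[i, (i - 1) % n] = 0.5   # symmetric walk
    P_non[i, (i + 1) % n] = p                             # biased (non-reversible) walk
    P_non[i, (i - 1) % n] = 1 - p

pi = np.full(n, 1 / n)                      # uniform is stationary for both
f = np.cos(2 * np.pi * np.arange(n) / n)    # a centred test observable

var_rev = asymptotic_variance(P_rev, pi, f)
var_non = asymptotic_variance(P_non, pi, f)
print(var_rev, var_non)
```

For this observable the biased walk reduces the asymptotic variance by roughly a factor of three, consistent with the comparison results surveyed in the talk.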
Tony Lelièvre, Ecole des Ponts ParisTech, France
On the optimality of non-reversible overdamped Langevin dynamics
I will present results concerning the sampling efficiency of non-reversible overdamped Langevin dynamics. The efficiency can be measured in various ways. We will focus on two criteria: the speed of convergence to equilibrium and the asymptotic variance. Practical aspects will also be discussed.
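One standard construction behind such results perturbs the reversible overdamped dynamics dX_t = -grad V(X_t) dt + sqrt(2) dW_t by an extra drift J grad V with J antisymmetric, which breaks reversibility while leaving the invariant measure proportional to exp(-V) unchanged. Below is a minimal Euler-Maruyama sketch for a two-dimensional Gaussian target; the particular J and the step size are arbitrary illustrative choices, not code from the references.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target: standard Gaussian in 2D, V(x) = |x|^2 / 2, so grad V(x) = x.
# Non-reversible drift: b(x) = -(I + J) grad V(x) with J antisymmetric;
# for this choice the invariant measure is still N(0, I).
a = 1.0
J = np.array([[0.0, a], [-a, 0.0]])
drift = lambda X: -X - X @ J.T   # row-wise -(I + J) x

dt, n_steps, n_chains = 0.01, 500, 1000
X = rng.standard_normal((n_chains, 2))   # start in equilibrium
for _ in range(n_steps):
    X = X + drift(X) * dt + np.sqrt(2 * dt) * rng.standard_normal(X.shape)

cov = np.cov(X.T)
print(cov)   # should stay close to the identity
```

The empirical covariance staying close to the identity checks that the added antisymmetric drift does not disturb the target; the efficiency criteria in the talk quantify how much it helps.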
- T. Lelièvre, F. Nier and G. Pavliotis, Optimal non-reversible linear drift for the convergence to equilibrium of a diffusion, Journal of Statistical Physics, 152(2), 237-274, (2013).
- A.B. Duncan, T. Lelièvre and G. Pavliotis, Variance reduction using nonreversible Langevin samplers, in preparation.
Ravi Montenegro, Mathematical Sciences, University of Massachusetts Lowell, USA
Tools for studying convergence of finite non-reversible Markov chains
A wide range of tools exist for studying finite reversible Markov chains: spectral methods, conductance, canonical paths, coupling, strong stationary times, etc. We discuss extensions of these to the non-reversible setting. Particular attention is given to our recent work on (self)-collision of non-reversible Markov processes related to birthday attacks on discrete logarithm.
Michela Ottobre, Heriot-Watt University, Edinburgh, UK
A Function Space HMC Algorithm with second order Langevin diffusion limit
We describe a new MCMC method optimized for the sampling of probability measures on Hilbert space which have a density with respect to a Gaussian; such measures arise in the Bayesian approach to inverse problems, and in conditioned diffusions. Our algorithm is based on two key design principles: (i) algorithms which are well-defined in infinite dimensions result in methods which do not suffer from the curse of dimensionality when they are applied to approximations of the infinite-dimensional target measure on R^N; (ii) non-reversible algorithms can have better ergodic properties compared to their reversible counterparts. The method we introduce is based on the hybrid Monte Carlo algorithm, tailored to incorporate these two design principles. Joint work with N. Pillai, F. Pinski and A. Stuart.
G. A. Pavliotis, Imperial College, London, UK
Designing Optimal Langevin samplers
A standard approach to computing expectations with respect to a given probability measure, known up to the normalization constant, is to introduce an appropriate Langevin dynamics that is ergodic with respect to the distribution from which we want to sample. It is by now well understood that breaking detailed balance, i.e. considering nonreversible Langevin dynamics, can speed up convergence to the target distribution and reduce the asymptotic variance. In this talk we will consider a family of underdamped (hypoelliptic) Langevin diffusions that can be used in order to sample from the target distribution. We will address the issue of how we can choose the optimal dynamics, in the sense of minimizing the computational cost. We will also explain how the choice of the optimal dynamics can be combined with appropriate variance reduction techniques. This is joint work with A. Duncan and N. Nusken.
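As a hedged illustration of the kind of underdamped dynamics considered (my own sketch, not code from the talk), the snippet below integrates kinetic Langevin dynamics dq = p dt, dp = -grad V(q) dt - gamma p dt + sqrt(2 gamma) dW for a standard Gaussian target using the BAOAB splitting; the friction gamma is exactly the sort of parameter one would seek to choose optimally.

```python
import numpy as np

rng = np.random.default_rng(3)

# Underdamped (kinetic) Langevin for a standard Gaussian target, V(q) = |q|^2 / 2.
gamma, dt, n_steps, n_chains, d = 1.0, 0.05, 2000, 500, 2
grad_V = lambda q: q
c1 = np.exp(-gamma * dt)          # exact OU decay over one step
c2 = np.sqrt(1.0 - c1 ** 2)       # matching noise amplitude (unit temperature)

q = rng.standard_normal((n_chains, d))   # start in equilibrium
p = rng.standard_normal((n_chains, d))
for _ in range(n_steps):
    p -= 0.5 * dt * grad_V(q)                          # B: half kick
    q += 0.5 * dt * p                                  # A: half drift
    p = c1 * p + c2 * rng.standard_normal(p.shape)     # O: momentum refresh
    q += 0.5 * dt * p                                  # A: half drift
    p -= 0.5 * dt * grad_V(q)                          # B: half kick

pos_var = q.var()
print("position variance:", pos_var)   # should be close to 1
```

Varying gamma (and the mass matrix, in general) changes the speed of convergence and the asymptotic variance without changing the invariant measure, which is the optimization problem the talk addresses.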
Luc Rey-Bellet, University of Massachusetts, USA
Information theoretic and large deviation tools for the analysis and design of Monte-Carlo algorithms
In this lecture we present several examples on how concepts from information theory (in particular relative entropy) and from large deviation theory can be used to quantify error in numerical schemes as well as to design improved Monte-Carlo sampling algorithms.
Marija Vucelja, University of Virginia, USA
Application of lifted non-reversible MCMC methods to spin systems
Markov Chain Monte Carlo algorithms are widely used in science for sampling stationary properties of physical systems. Most implementations of such algorithms employ Markov chains that obey detailed balance, even though this is not a necessary requirement for convergence to a particular steady state. I will show how to alter a Metropolis-Hastings algorithm to utilize non-reversible Markov chains, and demonstrate on the Curie-Weiss model that such a modification can lead to a dramatic improvement in sampling. This alteration modifies transition rates while keeping the structure of transitions intact. Finally, I will pose some open questions and discuss attempts to use nonequilibrium dynamics for efficient sampling.
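A minimal version of this lifting idea (my own sketch on a toy ring target, standing in for the Curie-Weiss model) augments the state with a direction variable sigma in {+1, -1}, always proposes the move i -> i + sigma, and flips sigma on rejection. The lifted chain is non-reversible yet preserves the target, and the nearest-neighbour transition structure is kept intact while only the rates change.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical smooth 1D target on a ring of n sites (illustrative only).
n = 20
weights = np.exp(2.0 * np.cos(2 * np.pi * np.arange(n) / n))
target = weights / weights.sum()

# Lifted Metropolis: state (i, sigma); propose i -> i + sigma, accept with the
# usual Metropolis ratio, and flip the direction sigma on rejection. One can
# check that target(i) x uniform(sigma) is stationary for this non-reversible chain.
i, sigma = 0, 1
counts = np.zeros(n)
for _ in range(400_000):
    j = (i + sigma) % n
    if rng.random() < min(1.0, target[j] / target[i]):
        i = j
    else:
        sigma = -sigma
    counts[i] += 1

emp = counts / counts.sum()
tv = 0.5 * np.abs(emp - target).sum()
print("TV distance to target:", tv)
```

The small total-variation distance between the empirical occupation frequencies and the target confirms that the lifted, non-reversible chain samples the right distribution; the gain over the reversible chain shows up in its persistent, ballistic moves along the ring.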