
Algorithms & Computationally Intensive Inference seminars

The seminars take place on Fridays at 1 pm (UK time) in room MB0.08.

2025-2026 Organisers: Adrien Corenflos and Shishen Lin

If you would like to speak, or you want to be included in any emails, please contact the organisers.

Website URL: www.warwick.ac.uk/compstat

Mailing List Sign-Up: https://ivory-cuscus.lnx.warwick.ac.uk/mailman3/lists/algorithmseminar.newlistserv.warwick.ac.uk/ (currently only accessible on campus)

Mailing List: algorithmseminar@listserv.csv.warwick.ac.uk (NB - only approved members can post)

Upcoming (see below for future and past speakers):
10/10 Yingzhen Li (Imperial College)
Neural Flow Samplers: Improved Training and Architectures

Note:

Hybrid


Term 1:

Date / Speaker / Title

14/11 Alistair Benford (Birmingham)
TBC

Note:

Hybrid

Abstract:

TBC

07/11 Michela Ottobre (TBC, Edinburgh)
TBC

Note:

Hybrid

Abstract:

TBC

31/10 Yuga Iguchi (Lancaster)
TBC

Note:

Hybrid

Abstract:

TBC

24/10 Shishen Lin (University of Warwick)
Learning in Games: Overcoming Binary Adversarial Optimisation with Competitive Coevolution

Note:

Hybrid

Abstract:

Co-evolutionary algorithms (CoEAs), which pair candidate designs with test cases, are frequently used in adversarial optimisation, particularly for binary test-based problems where designs and tests yield binary outcomes. The effectiveness of designs is determined by their performance against tests, and the value of tests is based on their ability to identify failing designs, often leading to more sophisticated tests and improved designs. However, CoEAs can exhibit complex, sometimes pathological behaviours like disengagement. Through runtime analysis, we aim to rigorously analyse whether CoEAs can efficiently solve test-based adversarial optimisation problems in expected polynomial runtime. This paper carries out the first rigorous runtime analysis of the (1, λ)-CoEA for binary test-based adversarial optimisation problems. In particular, we introduce a binary test-based benchmark problem called the Diagonal problem and initiate the first runtime analysis of a competitive CoEA on this problem. The mathematical analysis shows that the (1, λ)-CoEA can efficiently find an ε-approximation to the optimal solution of the Diagonal problem, i.e., in expected polynomial runtime, assuming a sufficiently low mutation rate and a sufficiently large offspring population size. By contrast, the standard (1, λ)-EA fails to find an ε-approximation to the optimal solution of the Diagonal problem in polynomial runtime. This illustrates the potential of coevolution for solving binary adversarial optimisation problems.

Arxiv: https://arxiv.org/pdf/2407.17875
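
To make the setup concrete, the following Python sketch shows a generic (1, λ)-CoEA on a binary test-based payoff: a single design-test pair is kept, λ mutated offspring pairs are produced per generation, and the next pair is selected among the offspring. The payoff function and the selection details below are illustrative stand-ins, not the paper's Diagonal problem or its exact update rule.

import random

def mutate(bits, chi=1.0):
    # Standard bitwise mutation: flip each bit independently with probability chi/n.
    n = len(bits)
    return [b ^ (random.random() < chi / n) for b in bits]

def coea_one_comma_lambda(payoff, n, lam, generations, chi=1.0):
    # Generic (1, lambda)-CoEA sketch: the design tries to maximise payoff(design, test),
    # the test tries to minimise it; both are replaced by offspring each generation.
    design = [random.randint(0, 1) for _ in range(n)]
    test = [random.randint(0, 1) for _ in range(n)]
    for _ in range(generations):
        offspring = [(mutate(design, chi), mutate(test, chi)) for _ in range(lam)]
        # Comma selection among offspring only: designs are scored against the
        # current test, tests against the current design (illustrative rule).
        new_design = max((d for d, _ in offspring), key=lambda d: payoff(d, test))
        new_test = min((t for _, t in offspring), key=lambda t: payoff(design, t))
        design, test = new_design, new_test
    return design, test

# Hypothetical binary payoff (NOT the Diagonal problem from the paper): the design
# "beats" the test if it contains at least as many ones.
if __name__ == "__main__":
    toy_payoff = lambda d, t: int(sum(d) >= sum(t))
    print(coea_one_comma_lambda(toy_payoff, n=20, lam=10, generations=200))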

17/10 Marina Riabiz (King's College London)
Extrapolation (and smoothing) of Tempered Posterior Expectations

Note:

Hybrid

Abstract:

Tempering is a popular tool in Bayesian computation, used to transform a posterior distribution into a reference distribution that is more easily approximated. Several algorithms exist that start by approximating the reference distribution and proceed through a sequence of intermediate distributions until an approximation to the posterior is obtained. Our contribution reveals that high-quality approximation of the full sequence, all the way up to the posterior, is not essential, as knowledge of the intermediate distributions enables posterior quantities of interest to be extrapolated. Specifically, we establish conditions under which posterior expectations are determined by their associated tempered expectations on any non-empty interval. Harnessing this result, we propose novel methodology for approximating posterior expectations based on extrapolation and smoothing of tempered expectations, which we implement as a post-processing variance-reduction tool for sequential Monte Carlo.

Arxiv: https://arxiv.org/abs/2509.12173
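
As a toy Python illustration of the extrapolation idea (with hypothetical distributions, not the paper's methodology): along a geometric tempering path between a N(0, 1) reference and a N(2, 1) posterior, the tempered mean m(t) = E_{π_t}[X] equals 2t, so noisy estimates of m(t) on a sub-interval of (0, 1) can be smoothed and extrapolated to t = 1.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: reference N(0, 1), posterior N(2, 1); the geometric path
# pi_t ∝ pi_0^(1-t) * pi^t is N(2t, 1), so the tempered mean is m(t) = 2t.
ts = np.linspace(0.1, 0.6, 11)                          # tempering levels visited
m_hat = 2.0 * ts + 0.02 * rng.standard_normal(ts.size)  # noisy SMC-style estimates

# Smooth the estimates with a low-degree polynomial fit in t and evaluate it at
# t = 1 to recover the untempered posterior expectation by extrapolation.
coeffs = np.polyfit(ts, m_hat, deg=1)       # m(t) is linear here; in general a
posterior_mean = np.polyval(coeffs, 1.0)    # more flexible smoother would be used
print(f"extrapolated posterior mean: {posterior_mean:.2f} (true value 2.00)")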

10/10 Yingzhen Li (Imperial College)
Neural Flow Samplers: Improved Training and Architectures

Note:

Hybrid

Abstract:

Sampling from unnormalized densities, whether over continuous or discrete variables, presents a fundamental challenge with wide-ranging applications, from posterior inference to molecular dynamics simulations and combinatorial optimisation. Flow-based neural samplers offer a promising approach: they learn a velocity field for continuous densities (or a rate matrix for a continuous-time Markov chain (CTMC) in the discrete case) that satisfies the governing equation of the marginal density evolution (the continuity equation in the continuous case, the Kolmogorov equation in the discrete case), and use it to generate samples. However, this learning procedure requires accurate estimation of intractable terms linked to the computationally challenging partition function, for which existing estimators often suffer from high variance or low accuracy. To overcome this, we introduce an improved estimator for these challenging quantities, based on a Sequential Monte Carlo method enhanced with control variates. We further introduce adjustments and advances to the trained sampler tailored to the continuous and discrete cases. In both settings, the proposed Neural Flow Samplers empirically outperform existing flow-based neural samplers on both synthetic datasets and complex targets motivated by real-world applications.
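
For reference, the two marginal-evolution principles named in the abstract are standard results and can be written as follows (generic notation, not the authors'): the continuity equation for a density ρ_t transported by a velocity field v_t, and the Kolmogorov forward (master) equation for a CTMC with rate matrix Q_t.

\[
\partial_t \rho_t(x) + \nabla \cdot \bigl(\rho_t(x)\, v_t(x)\bigr) = 0
\qquad \text{(continuity equation)}
\]
\[
\frac{\mathrm{d} p_t(y)}{\mathrm{d} t} = \sum_{x \ne y} \bigl( p_t(x)\, Q_t(x, y) - p_t(y)\, Q_t(y, x) \bigr)
\qquad \text{(Kolmogorov forward equation)}
\]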

03/10 Yvann Le Fay (ENSAE Paris)
Least squares variational inference

Note:

Hybrid

Abstract:

Variational inference seeks the best approximation of a target distribution within a chosen family, where "best" means minimising Kullback-Leibler divergence. When the approximation family is exponential, the optimal approximation satisfies a fixed-point equation. We introduce LSVI (Least Squares Variational Inference), a gradient-free, Monte Carlo-based scheme for the fixed-point recursion, where each iteration boils down to performing ordinary least squares regression on tempered log-target evaluations under the variational approximation. We show that LSVI is equivalent to biased stochastic natural gradient descent and use this to derive convergence rates with respect to the numbers of samples and iterations. When the approximation family is Gaussian, LSVI involves inverting the Fisher information matrix, whose size grows quadratically with dimension d. We exploit the regression formulation to eliminate the need for this inversion, yielding O(d³) complexity in the full-covariance case and O(d) in the mean-field case. Finally, we numerically demonstrate LSVI’s performance on various tasks, including logistic regression, discrete variable selection, and Bayesian synthetic likelihood, showing competitive results with state-of-the-art methods, even when gradients are unavailable.

Arxiv: https://arxiv.org/abs/2502.18475
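
A minimal one-dimensional Python sketch of the least-squares idea, assuming a Gaussian approximating family and a hypothetical Gaussian target (no tempering schedule, damping, or the authors' exact parameterisation): draw samples from the current approximation, regress the log-target evaluations on the sufficient statistics (x, x²) by ordinary least squares, and read the updated natural parameters off the regression coefficients.

import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # Hypothetical unnormalised log-target: a N(3, 0.5^2) density.
    return -0.5 * (x - 3.0) ** 2 / 0.25

def lsvi_gaussian_1d(log_target, n_iter=20, n_samples=2000, mu=0.0, var=4.0):
    # Least-squares fixed-point sketch for a 1-D Gaussian approximation.
    # Each iteration: sample from the current q = N(mu, var), regress log-target
    # values on the sufficient statistics (1, x, x^2), and map the coefficients
    # (b, c) back to the Gaussian via var = -1/(2c), mu = b * var.
    for _ in range(n_iter):
        xs = rng.normal(mu, np.sqrt(var), size=n_samples)
        design = np.column_stack([np.ones_like(xs), xs, xs ** 2])
        coef, *_ = np.linalg.lstsq(design, log_target(xs), rcond=None)
        b, c = coef[1], coef[2]
        if c >= 0:  # regression did not return a valid (negative) curvature
            break
        var = -1.0 / (2.0 * c)
        mu = b * var
    return mu, var

print(lsvi_gaussian_1d(log_target))  # close to (3.0, 0.25) for this toy target

Because this toy log-target is exactly quadratic, the regression recovers the natural parameters in one step; for a non-Gaussian target the recursion would be iterated (and tempered) as described in the abstract.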
