Algorithms & Computationally Intensive Inference seminars
The seminars take place on Fridays at 1 pm (UK time) in room MB0.08.
2025-2026 Organiser: Adrien Corenflos
If you would like to speak, or would like to be included in seminar emails, please contact the organiser.
Website URL: www.warwick.ac.uk/compstat
Mailing List Sign-Up: https://ivory-cuscus.lnx.warwick.ac.uk/mailman3/lists/algorithmseminar.newlistserv.warwick.ac.uk/ (currently only accessible on campus)
Mailing List: algorithmseminar@listserv.csv.warwick.ac.uk (NB - only approved members can post)
Upcoming (see below for future and past speakers):
03/10 | Yvann Le Fay (ENSAE Paris) | Least squares variational inference
Note: Hybrid
Abstract: Variational inference seeks the best approximation of a target distribution within a chosen family, where "best" means minimising Kullback-Leibler divergence. When the approximation family is exponential, the optimal approximation satisfies a fixed-point equation. We introduce LSVI (Least Squares Variational Inference), a gradient-free, Monte Carlo-based scheme for the fixed-point recursion, where each iteration boils down to performing ordinary least squares regression on tempered log-target evaluations under the variational approximation. We show that LSVI is equivalent to biased stochastic natural gradient descent and use this to derive convergence rates with respect to the numbers of samples and iterations. When the approximation family is Gaussian, LSVI involves inverting the Fisher information matrix, whose size grows quadratically with dimension d. We exploit the regression formulation to eliminate the need for this inversion, yielding O(d³) complexity in the full-covariance case and O(d) in the mean-field case. Finally, we numerically demonstrate LSVI’s performance on various tasks, including logistic regression, discrete variable selection, and Bayesian synthetic likelihood, showing competitive results with state-of-the-art methods, even when gradients are unavailable.
Term 1:
Date | Speaker | Title
07/11 | Michela Ottobre (TBC, Edinburgh) | TBC
Note: Hybrid
Abstract: TBC
31/10 | Yuga Iguchi (Lancaster) | TBC
Note: Hybrid
Abstract: TBC
24/10 | Shishen Lin (University of Warwick) | TBC
Note: Hybrid
Abstract: TBC
17/10 | Marina Riabiz (King's College London) | Extrapolation of Tempered Posteriors
Note: Hybrid
Abstract: Tempering is a popular tool in Bayesian computation, being used to transform a posterior distribution p1 into a reference distribution p0 that is more easily approximated. Several algorithms exist that start by approximating p0 and proceed through a sequence of intermediate distributions pt until an approximation to p1 is obtained. Our contribution reveals that high-quality approximation of terms up to p1 is not essential, as knowledge of the intermediate distributions enables posterior quantities of interest to be extrapolated. Specifically, we establish conditions under which posterior expectations are determined by their associated tempered expectations on any non-empty t interval. Harnessing this result, we propose novel methodology for approximating posterior expectations based on extrapolation and smoothing of tempered expectations, which we implement as a post-processing variance-reduction tool for sequential Monte Carlo.
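A toy sketch of the extrapolation idea in this abstract: given noisy estimates of the tempered expectations E_{pt}[f] at a few intermediate temperatures t < 1 (e.g. produced by a tempered SMC run), fit a smooth curve in t and evaluate it at t = 1. The polynomial model, the temperature grid, and the numbers below are illustrative assumptions only, not the speaker's method.

```python
import numpy as np

# Illustrative sketch (not the speaker's method): extrapolate a posterior
# expectation E_{p1}[f] from noisy tempered expectations E_{pt}[f] computed
# at intermediate temperatures t < 1, as a post-processing step after a
# tempered SMC run.  The polynomial model in t is purely a placeholder.

def extrapolate_posterior_expectation(temperatures, tempered_expectations, degree=2):
    """Fit a smooth curve to (t, E_{pt}[f]) pairs and evaluate it at t = 1."""
    coeffs = np.polyfit(temperatures, tempered_expectations, deg=degree)
    return np.polyval(coeffs, 1.0)

# Hypothetical usage with estimates from an SMC sampler:
t_grid = np.array([0.1, 0.25, 0.4, 0.55, 0.7])       # intermediate temperatures
e_hat = np.array([0.82, 0.74, 0.69, 0.66, 0.64])      # estimated E_{pt}[f]
print(extrapolate_posterior_expectation(t_grid, e_hat))  # extrapolated E_{p1}[f]
```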
10/10 | Yingzhen Li (Imperial College) | Test-Time Alignment of Discrete Diffusion Models with Sequential Monte Carlo
Note: Hybrid
Abstract: Discrete diffusion models have become highly effective across various domains. However, real-world applications often require the generative process to adhere to certain constraints, but without task-specific fine-tuning. To this end, we propose a training-free method based on Sequential Monte Carlo (SMC) to sample from the reward-aligned target distribution at test time. Our approach leverages twisted SMC with an approximate locally optimal proposal, obtained via a first-order Taylor expansion of the reward function. To address the challenge of ill-defined gradients in discrete spaces, we incorporate a Gumbel-Softmax relaxation, enabling efficient gradient-based approximation within the discrete generative framework. Empirical results on both synthetic datasets and image modelling validate the effectiveness of our approach.
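As a rough illustration of the Gumbel-Softmax step described in this abstract, the sketch below tilts a categorical proposal over tokens by the gradient of a toy reward evaluated at a relaxed (Gumbel-Softmax) sample. The reward, logits and guidance weight are made-up placeholders; this is not the speaker's implementation, only an illustration of the relaxation and first-order tilt.

```python
import numpy as np

# Illustrative sketch only (not the speaker's implementation): build a
# first-order, reward-tilted proposal over discrete tokens.  The gradient of
# a toy reward is evaluated at a Gumbel-Softmax relaxed sample, since the
# discrete one-hot sample itself has no well-defined gradient.

rng = np.random.default_rng(0)

def gumbel_softmax_sample(logits, tau=0.5):
    """Relaxed one-hot sample: softmax((logits + Gumbel noise) / tau)."""
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    z = (logits + gumbel) / tau
    z = z - z.max()                      # numerical stability
    return np.exp(z) / np.exp(z).sum()

def toy_reward_grad(x_relaxed, w):
    """Gradient of the toy reward r(x) = (w @ x)^2 at the relaxed sample."""
    return 2.0 * (w @ x_relaxed) * w

def tilted_proposal_logits(base_logits, w, tau=0.5, guidance=1.0):
    """First-order Taylor tilt: add the reward gradient to the base logits."""
    x_relaxed = gumbel_softmax_sample(base_logits, tau)
    return base_logits + guidance * toy_reward_grad(x_relaxed, w)

base_logits = np.array([1.0, 0.2, -0.5, 0.1])   # toy denoiser logits over 4 tokens
reward_w = np.array([0.0, 2.0, 0.0, 0.0])       # toy reward favouring token 1
print(tilted_proposal_logits(base_logits, reward_w))
```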
03/10 | Yvann Le Fay (ENSAE Paris) | Least squares variational inference
Note: Hybrid
Abstract: Variational inference seeks the best approximation of a target distribution within a chosen family, where "best" means minimising Kullback-Leibler divergence. When the approximation family is exponential, the optimal approximation satisfies a fixed-point equation. We introduce LSVI (Least Squares Variational Inference), a gradient-free, Monte Carlo-based scheme for the fixed-point recursion, where each iteration boils down to performing ordinary least squares regression on tempered log-target evaluations under the variational approximation. We show that LSVI is equivalent to biased stochastic natural gradient descent and use this to derive convergence rates with respect to the numbers of samples and iterations. When the approximation family is Gaussian, LSVI involves inverting the Fisher information matrix, whose size grows quadratically with dimension d. We exploit the regression formulation to eliminate the need for this inversion, yielding O(d³) complexity in the full-covariance case and O(d) in the mean-field case. Finally, we numerically demonstrate LSVI’s performance on various tasks, including logistic regression, discrete variable selection, and Bayesian synthetic likelihood, showing competitive results with state-of-the-art methods, even when gradients are unavailable.
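To make the regression step in this abstract concrete, here is a minimal one-dimensional sketch under simplifying assumptions (a Gaussian family, a toy Gaussian target, no tempering schedule or damping). It is not the speaker's full LSVI algorithm, only an illustration of regressing log-target evaluations onto the sufficient statistics under samples from the current approximation.

```python
import numpy as np

# Minimal 1-D sketch of the core regression idea (simplified, not the full
# LSVI algorithm): fit a Gaussian q(x) = N(mu, sigma^2) to a target by
# ordinary least squares regression of log-target evaluations onto the
# sufficient statistics (x, x^2), using samples drawn from the current q.

rng = np.random.default_rng(1)

def log_target(x):
    """Toy unnormalised log-target: a N(2, 0.5^2) density."""
    return -0.5 * ((x - 2.0) / 0.5) ** 2

mu, sigma = 0.0, 3.0                               # initial variational approximation
for _ in range(5):
    x = rng.normal(mu, sigma, size=2000)           # samples from the current q
    features = np.column_stack([np.ones_like(x), x, x**2])
    # OLS regression of log-target evaluations on (1, x, x^2):
    coef, *_ = np.linalg.lstsq(features, log_target(x), rcond=None)
    b, c = coef[1], coef[2]                        # fitted natural parameters
    if c < 0:                                      # valid Gaussian only if c < 0
        sigma = np.sqrt(-1.0 / (2.0 * c))
        mu = b * sigma**2

print(mu, sigma)                                   # approaches (2.0, 0.5) for this target
```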