
OxWaSP Seminar

Location: B2.02 (Sci Conc)

David Rossell (University of Warwick)

The model separation principle for Bayesian model choice

Abstract: Given a collection of candidate probability models for observed data y, a fundamental statistical task is to evaluate which models are more likely to have generated y. Tackling this problem within a Bayesian framework requires one to complement the probability model for y (the likelihood) with a prior probability model on the parameters (which may be infinite-dimensional) describing each of the candidate models, and to specify model prior probabilities and possibly a utility function. The model separation principle states that the models under consideration should be well separated from each other; otherwise it becomes hard to distinguish them on the basis of the observed y. In the common setting where some of the models are nested, this principle is violated: say Model 1 is a particular case of Model 2, so the two models are not well separated. We shall review a class of prior distributions called non-local priors (NLPs) as a way to enforce the model separation principle, along with some of their properties, focusing on parsimony and accelerated convergence rates in high-dimensional inference. We shall illustrate their use in ongoing work on regression, robust regression and mixture models.
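To make the separation idea concrete, here is a minimal sketch (not from the talk) of the product moment (pMOM) non-local prior, one standard NLP: a normal kernel multiplied by an even-power penalty so the density vanishes at the null value θ = 0, which separates a nested null model from the alternative a priori. The parameter names and defaults are illustrative assumptions.

```python
import numpy as np

def pmom_density(theta, tau=1.0, r=1):
    """Product MOM (pMOM) non-local prior density for a scalar theta.

    A N(0, tau) kernel multiplied by theta^(2r), normalised by
    (2r - 1)!! * tau^r so the density integrates to one. Because the
    density is exactly zero at theta = 0, the prior assigns vanishing
    mass near the nested null model: the models are separated.
    """
    norm = np.prod([2 * k - 1 for k in range(1, r + 1)]) * tau**r  # (2r-1)!! * tau^r
    kernel = np.exp(-theta**2 / (2 * tau)) / np.sqrt(2 * np.pi * tau)
    return theta ** (2 * r) / norm * kernel

# Zero at the origin, with modes away from it:
print(pmom_density(0.0))                       # 0.0
print(pmom_density(1.0) > pmom_density(0.1))   # True
```

In contrast, a local prior (e.g. a plain normal) is positive at θ = 0, so evidence can only slowly discriminate the nested models; this is the mechanism behind the accelerated convergence rates mentioned above.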
 

Darren Wilkinson (Newcastle University)

Bayesian inference for partially observed Markov processes

Abstract: A number of interesting statistical applications require the estimation of parameters underlying a nonlinear multivariate continuous time Markov process model, using partial and noisy discrete time observations of the system state. Bayesian inference for this problem is difficult because the discrete time transition density of the Markov process is typically intractable and computationally intensive to approximate. Nevertheless, it is possible to develop particle MCMC algorithms which are exact, provided that one can simulate exact realisations of the process forwards in time. Such algorithms, often termed "likelihood-free" or "plug-and-play", are very attractive, as they allow separation of the problem of model development and simulation implementation from the development of inferential algorithms. Such techniques break down in the case of perfect observation or high-dimensional data, but more efficient algorithms can be developed if one is prepared to deviate from the likelihood-free paradigm, at least in the case of diffusion processes. The methods will be illustrated using examples from population dynamics and stochastic biochemical network dynamics.
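The "likelihood-free" idea can be sketched with a bootstrap particle filter: it produces an unbiased estimate of the likelihood for a partially observed Markov process using only forward simulation of the state, never the transition density, which is what makes it usable inside a particle MCMC scheme. This is a minimal illustration (an AR(1) latent process with Gaussian observation noise stands in for a general simulable process); the function and parameter names are assumptions, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_step(x, phi=0.9, sigma=0.5):
    # Forward-simulate the latent Markov process one step (AR(1) here,
    # standing in for any process we can only simulate, not evaluate).
    return phi * x + sigma * rng.normal(size=x.shape)

def bootstrap_filter(y, n_particles=500, obs_sd=1.0):
    """Bootstrap particle filter log-likelihood estimate.

    'Likelihood-free' in the sense that the state transition density is
    never evaluated: particles are propagated by simulation and weighted
    only by the (tractable) observation density.
    """
    x = rng.normal(size=n_particles)          # particles from the prior
    loglik = 0.0
    for yt in y:
        x = simulate_step(x)                  # propagate by simulation
        logw = -0.5 * ((yt - x) / obs_sd) ** 2 - 0.5 * np.log(2 * np.pi * obs_sd**2)
        m = logw.max()                        # log-sum-exp for stability
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())        # accumulate likelihood estimate
        idx = rng.choice(n_particles, n_particles, p=w / w.sum())
        x = x[idx]                            # multinomial resampling
    return loglik
```

Embedding this estimate in a Metropolis-Hastings acceptance ratio gives a (pseudo-marginal) particle MCMC algorithm that is exact despite the intractable transition density, which is the "plug-and-play" separation of modelling from inference described above.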

14.00 - 15.00: David Rossell, “The model separation principle for Bayesian model choice.”
15.00 - 15.30: Coffee break
15.30 - 16.30: Darren Wilkinson, “Bayesian inference for partially observed Markov processes.”

Tags: Seminars
