Workshop programme
Monday 16/03/2009 to Friday 20/03/2009
Daily pattern: keynote lecture at 9:00, coffee at 10:30-11:00 and 3:30-4:00, lunch 1:00-2:00; registration at 10:30 on the first morning. The timetable lists talks by Theodore Kypraios (9:30), Yaming Yu (11:30), Giorgos Sermaidis (12:00), Flavio Goncalves (12:30), Eric Moulines and Antonietta Mira (2:30) and Ruth King (4:30), with a wine reception at 5:30 and the conference dinner at 6:30.
 Christophe Andrieu: Particle MCMC
Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) methods have emerged as the two main tools to sample from high-dimensional probability distributions. Although asymptotic convergence of MCMC algorithms is ensured under weak assumptions, their performance is unreliable when the proposal distributions used to explore the space are poorly chosen and/or highly correlated variables are updated independently. We show here how it is possible to build efficient high-dimensional proposal distributions using SMC methods. This allows us not only to improve over standard MCMC schemes but also to make Bayesian inference feasible for a large class of statistical models where this was not previously the case. We demonstrate these algorithms on various non-linear non-Gaussian state-space models, a stochastic kinetic model and Dirichlet process mixtures.
 Yves Atchade: Central limit theorem for some adaptive MCMC schemes
(Based partly on joint work with Gersende Fort)
This talk will report on some old and new results on central limit theorems for adaptive MCMC algorithms. I will contrast the case where all the transition kernels of the algorithm have the same invariant distribution (as in the adaptive Metropolis algorithm) with the case of varying invariant distributions (as in the Equi-Energy sampler).
 Witold Bednorz: Exponential type inequalities for Markov chains.
There are several known results on exponential-type inequalities for functionals of independent random variables. Such inequalities describe the concentration property, i.e. that sums of random variables cannot deviate much from their expectations. A natural question is which of these results can be re-proved for functionals of Markovian variables. Obviously, to have any chance of showing exponentially fast concentration we must assume that the Markov chain is geometrically ergodic. In this talk we discuss a more general case and show that whenever a geometric regularity condition holds, some form of concentration behaviour can be observed.
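A prototypical inequality of this kind, for a bounded function of a geometrically ergodic chain, has the following shape (a sketch only; the precise constants and conditions vary across the results discussed):

```latex
% Hoeffding-type bound for a geometrically ergodic chain (X_i) with
% stationary distribution \pi and bounded f; the constants C and c
% depend on the chain's mixing properties and starting distribution.
\mathbb{P}\!\left( \left| \frac{1}{n}\sum_{i=1}^{n} f(X_i) - \pi(f) \right| \ge t \right)
\;\le\; C \exp\!\left( - \frac{c\, n\, t^{2}}{\lVert f \rVert_{\infty}^{2}} \right)
```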
 Kasper Berthelsen: Perfect posterior inference for mixture models.
We consider a standard Bayesian analysis of the unknown mixture weights for a mixture of known densities. It is well known how to sample the posterior weights approximately by Gibbs sampling using the duality principle, i.e. by introducing auxiliary allocation variables. In this talk we formulate an algorithm for perfect simulation of the posterior weights. The algorithm is an example of Wilson's read-once coupling from the past algorithm (read-once CFTP). Its key ingredient is the construction of compounds of random maps such that stationarity is preserved and there is a positive probability of detecting coalescence, i.e. of concluding that the compound map sends the entire state space to a single point.
 Alex Beskos: Diffusion limits for MCMC paths.
MCMC trajectories resemble diffusion paths. This interpretation has been substantiated in earlier works in the literature, where it has been proven that, as the dimensionality of the state space increases, the MCMC trajectory converges to a particular diffusion process. Such a result, though proven in simplified scenarios of iid targets, provides insight into the behavior of MCMC algorithms in high dimensions. We examine the case of the so-called "hybrid Monte Carlo" MCMC algorithm, which invokes Hamiltonian dynamics and is employed by physicists in molecular dynamics applications and elsewhere. Bridging the machinery employed above with tools from numerical analysis, we show that the MCMC trajectory of the hybrid algorithm converges (when appropriately rescaled) to a hypoelliptic SDE. This result provides a complete characterization of the efficiency of the algorithm: we conclude that the hybrid algorithm should be scaled as 1/n^{1/6} (n being the dimensionality), with optimal acceptance probability 0.743.
 Leonardo Bottolo: Hierarchical Evolutionary Stochastic Search with Adaptation
(Joint work with Sylvia Richardson and Enrico Petretto)
Multivariate regression models with multiple responses have attracted the attention of the statistical community in recent years, driven by numerous case studies arising in genetics/genomics. A notable example is the paradigm of eQTL analysis, where thousands of transcript measurements are regressed on (hundreds of) thousands of markers. In this context the usual problem of multimodality of the posterior distribution when p>>n is further exacerbated by the dimension of the response matrix, usually q>>n. In this "large p&q, small n" framework, a sparse representation is also imposed, i.e. parsimonious models containing only a few predictors are sought to gain interpretability. In this talk we introduce a new search algorithm called Hierarchical Evolutionary Stochastic Search (HESS) in which the responses are linked in a hierarchical way. To reduce the computational burden, most of the regression parameters are integrated out. A novel sampling strategy based on Evolutionary Monte Carlo has been designed to sample efficiently from the huge parameter space. Moreover, inspired by adaptation techniques, we implemented a version of our algorithm that continuously adapts so as to focus exploration on the set of responses where there is more model uncertainty. Simulated and real data sets are analysed to demonstrate the performance of the proposed algorithm when p and q are both larger than n.
 Roberto Casarin: Sequential Monte Carlo for complex sampling problems (joint with Christophe Andrieu)
Sequential Monte Carlo (SMC) represents a class of general sampling methods which allow us to design good samplers for difficult sampling problems. In contrast to classical Markov chain Monte Carlo (MCMC) algorithms, the kernels of SMC need not be reversible or even Markov. Moreover, for inference on high-dimensional static models, a strategy combining a long tempering sequence with SMC sampling can overcome the degeneracy problem of SMC and perform as well as an MCMC algorithm, or even better when the latter exhibits the traditional problem of slow mixing. Note, however, that the recent literature on populations of MCMC chains provides an alternative and promising way to solve the problem of exploring the space and to improve on classical MCMC. In this work we follow the SMC framework, but suggest some new algorithms which result from the combination of SMC with MCMC and could be particularly useful in the context of simulation from multimodal and high-dimensional distributions. The performance of the proposed algorithms will be demonstrated on some challenging sampling problems.
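The tempering-plus-MCMC-moves strategy described above can be illustrated with a minimal SMC sampler on a bimodal one-dimensional target. This is a generic sketch, not the authors' algorithms; the target, temperature ladder, resampling schedule and move kernel are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Bimodal example target: equal mixture of N(3,1) and N(-3,1), up to a constant.
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

def smc_sampler(n_particles=2000, betas=np.linspace(0.0, 1.0, 21)):
    # Start from an easy initial distribution N(0, 5^2) and follow the
    # tempered path pi_t(x) proportional to p0(x)^(1-b) * pi(x)^b.
    x = rng.normal(0.0, 5.0, n_particles)
    log_p0 = lambda y: -0.5 * (y / 5.0) ** 2
    logw = np.zeros(n_particles)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # Importance weight update for the tempering step.
        logw += (b - b_prev) * (log_target(x) - log_p0(x))
        # Multinomial resampling (here performed at every step for simplicity).
        w = np.exp(logw - logw.max())
        w /= w.sum()
        x = x[rng.choice(n_particles, n_particles, p=w)]
        logw[:] = 0.0
        # One random-walk Metropolis move targeting the current tempered density.
        log_pi_t = lambda y: (1.0 - b) * log_p0(y) + b * log_target(y)
        prop = x + rng.normal(0.0, 1.0, n_particles)
        accept = np.log(rng.uniform(size=n_particles)) < log_pi_t(prop) - log_pi_t(x)
        x = np.where(accept, prop, x)
    return x

samples = smc_sampler()
```

Because the particle population carries mass through the tempering path, both modes remain represented at the final temperature, which is exactly where a single random-walk MCMC chain tends to get stuck.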
 Nicolas Chopin: Beyond MCMC: two case studies and a few thoughts
(joint work with 1.) Gabriel Stoltz, Tony Lelièvre and 2.) Judith Rousseau, Brunero Liseo) In this talk, I'd like to present two of my current projects, which share the objective of going "beyond MCMC", i.e. of proposing alternative sampling schemes that overcome some limitations of MCMC.
 Adaptive biased sampling and mixture modelling (with G. Stoltz, T. Lelièvre). In the molecular physics literature, the phrase adaptive biased sampling refers to a set of techniques designed for "metastable potential functions" (essentially multimodal probability densities), in which one adaptively estimates a bias function B(xi) such that the biased target q(theta) \propto p(theta) * B(xi) has a uniform margin in the xi direction. I'll discuss how well these methods may be adapted to the Bayesian estimation of mixture posteriors.
 Bayesian nonparametric inference for long memory Gaussian processes (with J. Rousseau and B. Liseo): In Rousseau et al. (2008), we proposed a novel method for the Bayesian nonparametric estimation of the spectral density of a Gaussian long-memory process. I'll discuss how to solve the two main computational challenges of this problem: (a) the likelihood function involves the inverse of a possibly large Toeplitz matrix, and (b) the posterior distribution is transdimensional. With respect to (a), we consider a simple approximation of the likelihood, which may be used either directly or as a tool for building an importance sampling proposal. With respect to (b), we compare different methods, focusing on reversible jump MCMC and population/sequential Monte Carlo methods. In particular, we explain how the likelihood function may be computed recursively, which makes the use of sequential Monte Carlo particularly interesting, especially if the dataset is large.
 Simon Cotter: Data Assimilation for a Viscous Incompressible Fluid (in collaboration with Masoumeh Dashti, James Robinson and Andrew Stuart)
We study the inverse problem of determining the initial state, and possibly the forcing, of a viscous incompressible fluid observed directly or indirectly over a period of time. We will formulate this as a Bayesian inverse problem, giving rise to a probability measure on function space for the initial vector field, and the forcing (or model error). We will describe effective MCMC methods that allow us to sample from such a distribution, and present some numerical results.
 Dan Crisan: Monte Carlo approximations of Feynman-Kac representations
Feynman-Kac representations are probabilistic expressions of solutions of linear/nonlinear PDEs and stochastic PDEs. Over the last ten years, these types of representations have spawned a variety of Monte Carlo methods for numerically approximating solutions of certain classes of linear/nonlinear PDEs and SPDEs. The aim of the talk is to present a survey of Monte Carlo approximations of Feynman-Kac representations and their applications to filtering and finance. The talk is based on joint work with Manolarakis and Ghazali.
 Andreas Eberle: Quantitative approximations of evolving probability measures and sequential Markov Chain Monte Carlo methods
We study approximations of evolving probability measures by an interacting particle system. The particle system dynamics is a combination of independent Markov chain moves and importance sampling/resampling steps. Under global regularity conditions, we derive non-asymptotic error bounds for the particle system approximation. The main motivation is applications to sequential MCMC methods for Monte Carlo integral estimation.
 Yalchin Efendiev: Uncertainty quantification with multiscale models in porous media flow applications
In this talk, I will describe the use of coarse-scale models in uncertainty quantification. The problem under consideration is posed as a sampling problem from a posterior distribution. The main goal is to develop an efficient sampling technique, within the framework of Markov chain Monte Carlo methods, that uses coarse-scale models and gradients of the target distribution. The purpose is to reduce the computational cost of Langevin algorithms for dynamic data integration problems. We propose to use inexpensive coarse-scale solutions in calculating the proposals of Langevin algorithms. To guarantee correct and efficient sampling, we also test the proposals with coarse-scale solutions. Compared with the direct Langevin algorithm based on fine-scale solutions, the proposed method generates a modified Markov chain by incorporating the coarse-scale information of the problem. Under some mild technical conditions we show that the modified Markov chain converges to the correct posterior distribution. Our numerical examples show that the proposed coarse-gradient Langevin algorithms are much faster than the direct Langevin algorithms but have similar acceptance rates. Numerical results for multiphase flow and transport are presented. Both Gaussian and non-Gaussian prior models are studied.
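The idea of screening proposals with a cheap coarse-scale evaluation before paying for an expensive fine-scale one can be sketched as a two-stage (delayed-acceptance) Metropolis step. This is a generic illustration with stand-in densities and a random-walk proposal, not the authors' coarse-gradient Langevin algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_fine(x):
    # Expensive "fine-scale" log-posterior (stand-in: standard Gaussian).
    return -0.5 * x ** 2

def log_coarse(x):
    # Cheap "coarse-scale" approximation with a deliberate bias.
    return -0.5 * (x / 1.1) ** 2

def two_stage_mh(n_iter=20000, step=1.0):
    x = 0.0
    lf, lc = log_fine(x), log_coarse(x)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        y = x + step * rng.normal()
        lcy = log_coarse(y)
        # Stage 1: screen the proposal using the coarse model only.
        if np.log(rng.uniform()) < lcy - lc:
            # Stage 2: only now evaluate the fine model; the correction
            # ratio restores detailed balance w.r.t. the fine posterior.
            lfy = log_fine(y)
            if np.log(rng.uniform()) < (lfy - lf) - (lcy - lc):
                x, lf, lc = y, lfy, lcy
        chain[i] = x
    return chain

chain = two_stage_mh()
```

Proposals that the coarse model rejects never trigger a fine-scale solve, which is where the computational savings come from; the stage-2 correction keeps the chain exactly invariant for the fine posterior.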
 Chris L. Farmer: Optimal control and data assimilation: principles and approximations
There are four basic types of activity involving combinations of uncertainty and optimisation:
 Uncertainty propagation, where the problem is to predict the probabilistic behaviour of an uncertain system, with uncertainty in the initial or boundary conditions or in the static properties.
 Data assimilation, also known as "history matching", "system identification" or inverse problems.
 Decision making. Here a choice must be made between competing courses of action. For each choice of action the outcome is uncertain.
 Optimal control of an uncertain system. A system is only known in a probabilistic way. One has to design a control policy that optimises the system. The problem is particularly difficult when optimisation of the measurement system is included in the problem.
 Nial Friel: Classification using distance nearest neighbours
There has been some interest in the literature recently in probabilistic classification methods which extend the widely used deterministic k-nearest neighbours algorithm, most notably (Holmes and Adams 2002, 2003) and (Cucala et al 2009). This talk presents a related approach in which neighbours are defined based on distances. Inference for this model relies on an MCMC auxiliary variable scheme, which has some relation to approximate Bayesian computation.
 Flavio Goncalves: Monte Carlo inference for Jump-Diffusion Processes
This work proposes a Monte Carlo Expectation-Maximisation algorithm for inference in discretely observed jump-diffusion processes. The most challenging step of the algorithm is to sample from the jump-diffusion conditional on the observations. This step is performed using the Conditional Jump Exact Algorithm, which is also proposed in this work. The algorithm draws exact samples from the conditional jump-diffusion via retrospective rejection sampling.
 Simon Godsill: Dynamic Estimation of Group Objects Using Sequential MCMC
(joint work with Sze Kim Pang and Francois Septier)
I will describe new methods for modelling and inference about stochastic group behaviour. The models rely on an interacting multivariate diffusion for objects displaying similar behavioural patterns, and the task is to determine how many independent groupings are present and to track the objects within each group. We use a sequential Monte Carlo method that applies MCMC directly to the filtering distribution at each time step in order to infer quantities in the high-dimensional and complex state-space. Applications are found in the tracking of moving vehicles and in the financial modelling of stocks and shares.
 Radu Herbei: Hybrid samplers for ill-posed inverse problems
In the Bayesian approach to ill-posed inverse problems, regularization is imposed by specifying a prior distribution on the parameters of interest, and MCMC samplers are used to extract information about the posterior distribution. The aim of this work is to investigate the convergence properties of the random-scan random walk Metropolis (RSM) algorithm for posterior distributions in ill-posed inverse problems. We provide an accessible set of sufficient conditions, in terms of the observational model and the prior, to ensure geometric ergodicity of RSM samplers of the posterior distribution. We illustrate how these conditions can be checked in an application to the inversion of oceanographic tracer data.
 Mark Huber: Perfect simulation of repulsive point processes
Repulsive spatial data arise whenever objects such as trees or towns compete for scarce resources. Many different models have been created to deal with such data. In this talk I will explain improved perfect simulation methods for two of these models that generate Monte Carlo variates exactly from the desired distributions. For density-based models such as the Strauss process, simple birth-death chains can be augmented with a swapping move to speed up convergence. Continuous-time bounding chains can then be employed to obtain perfect samples.
Another class of models, due to Matérn, requires a different approach. Here a Metropolis Markov chain can be constructed using auxiliary Poisson processes. Fortunately, bounding chains can also be used with these new Metropolis chains to obtain perfect samples. Finally, I will show how a product estimator method for these models can be created for precise approximation without the need to know the sample variance.
for simulating from a probability measure π ∈ P(E). Nonlinear Markov kernels (e.g. [10, 11]) K : P(E) × E → P(E) can be constructed to, in some sense, improve over MCMC methods. However, such nonlinear kernels cannot be simulated exactly, so approximations of the nonlinear kernels are constructed using auxiliary or self-interacting chains. Several nonlinear kernels are presented and it is demonstrated that, under some conditions, the associated self-interacting approximations exhibit a strong law of large numbers; our proof technique is via the Poisson equation and Foster-Lyapunov conditions. We investigate the performance of our approximations with some simulations.
 Adam Johansen: Monte Carlo Filtering of Piecewise Deterministic Processes
(joint work with Nick Whiteley and Simon Godsill)
We present efficient Monte Carlo algorithms for performing online Bayesian inference in a broad class of models: those in which the distributions of interest may be represented by time marginals of certain continuoustime jump processes conditioned upon a realisation of some noisy, discrete observation sequence. Two existing schemes can be interpreted as particular cases of the proposed method. Examples will be provided to illustrate the significant performance improvements which the proposed approach can provide.
 Galin Jones: Output Analysis for Markov Chain Monte Carlo
Markov chain Monte Carlo is a method of producing a correlated sample from a target distribution. Features of the target distribution are then estimated using this sample. Thus a fundamental question in MCMC
is: When should the sampling stop? That is, when have we achieved good estimates? I will introduce a method that stops the simulation when the width of a confidence interval is less than a user-specified value. Hence calculating Monte Carlo standard errors is a critical step in assessing the output of the simulation. In this talk I will give an overview of fixed-width methodology and methods for calculating Monte Carlo standard errors in the univariate case, and discuss extending this methodology to high-dimensional settings. The main results will be illustrated in several examples.
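A common way to compute the Monte Carlo standard error that drives such a fixed-width stopping rule is the method of non-overlapping batch means. The sketch below uses i.i.d. draws as a stand-in for MCMC output, and the batch count and tolerance are illustrative choices:

```python
import numpy as np

def batch_means_se(chain, n_batches=30):
    """Monte Carlo standard error of the sample mean via non-overlapping
    batch means: the standard deviation of the batch averages, divided by
    the square root of the number of batches."""
    n = len(chain) // n_batches * n_batches
    batches = np.asarray(chain[:n]).reshape(n_batches, -1).mean(axis=1)
    return batches.std(ddof=1) / np.sqrt(n_batches)

# Fixed-width stopping rule: keep simulating until the half-width of the
# approximate 95% confidence interval falls below a user-specified eps.
rng = np.random.default_rng(2)
chain = list(rng.normal(size=1000))  # stand-in for MCMC draws
eps = 0.05
while 1.96 * batch_means_se(chain) >= eps:
    chain.extend(rng.normal(size=1000))
est = np.mean(chain)
```

For genuinely correlated MCMC output the batch size must grow with the run length for the batch means to be approximately independent, which is one of the conditions underlying the fixed-width theory discussed in the talk.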
 Theodore Kypraios: Bayesian Inference and Model Choice for Nonlinear Stochastic Processes: Applications to stochastic epidemic modelling.
Analysing infectious disease data is a non-standard problem. In general, inference problems for disease outbreak data are complicated by the facts that (i) the data are inherently dependent and (ii) the data are usually incomplete, in the sense that the actual process of infection is not observed. However, it is often possible to formulate simple stochastic models which describe the key features of epidemic spread. Markov chain Monte Carlo methods play a vital role in efficiently drawing Bayesian inference for the parameters of interest which govern transmission. Although standard data augmentation MCMC algorithms perform well in specific circumstances, their performance often deteriorates due to the high dimension of the missing data and the dependence between the missing data and the model parameters.
In this talk we will discuss a class of non-centered MCMC algorithms for stochastic epidemic models and their application to modelling the 2001 UK FMD outbreak. Furthermore, we will present a range of different models for the spread of hospital infections such as MRSA in wards, and show how MCMC allows us to estimate the parameters of interest. Finally, we show how Reversible-Jump MCMC algorithms, and algorithms based on the methodology of power posteriors (Friel and Pettitt, 2008), can be used for Bayesian model choice in stochastic epidemics.
I suggest that when a Markov chain Monte Carlo method gives great accuracy for a multivariate integral, it is because the dynamics of the variables are "weakly dependent" and the observables are of "Dobrushin class". Then a large deviation estimate gives exponentially small probability, in the number of iterations, for the time average of the observable to fail to be within a prescribed tolerance of its integral. Comments from people more expert than me are welcome!
 Xiao-Li Meng and Yaming Yu: To Center or Not to Center: That is Not the Question: An Ancillarity-Sufficiency Interweaving Strategy (ASIS) for Boosting MCMC Efficiency
We propose a general-purpose variance reduction technique for Markov chain Monte Carlo estimators based on the zero-variance principle introduced in the physics literature. We show how it is possible, for a large class of interesting Bayesian statistical models, to generate moment estimators with ideally null variance. The variance reduction achieved in practice is substantial, as we show with some toy examples and a real application to Bayesian inference for credit risk estimation.
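The zero-variance construction rests on the fact that, under mild regularity conditions, the score of the target integrates to zero, so score-based functions can serve as control variates. A first-order version of the idea reads as follows (shape only; notation illustrative):

```latex
% Since E_pi[grad log pi(X)] = 0 under mild conditions,
% z(x) is a valid control variate for estimating E_pi[f(X)].
\mathbb{E}_{\pi}\!\left[ \nabla_x \log \pi(x) \right] = 0,
\qquad
\tilde f(x) = f(x) + a^{\top} z(x),
\quad z(x) = \tfrac{1}{2} \nabla_x \log \pi(x),
\qquad
a^{*} = -\,\mathrm{Var}_{\pi}\!\left[ z \right]^{-1} \mathrm{Cov}_{\pi}\!\left[ z, f \right]
```

Both f and the modified function have the same expectation under the target, but the MCMC estimator built from the modified function can have a far smaller (ideally null) variance for a well-chosen coefficient.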
This talk is based on the papers:
 J. Møller and K. Helisova. Power diagrams and interaction processes for unions of discs. Advances in Applied Probability, 40, 321-347.
 J. Møller and K. Helisova. Likelihood inference for unions of interacting discs. Research Report R200818, Department of Mathematical Sciences, Aalborg University. (Submitted for publication.)
The first paper studies a flexible class of finite disc process models with interaction between the discs. We let U denote the random set given by the union of discs, and use for the disc process an exponential family density whose canonical sufficient statistic depends only on geometric properties of U such as the area, perimeter, Euler-Poincaré characteristic, and number of holes. This includes the quermass-interaction process and the continuum random cluster model as special cases. Viewing our model as a connected component Markov point process, we establish local and spatial Markov properties, which become useful for handling the problem of edge effects when only U is observed within a bounded observation window. The power tessellation and its dual graph become major tools when establishing inclusion-exclusion formulae, formulae for computing geometric characteristics of U, and stability properties of the underlying disc process density.
Algorithms for constructing the power tessellation of U and for simulating the disc process are discussed.
 Eric Moulines: On interacting particle approximations of the smoothing distribution in general state-space models
A long-standing problem in general state-space models is the approximation of the smoothing distribution of a given state, or a sequence of states, conditional on the observations from the past, the present, and the future. The aim of this talk is to provide a rigorous foundation for the calculation, or approximation, of such smoothing distributions, and to analyze in a common unifying framework different schemes for reaching this goal. Through a cohesive and generic exposition of the scientific literature we offer several novel extensions allowing one to approximate and sample from the joint smoothing distribution.
Dirichlet process mixtures (MDPs) are now standard in semiparametric modelling. Posterior inference for such models is typically performed using Markov chain Monte Carlo methods, which can be roughly categorised into marginal and conditional methods. The former analytically integrate out the infinite-dimensional component of the hierarchical model and sample from the marginal distribution of the remaining variables using the Gibbs sampler. Conditional methods impute the Dirichlet process and update it as a component of the Gibbs sampler. Since this requires imputation of an infinite-dimensional process, implementation of the conditional method has relied on finite approximations. In the first part of the talk we show how to avoid such approximations by novel Gibbs sampling algorithms which sample from the exact posterior distribution of quantities of interest. The approximations are avoided by the technique of retrospective sampling. Motivated by the modelling of copy number variations (CNVs) in the human genome, we have developed hidden Markov models where the likelihood is given by an MDP. We term the resulting model an HMM-MDP model. Thus, we deal with a model with two levels of clustering for the observed data: a temporally persisting (local) clustering induced by the HMM and a global clustering induced by the Dirichlet process. The second part of the talk shows how to design efficient conditional methods for fitting these models, elaborating on the methods developed in the first part of the talk and on dynamic programming techniques.
 Yvo Pokern: Nonparametric Bayesian Drift Estimation for SDEs
(joint work with O. Papaspiliopoulos, G. Roberts, A. Stuart)
A Bayesian framework for estimation of the drift function in an SDE given high-frequency discrete-time observations is presented, which operates on function spaces. For 1D diffusions on the circle, it is shown that, based on observing the local time and using Gaussian priors, the procedure is well-defined and the posterior enjoys robustness against small deviations of the local time. Complemented by a finite element implementation, this enables error control for a fixed random sample all the way from high-frequency discrete observation to the numerical computation of the posterior mean and covariance. Some numerical experiments extend our observations to subsets of the real line other than circles and exhibit further probabilistic convergence properties such as rates of posterior contraction.
 Christian Robert: Computational approaches to Bayesian model choice (Joint work with Jean-Michel Marin, INRIA Futurs, Université Paris-Sud Orsay, and Nicolas Chopin, CREST.)
In this talk, we will survey the flurry of recent results on the ABC algorithm that have appeared in the literature, including our own on the ABC-PMC version of the ABC algorithm and on the use of "exact" ABC algorithms for the selection of Ising models.
 Judith Rousseau: Use of Importance Sampling for Repeated MCMC
The idea of using importance sampling (IS) with Markov Chain Monte Carlo (MCMC) has been around for over a decade. It has been used in Bayesian statistics to assess prior sensitivity and to carry out crossvalidation
and in maximum likelihood estimation when MCMC is required to evaluate the likelihood function. While there have been a number of successful applications, some properties of IS when used with MCMC have had relatively little investigation. It is the purpose of this paper to investigate further the efficiency of IS with MCMC.
 Giovanni Sebastiani: Markov chain analysis of Ant Colony Optimization
We describe some theoretical results of a study on the expected time needed by a class of Ant Colony Optimization algorithms to solve combinatorial optimization problems. The algorithm is described by means of a suitable Markov chain with absorbing states in the set of maximizers. First, we present some general results on the expected runtime of the considered class of algorithms. These results are then specialized to the case of some pseudo-Boolean functions. The results obtained for these functions are also compared to those for the well-investigated (1+1)-Evolutionary Algorithm.
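For comparison, the (1+1)-Evolutionary Algorithm used as a baseline above is simple to state. A minimal version on the pseudo-Boolean OneMax function (counting ones), for which the expected runtime is the classical Theta(n log n), might look as follows (an illustrative sketch, not the talk's ACO algorithms):

```python
import random

def one_plus_one_ea(n=50, seed=0):
    """(1+1)-EA on OneMax: flip each bit independently with probability 1/n
    and keep the offspring iff it is at least as fit as the parent.
    Returns the number of iterations until the optimum (all ones) is found."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    steps = 0
    while sum(x) < n:
        # Standard bit mutation: each bit flips with probability 1/n.
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        # Elitist selection: accept if fitness does not decrease.
        if sum(y) >= sum(x):
            x = y
        steps += 1
    return steps

steps = one_plus_one_ea()
```

The absorbing-Markov-chain view in the abstract applies directly here: the chain's state is the current bit string, the all-ones string is the absorbing state, and the expected runtime is the expected absorption time.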
 Giorgos Sermaidis: Exact inference for discretely observed diffusions
The aim of this work is to make Bayesian inference for discretely observed diffusions, the challenge being that the transition density of the process is typically unavailable. Competing methods rely on augmenting the data with the missing paths, since there exists an analytical expression for the complete-data likelihood. Such implementations require a rather fine discretization of the imputed path, leading to convergence issues and computationally expensive algorithms.
Our method is based on exact simulation of diffusions (Beskos et al 2006) and has the advantage that there is no discretization error. We present a Gibbs sampler for sampling from the posterior distribution of the parameters and discuss how to increase its efficiency using reparametrizations of the augmentation scheme (Papaspiliopoulos et al 2007).
 Geir Storvik: On the flexibility of acceptance probabilities in auxiliary variable MetropolisHastings algorithms
The use of auxiliary variables for generating proposal variables within a Metropolis-Hastings setting has been suggested in many different settings. It has been of particular interest for simulation from complex distributions, such as multimodal distributions, and in transdimensional approaches. For many of these approaches, the acceptance probabilities that are used appear somewhat magical, and separate proofs of their validity have been given in each case.
In this talk I will present a general framework for the construction of acceptance probabilities in auxiliary variable proposal generation. In addition to demonstrating the similarities between many of the algorithms proposed in the literature, the framework shows that there is great flexibility in how to construct such acceptance probabilities, on top of the flexibility in how to construct the proposals. With this flexibility, alternative acceptance probabilities are suggested. Some numerical experiments will also be reported.
A wide variety of inverse problems in PDEs can be formulated in a common mathematical framework, by adopting the perspective of Bayesian statistics on a function space. This common mathematical structure can then be exploited to construct robust MCMC algorithms to gain insight into the structure of the inverse problem.
The common mathematical structure is that the posterior measure has density with respect to a Gaussian reference measure, with log density satisfying certain natural bounds and Lipschitz continuity properties. The ideas will be illustrated on a number of applications including oceanography and nuclear waste management.
In this presentation we limit attention to binary MRFs defined on a regular lattice and propose an approximate forward-backward algorithm for such models. The forward part of the (exact) forward-backward algorithm computes a series of joint marginal distributions by summing out each of the variables in turn. We represent these joint marginal distributions by interactions of different orders. We develop recursive formulas for these interaction parameters, first computing the first-order interactions, then the second-order interactions, and so on. The approximation is defined by setting to zero all interaction parameters that are sufficiently close to zero. In addition, an interaction parameter is set to zero whenever all associated lower-level interactions are (approximated to) zero. For the approximate algorithm to be computationally feasible for large lattices, the number of interactions that are approximated to zero must be sufficiently large. We illustrate the performance of the resulting algorithm by a number of examples. For many models the algorithm is computationally feasible even for large lattices. This includes the Ising model, for which the computational cost of the approximate algorithm is linear in the number of nodes. We also present examples of models where the number of remaining nonzero interaction parameters is too high, so that even the approximate algorithm is computationally infeasible.
 Authors: Simon Wilson (Trinity College Dublin), speaker; Ercan Kuruoglu (CNR Pisa, Italy); Alicia Quiros Carretero (University Rey Juan Carlos, Madrid)
We consider a factor analysis problem where the data are a set of images of the same scene observed at different frequencies. The goal is to separate out the different factors (or sources) of light that contributed to the scene. The motivating example for this work is data on extraterrestrial microwaves observed at 5 channels, for which it is desired to separate the observed intensities into their sources, such as galactic dust, cosmic microwave background, etc.
There is a lot of prior information on these sources, which can be reasonably modelled as Gaussian mixtures. Within- and between-pixel correlations between sources suggest mixtures of multivariate Gaussians, or even mixtures of multivariate Gaussian Markov random fields, as desirable factor priors. We discuss MCMC schemes to implement such inference, and in particular the computational difficulties that arise when these prior dependencies are incorporated.