Abstracts
- Roberto Casarin: Sequential Monte Carlo for complex sampling problems (joint with Christophe Andrieu)
Sequential Monte Carlo (SMC) is a class of general sampling methods that allows us to design good samplers for difficult sampling problems. In contrast with classical Markov chain Monte Carlo (MCMC) algorithms, the kernels used in SMC need not be reversible or even Markov. Moreover, for inference in high-dimensional static models, a strategy combining a long tempering sequence with SMC sampling can overcome the degeneracy problem of SMC and perform as well as an MCMC algorithm, or even better when the latter exhibits the traditional problem of slow mixing. Note, however, that the recent literature on populations of MCMC chains provides an alternative and promising way to address the exploration of the space and to improve on classical MCMC. In this work we follow the SMC framework, but suggest some new algorithms that combine SMC with MCMC and could be particularly useful for simulation from multimodal distributions and from high-dimensional distributions. The performance of the proposed algorithms will be demonstrated on some challenging sampling problems.
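As a minimal illustration of this kind of SMC/MCMC combination (a toy sketch, not the authors' algorithm; all names and parameter choices are assumptions), the following tempers a cloud of particles from a broad initial distribution to a bimodal target, interleaving importance reweighting, resampling, and random-walk Metropolis rejuvenation moves:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Bimodal target: equal mixture of N(-5, 1) and N(+5, 1) (up to a constant).
    return np.logaddexp(-0.5 * (x + 5.0) ** 2, -0.5 * (x - 5.0) ** 2)

def log_init(x):
    # Broad initial distribution N(0, 10^2) (up to a constant).
    return -0.5 * (x / 10.0) ** 2

def smc_tempered(n_particles=2000, n_steps=30, n_mcmc=5):
    betas = np.linspace(0.0, 1.0, n_steps + 1)
    x = rng.normal(0.0, 10.0, size=n_particles)
    logw = np.zeros(n_particles)
    for b_old, b_new in zip(betas[:-1], betas[1:]):
        # Incremental importance weights for the tempering step.
        logw += (b_new - b_old) * (log_target(x) - log_init(x))
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # Resample when the effective sample size drops below N/2.
        if 1.0 / np.sum(w ** 2) < n_particles / 2:
            x = x[rng.choice(n_particles, size=n_particles, p=w)]
            logw[:] = 0.0
        # Rejuvenate with random-walk Metropolis moves invariant for the
        # current tempered distribution pi_b = init^(1-b) * target^b.
        def log_pi(y):
            return (1.0 - b_new) * log_init(y) + b_new * log_target(y)
        for _ in range(n_mcmc):
            prop = x + rng.normal(0.0, 1.0, size=n_particles)
            accept = np.log(rng.uniform(size=n_particles)) < log_pi(prop) - log_pi(x)
            x = np.where(accept, prop, x)
    return x

particles = smc_tempered()
```

The MCMC moves spread particles within each mode, while the gradual tempering lets the population keep mass on both modes, which a single slowly mixing chain can fail to do.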
- Dan Crisan: Monte-Carlo approximations of Feynman-Kac representations
Feynman-Kac representations are probabilistic expressions of solutions of linear/nonlinear PDEs and stochastic PDEs. Over the last ten years, these types of representations have spawned a variety of Monte Carlo methods for numerically approximating solutions of certain classes of linear/nonlinear PDEs and SPDEs. The aim of the talk is to present a survey of Monte Carlo approximations of Feynman-Kac representations and their applications to filtering and finance. The talk is based on joint work with Manolarakis and Ghazali.
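In the simplest linear case, the Feynman-Kac representation of the heat equation u_t = (1/2) u_xx with initial condition u(0, .) = f is u(t, x) = E[f(x + W_t)], where W is a Brownian motion, and this expectation can be approximated directly by Monte Carlo. The sketch below (function names are illustrative) checks the estimate against the exact solution u(t, x) = x^2 + t for f(x) = x^2:

```python
import numpy as np

rng = np.random.default_rng(1)

def heat_mc(f, t, x, n_paths=200_000):
    # Monte Carlo estimate of u(t, x) = E[f(x + W_t)], which solves
    # u_t = 0.5 * u_xx with u(0, .) = f.  W_t ~ N(0, t).
    w = rng.normal(0.0, np.sqrt(t), size=n_paths)
    return f(x + w).mean()

# For f(x) = x^2 the exact solution is u(t, x) = x^2 + t.
est = heat_mc(lambda y: y ** 2, t=0.5, x=1.0)
```

Nonlinear PDEs and SPDEs require branching particle systems or backward SDE discretisations rather than a single expectation, but the linear case already shows the basic mechanism.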
- Chris L. Farmer: Optimal control and data assimilation: principles and approximations
There are four basic types of activity involving combinations of uncertainty and optimisation:
1. Uncertainty propagation, where the problem is to predict the probabilistic behaviour of an uncertain system, with uncertainty in the initial or boundary conditions or in the static properties.
2. Data assimilation, also known as "history matching", "system identification" or inverse problems.
3. Decision making, where a choice must be made between competing courses of action, and for each choice of action the outcome is uncertain.
4. Optimal control of an uncertain system, where the system is known only in a probabilistic way and one has to design a control policy that optimises it. The problem is particularly difficult when optimisation of the measurement system is included in the problem.
- Robert McKay: Why Markov Monte Carlo Methods give such good answers for high-dimensional integrals.
I suggest that when a Markov Monte Carlo method gives great accuracy for a multivariate integral, it is because the dynamics of the variables are "weakly dependent" and the observables are of "Dobrushin class". A large deviation estimate then gives an exponentially small probability, in the number of iterations, that the time average of the observable fails to be within a prescribed tolerance of its integral. Comments from people more expert than me are welcome!
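The time-average estimator in question can be made concrete with a small sketch (illustrative, not from the talk): a random-walk Metropolis chain targeting the standard normal, whose time average of the observable x^2 approximates the integral E[x^2] = 1:

```python
import numpy as np

rng = np.random.default_rng(4)

def rw_metropolis_time_average(g, n_iters=200_000, step=1.0):
    # Random-walk Metropolis chain targeting N(0, 1); returns the
    # time average of the observable g along the chain.
    x = 0.0
    total = 0.0
    for _ in range(n_iters):
        prop = x + step * rng.normal()
        # Accept with probability min(1, pi(prop)/pi(x)) for pi = N(0, 1).
        if np.log(rng.random()) < 0.5 * (x * x - prop * prop):
            x = prop
        total += g(x)
    return total / n_iters

est = rw_metropolis_time_average(lambda x: x * x)  # targets E[x^2] = 1
```

The claim in the abstract concerns how quickly, and with what probability, such a time average concentrates around the true integral as the number of iterations grows.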
- Xiao-Li Meng and Yaming Yu : To Center or Not to Center: That is Not the Question: An Ancillarity-Sufficiency Interweaving Strategy (ASIS) for Boosting MCMC Efficiency
Seeking a good parameterization/transformation has been a well-established strategy for efficient MCMC implementation. For a broad class of multi-level models, there exist two well-known competing parameterizations, the so-called centered parameterization (CP) and non-centered parameterization (NCP), and much literature has been devoted to the question of when to use which and how to compromise between them via partial CP/NCP, a task that often requires data-dependent tuning. In this paper we provide both theoretical justifications and empirical evidence to show that there exists a surprisingly general and powerful strategy for boosting MCMC efficiency via simply interweaving---but not alternating---the two parameterizations. The interweaving strategy is so powerful that it generally guarantees geometric convergence of the resulting algorithm even if the CP chain or the NCP chain is not geometrically convergent.
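As an illustration of one interweaving sweep (a toy sketch under assumed settings, not one of the talk's examples), consider the hierarchical model y_i | theta_i ~ N(theta_i, 1), theta_i | mu ~ N(mu, A) with A known and a flat prior on mu. The sweep draws the latents under CP (the sufficient augmentation), updates mu, switches to the ancillary NCP residuals theta_tilde_i = theta_i - mu, updates mu again under NCP, and maps back:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data from the toy model; A is assumed known.
A = 4.0
n = 50
mu_true = 3.0
theta_true = rng.normal(mu_true, np.sqrt(A), size=n)
y = rng.normal(theta_true, 1.0)

def asis_gibbs(y, n_iter=5000):
    n = len(y)
    mu = 0.0
    draws = np.empty(n_iter)
    post_var = 1.0 / (1.0 + 1.0 / A)  # var of theta_i | y_i, mu
    for it in range(n_iter):
        # CP draw of the latent effects (sufficient augmentation).
        theta = rng.normal(post_var * (y + mu / A), np.sqrt(post_var))
        # Update mu given theta under CP: mu | theta ~ N(mean(theta), A/n).
        mu = rng.normal(theta.mean(), np.sqrt(A / n))
        # Switch to the ancillary (NCP) augmentation and redraw mu there:
        # y_i = mu + theta_tilde_i + noise, so mu | . ~ N(mean(y - theta_tilde), 1/n).
        theta_tilde = theta - mu
        mu = rng.normal((y - theta_tilde).mean(), np.sqrt(1.0 / n))
        # Map back so theta is consistent with the new mu.
        theta = mu + theta_tilde
        draws[it] = mu
    return draws

mu_draws = asis_gibbs(y)
```

In this conjugate toy model the marginal posterior of mu is N(mean(y), (1+A)/n), and the interwoven chain mixes well even in regimes where pure CP or pure NCP Gibbs would be slow.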
Part I of this talk (given by Meng) presents ASIS and its theoretical results, which show that the interweaving strategy achieves this seemingly magical property by taking advantage of the discordance of the two parameterizations, namely the sufficiency of CP and the ancillarity of NCP, to substantially reduce the Markov dependence. The ancillarity-sufficiency reformulation of the CP-NCP dichotomy allows us to borrow insight from the well-known Basu's theorem on the independence of (complete) sufficient and ancillary statistics, albeit a full Bayesian version of Basu's theorem is currently lacking. Part II of this talk (given by Yu) demonstrates the competitiveness of ASIS for real-world Bayesian analysis via 1) a probit model for predicting latent membranous lupus nephritis; 2) an interval-censored normal model for studying the lifetime of fluorescent lights; and 3) a Cox process model for detecting changes in source intensity in Poisson photon counts from the Chandra X-ray telescope for a (candidate) neutron/quark star, which was the problem that motivated the ASIS strategy.
- Giovanni Sebastiani: Markov chain analysis of Ant Colony Optimization (joint work with W. Gutjahr)
We describe some theoretical results of a study on the expected time needed by a class of Ant Colony Optimization algorithms to solve combinatorial optimization problems. The algorithm is described by means of a suitable Markov chain with absorbing states in the set of maximizers. First, we present some general results on the expected runtime of the considered class of algorithms. These results are then specialized to the case of some pseudo-Boolean functions. The results obtained for these functions are also compared to those from the well-investigated (1+1)-Evolutionary Algorithm.
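The (1+1)-Evolutionary Algorithm used as the comparison baseline is easy to state; the following sketch (parameter choices are illustrative) runs it on the classic pseudo-Boolean OneMax function and records the hitting time of the optimum:

```python
import numpy as np

rng = np.random.default_rng(3)

def one_max(x):
    # Pseudo-Boolean test function: number of ones in the bit string.
    return int(x.sum())

def one_plus_one_ea(n=50, max_iters=100_000):
    # (1+1)-EA: flip each bit independently with probability 1/n and
    # accept the offspring if it is at least as fit as the parent.
    x = rng.integers(0, 2, size=n)
    fx = one_max(x)
    for t in range(1, max_iters + 1):
        flips = rng.random(n) < 1.0 / n
        y = np.where(flips, 1 - x, x)
        fy = one_max(y)
        if fy >= fx:
            x, fx = y, fy
        if fx == n:
            return t  # hitting time of the optimum
    return max_iters

hitting_time = one_plus_one_ea()
```

On OneMax the expected hitting time of the (1+1)-EA is Theta(n log n), which is the kind of runtime bound the abstract's Markov chain analysis establishes for the ACO variants.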
The presentation was based on the following paper which is available online:
W.J. Gutjahr and G. Sebastiani, "Runtime analysis of Ant Colony optimization with best-so-far reinforcement", Methodology and Computing in Applied Probability, vol. 10, pp. 409-433, 2008. DOI 10.1007/s11009-007-9047-1 (online)