CRiSM seminars since July 2007
Mon 24 Jun, '24
Statistics Seminar, MB0.07, MSB

Mon 17 Jun, '24
Statistics Seminar, MB0.07, MSB

Mon 10 Jun, '24
Statistics Seminar, MB0.07, MSB

Mon 3 Jun, '24
Statistics Seminar, MB0.07, MSB

Mon 27 May, '24
Statistics Seminar, MB0.07, MSB

Mon 20 May, '24
Statistics Seminar, MB0.07, MSB

Mon 13 May, '24
Statistics Seminar, MB0.07, MSB

Mon 6 May, '24
Statistics Seminar, MB0.07, MSB

Mon 22 Apr, '24
Statistics Seminar, MS.03, Zeeman

Mon 11 Mar, '24
Statistics Seminar: From kernel methods to neural networks: double descent, function spaces, and learnability. Statistics common room.
Abstract: In this talk, I will discuss the relationship between kernel methods and (two-layer) neural networks for generalization, aiming to understand their separation theoretically from the perspective of function spaces. First, I will examine random features models (a typical two-layer neural network, and also a kernel method) from the under-parameterized to the over-parameterized regime, recovering double descent and demonstrating the benefits of over-parameterization. Second, I will compare kernel methods and neural networks via random features, moving from reproducing kernel Hilbert space (RKHS) to Barron space, which leaves an open question: what is the suitable function space for neural networks?

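The random-features story in the abstract can be sketched numerically. Below is a minimal toy illustration (the 1-d target, noise level and widths are invented, not from the talk): random-feature models of growing width are fitted by minimum-norm least squares, the estimator the double-descent analysis concerns.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n noisy samples of a smooth 1-d target.
n, n_test = 30, 200
x = rng.uniform(-1.0, 1.0, n)
x_test = np.linspace(-1.0, 1.0, n_test)
target = lambda t: np.sin(2 * np.pi * t)
y = target(x) + 0.1 * rng.standard_normal(n)

def random_features(t, W, b):
    # ReLU random features: phi_k(t) = max(0, W_k * t + b_k)
    return np.maximum(0.0, np.outer(t, W) + b)

test_errors = {}
for width in [5, 15, 30, 300]:      # under-, near-, and over-parameterized
    W = rng.standard_normal(width)
    b = rng.uniform(-1.0, 1.0, width)
    Phi = random_features(x, W, b)
    # lstsq returns the minimum-norm least-squares solution, which is the
    # interpolating estimator studied in the over-parameterized regime.
    a, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    pred = random_features(x_test, W, b) @ a
    test_errors[width] = float(np.mean((pred - target(x_test)) ** 2))
```

Plotting `test_errors` against width, averaged over many seeds, typically traces the double-descent curve, with a peak near the interpolation threshold (width close to the sample size n).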
Mon 19 Feb, '24
Statistics Seminar: TBD, Stats Common Room, by Professor Terry Lyons, University of Oxford

Mon 22 Jan, '24
Statistics Seminar: TBD, Stats Common Room

Mon 4 Dec, '23
Statistics Seminar: TBD, Stats Common Room, by Professor Geoff Nicholls, University of Oxford

Mon 20 Nov, '23
Statistics Seminar: Graphical Models of Intelligent Cause, Stats Common Room

Mon 6 Nov, '23
Statistics Seminar: Statistical learning in biological neural networks, Stats Common Room, by Professor Johannes Schmidt-Hieber, University of Twente

Mon 23 Oct, '23
Statistics Seminar: Bayesian Fusion, Stats Common Room

Mon 9 Oct, '23
Statistics Seminar, Stats Common Room

Wed 21 Jun, '23
CRiSM Seminar, MB0.07

Wed 7 Jun, '23
CRiSM Seminar, MB0.07

Wed 24 May, '23
CRiSM Seminar, MB0.07

Wed 17 May, '23
CRiSM Seminar, MB0.07

Wed 10 May, '23
CRiSM Seminar, MB0.07

Wed 3 May, '23
CRiSM Seminar, MB0.07

Wed 26 Apr, '23
CRiSM Seminar, MB0.07

Wed 8 Mar, '23
CRiSM Seminar, MB0.07

Wed 8 Feb, '23
CRiSM Seminar, MB0.07

Wed 25 Jan, '23
CRiSM Seminar, MB0.07

Wed 11 Jan, '23
CRiSM Seminar, MB0.07

Wed 30 Nov, '22
CRiSM Seminar, MB0.07 and online via Teams

Wed 16 Nov, '22
CRiSM Seminar, MB0.07 and online via Teams

Wed 2 Nov, '22
CRiSM Seminar, MB0.07 and online via Teams

Wed 19 Oct, '22
CRiSM Seminar, MB0.07 and online via Teams

Wed 5 Oct, '22
CRiSM Seminar, MB0.07 and online via Teams

Wed 29 Jun, '22
CRiSM Seminar, MB0.08

Wed 15 Jun, '22
CRiSM Seminar, MB0.08

Wed 1 Jun, '22
CRiSM Seminar, MB0.08

Wed 18 May, '22
CRiSM Seminar, MB0.08

Thu 5 May, '22
CRiSM Seminar, MB0.08

Wed 16 Mar, '22
CRiSM Seminar, MB0.08

Wed 2 Mar, '22
CRiSM Seminar, MB0.08

Wed 16 Feb, '22
CRiSM Seminar, MB0.08

Wed 2 Feb, '22
CRiSM Seminar, MB0.08

Wed 19 Jan, '22
CRiSM Seminar, MB0.08

Wed 8 Dec, '21
CRiSM Seminar, MB0.08

Wed 24 Nov, '21
CRiSM Seminar, MB0.08

Wed 10 Nov, '21
CRiSM Seminar, MB0.07

Thu 28 Oct, '21
CRiSM Seminar, MB0.08

Wed 30 Jun, '21
CRiSM Seminar, via Teams

Thu 17 Jun, '21
CRiSM Seminar, via Teams

Wed 2 Jun, '21
CRiSM Seminar, via Teams

Wed 19 May, '21
CRiSM Seminar, via Teams

Wed 5 May, '21
CRiSM Seminar, via Teams

Wed 17 Mar, '21
CRiSM Seminar, via Teams

Wed 3 Mar, '21
CRiSM Seminar

Wed 3 Feb, '21
CRiSM Seminar, via Teams

Wed 20 Jan, '21
CRiSM Seminar, via Teams

Thu 10 Dec, '20
CRiSM Seminar, via Teams

Thu 26 Nov, '20
CRiSM Seminar, via Teams

Thu 12 Nov, '20
CRiSM Seminar, via Teams

Wed 28 Oct, '20
CRiSM Seminar, via Teams

Thu 25 Jun, '20
CRiSM Seminar, MB0.08

Thu 11 Jun, '20
CRiSM Seminar: Olivier Renaud. Online.

Thu 14 May, '20
CRiSM Seminar: Jane Hutton, "I know I don't know: Covid-19 patients' journeys through hospital". Online.
Abstract: I was asked to consider the available data on Covid-19 patients' paths into hospital, and then to intensive care, death, transfer or discharge. Once in intensive care, patients can move to the states death, discharge home, discharge to nursing home, or discharge to a hospital ward. I was invited by those who think I know about the analysis of times to events with messy data. The data are messy, and there are other challenges. I benefited from conversations with medical friends and colleagues, particularly a respiratory physician. Depending on permissions, I will either illustrate issues with artificial data or present actual results.

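The patient pathways described above form a multi-state model; conditional on discrete transition probabilities it can be simulated directly. The transition probabilities below are invented placeholders, not estimates from any Covid-19 data.

```python
import random

# Hypothetical transition probabilities between hospital states; these
# numbers are illustrative placeholders, not estimates from real data.
transitions = {
    "ward": [("icu", 0.15), ("discharge", 0.60), ("death", 0.05), ("ward", 0.20)],
    "icu":  [("ward", 0.40), ("death", 0.25), ("icu", 0.35)],
}
absorbing = {"discharge", "death"}

def simulate_path(rng, start="ward", max_steps=200):
    """Follow one patient through the multi-state model until absorption."""
    state, path = start, [start]
    for _ in range(max_steps):
        if state in absorbing:
            break
        r, acc = rng.random(), 0.0
        for nxt, p in transitions[state]:
            acc += p
            if r < acc:
                state = nxt
                break
        path.append(state)
    return path

rng = random.Random(42)
paths = [simulate_path(rng) for _ in range(2000)]
death_rate = sum(p[-1] == "death" for p in paths) / len(paths)
```

Replacing the simulation with estimated transition intensities, and times with event-time distributions, gives the competing-risks analyses the abstract alludes to.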
Thu 30 Apr, '20
CRiSM Seminar: Simon French. Online.

Wed 4 Mar, '20
CRiSM Seminar: Scaling Optimal Transport for High-Dimensional Learning. MB0.07, Mathematical Sciences Building.
Speaker: Gabriel Peyré, CNRS and Ecole Normale Supérieure

Wed 26 Feb, '20
CRiSM Seminar: Sequential learning via a combined reinforcement learning and data assimilation ansatz for decision support. MB0.07.

Wed 12 Feb, '20
CRiSM Seminar: Model Property-Based and Structure-Preserving ABC for complex stochastic models. MB0.07.

Wed 29 Jan, '20
CRiSM Seminar: Modelling spatially correlated binary data, Professor Jianxin Pan. MB0.07.

Wed 15 Jan, '20
CRiSM Seminar: Deep learning in genomics, and a topic model for single cell analysis, Gerton Lunter. MB0.07.

Thu 5 Dec, '19
CRiSM Seminar, MB0.07

Thu 21 Nov, '19
CRiSM Seminar: Modelling Networks and Network Populations via Graph Distances. MB0.07, Mathematical Sciences Building.
Speaker: Sofia Olhede

Thu 7 Nov, '19
CRiSM Seminar: High-dimensional principal component analysis with heterogeneous missingness. MB0.07.

Thu 24 Oct, '19
CRiSM Seminar: Localizing Changes in High-Dimensional Vector Autoregressive Processes. MB0.07.

Fri 28 Jun, '19
CRiSM Seminar, MB2.23
Speaker: Dr. Pauline O'Shaughnessy, University of Wollongong, Australia
Title: Bootstrap inference in longitudinal data with multiple sources of variation
Abstract: Linear mixed models allow us to model the dependence among responses by incorporating random effects. Such dependence in longitudinal data from a complex design can arise both from clustering between subjects and from repeated measurements within a subject. When the underlying distribution is not fully specified, we consider a class of estimators defined by the Gaussian quasi-likelihood for normal-like response variables. Historically, it has been challenging to make inference about the variance components in the framework of mixed models. We propose a new weighted estimating equation bootstrap, which varies the weighting scheme across parameter estimators. The performance of the weighted estimating equation bootstrap is evaluated empirically in simulation studies, showing improved coverage and variance estimation for the variance component estimators under models with normal and non-normal distributions for the random effects. The asymptotic properties will also be addressed, and we apply the new bootstrap method to a longitudinal dataset in biology. (Joint work with Professor Alan Welsh, Australian National University.)

Tue 25 Jun, '19
CRiSM Seminar, MS.05
Speaker: Prof. Malgorzata Bogdan, University of Wroclaw, Poland (15:00-16:00)
Abstract: The Sorted L-One Penalized Estimator (SLOPE) is a relatively new convex optimization procedure for identifying predictors in large databases.

Thu 13 Jun, '19
CRiSM Seminar, MSB2.22
Speaker: Clair Barnes, University College London, UK
Title: Death & the Spider: postprocessing multi-ensemble weather forecasts with uncertainty quantification
Abstract: Ensemble weather forecasts often under-represent uncertainty, leading to overconfidence in their predictions. Multi-model forecasts combining several individual ensembles have been shown to display greater skill than single-ensemble forecasts in predicting temperatures, but tend to retain some bias in their joint predictions. Established postprocessing techniques are able to correct bias and calibration issues in univariate forecasts, but are generally not designed to handle multivariate forecasts (of several variables, or at several locations, say) without separate specification of the structure of the inter-variable dependence. We propose a flexible multivariate Bayesian postprocessing framework, developed around a directed acyclic graph representing the relationships between the ensembles and the observed weather. The posterior forecast is inferred from the ensemble forecasts and an estimate of their shared discrepancy, which is obtained from a collection of past forecast-observation pairs. The approach is illustrated with an application to forecasts of UK surface temperatures during the winter periods from 2007 to 2013.
Speaker: Prof. Karla Hemming, University of Birmingham, UK (15:00-16:00)
Title: The I-squared-CRT statistic to describe treatment effect heterogeneity in cluster randomized trials

Thu 30 May, '19
CRiSM Seminar, A1.01
Speaker: Dr. Yoav Zemel, University of Göttingen, Germany (15:00-16:00)
Title: Procrustes Metrics on Covariance Operators and Optimal Transportation of Gaussian Processes
Abstract: Covariance operators are fundamental in functional data analysis, providing the canonical means to analyse functional variation via the celebrated Karhunen-Loève expansion. These operators may themselves be subject to variation, for instance in contexts where multiple functional populations are to be compared. Statistical techniques to analyse such variation are intimately linked with the choice of metric on covariance operators, and with the intrinsic infinite-dimensionality of these operators. We describe the manifold-like geometry of the space of trace-class infinite-dimensional covariance operators and associated key statistical properties, under the recently proposed infinite-dimensional version of the Procrustes metric (Pigoli et al., Biometrika 101, 409-422, 2014). We identify this space with that of centred Gaussian processes equipped with the Wasserstein metric of optimal transportation. The identification allows us to provide a detailed description of those aspects of this manifold-like geometry that are important in terms of statistical inference; to establish key properties of the Fréchet mean of a random sample of covariances; and to define generative models that are canonical for such metrics and link with the problem of registration of warped functional data.

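For covariance operators that happen to commute, the Procrustes (Bures-Wasserstein) metric in the abstract has a closed form in the shared eigenbasis; a small sketch (the spectra are made up for illustration):

```python
import math

def procrustes_distance_diag(lam, mu):
    """Procrustes / Bures-Wasserstein distance between two covariance
    operators that commute, represented by their eigenvalue sequences in
    a shared eigenbasis.  In this commuting case the general formula
        d(C1, C2)^2 = tr C1 + tr C2 - 2 tr (C1^{1/2} C2 C1^{1/2})^{1/2}
    reduces to the l2 distance between square-rooted eigenvalues.
    """
    return math.sqrt(sum((math.sqrt(a) - math.sqrt(b)) ** 2
                         for a, b in zip(lam, mu)))

# Two trace-class spectra (illustrative polynomially decaying eigenvalues).
lam = [1.0 / k ** 2 for k in range(1, 50)]
mu = [2.0 / k ** 2 for k in range(1, 50)]
d = procrustes_distance_diag(lam, mu)
```

The non-commuting case requires the full trace formula above (a matrix square root); the commuting case already shows why the metric is finite exactly for trace-class operators.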
Mon 13 May, '19
CRiSM Seminar, MB0.07
Speaker: Prof. Renaud Lambiotte, University of Oxford, UK (15:00-16:00)
Title: Higher-Order Networks
Abstract: Network science provides powerful analytical and computational methods to describe the behaviour of complex systems. From a networks viewpoint, the system is seen as a collection of elements interacting through pairwise connections. Canonical examples include social networks, neuronal networks and the Web. Importantly, elements often interact directly with a relatively small number of other elements, while they may influence large parts of the system indirectly via chains of direct interactions. In other words, networks allow for a sparse architecture together with global connectivity. Compared with mean-field approaches, network models often have greater explanatory power because they account for the non-random topologies of real-life systems. However, new forms of high-dimensional and time-resolved data have now also shed light on the limitations of these models. In this talk, I will review recent advances in the development of higher-order network models, which account for different types of higher-order dependencies in complex data. These include temporal networks, where the network is itself a dynamical entity, and higher-order Markov models, where chains of interactions are more than a combination of links.

Thu 2 May, '19
CRiSM Seminar, A1.01
Speaker: Dr. Ben Calderhead, Department of Mathematics, Imperial College London
Abstract: Quasi-Monte Carlo (QMC) methods for estimating integrals are attractive since the resulting estimators typically converge at a faster rate than pseudo-random Monte Carlo. However, they can be difficult to set up on arbitrary posterior densities within the Bayesian framework, in particular for inverse problems. We introduce a general parallel Markov chain Monte Carlo (MCMC) framework, for which we prove a law of large numbers and a central limit theorem. In that context, non-reversible transitions are investigated. We then extend this approach to the use of adaptive kernels and state conditions under which ergodicity holds. As a further extension, an importance sampling estimator is derived, for which asymptotic unbiasedness is proven. We consider the use of completely uniformly distributed (CUD) numbers within the above-mentioned algorithms, which leads to a general parallel quasi-MCMC (QMCMC) methodology. We prove consistency of the resulting estimators and demonstrate numerically that this approach scales close to n^{-2} as we increase parallelisation, instead of the usual n^{-1} typical of standard MCMC algorithms. In practical statistical models we observe improvements of multiple orders of magnitude compared with pseudo-random methods.

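The talk concerns CUD-driven quasi-MCMC; the simpler underlying fact, that low-discrepancy points beat pseudo-random points for smooth integrands, can be checked in a few lines with a van der Corput sequence (this toy uses plain QMC on a fixed integral, not the QMCMC construction of the talk):

```python
import random

def van_der_corput(i, base=2):
    """i-th point of the base-b van der Corput low-discrepancy sequence."""
    q, denom = 0.0, 1.0
    while i > 0:
        i, r = divmod(i, base)
        denom *= base
        q += r / denom
    return q

f = lambda x: x * x          # integrate f over [0, 1]; exact value is 1/3
n = 1024

# Quasi-Monte Carlo: deterministic low-discrepancy points.
qmc_est = sum(f(van_der_corput(i)) for i in range(n)) / n

# Plain Monte Carlo: pseudo-random points.
rng = random.Random(0)
mc_est = sum(f(rng.random()) for _ in range(n)) / n
```

With n a power of two, the first n base-2 van der Corput points stratify [0, 1] perfectly, so the QMC error here is of order 1/n rather than the Monte Carlo 1/sqrt(n).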
Wed 27 Mar, '19
CRiSM Seminar, MSB2.23
Speaker: Daniel Rudolf, Institute for Mathematical Stochastics, Georg-August-Universität Göttingen
Title: Quantitative spectral gap estimate and Wasserstein contraction of simple slice sampling
Abstract: By proving Wasserstein contraction of simple slice sampling for approximate sampling from distributions determined by log-concave, rotationally invariant unnormalized densities, we derive an explicit quantitative lower bound on the spectral gap. In particular, the lower bound on the spectral gap carries over to more general distributions, depending only on the volume of the (super-)level sets of the unnormalized density.

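Simple slice sampling, the algorithm analysed in this talk, is easy to state when the level sets of the density are known intervals; a sketch for the standard Gaussian (an idealised case where each slice can be sampled exactly, with no stepping-out procedure):

```python
import math, random

def simple_slice_sampler(n, x0=0.0, seed=1):
    """Simple slice sampler for the unnormalised density f(x) = exp(-x^2/2).
    Given the auxiliary height y ~ U(0, f(x)], the slice {x : f(x) > y}
    is the interval (-s, s) with s = sqrt(-2 log y), so each iteration
    samples the slice exactly.
    """
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n):
        f_x = math.exp(-x * x / 2.0)
        y = f_x * (1.0 - rng.random())        # uniform on (0, f_x]
        s = math.sqrt(-2.0 * math.log(y))     # slice endpoint
        x = rng.uniform(-s, s)                # uniform draw on the slice
        out.append(x)
    return out

draws = simple_slice_sampler(5000)
```

The rotational invariance assumed in the talk is exactly what makes such level sets balls, so that their volumes determine the contraction and spectral-gap bounds.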
Wed 20 Mar, '19
CRiSM Day, MS.01

Thu 14 Mar, '19
CRiSM Seminar, A1.01
Speaker: Spencer Wheatley, ETH Zurich, Switzerland
Title: The "endo-exo" problem in financial market price fluctuations, and the ARMA point process
Abstract: The "endo-exo" problem -- decomposing system activity into exogenous and endogenous parts -- lies at the heart of statistical identification in many fields of science. Consider, for example, the problem of determining whether an earthquake is a mainshock or an aftershock, or whether a surge in the popularity of a YouTube video is because it is "going viral" or simply due to high activity across the platform. Solution of this problem is often plagued by spurious inference (namely, falsely strong interaction) due to neglect of trends, shocks and shifts in the data. The predominant point process model for endo-exo analysis in quantitative finance is the Hawkes process. A comparison of this field with the relatively mature fields of econometrics and time series analysis identifies the need to control more rigorously for trends and shocks. Doing so allows us to test the hypothesis that the market is "critical" -- analogous to a unit root test commonly done on economic time series -- and to challenge earlier results. Continuing the "lessons learned" from the time series field, it is argued that the Hawkes point process is analogous to integer-valued AR time series. Following this analogy, we introduce the ARMA point process, which flexibly combines exogenous background activity (Poisson), shot-noise bursty dynamics, and self-exciting (Hawkes) endogenous activity. We illustrate a connection to ARMA time series models, derive an MCEM (Monte Carlo Expectation Maximization) algorithm to enable maximum likelihood estimation of this process, and assess consistency by simulation study. Remaining challenges in estimation and model selection, as well as possible solutions, are discussed.
[1] Wheatley, S., Wehrli, A., and Sornette, D. "The endo-exo problem in high frequency financial price fluctuations and rejecting criticality". To appear in Quantitative Finance (2018). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3239443
[2] Wheatley, S., Schatz, M., and Sornette, D. "The ARMA Point Process and its Estimation." arXiv preprint arXiv:1806.09948 (2018).

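The Hawkes process, the baseline endogenous model discussed above, can be simulated by Ogata's thinning algorithm; a minimal sketch with an exponential kernel and made-up parameters:

```python
import math, random

def simulate_hawkes(mu, alpha, beta, horizon, seed=7):
    """Simulate a Hawkes process with exogenous (Poisson) rate mu and
    excitation kernel alpha * exp(-beta * dt) per past event, via
    Ogata's thinning algorithm.  Stationarity requires alpha / beta < 1.
    """
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        # The intensity decays between events, so its current value is a
        # valid upper bound until the next event arrives.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - s)) for s in events)
        t += rng.expovariate(lam_bar)
        if t > horizon:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - s)) for s in events)
        if rng.random() < lam_t / lam_bar:    # accept with prob lam(t)/lam_bar
            events.append(t)
    return events

events = simulate_hawkes(mu=1.0, alpha=0.8, beta=2.0, horizon=50.0)
```

The branching ratio alpha/beta (here 0.4) is the "endogeneity" being estimated in endo-exo analyses; the criticality hypothesis in the abstract corresponds to this ratio approaching one.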
Thu 28 Feb, '19
CRiSM Seminar, MSB2.23
Speaker: Prof. Valerie Isham, Statistical Science, University College London, UK (15:00-16:00)
Title: Stochastic Epidemic Models: Approximations, structured populations and networks

Thu 14 Feb, '19
CRiSM Seminar, MSB2.23
Speaker: Philipp Hermann, Institute of Applied Statistics, Johannes Kepler University Linz, Austria (14:00-15:00)
Title: LDJump: Estimating Variable Recombination Rates from Population Genetic Data
Abstract: Recombination is a process during meiosis which starts with the formation of DNA double-strand breaks and results in an exchange of genetic material between homologous chromosomes. In many species, recombination is concentrated in narrow regions known as hotspots, flanked by large zones with low recombination. As recombination plays an important role in evolution, its estimation and the identification of hotspot positions are of considerable interest. In this talk we introduce LDJump, our method to estimate local population recombination rates with relevant summary statistics as explanatory variables in a regression model. More precisely, we divide the DNA sequence into small segments and estimate the recombination rate per segment via the regression model. In order to obtain change-points in recombination we apply a frequentist segmentation method. This approach controls a type I error and provides confidence bands for the estimator. Overall, LDJump identifies hotspots with high accuracy under different levels of genetic diversity and demography, and is computationally fast even for genomic regions spanning many megabases. We will present a practical application of LDJump to a region of human chromosome 21 and compare our estimated population recombination rates with experimentally measured recombination events. (Joint work with Andreas Futschik, Irene Tiemann-Boege, and Angelika Heissl.)
Speaker: Professor Dr. Ingo Scholtes, Data Analytics Group, University of Zürich (15:00-16:00)
Title: Optimal Higher-Order Network Analytics for Time Series Data
Abstract: Network-based data analysis techniques such as graph mining, social network analysis, link prediction and clustering are an important foundation for data science applications in computer science, computational social science, economics and bioinformatics. They help us to detect patterns in large corpora of data that capture relations between genes, brain regions, species, humans, documents, or financial institutions. While this potential of the network perspective is undisputed, advances in data sensing and collection increasingly provide us with high-dimensional, temporal, and noisy data on real systems. The complex characteristics of such data sources pose fundamental challenges for network analytics. They question the validity of network abstractions of complex systems and pose a threat to interdisciplinary applications of data analytics and machine learning. To address these challenges, I introduce a graphical modelling framework that accounts for the complex characteristics of real-world data on complex systems. I demonstrate this approach in time series data on technical, biological, and social systems. Current methods to analyse the topology of such systems discard information on the timing and ordering of interactions, which however determines which elements of a system can influence each other via paths. To solve this issue, I introduce a modelling framework that (i) generalises standard network representations towards multi-order graphical models for causal paths, and (ii) uses statistical learning to achieve an optimal balance between explanatory power and model complexity. The framework advances the theoretical foundation of data science and sheds light on the important question of when network representations of time series data are justified. It is the basis for a new generation of data analytics and machine learning techniques that account for both temporal and topological characteristics in real-world data.

Thu 31 Jan, '19
CRiSM Seminar, MSB2.23
Speaker: Professor Paul Fearnhead, Lancaster University (14:00-15:00)
Title: Efficient Approaches to Changepoint Problems with Dependence Across Segments
Abstract: Changepoint detection is an increasingly important problem across a range of applications. It is most commonly encountered when analysing time-series data, where changepoints correspond to points in time where some feature of the data, for example its mean, changes abruptly. Often there are important computational constraints when analysing such data, with the number of data sequences and their lengths meaning that only very efficient methods for detecting changepoints are practically feasible. A natural way of estimating the number and location of changepoints is to minimise a cost that trades off a measure of fit to the data against the number of changepoints fitted. There are now some efficient algorithms that can exactly solve the resulting optimisation problem, but they are only applicable in situations where there is no dependence of the mean of the data across segments. Using such methods can lead to a loss of statistical efficiency in situations where, for example, it is known that the change in mean must be positive. This talk will present a new class of efficient algorithms that can exactly minimise our cost whilst imposing certain constraints on the relationship of the mean before and after a change. These algorithms have links to recursions that are seen for discrete-state hidden Markov models and within sequential Monte Carlo. We demonstrate the usefulness of these algorithms on problems such as detecting spikes in calcium imaging data. Our algorithm can analyse data of length 100,000 in less than a second, and has been used by the Allen Brain Institute to analyse the spike patterns of over 60,000 neurons. (Joint work with Toby Hocking, Sean Jewell, Guillem Rigaill and Daniela Witten.)
Speaker: Dr. Sandipan Roy, Department of Mathematical Sciences, University of Bath (15:00-16:00)
Title: Network Heterogeneity and Strength of Connections
Abstract: Detecting the strength of connection in a network is a fundamental problem in understanding the relationships among individuals. Often it is more important to understand how strongly two individuals are connected than the mere presence or absence of an edge. This paper introduces a new concept of strength of connection in a network through a nonparametric object called the "Grafield". The "Grafield" is a piecewise-constant bivariate kernel function that compactly represents the affinity or strength of ties (or interactions) between every pair of vertices in the graph. We estimate the "Grafield" function through a spectral analysis of the Laplacian matrix followed by hard thresholding (Gavish & Donoho, 2014) of the singular values. Our estimation methodology is also valid for asymmetric directed networks. As a by-product we obtain an efficient procedure for edge probability matrix estimation as well. We validate our proposed approach with several synthetic experiments and compare it with existing algorithms for edge probability matrix estimation. We also apply our proposed approach to three real datasets: understanding the strength of connection in (a) a social messaging network, (b) a network of political parties in the US Senate, and (c) a neural network of neurons and synapses in C. elegans, a type of worm.

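The penalised cost minimisation that Fearnhead's abstract describes can be solved exactly, in the unconstrained case, by the O(n^2) optimal-partitioning dynamic programme; a sketch with squared-error segment cost and no across-segment constraints, i.e. the baseline the talk improves on:

```python
def optimal_partitioning(x, beta):
    """Exact penalised changepoint detection: minimise the sum of
    squared-error segment costs plus a penalty beta per changepoint,
    via the O(n^2) optimal-partitioning dynamic programme.
    """
    n = len(x)
    s1, s2 = [0.0] * (n + 1), [0.0] * (n + 1)   # prefix sums and squares
    for i, v in enumerate(x):
        s1[i + 1] = s1[i] + v
        s2[i + 1] = s2[i] + v * v

    def cost(i, j):
        # Squared-error cost of fitting x[i:j] by its mean.
        m = j - i
        return s2[j] - s2[i] - (s1[j] - s1[i]) ** 2 / m

    F = [0.0] * (n + 1)        # F[t]: optimal penalised cost of x[:t]
    F[0] = -beta
    last = [0] * (n + 1)       # last[t]: start of the final segment
    for t in range(1, n + 1):
        best, arg = float("inf"), 0
        for s in range(t):
            val = F[s] + cost(s, t) + beta
            if val < best:
                best, arg = val, s
        F[t], last[t] = best, arg

    cps, t = [], n             # backtrack to recover the changepoints
    while t > 0:
        t = last[t]
        if t > 0:
            cps.append(t)
    return sorted(cps)

# Piecewise-constant signal with a single mean shift at index 50.
data = [0.0] * 50 + [5.0] * 50
changepoints = optimal_partitioning(data, beta=10.0)
```

Imposing constraints such as "the mean must increase at a change" breaks this simple recursion; handling that is the contribution of the algorithms presented in the talk.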
Thu 17 Jan, '19
CRiSM Seminar, MSB2.23
Speaker: Prof. Galin Jones, School of Statistics, University of Minnesota (14:00-15:00)
Title: Bayesian Spatiotemporal Modeling Using Hierarchical Spatial Priors, with Applications to Functional Magnetic Resonance Imaging
Abstract: We propose a spatiotemporal Bayesian variable selection model for detecting activation in functional magnetic resonance imaging (fMRI) settings. Following recent research in this area, we use binary indicator variables for classifying active voxels. We assume that the spatial dependence in the images can be accommodated by applying an areal model to parcels of voxels. The use of parcellation and a spatial hierarchical prior (instead of the popular Ising prior) results in a posterior distribution amenable to exploration with an efficient Markov chain Monte Carlo (MCMC) algorithm. We study the properties of our approach by applying it to simulated data and an fMRI data set.
Speaker: Dr. Flavio Goncalves, Universidade Federal de Minas Gerais, Brazil (15:00-16:00)
Title: Exact Bayesian inference in spatiotemporal Cox processes driven by multivariate Gaussian processes
Abstract: In this talk we present a novel methodology to perform Bayesian inference for spatiotemporal Cox processes where the intensity function depends on a multivariate Gaussian process. Dynamic Gaussian processes are introduced to allow for evolution of the intensity function over discrete time. The novelty of the method lies in the fact that no discretisation error is involved, despite the non-tractability of the likelihood function and the infinite dimensionality of the problem. The method is based on a Markov chain Monte Carlo algorithm that samples from the joint posterior distribution of the parameters and latent variables of the model. The models are defined in a general and flexible way, but they are amenable to direct sampling from the relevant distributions due to careful characterisation of their components. The models also allow for the inclusion of regression covariates and/or temporal components to explain the variability of the intensity function. These components may be subject to relevant interactions with space and/or time. Real and simulated examples illustrate the methodology, followed by concluding remarks.

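Conditional on its intensity function, a Cox process is an inhomogeneous Poisson process, which can be simulated exactly by thinning; a sketch with a deterministic intensity standing in for one realised draw of the random intensity:

```python
import math, random

def sample_inhomogeneous_poisson(intensity, lam_max, horizon, seed=3):
    """Simulate an inhomogeneous Poisson process on [0, horizon] by
    thinning a homogeneous Poisson process whose rate lam_max dominates
    intensity(t) everywhere.  Conditional on a realised intensity
    function, a Cox process is exactly such a process.
    """
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(lam_max)
        if t > horizon:
            return events
        if rng.random() < intensity(t) / lam_max:   # keep with prob lam(t)/lam_max
            events.append(t)

# Illustrative deterministic intensity standing in for one draw of the
# random (e.g. Gaussian-process-driven) intensity of a Cox process.
intensity = lambda t: 2.0 + 1.5 * math.sin(t)
events = sample_inhomogeneous_poisson(intensity, lam_max=3.5, horizon=20.0)
```

This acceptance/rejection structure is also what underlies exact (discretisation-free) inference schemes of the kind the abstract describes: the likelihood is evaluated only at retained and thinned points, never on a grid.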
Thu 6 Dec, '18
CRiSM Seminar, A1.01
Speaker: Dr. Carlo Albert, EAWAG, Switzerland
Title: Bayesian Inference for Stochastic Differential Equation Models through Hamiltonian Scale Separation
Abstract: Bayesian parameter inference is a fundamental problem in model-based data science. Given observed data, which is believed to be a realization of some parameterized model, the aim is to find a distribution of likely parameter values that are able to explain the observed data. This so-called posterior distribution expresses the probability of a given parameter being the "true" one, and can be used for making probabilistic predictions. For truly stochastic models this posterior distribution is typically extremely expensive to evaluate. We propose a novel approach for generating posterior parameter distributions for stochastic differential equation models calibrated to measured time series. The algorithm is inspired by re-interpreting the posterior distribution as a statistical mechanics partition function of an object akin to a polymer, whose dynamics is confined by both the model and the measurements. To arrive at distribution samples, we employ a Hamiltonian Monte Carlo approach combined with multiple time-scale integration. A separation of time scales naturally arises if either the number of measurement points or the number of simulation points becomes large. Furthermore, at least for 1D problems, we can decouple the harmonic modes between measurement points and solve the fastest part of their dynamics analytically. Our approach is applicable to a wide range of inference problems and is highly parallelizable.

Tue 20 Nov, '18
CRiSM Seminar, A1.01
Speaker: Dr. Kayvan Sadeghi, University College London
Title: Probabilistic Independence, Graphs, and Random Networks

Thu 8 Nov, '18
CRiSM Seminar, A1.01
Speaker: Dr. Martin Tegner, University of Oxford
Title: A probabilistic approach to local volatility
Abstract: The local volatility model is a celebrated model widely used for pricing and hedging financial derivatives. While the model's main appeal is its capability of reproducing any given surface of observed option prices -- it provides a perfect fit -- the essential component of the model is a latent function which can only be unambiguously determined in the limit of infinite data. To (re)construct this function, numerous calibration methods have been suggested, involving steps of interpolation and extrapolation, most often of parametric form and with point estimates as result. We look at the calibration problem in a probabilistic framework with a fully nonparametric approach based on Gaussian process priors. This immediately gives a way of encoding prior beliefs about the local volatility function and a hypothesis model which is highly flexible whilst not being prone to overfitting. Besides providing a method for calibrating a (range of) point estimate(s), we seek to draw posterior inference on the distribution over local volatility, to better understand the uncertainty attached to the calibration in particular, and to the model in general. Further, we seek to understand dynamical properties of local volatility by augmenting the hypothesis space with a time dimension. Ideally, this gives us means of inferring predictive distributions not only locally, but also for entire surfaces forward in time.

Thu 25 Oct, '18
CRiSM Seminar, A1.01
Speaker: Professor Martyn Plummer, Department of Statistics, University of Warwick
Abstract: We consider approximate Bayesian model choice for model selection problems that involve models whose Fisher information matrices may fail to be invertible along other competing sub-models. Such singular models do not obey the regularity conditions underlying the derivation of Schwarz's Bayesian information criterion (BIC), and the penalty structure in BIC generally does not reflect the frequentist large-sample behaviour of their marginal likelihood. While large-sample theory for the marginal likelihood of singular models has been developed recently, the resulting approximations depend on the true parameter value and lead to a paradox of circular reasoning. Guided by examples such as determining the number of components of mixture models, the number of factors in latent factor models, or the rank in reduced-rank regression, we propose a resolution to this paradox and give a practical extension of BIC for singular model selection problems.

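For regular models, Schwarz's BIC penalises each free parameter by log n; this is the penalty whose justification, as the abstract notes, breaks down in singular settings. A sketch of the regular computation for two nested Gaussian mean models (the data are invented):

```python
import math

def gaussian_bic(data, mean, n_params):
    """Schwarz's BIC, -2 log-likelihood + k log n, for a Gaussian model
    with the given mean and known unit variance.  This is the regular
    penalty whose derivation fails for singular models.
    """
    n = len(data)
    log_lik = sum(-0.5 * math.log(2 * math.pi) - 0.5 * (x - mean) ** 2
                  for x in data)
    return -2.0 * log_lik + n_params * math.log(n)

# Invented sample; compare a fixed-mean null model against a fitted mean.
data = [0.9, 1.2, 0.7, 1.1, 1.3, 0.8, 1.0, 1.2, 0.9, 1.1]
bic_null = gaussian_bic(data, mean=0.0, n_params=0)
bic_fit = gaussian_bic(data, mean=sum(data) / len(data), n_params=1)
```

In singular problems, such as counting mixture components, the effective penalty is no longer simply (number of parameters) times log n, which motivates the extension proposed in the talk.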
Fri 15 Jun, '18- |
CRiSM SeminarB3.022-3pm B3.02, June 15, 2018 - Sarah Heaps (Newcastle University)Identifying the effect of public holidays on daily demand for gas Gas distribution networks need to ensure the supply and demand for gas are balanced at all times. In practice, this is supported by a number of forecasting exercises which, if performed accurately, can substantially lower operational costs, for example through more informed preparation for severe winters. Amongst domestic and commercial customers, the demand for gas is strongly related to the weather and patterns of life and work. In regard to the latter, public holidays have a pronounced effect, which often extends into neighbouring days. In the literature, the days over which this protracted effect is felt are typically pre-specified as fixed windows around each public holiday. This approach fails to allow for any uncertainty surrounding the existence, duration and location of the protracted holiday effects. We introduce a novel model for daily gas demand which does not fix the days on which the proximity effect is felt. Our approach is based on a four-state, non-homogeneous hidden Markov model with cyclic dynamics. In this model the classification of days as public holidays is observed, but the assignment of days as “pre-holiday”, “post-holiday” or “normal” is unknown. Explanatory variables recording the number of days to the preceding and succeeding public holidays guide the evolution of the hidden states and allow smooth transitions between normal and holiday periods. To allow for temporal autocorrelation, we model the logarithm of gas demand at multiple locations, conditional on the states, using a first-order vector autoregression (VAR(1)). We take a Bayesian approach to inference and consider briefly the problem of specifying a prior distribution for the autoregressive coefficient matrix of a VAR(1) process which is constrained to lie in the stationary region. 
We summarise the results of an application to data from Northern Gas Networks (NGN), the regional network serving the North of England, a preliminary version of which is already being used by NGN in its annual medium-term forecasting exercise. |
|
Fri 1 Jun, '18- |
CRiSM SeminarB3.02Victor Panaretos (EPFL)
What is the dimension of a stochastic process?
How can we determine whether a mean-square continuous stochastic process is, in fact, finite-dimensional, and if so, what its actual dimension is? And how can we do so at a given level of confidence? This question is central to many methods for functional data analysis, which require low-dimensional representations, whether by functional PCA or other methods. The difficulty is that the determination is to be made on the basis of iid replications of the process observed discretely and with measurement error contamination. This adds a ridge to the empirical covariance, obfuscating the underlying dimension. We build a matrix-completion-inspired test procedure that circumvents this issue by measuring the best possible least-squares fit of the empirical covariance's off-diagonal elements, optimised over covariances of given finite rank. For a fixed grid of sufficient size, we determine the statistic's asymptotic null distribution as the number of replications grows. We then use it to construct a bootstrap implementation of a stepwise testing procedure controlling the family-wise error rate corresponding to the collection of hypotheses formalising the question at hand. The procedure involves no tuning parameters or pre-smoothing, is indifferent to the homoskedasticity or lack of it in the measurement errors, and does not assume a low-noise regime. Based on joint work with Anirvan Chakraborty (EPFL).
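The "ridge" the abstract refers to can be seen in a toy simulation: for a hypothetical rank-one process observed with i.i.d. measurement error on a fixed grid, the noise inflates only the diagonal of the empirical covariance, while the off-diagonal entries still estimate the finite-rank part. This illustrates the premise only, not the speaker's test procedure; the grid, basis function and noise level are all arbitrary choices.

```python
import math
import random

random.seed(1)
grid = [j / 10 for j in range(10)]            # fixed observation grid
phi = [math.sin(math.pi * t) for t in grid]   # single basis function
sigma = 0.5                                   # measurement-error sd
n = 10000

# empirical covariance of Y_j = A * phi_j + noise, with A ~ N(0, 1)
p = len(grid)
cov = [[0.0] * p for _ in range(p)]
for _ in range(n):
    a = random.gauss(0.0, 1.0)
    y = [a * phi[j] + random.gauss(0.0, sigma) for j in range(p)]
    for j in range(p):
        for k in range(p):
            cov[j][k] += y[j] * y[k] / n

# off-diagonal entries estimate the rank-1 covariance phi_j * phi_k ...
offdiag_err = abs(cov[2][7] - phi[2] * phi[7])
# ... while the diagonal is inflated by the noise ridge sigma^2 = 0.25
ridge = cov[2][2] - phi[2] ** 2
```

With the diagonal contaminated, it is natural that the test in the abstract fits low-rank covariances to the off-diagonal entries only.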
|
|
Fri 18 May, '18- |
CRiSM SeminarA1.01Caitlin Buck (University of Sheffield) A dilemma in Bayesian chronology construction Chronology construction was one of the first applications used to showcase the value of MCMC methods for Bayesian inference (Naylor and Smith, 1988; Buck et al, 1992). As a result, Bayesian chronology construction is now ubiquitous in archaeology and is becoming increasingly popular in palaeoenvironmental research. Currently available software requires users to construct the statistical models and input prior knowledge by hand, requiring considerable expertise and patience. As a result, the published chronologies for most sites are based on a single model which is assumed to be correct. Recent research has, however, led to a proposal to automate production of Bayesian chronological models from field records. The approach uses directed acyclic graphs (DAGs) to represent the site stratigraphy and, from these, constructs priors for the Bayesian hierarchical models (Dye and Buck, 2015). The related software is in the developmental stage but, before it can be released, we need to decide what advice to offer users about working with the large number of potential models that the new software will construct. In this seminar I will outline how and why Bayesian methods are so widely used in chronology construction, showcase the new DAG-based approach, explain the nature of the dilemma we face and hope to start a discussion about potential practical solutions. C.E. Buck, C.D. Litton, & A.F.M. Smith (1992) Calibration of radiocarbon results pertaining to related archaeological events, Journal of Archaeological Science, Vol. 19, Iss. 5, pp 497-512. T. S. Dye & C.E. Buck (2015) Archaeological sequence diagrams and Bayesian chronological models, Journal of Archaeological Science, Vol. 63, pp 84-93. J. C. Naylor & A. F. M. Smith (1988) An Archaeological Inference Problem, Journal of the American Statistical Association, Vol. 83, Iss. 403, pp 588-595.
|
|
Fri 18 May, '18- |
CRiSM SeminarB3.02Sergio Bacallado (University of Cambridge)Three stories on clinical trial design The design of randomised clinical trials is one of the most classical applications of modern Statistics. The first part of this talk has to do with adaptive trial designs, which aim to minimise the harm to study participants by biasing randomisation toward arms that are performing well, or by closing experimental arms when there is early evidence of futility. We first propose a class of Bayesian uncertainty-directed trial designs, which aim to maximise information gain at the trial's conclusion, and we show in applications to various types of trial that it has superior operating characteristics when compared to simpler adaptive policies. In a second section, I will discuss the use of reinforcement learning algorithms to approximate Bayes-optimal policies given a prior for the treatment effects and a utility function combining outcomes for participants and the uncertainty of treatment effects. The last part of the talk will consider the possibility of sharing preliminary data from trials with patients and physicians who are making enrollment decisions. This practice may be in line with a trend toward patient-centred clinical research, but it presents many challenges and potential pitfalls. Through a simulation study, modelled on the landscape of Glioblastoma trials in the last 15 years, we explore how such 'permeable' designs could affect operating characteristics and the statistical validity of trial conclusions. Joint work with Lorenzo Trippa, Steffen Ventz, and Brian Alexander
|
|
Fri 4 May, '18- |
CRiSM SeminarB3.02Wenyang Zhang - (University of York)Homogeneity Pursuit in Single Index Models based Panel Data Analysis
|
|
Fri 2 Feb, '18- |
CRiSM SeminarA1.012nd Feb - 3pm - 4pm A1.01 - Azadeh Khaleghi (Lancaster University) Title: Approximations of the Restless Bandit Problem Abstract: In this talk I will discuss our recent paper on the multi-armed restless bandit problem. My focus will be on an instance of the bandit problem where the pay-off distributions are stationary $\phi$-mixing. This version of the problem provides a more realistic model for most real-world applications, but cannot be optimally solved in practice since it is known to be PSPACE-hard. The objective is to characterize a sub-class of the problem where good approximate solutions can be found using tractable approaches. I show that under some conditions on the $\phi$-mixing coefficients, a modified version of the UCB algorithm proves effective. The main challenge is that, unlike in the i.i.d. setting, the distributions of the sampled pay-offs may not have the same characteristics as those of the original bandit arms. In particular, the $\phi$-mixing property does not necessarily carry over. This is overcome by carefully controlling the effect of a sampling policy on the pay-off distributions. Some of the proof techniques developed can be more generally used in the context of online sampling under dependence. Proposed algorithms are accompanied by corresponding regret analysis. I will make sure the talk is accessible to non-experts. |
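The baseline the talk modifies is the classic UCB algorithm. A minimal sketch of UCB1 on i.i.d. Bernoulli arms follows; this is a toy illustration only (the arm means are invented), not the talk's version, which is adapted to stationary $\phi$-mixing pay-offs.

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """Classic UCB1: play each arm once, then pick the arm maximising
    empirical mean + exploration bonus. Returns the pull counts."""
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # initial round: play every arm once
        else:
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2.0 * math.log(t) / counts[a]))
        counts[arm] += 1
        sums[arm] += pull(arm)
    return counts

random.seed(0)
means = [0.3, 0.7]  # hypothetical Bernoulli arms; arm 1 is optimal
counts = ucb1(lambda a: 1.0 if random.random() < means[a] else 0.0,
              n_arms=2, horizon=2000)
# the optimal arm accumulates the large majority of the pulls
```

The exploration bonus relies on the sampled pay-offs behaving like the arm distributions, which is exactly the property that fails under dependence and motivates the modification discussed in the talk.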
|
Fri 2 Feb, '18- |
CRiSM SeminarMA_B1.012-3pm MA B1.01, 2 Feb, 2018 - Robin Evans - (Oxford University)Title: Geometry and statistical model selection Abstract: TBA |
|
Fri 19 Jan, '18- |
CRiSM SeminarA1.01 |
|
Fri 19 Jan, '18- |
CRiSM SeminarMA_B1.01Jonas Peters, Department of Mathematical Sciences, University of Copenhagen Invariant Causal Prediction Abstract: Why are we interested in the causal structure of a process? In classical prediction tasks such as regression, for example, it seems that no causal knowledge is required. In many situations, however, we want to understand how a system reacts under interventions, e.g., in gene knock-out experiments. Here, causal models become important because they are usually considered invariant under those changes. A causal prediction uses only direct causes of the target variable as predictors; it remains valid even if we intervene on predictor variables or change the whole experimental setting. In this talk, we show how we can exploit this invariance principle to estimate causal structure from data. We apply the methodology to data sets from biology, epidemiology, and finance. The talk does not require any knowledge about causal concepts. David Ginsbourger, Idiap Research Institute and University of Bern, http://www.ginsbourger.ch
Abstract: Gaussian Process models have been used in a number of problems where an objective function f needs to be studied based on a drastically limited number of evaluations.
Global optimization algorithms based on Gaussian Process models have been investigated for several decades, and have become quite popular notably in design of computer experiments. Also, further classes of problems involving the estimation of sets implicitly defined by f, e.g. sets of excursion above a given threshold, have inspired multiple research developments.
In this talk, we will give an overview of recent results and challenges pertaining to the estimation of sets under Gaussian Process priors, with particular interest in the quantification and the sequential reduction of associated uncertainties.
Based on a series of joint works primarily with Dario Azzimonti, François Bachoc, Julien Bect, Mickaël Binois, Clément Chevalier, Ilya Molchanov, Victor Picheny, Yann Richet and Emmanuel Vazquez. |
|
Fri 8 Dec, '17- |
CRiSM SeminarA1.013-4pm A1.01, Dec 8, 2017 - Richard SamworthTitle: High-dimensional changepoint estimation via sparse projection Abstract: Changepoints are a very common feature of big data that arrive in the form of a data stream. We study high dimensional time series in which, at certain time points, the mean structure changes in a sparse subset of the co-ordinates. The challenge is to borrow strength across the co-ordinates to detect smaller changes than could be observed in any individual component series. We propose a two-stage procedure called inspect for estimation of the changepoints: first, we argue that a good projection direction can be obtained as the leading left singular vector of the matrix that solves a convex optimization problem derived from the cumulative sum transformation of the time series. We then apply an existing univariate changepoint estimation algorithm to the projected series. Our theory provides strong guarantees on both the number of estimated changepoints and the rates of convergence of their locations, and our numerical studies validate its highly competitive empirical performance for a wide range of data-generating mechanisms. Software implementing the methodology is available in the R package InspectChangepoint. 4-5pm A1.01, Dec 8, 2017 - Simon R. White, MRC Biostatistics Unit, University of CambridgeTitle: Spatio-temporal modelling and heterogeneity in neuroimaging Abstract: Neuroimaging allows us to gain insight into the structure and activity of the brain. Clearly, there is significant spatial structure that leads to dependencies across measurements that must be accounted for. Further, the brain as an organ is never idle, thus the local temporal behaviour is important when characterising long-term functional connectivity. 
In this talk we will discuss several approaches to modelling neuroimaging data that account for these key features, namely spatio-temporal heterogeneity: a novel approach to spatial modelling as an extension to the commonly used dimension-reduction technique independent component analysis (ICA) for task-based functional magnetic resonance imaging (fMRI); propagating subject-level heterogeneity through multi-stage analyses of dynamic functional connectivity (dFC) using resting-state fMRI (rs-fMRI); and structural development using structural MRI.
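The second stage of the inspect procedure described above applies a univariate changepoint method to the projected series. As a generic illustration of that stage only (a standardised CUSUM scan for a single mean change, on invented data; not the projection step or the inspect algorithm itself):

```python
import math
import random

def cusum_changepoint(x):
    """Return (t_hat, stat): the split point maximising the standardised
    difference of means before and after it (the CUSUM statistic)."""
    n = len(x)
    total = sum(x)
    prefix = 0.0
    best_t, best_stat = 1, -1.0
    for t in range(1, n):
        prefix += x[t - 1]
        stat = abs(prefix / t - (total - prefix) / (n - t)) \
            * math.sqrt(t * (n - t) / n)
        if stat > best_stat:
            best_t, best_stat = t, stat
    return best_t, best_stat

random.seed(2)
# mean shifts from 0 to 1.5 at time 100 (noise sd 1)
x = [random.gauss(0.0, 1.0) for _ in range(100)] \
    + [random.gauss(1.5, 1.0) for _ in range(100)]
t_hat, stat = cusum_changepoint(x)
# t_hat lands near the true changepoint at 100
```

In the high-dimensional setting, the point of the projection is to produce a single series like `x` in which the (sparse) signal is concentrated before this scan is run.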
|
|
Fri 24 Nov, '17- |
CRiSM SeminarA1.013-4pm A1.01, Nov 24, 2017 - Song LiuTitle: Trimmed Density Ratio Estimation Abstract: Density ratio estimation has recently become a versatile tool in the machine learning community. However, due to its unbounded nature, density ratio estimation is vulnerable to corrupted data points, which often push the estimated ratio toward infinity. In this paper, we present a robust estimator which automatically identifies and trims outliers. The proposed estimator has a convex formulation, and the global optimum can be obtained via subgradient descent. We analyze the parameter estimation error of this estimator under high-dimensional settings. Experiments are conducted to verify the effectiveness of the estimator. |
|
Thu 9 Nov, '17- |
CRiSM SeminarC0.08Speaker: Jonathan Keith (Monash University) Title: Markov chain Monte Carlo in discrete spaces, with applications in bioinformatics and ecology Abstract: Efficient sampling of probability distributions over large discrete spaces is a challenging problem that arises in many contexts in bioinformatics and ecology. For example, segmentation of genomes to identify putative functional elements can be cast as a multiple change-point problem involving thousands or even millions of change-points. Another example involves reconstructing the invasion history of an introduced species by embedding a phylogenetic tree in a landscape. A third example involves inferring networks of molecular interactions in cellular systems. In this talk I describe a generalisation of the Gibbs sampler that allows this well known strategy for sampling probability distributions in R^n to be adapted for sampling discrete spaces. The technique has been successfully applied to each of the problems mentioned above. However, these problems remain highly computationally intensive. I will discuss a number of alternatives for efficient sampling of such spaces, and will be seeking collaborations to develop these and other new approaches. |
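The classical strategy being generalised can be shown on a tiny discrete example: Gibbs sampling a joint distribution over two binary variables by repeatedly resampling each coordinate from its full conditional. This is a toy illustration with an invented weight table; the talk's generalisation targets far larger discrete spaces such as multiple change-point configurations.

```python
import random

# unnormalised target p(x, y) proportional to w[x][y] over {0,1} x {0,1}
w = [[1.0, 2.0], [3.0, 4.0]]

def gibbs(n_sweeps, seed=0):
    """Systematic-scan Gibbs sampler; returns visit counts per state."""
    rng = random.Random(seed)
    x, y = 0, 0
    counts = [[0, 0], [0, 0]]
    for _ in range(n_sweeps):
        # resample x from its full conditional p(x | y)
        x = 1 if rng.random() < w[1][y] / (w[0][y] + w[1][y]) else 0
        # resample y from its full conditional p(y | x)
        y = 1 if rng.random() < w[x][1] / (w[x][0] + w[x][1]) else 0
        counts[x][y] += 1
    return counts

counts = gibbs(100000)
# empirical frequencies approach w / sum(w) = [[0.1, 0.2], [0.3, 0.4]]
```

The computational challenge mentioned in the abstract arises because, unlike here, realistic discrete spaces (e.g. millions of change-points) make each full-conditional draw and each sweep expensive.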
|
Fri 27 Oct, '17- |
CRiSM SeminarA1.01Speaker: Davide Pigoli (King's College London) Title: Functional data analysis of biological growth processes Abstract: Functional data are examples of high-dimensional data when the observed variables have a natural ordering and are generated by an underlying smooth process. These additional properties allow us to develop methods that go beyond what would be possible with classical multivariate techniques. In this talk, I will demonstrate the potential of functional data analysis for biological growth processes in two different applications. The first one is in forensic entomology, where there is the need of estimating time-dependent growth curves from experiments where larvae have been exposed to a relatively small number of constant temperature profiles. The second one is in quantitative genetics, where the growth curve is a function-valued phenotypic trait from which the continuous genetic variation needs to be estimated. |
|
Fri 30 Jun, '17- |
CRiSM Seminar - Paul Kirk (BSU, Cambridge) (C1.06)C1.06, Zeeman BuildingTitle: Semi-supervised multiview clustering for high-dimensional data |
|
Fri 19 May, '17- |
CRiSM SeminarD1.07Korbinian Strimmer (Imperial) An entropy approach for integrative genomics and network modeling Multivariate regression approaches such as Seemingly Unrelated Regression (SUR) or Partial Least Squares (PLS) are commonly used in vertical data integration to jointly analyse different types of omics data measured on the same samples, such as SNP and gene expression data (eQTL) or proteomic and transcriptomic data. However, these approaches may be difficult to apply and to interpret for computational and conceptual reasons. Here we present a simple alternative approach to integrative genomics based on using relative entropy to characterise the overall association between two (or more) sets of omic data, and to infer the underlying corresponding association network among the individual covariates. This approach is computationally inexpensive and can be applied to large-dimensional data sets. A key and novel feature of our method is decomposition of the total strength between two or more groups of variables based on optimal whitening of the individual data sets. Correspondingly, it may also be viewed as a special form of a latent-variable multivariate regression model. We illustrate this approach by analysing metabolomic and transcriptomic data from the DILGOM study. References: A. Kessy, A. Lewin, and K. Strimmer. 2017. Optimal whitening and decorrelation. The American Statistician, to appear. http://dx.doi.org/10.1080/00031305.2016.1277159 T. Jendoubi and K. Strimmer. 2017. Data integration and network modeling: an entropy approach. In prep. |
|
Fri 5 May, '17- |
CRiSM SeminarD1.07"Adaptive MCMC For Everyone" |
|
Fri 17 Mar, '17- |
CRiSM SeminarMA_B1.01Paul Birrell (MRC Biostatistics Unit, Cambridge) Towards Computationally Efficient Epidemic Inference
|
|
Fri 3 Mar, '17- |
CRiSM SeminarMA_B1.01Marcelo Pereyra Bayesian inference by convex optimisation: theory, methods, and algorithms. Abstract: Convex optimisation has become the main Bayesian computation methodology in many areas of data science such as mathematical imaging and machine learning, where high dimensionality is often addressed by using models that are log-concave and where maximum-a-posteriori (MAP) estimation can be performed efficiently by optimisation. The first part of this talk presents a new decision-theoretic derivation of MAP estimation and shows that, contrary to common belief, under log-concavity MAP estimators are proper Bayesian estimators. A main novelty is that the derivation is based on differential geometry. Following on from this, we establish universal theoretical guarantees for the estimation error involved and show estimation stability in high dimensions. Moreover, the second part of the talk describes a new general methodology for approximating Bayesian high-posterior-density regions in log-concave models. The approximations are derived by using recent concentration of measure results related to information theory, and can be computed very efficiently, even in large-scale problems, by using convex optimisation techniques. The approximations also have favourable theoretical properties, namely they outer-bound the true high-posterior-density credibility regions, and they are stable with respect to model dimension. The proposed methodology is finally illustrated on two high-dimensional imaging inverse problems related to tomographic reconstruction and sparse deconvolution, where they are used to explore the uncertainty about the solutions, and where convex-optimisation-empowered proximal Markov chain Monte Carlo algorithms are used as benchmark to compute exact credible regions and measure the approximation error. |
|
Fri 17 Feb, '17- |
CRiSM SeminarMA_B1.01Ioannis Kosmidis Title: Reduced-bias inference for regression models with tractable and intractable likelihoods Abstract: This talk focuses on a unified theoretical and algorithmic framework for reducing bias in the estimation of statistical models from a practitioner's point of view. We will briefly discuss how shortcomings of classical estimators, and of inferential procedures depending on them, can be overcome via reduction of bias, and provide a few demonstrations stemming from current and past research on well-used statistical models with tractable likelihoods, including beta regression for bounded-domain responses, and the typically small-sample setting of meta-analysis and meta-regression in the presence of heterogeneity. The large impact that bias in the estimation of the variance components can have on inference motivates the delivery of higher-order corrective methods for generalised linear mixed models. The challenges in doing so will be presented along with resolutions stemming from current research. |
|
Fri 3 Feb, '17- |
CRiSM SeminarMA_B1.01Liz Ryan (KCL) Title: Simulation-based Fully Bayesian Experimental Design Abstract: Bayesian experimental design is a fast growing area of research with many real-world applications. As computational power has increased over the years, so has the development of simulation-based design methods, which involve a number of Bayesian algorithms, such as Markov chain Monte Carlo (MCMC) algorithms. However, many of the proposed algorithms have been found to be computationally intensive for complex or nonstandard design problems, such as those which require a large number of design points to be found and/or those for which the observed data likelihood has no analytic expression. In this work, we develop novel extensions of existing algorithms which have been used for Bayesian experimental design, and also incorporate methodologies which have been used for Bayesian inference into the design framework, so that solutions to more complex design problems can be found. |
|
Fri 20 Jan, '17- |
CRiSM SeminarMA_B1.01Yi Yu (University of Bristol) Title: Estimating whole brain dynamics using spectral clustering Abstract: The estimation of time-varying networks for functional Magnetic Resonance Imaging (fMRI) data sets is of increasing importance and interest. In this work, we formulate the problem in a high-dimensional time series framework and introduce a data-driven method, namely Network Change Points Detection (NCPD), which detects change points in the network structure of a multivariate time series, with each component of the time series represented by a node in the network. NCPD is applied to various simulated data and a resting-state fMRI data set. This new methodology also allows us to identify common functional states within and across subjects. Finally, NCPD promises to offer a deep insight into the large-scale characterisations and dynamics of the brain. This is joint work with Ivor Cribben (Alberta School of Business). |
|
Fri 9 Dec, '16- |
CRiSM SeminarA1.01Satish Iyengar - Big Data Challenges in Psychiatry Current psychiatric diagnoses are based primarily on self-reported experiences. Unfortunately, treatments for the diagnoses are not effective for all patients. One hypothesized reason is the "artificial grouping of heterogeneous syndromes with different pathophysiological mechanisms into one disorder." To address this problem, the US National Institute of Mental Health instituted the Research Domain Criteria framework in 2009. This research framework calls for integrating data from many levels of information: genes, cells, molecules, circuits, physiology, behavior, and self-report. Clustering comes to the forefront as a key tool in this big-data effort. In this talk, we present a case study of the use of mixture models to cluster older adults based on measures of sleep from three domains: diary, actigraphy, and polysomnography. Challenges in this effort include the use of mixtures of asymmetric (skewed) distributions, a large number of potential clustering variables, and seeking clinically meaningful solutions. We present novel variable selection algorithms, study them by simulation, and demonstrate our methods on the sleep data. This work is joint with Dr. Meredith Wallace.
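The basic machinery behind such model-based clustering is the EM algorithm for a finite mixture. A stripped-down sketch for a two-component univariate Gaussian mixture on invented data follows; the study itself uses mixtures of skewed distributions with variable selection, which this does not attempt.

```python
import math
import random

def em_two_gaussians(x, n_iter=200):
    """EM for a two-component univariate Gaussian mixture.
    Returns (weights, means, variances)."""
    mu = [min(x), max(x)]        # crude but well-separated initialisation
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for xi in x:
            d = [pi[k] / math.sqrt(2.0 * math.pi * var[k])
                 * math.exp(-(xi - mu[k]) ** 2 / (2.0 * var[k]))
                 for k in range(2)]
            s = d[0] + d[1]
            resp.append([d[0] / s, d[1] / s])
        # M-step: reweight, then update means and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(x)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            var[k] = sum(r[k] * (xi - mu[k]) ** 2
                         for r, xi in zip(resp, x)) / nk
    return pi, mu, var

random.seed(3)
x = [random.gauss(0.0, 1.0) for _ in range(300)] \
    + [random.gauss(5.0, 1.0) for _ in range(300)]
pi, mu, var = em_two_gaussians(x)
# recovered means sit near the true cluster centres 0 and 5
```

Replacing the Gaussian density with a skewed one, and running the E/M updates over many candidate clustering variables, is where the challenges listed in the abstract arise.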
|
|
Fri 25 Nov, '16- |
CRiSM SeminarA1.01 |
|
Fri 11 Nov, '16- |
CRiSM SeminarA1.01Mingli Chen |
|
Fri 28 Oct, '16- |
CRiSM SeminarA1.01Peter Orbanz |
|
Fri 14 Oct, '16- |
CRiSM SeminarA1.01Daniel Rudolf - Perturbation theory for Markov chains Perturbation theory for Markov chains addresses the question of how small differences in the transition probabilities of Markov chains are reflected in differences between their distributions. Under a convergence condition we present an estimate of the Wasserstein distance of the nth step distributions between an ideal, unperturbed and an approximating, perturbed Markov chain. We illustrate the result with an example of an autoregressive process. |
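A hypothetical numerical illustration of the setting (the AR coefficients 0.5 and 0.55 are arbitrary, and comparing long-run samples is not the paper's bound, which controls nth-step distributions): an ideal AR(1) chain and a slightly perturbed one, compared through the empirical 1-Wasserstein distance of their samples.

```python
import random

def ar1_sample(a, n, burn=1000, seed=0):
    """Draw n samples from an AR(1) chain x_{t+1} = a*x_t + N(0,1) noise,
    after a burn-in so the draws are close to stationarity."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for t in range(burn + n):
        x = a * x + rng.gauss(0.0, 1.0)
        if t >= burn:
            out.append(x)
    return out

def w1(xs, ys):
    """Empirical 1-Wasserstein distance between equal-size samples,
    computed by pairing the sorted values."""
    return sum(abs(u - v) for u, v in zip(sorted(xs), sorted(ys))) / len(xs)

ideal = ar1_sample(0.50, 20000, seed=1)
perturbed = ar1_sample(0.55, 20000, seed=2)
d = w1(ideal, perturbed)
# the stationary law of AR(1) is N(0, 1/(1 - a^2)); for a = 0.5 vs 0.55
# the exact Wasserstein distance between the two stationary normals is
# sqrt(2/pi) * |sd1 - sd2|, roughly 0.03, so d should be small but positive
```

The perturbation-theory results in the talk make this kind of observation quantitative: a small change in the transition probabilities yields a correspondingly small Wasserstein distance between the chains' distributions.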
|
Tue 30 Aug, '16 - Thu 1 Sep, '16All-day |
CRiSM Master Class on Sparse RegressionMS.01Runs from Tuesday, August 30 to Thursday, September 01. |
|
Fri 1 Jul, '16- |
CRiSM SeminarD1.07Gonzalo Garcia Donato (Universidad Castilla La Mancha) Criteria for Bayesian model choice In model choice (or model selection) several statistical models are postulated as legitimate explanations for a response variable, and this uncertainty is to be propagated in the inferential process. The questions one aims to answer are varied, ranging from identifying the 'true' model to producing more reliable estimates that take into account this extra source of variability. Particularly important problems of model choice are hypothesis testing, model averaging and variable selection. The Bayesian paradigm provides a conceptually simple and unified solution to the model selection problem: the posterior probabilities of the competing models. This is also named the posterior distribution over the model space and is a simple function of Bayes factors. Answering any question of interest just reduces to summarizing this posterior distribution properly. Unfortunately, the posterior distribution may depend dramatically on the prior inputs and, unlike estimation problems (where the model is fixed), such sensitivity does not vanish with large sample sizes. Additionally, it is well known that standard solutions like improper or vague priors cannot be used in general as they result in arbitrary Bayes factors. Bayarri et al (2012) propose tackling these difficulties by basing the assignment of prior distributions in objective contexts on a number of sensible statistical criteria. This approach takes a step beyond the way of analyzing the problem that Jeffreys inaugurated fifty years ago. In this talk the criteria will be presented, with emphasis on those aspects that serve to characterize features of the priors that, until today, have been popularly used without a clear justification. 
Originally the criteria were accompanied by an application to variable selection in regression models; here we will see how they can be useful for tackling other important scenarios such as high-dimensional settings or survival problems. |
|
Fri 17 Jun, '16- |
CRiSM SeminarD1.07
|
|
Fri 10 Jun, '16- |
CRiSM SeminarClaire Gormley (University College Dublin) Clustering High Dimensional Mixed Data: Joint Analysis of Phenotypic and Genotypic Data The LIPGENE-SU.VI.MAX study, like many others, recorded high dimensional continuous phenotypic data and categorical genotypic data. Interest lies in clustering the study participants into homogeneous groups or sub-phenotypes, by jointly considering their phenotypic and genotypic data, and in determining which variables are discriminatory. A novel latent variable model which elegantly accommodates high dimensional, mixed data is developed to cluster participants using a Bayesian finite mixture model. A computationally efficient variable selection algorithm is incorporated, estimation is via a Gibbs sampling algorithm and an approximate BIC-MCMC criterion is developed to select the optimal model. Two clusters or sub-phenotypes ('healthy' and 'at risk') are uncovered. A small subset of variables is deemed discriminatory which notably includes phenotypic and genotypic variables, highlighting the need to jointly consider both factors. Further, seven years after the data were collected, participants underwent further analysis to diagnose presence or absence of the metabolic syndrome (MetS). The two uncovered sub-phenotypes strongly correspond to the seven year follow up disease classification, highlighting the role of phenotypic and genotypic factors in the MetS, and emphasising the potential utility of the clustering approach in early screening. Additionally, the ability of the proposed approach to define the uncertainty in sub-phenotype membership at the participant level is synonymous with the concepts of precision medicine and nutrition. |
|
Fri 3 Jun, '16- |
CRiSM SeminarD1.07Degui Li (University of York) Panel Data Models with Interactive Fixed Effects and Multiple Structural Breaks In this paper we consider estimation of common structural breaks in panel data models with interactive fixed effects which are unobservable. We introduce a penalized principal component (PPC) estimation procedure with an adaptive group fused LASSO to detect the multiple structural breaks in the models. Under some mild conditions, we show that with probability approaching one the proposed method can correctly determine the unknown number of breaks and consistently estimate the common break dates. Furthermore, we estimate the regression coefficients through the post-LASSO method and establish the asymptotic distribution theory for the resulting estimators. The developed methodology and theory are applicable to the case of dynamic panel data models. The Monte Carlo simulation results demonstrate that the proposed method works well in finite samples, with low false detection probability when there is no structural break and high probability of correctly estimating the break numbers when structural breaks exist. We finally apply our method to study the environmental Kuznets curve for 74 countries over 40 years and detect two breaks in the data. |
|
Fri 20 May, '16- |
CRiSM SeminarD1.07Jon Forster (Southampton) Model integration for mortality estimation and forecasting The decennial English Life Tables have been produced after every UK decennial census since 1841. They are based on graduated (smoothed) estimates of central mortality rates, or related functions. For UK mortality, over the majority of the age range, a GAM can provide a smooth function which adheres acceptably well to the crude mortality rates. At the very highest ages, the sparsity of the data means that the uncertainty about mortality rates is much greater. A further issue is that life table estimation requires us to extrapolate the estimate of the mortality rate function to ages beyond the extremes of the observed data. Our approach integrates a GAM at lower ages with a low-dimensional parametric model at higher ages. Uncertainty about the threshold age, at which the transition to the simpler model occurs, is integrated into the analysis. This base structure can then be extended into a model for the evolution of mortality rates over time, allowing the forecasting of mortality rates, a key input into demographic projections necessary for planning. |
|
Fri 13 May, '16- |
CRiSM SeminarB1.01Michael Newton (University of Wisconsin-Madison) Ranking and selection revisited In large-scale inference the precision with which individual parameters are estimated may vary greatly among parameters, thus complicating the task of rank-ordering the parameters. I present a framework for evaluating different ranking/selection schemes as well as an empirical Bayesian methodology showing theoretical and empirical advantages over available approaches. Examples from genomics and sports will help to illustrate the issues. |
|
Fri 6 May, '16- |
CRiSM SeminarMS.03Mikhail Malyutov (Northeastern University) Context-free and Grammar-free Statistical Testing of Identity of Styles Our theory justifies our thorough statistical modification CCC of D. Khmelev's conditional-compression-based classification idea of 2001 and the 7 years of intensive applied statistical implementation of CCC for authorship attribution of literary works. Homogeneity testing based on SCOT training, with applications to financial modeling and Statistical Quality Control, is also in progress. Both approaches are described in a Springer monograph which will appear shortly. A Stochastic Context Tree (abbreviated SCOT) is an m-Markov chain with every state of a string independent of the symbols in its more remote past than the context of length determined by the preceding symbols of this state. In all of our applications we uncover a complex sparse structure of memory in SCOT models that allows excellent discrimination power. In addition, a straightforward estimation of the stationary distribution of SCOT gives insight into contexts crucial for discrimination between, say, different regimes of financial data or between styles of different authors of literary texts. |
|
Mon 4 Apr, '16 - Fri 8 Apr, '16All-day |
CRiSM Master Class: Non-Parametric BayesMS.01Runs from Monday, April 04 to Friday, April 08. |
|
Fri 18 Mar, '16- |
CRiSM SeminarB1.01Petros Dellaportas (UCL) Scalable inference for a full multivariate stochastic volatility model Abstract: We introduce a multivariate stochastic volatility model for asset returns that imposes no restrictions on the structure of the volatility matrix and treats all its elements as functions of latent stochastic processes. When the number of assets is prohibitively large, we propose a factor multivariate stochastic volatility model in which the variances and correlations of the factors evolve stochastically over time. Inference is achieved via a carefully designed, feasible and scalable Markov chain Monte Carlo algorithm that combines two computationally important ingredients: it utilizes Metropolis proposal densities that are invariant with respect to the prior for simultaneously updating all latent paths, and it has quadratic, rather than cubic, computational complexity when evaluating the required multivariate normal densities. We apply our modelling and computational methodology to 571 stock daily returns of the Euro STOXX index over a period of 10 years. |
|
Fri 4 Mar, '16- |
CRiSM SeminarB1.01Alan Gelfand (Duke, Dept of Statistical Science) Title: Space and circular time log Gaussian Cox processes with application to crime event data Abstract: We view the locations and times of a collection of crime events as a space-time point pattern. So, with either a nonhomogeneous Poisson process or with a more general Cox process, we need to specify a space-time intensity. For the latter, we need a random intensity which we model as a realization of a spatio-temporal log Gaussian process. In fact, we view time as circular, necessitating valid separable and nonseparable covariance functions over a bounded spatial region crossed with circular time. In addition, crimes are classified by crime type. Furthermore, each crime event is marked by day of the year which we convert to day of the week. We present models to accommodate such data. Then, we extend the modeling to include the marks. Our specifications naturally take the form of hierarchical models which we fit within a Bayesian framework. In this regard, we consider model comparison between the nonhomogeneous Poisson process and the log Gaussian Cox process. We also compare separable vs. nonseparable covariance specifications. Our motivating dataset is a collection of crime events for the city of San Francisco during the year 2012. Again, we have location, hour, day of the year, and crime type for each event. We investigate a rich range of models to enhance our understanding of the set of incidences. |
|
Fri 19 Feb, '16- |
CRiSM SeminarB1.01Theresa Smith (CHICAS, Lancaster Medical School) Modelling geo-located health data using spatio-temporal log-Gaussian Cox processes Abstract: Health data with high spatial and temporal resolution are becoming more common, but there are several practical and computational challenges to using such data to study the relationships between disease risk and possible predictors. These difficulties include lack of measurements on individual-level covariates/exposures, integrating data measured on different spatial and temporal units, and computational complexity. In this talk, I outline strategies for jointly estimating systematic (i.e., parametric) trends in disease risk and assessing residual risk with spatio-temporal log-Gaussian Cox processes (LGCPs). In particular, I will present Bayesian methods and MCMC tools for using spatio-temporal LGCPs to investigate the roles of environmental and socio-economic risk factors in the incidence of Campylobacter in England.
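The LGCP construction underlying this abstract can be illustrated in a few lines: draw a latent Gaussian process on a grid, exponentiate it to obtain a random intensity, and sample Poisson counts. This is a generic toy sketch (the grid, squared-exponential kernel, and all parameters are invented for illustration and are not taken from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D spatial grid on [0, 1]
n = 50
x = np.linspace(0.0, 1.0, n)

# Squared-exponential covariance for the latent Gaussian process
ell, sigma2 = 0.2, 1.0
K = sigma2 * np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell**2)

# Draw the latent log-intensity field (jitter added for numerical stability)
L = np.linalg.cholesky(K + 1e-8 * np.eye(n))
log_lambda = L @ rng.standard_normal(n)

# Cox process: Poisson counts in each grid cell, given the random intensity
cell_width = 1.0 / n
counts = rng.poisson(np.exp(log_lambda) * cell_width)
```

The exponential link guarantees a nonnegative intensity while keeping the latent field Gaussian, which is what makes MCMC over the latent field tractable.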
|
|
Fri 5 Feb, '16- |
CRiSM SeminarB1.01Ewan Cameron (Oxford, Dept of Zoology) Progress and (Statistical) Challenges in Malariology Abstract: In this talk I will describe some key statistical challenges faced by researchers aiming to quantify the burden of disease arising from Plasmodium falciparum malaria at the population level. These include covariate selection in the 'big data' setting, handling spatially-correlated residuals at scale, calibration of individual simulation models of disease transmission, and the embedding of continuous-time, discrete-state Markov Chain solutions within hierarchical Bayesian models. In each case I will describe the pragmatic solutions we've implemented to-date within the Malaria Atlas Project, and highlight more sophisticated solutions we'd like to have in the near-future if the right statistical methodology and computational tools can be identified and/or developed to this end. References: http://www.nature.com/nature/journal/v526/n7572/abs/nature15535.html http://www.nature.com/ncomms/2015/150907/ncomms9170/full/ncomms9170.html http://www.ncbi.nlm.nih.gov/pubmed/25890035 http://link.springer.com/article/10.1186/s12936-015-0984-9
|
|
Fri 22 Jan, '16- |
CRiSM SeminarB1.01Li Su with Michael J. Daniels (MRC Biostatistics Unit) |
|
Thu 10 Dec, '15- |
CRiSM Seminar - Martin Lindquist (Johns Hopkins University, Dept of Biostatistics)A1.01Martin Lindquist (Johns Hopkins University, Dept of Biostatistics) New Approaches towards High-dimensional Mediation Mediation analysis is often used in the behavioral sciences to investigate the role of intermediate variables that lie on the path between a randomized treatment and an outcome variable. The influence of the intermediate variable (mediator) on the outcome is often determined using structural equation models (SEMs). While there has been significant research on the topic in recent years, little is known about mediation analysis when the mediator is high dimensional. Here we discuss two approaches towards addressing this problem. The first is an extension of SEMs to the functional data analysis (FDA) setting that allows the mediating variable to be a continuous function rather than a single scalar measure. The second finds the linear combination of a high-dimensional vector of potential mediators that maximizes the likelihood of the SEM. Both methods are applied to data from a functional magnetic resonance imaging (fMRI) study of thermal pain that sought to determine whether brain activation mediated the effect of applied temperature on self-reported pain. |
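For readers unfamiliar with the SEM setup the talk generalises, here is the classical single-mediator case: a path coefficient from treatment to mediator, one from mediator to outcome, and a product-of-coefficients estimate of the mediated effect. All data and coefficient values below are simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Simulated data: treatment T affects mediator M, which affects outcome Y
T = rng.normal(size=n)
a_true, b_true, c_true = 0.8, 0.5, 0.3
M = a_true * T + rng.normal(scale=0.5, size=n)
Y = c_true * T + b_true * M + rng.normal(scale=0.5, size=n)

# Structural equations fit by ordinary least squares
X1 = np.column_stack([np.ones(n), T])       # M ~ 1 + T
a_hat = np.linalg.lstsq(X1, M, rcond=None)[0][1]

X2 = np.column_stack([np.ones(n), T, M])    # Y ~ 1 + T + M
b_hat = np.linalg.lstsq(X2, Y, rcond=None)[0][2]

# Product-of-coefficients estimate of the indirect (mediated) effect
indirect = a_hat * b_hat
```

The talk's two approaches replace the scalar M with a whole function (FDA) or with a data-driven linear combination of many candidate mediators.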
|
Thu 26 Nov, '15- |
CRiSM Seminar - Ismael Castillo (Universite Paris 6, Laboratoire de Probabilites et Modeles Aleatoires)A1.01Ismael Castillo (Université Paris 6, Laboratoire de Probabilités et Modèles Aléatoires) In Bayesian nonparametrics, Polya tree distributions form a popular and flexible class of priors on distributions or density functions. In the problem of density estimation, for certain choices of parameters, Polya trees have been shown to produce asymptotically consistent posterior distributions in a Hellinger sense. In this talk, after reviewing some general properties of Polya trees, I will show that the previous consistency result can be made much more precise in two directions: 1) rates of convergence can be derived 2) it is possible to characterise the limiting shape of the posterior distribution in a functional sense. We will discuss a few applications to Donsker-type results on the cumulative distribution function and to the study of some functionals of the density. |
|
Thu 12 Nov, '15- |
CRiSM Seminar - Patrick Wolfe (UCL, Dept of Statistical Science)A1.01Patrick Wolfe (UCL, Dept of Statistical Science) Networks are ubiquitous in today's world. Any time we make observations about people, places, or things and the interactions between them, we have a network. Yet a quantitative understanding of real-world networks is in its infancy, and must be based on strong theoretical and methodological foundations. The goal of this talk is to provide some insight into these foundations from the perspective of nonparametric statistics, in particular how trade-offs between model complexity and parsimony can be balanced to yield practical algorithms with provable properties. |
|
Mon 26 Oct, '15- |
CRiSM Seminar - Hernando Ombao (UC Irvine, Dept of Statistics)A1.01Hernando Ombao (UC Irvine, Dept of Statistics) During an epileptic seizure, a subpopulation of neurons exhibits abnormal firing behavior which then spreads to other subpopulations of neurons. This abnormal firing behavior is captured by increases in signal amplitudes (which can be easily spotted by visual inspection) and changes in the decomposition of the waveforms and in the strength of dependence between different regions (which are more subtle). The proposed frequency-specific change-point detection method (FreSpeD) uses a cumulative sum test statistic within a binary segmentation algorithm. Theoretical optimal properties of the FreSpeD method will be developed. We demonstrate that, when applied to epileptic seizure EEG data, FreSpeD identifies the correct brain region as the focal point of seizure, the time of seizure onset and the very subtle changes in cross-coherence immediately preceding seizure onset. The goal of the second project is to track changes in spatial boundaries (or more generally spatial sets or clusters) as the seizure process unfolds. A pair of channels (or a pair of sets of channels) are merged into one cluster if they exhibit synchronicity as measured by, for example, similarities in their spectra or by the strength of their coherence. We will highlight some open problems including developing a model for the evolutionary clustering of non-stationary time series. The first project is in collaboration with Anna Louise Schröder (London School of Economics); the second is with Carolina Euan (CIMAT, Mexico), Joaquin Ortega (CIMAT, Mexico) and Ying Sun (KAUST, Saudi Arabia). |
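The CUSUM ingredient mentioned in the abstract can be sketched generically. The snippet below applies a standard mean-shift CUSUM statistic to a simulated series (a stand-in for a frequency-band summary of an EEG channel); FreSpeD itself applies such statistics to frequency-specific quantities inside binary segmentation, which is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated series with a mean shift at t = 100
x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(2.0, 1.0, 100)])
n = len(x)

# CUSUM statistic: scaled gap between left and right means at each split
t = np.arange(1, n)
left_means = np.cumsum(x)[:-1] / t
right_means = (np.sum(x) - np.cumsum(x)[:-1]) / (n - t)
cusum = np.sqrt(t * (n - t) / n) * np.abs(left_means - right_means)

change_point = int(np.argmax(cusum)) + 1  # estimated location of the shift
```

Binary segmentation would now test whether max(cusum) exceeds a threshold, split the series at the estimated change point, and recurse on each half.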
|
Mon 12 Oct, '15- |
CRiSM Seminar - Dan Roy (University of Toronto)A1.01Dan Roy (University of Toronto) For finite parameter spaces under finite loss, there is a close link between optimal frequentist decision procedures and Bayesian procedures: every Bayesian procedure derived from a prior with full support is admissible, and every admissible procedure is Bayes. This relationship breaks down as we move beyond finite parameter spaces. There is a long line of work relating admissible procedures to Bayesian ones in more general settings. Under some regularity conditions, admissible procedures can be shown to be the limit of Bayesian procedures. Under additional regularity, they are generalized Bayesian, i.e., they minimize the average loss with respect to an improper prior. In both these cases, one must venture beyond the strict confines of Bayesian analysis. Using methods from mathematical logic and nonstandard analysis, we introduce the notion of a hyperfinite statistical decision problem defined on a hyperfinite probability space and study the class of nonstandard Bayesian decision procedures---namely, those whose average risk with respect to some prior is within an infinitesimal of the optimal Bayes risk. We show that if there is a suitable hyperfinite approximation to a standard statistical decision problem, then every admissible decision procedure is nonstandard Bayes, and so the nonstandard Bayesian procedures form a complete class. We give sufficient regularity conditions on standard statistical decision problems admitting hyperfinite approximations. Joint work with Haosui (Kevin) Duanmu. |
|
Fri 26 Jun, '15- |
CRiSM Seminar - Thomas Hamelryck (University of Copenhagen), Anjali Mazumder (Warwick)D1.07 (Complexity)Thomas Hamelryck (Bioinformatics Center, University of Copenhagen) Inference of protein structure and ensembles using Bayesian statistics and probability kinematics The so-called protein folding problem is the loose designation for an amalgam of closely related, unsolved problems that include protein structure prediction, protein design and the simulation of the protein folding process. We adopt a unique Bayesian approach to modelling bio-molecular structure, based on graphical models, directional statistics and probability kinematics. Notably, we developed a generative probabilistic model of protein structure in full atomic detail. I will give an overview of how rigorous probabilistic models of something as complicated as a protein's atomic structure can be formulated, focusing on the use of graphical models and directional statistics to model angular degrees of freedom. I will also discuss the reference ratio method, which is needed to "glue" several probabilistic models of protein structure together in a consistent way. The reference ratio method is based on "probability kinematics", a little-known method to perform Bayesian inference proposed by the philosopher Richard C. Jeffrey at the end of the fifties. Probability kinematics might find widespread application in statistics and machine learning as a way to formulate complex, high-dimensional probabilistic models for multi-scale problems by combining several simpler models. Anjali Mazumder (University of Warwick)
Probabilistic Graphical Models for planning and reasoning of scientific evidence in the courts
The use of probabilistic graphical models (PGMs) has gained prominence in the forensic science and legal literature when evaluating evidence under uncertainty. The graph-theoretic and modular nature of PGMs provides a flexible and graphical representation of the inference problem, and propagation algorithms facilitate the calculation of laborious marginal and conditional probabilities of interest. In giving expert testimony regarding, for example, the source of a DNA sample, forensic scientists, under much scrutiny, are often asked to justify their decision-making process. Using information-theoretic concepts and a decision-theoretic framework, we define a value of evidence criterion as a general measure of informativeness for a forensic query and collection of evidence to determine which and how much evidence contributes to the reduction of uncertainty. In this talk, we demonstrate how this approach can be used for a variety of planning problems and the utility of PGMs for scientific and legal reasoning.
|
|
Fri 12 Jun, '15- |
CRiSM Seminar - Sara van de Geer (Zurich), Daniel Simpson (Warwick)D1.07 (Complexity)Daniel Simpson (University of Warwick) Penalising model component complexity: A principled practical approach to constructing priors Setting prior distributions on model parameters is the act of characterising the nature of our uncertainty and has proven a critical issue in applied Bayesian statistics. Although the prior distribution should ideally encode the users’ uncertainty about the parameters, this level of knowledge transfer seems to be unattainable in practice and applied statisticians are forced to search for a “default” prior. Despite the development of objective priors, which are only available explicitly for a small number of highly restricted model classes, the applied statistician has few practical guidelines to follow when choosing the priors. An easy way out of this dilemma is to re-use prior choices of others, with an appropriate reference. In this talk, I will introduce a new concept for constructing prior distributions. We exploit the natural nested structure inherent to many model components, which defines the model component to be a flexible extension of a base model. Proper priors are defined to penalise the complexity induced by deviating from the simpler base model and are formulated after the input of a user-defined scaling parameter for that model component, both in the univariate and the multivariate case. These priors are invariant to reparameterisations, have a natural connection to Jeffreys’ priors, are designed to support Occam’s razor and seem to have excellent robustness properties, all of which are highly desirable and allow us to use this approach to define default prior distributions. 
Through examples and theoretical results, we demonstrate the appropriateness of this approach and how it can be applied in various situations, like random effect models, spline smoothing, disease mapping, Cox proportional hazard models with time-varying frailty, spatial Gaussian fields and multivariate probit models. Further, we show how to control the overall variance arising from many model components in hierarchical models. This is joint work with Håvard Rue, Thiago G. Martins, Andrea Riebler, Geir-Arne Fuglstad (NTNU) and Sigrunn H. Sørbye (Univ. of Tromsø). Sara van de Geer (ETH Zurich) Norm-regularized Empirical Risk Minimization |
|
Fri 29 May, '15- |
CRiSM Seminar - Clifford Lam (LSE), Zoltan Szabo (UCL)D1.07 (Complexity)Zoltán Szabó (UCL) Regression on Probability Measures: A Simple and Consistent Algorithm We address the distribution regression problem: we regress from probability measures to Hilbert-space valued outputs, where only samples are available from the input distributions. Many important statistical and machine learning problems can be phrased within this framework including point estimation tasks without analytical solution, or multi-instance learning. However, due to the two-stage sampled nature of the problem, the theoretical analysis becomes quite challenging: to the best of our knowledge the only existing method with performance guarantees requires density estimation (which often performs poorly in practice) and the distributions to be defined on a compact Euclidean domain. We present a simple, analytically tractable alternative to solve the distribution regression problem: we embed the distributions to a reproducing kernel Hilbert space and perform ridge regression from the embedded distributions to the outputs. We prove that this scheme is consistent under mild conditions (for distributions on separable topological domains endowed with kernels), and construct explicit finite sample bounds on the excess risk as a function of the sample numbers and the problem difficulty, which hold with high probability. Specifically, we establish the consistency of set kernels in regression, which was a 15-year-old open question, and also present new kernels on embedded distributions. The practical efficiency of the studied technique is illustrated in supervised entropy learning and aerosol prediction using multispectral satellite images. [Joint work with Bharath Sriperumbudur, Barnabas Poczos and Arthur Gretton.]
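The embed-then-ridge-regress scheme is simple enough to sketch end to end. Below, each "input" is a bag of samples from a Gaussian and the label is its mean; bag sizes, the RBF bandwidth, and the ridge parameter are all invented for illustration and none of this reproduces the talk's theoretical setting:

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_embedding_kernel(bag_a, bag_b, gamma=1.0):
    # Inner product of empirical kernel mean embeddings (RBF base kernel)
    d2 = (bag_a[:, None] - bag_b[None, :]) ** 2
    return np.exp(-gamma * d2).mean()

# Each input distribution is N(mu, 1), observed only through samples;
# the regression target is mu itself
mus = rng.uniform(-2.0, 2.0, size=30)
bags = [rng.normal(mu, 1.0, size=100) for mu in mus]

# Gram matrix between bags, then kernel ridge regression to the labels
K = np.array([[mean_embedding_kernel(a, b) for b in bags] for a in bags])
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(bags)), mus)

# Predict the mean of a new, unseen distribution from its samples alone
test_bag = rng.normal(1.5, 1.0, size=100)
pred = np.array([mean_embedding_kernel(test_bag, b) for b in bags]) @ alpha
```

The two-stage sampling the abstract refers to is visible here: the Gram matrix is computed from empirical embeddings (finite bags), not from the true distributions.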
Clifford Lam (LSE) Nonparametric Eigenvalue-Regularized Precision or Covariance Matrix Estimator for Low and High Frequency Data Analysis We introduce nonparametric regularization of the eigenvalues of a sample covariance matrix through splitting of the data (NERCOME), and prove that NERCOME enjoys asymptotic optimal nonlinear shrinkage of eigenvalues with respect to the Frobenius norm. One advantage of NERCOME is its computational speed when the dimension is not too large. We prove that NERCOME is positive definite almost surely, as long as the true covariance matrix is so, even when the dimension is larger than the sample size. With respect to the inverse Stein’s loss function, the inverse of our estimator is asymptotically the optimal precision matrix estimator. Asymptotic efficiency loss is defined through comparison with an ideal estimator, which assumed the knowledge of the true covariance matrix. We show that the asymptotic efficiency loss of NERCOME is almost surely 0 with a suitable split location of the data. We also show that all the aforementioned optimality holds for data with a factor structure. Our method avoids the need to first estimate any unknowns from a factor model, and directly gives the covariance or precision matrix estimator. Extension to estimating the integrated volatility matrix for high frequency data is presented as well. Real data analysis and simulation experiments on portfolio allocation are presented for both low and high frequency data. |
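The core data-splitting idea behind a NERCOME-style estimator can be sketched directly: eigenvectors come from one half of the sample, eigenvalues from quadratic forms of the other half's covariance in that basis. Dimensions, the half-and-half split, and the toy covariance below are illustrative choices, not the paper's recommended tuning:

```python
import numpy as np

rng = np.random.default_rng(4)

# Data with a known diagonal covariance (p = 10, n = 200)
p, n = 10, 200
true_cov = np.diag(np.linspace(1.0, 5.0, p))
X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)

# Split the sample: eigenvectors from one half, eigenvalues from the other
X1, X2 = X[: n // 2], X[n // 2 :]
S1 = np.cov(X1, rowvar=False)
S2 = np.cov(X2, rowvar=False)
U = np.linalg.eigh(S1)[1]               # eigenvectors of first-half covariance

# Regularized eigenvalues: quadratic forms of S2 in the basis U
d = np.einsum('ji,jk,ki->i', U, S2, U)  # diag(U' S2 U)
nercome = U @ np.diag(d) @ U.T          # positive definite whenever S2 is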
|
Fri 15 May, '15- |
CRiSM Seminar - Carlos Carvalho (UT Austin), Andrea Riebler (Norwegian University of Science & Technology)D1.07 (Complexity)Carlos Carvalho (The University of Texas) Decoupling Shrinkage and Selection in Bayesian Linear Models: A Posterior Summary Perspective Andrea Riebler (Norwegian University of Science and Technology) |
|
Fri 1 May, '15- |
CRiSM Seminar - Marcelo Pereyra (Bristol), Magnus Rattray (Manchester)D1.07 (Complexity)Marcelo Pereyra (Bristol)
Proximal Markov chain Monte Carlo: stochastic simulation meets convex optimisation
Convex optimisation and stochastic simulation are two powerful computational methodologies for performing statistical inference in high-dimensional inverse problems. It is widely acknowledged that these methodologies can complement each other very well, yet they are generally studied and used separately. This talk presents a new Langevin Markov chain Monte Carlo method that uses elements of convex analysis and proximal optimisation to simulate efficiently from high-dimensional densities that are log-concave, a class of probability distributions that is widely used in modern high-dimensional statistics and data analysis. The method is based on a new first-order approximation for Langevin diffusions that uses Moreau-Yosida approximations and proximity mappings to capture the log-concavity of the target density and construct Markov chains with favourable convergence properties. This approximation is closely related to Moreau-Yosida regularisations for convex functions and uses proximity mappings instead of gradient mappings to approximate the continuous-time process. The proposed method complements existing Langevin algorithms in two ways. First, the method is shown to have very robust stability properties and to converge geometrically for many target densities for which other algorithms are not geometric, or only if the time step is sufficiently small. Second, the method can be applied to high-dimensional target densities that are not continuously differentiable, a class of distributions that is increasingly used in image processing and machine learning and that is beyond the scope of existing Langevin and Hamiltonian Monte Carlo algorithms. The proposed methodology is demonstrated on two challenging models related to image resolution enhancement and low-rank matrix estimation, which are not well addressed by existing MCMC methodology.
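A minimal sketch of the proximal-Langevin idea, under invented parameters: the non-differentiable L1 term of a toy log-concave target is handled through the gradient of its Moreau-Yosida envelope, computed from the proximity mapping (soft thresholding). For simplicity this chain is unadjusted; the talk's method additionally applies a Metropolis correction:

```python
import numpy as np

rng = np.random.default_rng(5)

def soft_threshold(x, t):
    # Proximity mapping of t * ||x||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Toy target: pi(x) ∝ exp(-||x||^2/2 - lam*||x||_1), log-concave but
# not differentiable at zero
lam, dim = 1.0, 5
delta = 0.1   # discretisation step size
gamma = 0.1   # Moreau-Yosida smoothing parameter

def grad_log_target_smoothed(x):
    # Gradient of the smooth term plus the Moreau-Yosida gradient of
    # lam*||.||_1, i.e. (x - prox(x)) / gamma
    return x + (x - soft_threshold(x, gamma * lam)) / gamma

# Unadjusted proximal Langevin chain
x = np.zeros(dim)
samples = []
for _ in range(5000):
    x = (x - 0.5 * delta * grad_log_target_smoothed(x)
         + np.sqrt(delta) * rng.standard_normal(dim))
    samples.append(x.copy())
samples = np.asarray(samples)
```

The proximity mapping is well defined even where the target has no gradient, which is exactly what lets this construction reach non-differentiable densities.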
|
|
Fri 20 Feb, '15- |
CRiSM Seminar - Marina Knight (York)B1.01 (Maths)Marina Knight (York) Hurst exponent estimation for long-memory processes using wavelet lifting |
|
Fri 6 Feb, '15- |
CRiSM Seminar - Gareth Peters (UCL), Leonhard Held (University of Zurich)B1.01 (Maths)Gareth Peters (UCL) Leonard Held (University of Zurich) |
|
Fri 23 Jan, '15- |
CRiSM Seminar - Rebecca Killick (Lancaster), Peter Green (Bristol)B1.01 (Maths)Rebecca Killick (Lancaster) |
|
Tue 2 Dec, '14- |
CRiSM Seminar - David Draper (UC-Santa Cruz), Luis Nieto Barajas (ITAM - Instituto Tecnologico Autonomo de Mexico)A1.01Luis Nieto Barajas (ITAM - Instituto Tecnologico Autonomo de Mexico) |
|
Thu 27 Nov, '14- |
CRiSM Seminar - Daniel Williamson (Exeter) & David van Dyk (Imperial)A1.01David van Dyk (Imperial) |
|
Thu 13 Nov, '14- |
CRiSM Seminar - Michael Eichler (Maastricht) & Richard Huggins (Melbourne)A1.01Michael Eichler (Maastricht) In time series analysis, inference about cause-effect relationships among multiple time series is commonly based on the concept of Granger causality, which exploits temporal structure to achieve causal ordering of dependent variables. One major and well known problem in the application of Granger causality for the identification of causal relationships is the possible presence of latent variables that affect the measured components and thus lead to so-called spurious causalities. We present a new graphical approach for describing and analysing Granger-causal relationships in multivariate time series that are possibly affected by latent variables. It is based on mixed graphs in which directed edges represent direct influences among the variables while dashed edges---directed or undirected---indicate associations that are induced by latent variables. We show how such representations can be used for inductive causal learning from time series and discuss the underlying assumptions and their implications for causal learning. Finally, we will discuss tetrad constraints in the time series context and how they can be exploited for causal inference. Richard Huggins (Melbourne) |
|
Thu 30 Oct, '14- |
CRiSM Seminar - Pierre Jacob & Leonardo BottoloA1.01Pierre Jacob |
|
Thu 16 Oct, '14- |
CRiSM SeminarA1.01Karthik Bharath People use models in all fields of science, technology, management, etc. These can range from highly complex mathematical models based on systems of differential equations to relatively simple empirical, statistical, models. This talk is about the uncertainty in the predictions made by models. One aspect of this has come to be called Uncertainty Quantification (UQ), and is concerned with deriving the uncertainty in model outputs induced by uncertainty in the inputs. But there is another component of uncertainty that is much more important: all models are wrong. This talk is about just how badly misled we can be if we forget this fact. |
|
Wed 8 Oct, '14- |
CRiSM SeminarMS.03Christophe Ley - Université Libre de Bruxelles Stein's method, Information theory, and Bayesian statistics In this talk, I will first describe a new general approach to the celebrated Stein method for asymptotic approximations and apply it to diverse approximation problems. Then I will show how Stein’s method can be successfully used in two a priori unrelated domains, namely information theory and Bayesian statistics. In the latter case, I will evaluate the influence of the choice of the prior on the posterior distribution at given sample size n. Based on joint work with Gesine Reinert (Oxford) and Yvik Swan (Liège). |
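The identity at the heart of Stein's method for the normal distribution, E[f'(Z)] = E[Z f(Z)] for Z standard normal, is easy to check numerically. This is a generic textbook illustration with the test function f(x) = sin(x), not material from the talk:

```python
import numpy as np

rng = np.random.default_rng(6)

# Stein's identity for the standard normal: E[f'(Z)] = E[Z f(Z)].
# Monte Carlo check with f(x) = sin(x), so f'(x) = cos(x).
z = rng.standard_normal(200_000)
lhs = np.cos(z).mean()         # estimate of E[f'(Z)]
rhs = (z * np.sin(z)).mean()   # estimate of E[Z f(Z)]
gap = abs(lhs - rhs)
```

Stein's method quantifies how far a distribution is from normal by how badly such identities fail; here both sides also agree with the closed-form value exp(-1/2).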
|
Wed 16 Jul, '14- |
CRiSM Seminar - Adelchi AzzaliniA1.01Adelchi Azzalini (University of Padova) Clustering based on non-parametric density estimation: A proposal Cluster analysis based on non-parametric density estimation represents an approach to the clustering problem whose roots date back several decades, but it is only in recent times that this approach could actually be developed. The talk presents one proposal within this approach, which is among the few that have been brought to the operational stage. |
|
Thu 12 Jun, '14- |
CRiSM Seminar - Emmanuele Giorgi (Lancaster)Emmanuele Giorgi (Lancaster) Combining data from multiple spatially referenced prevalence surveys using generalized linear geostatistical models Geostatistical methods are becoming more widely used in epidemiology to analyze spatial variation in disease prevalence. These methods are especially useful in resource-poor settings where disease registries are either non-existent or geographically incomplete, and data on prevalence must be obtained by survey sampling of the population of interest. In order to obtain good geographical coverage of the population, it is often necessary also to combine information from multiple prevalence surveys in order to estimate model parameters and for prevalence mapping. However, simply fitting a single model to the combined data from multiple surveys is inadvisable without testing the implicit assumption that both the underlying process and its realization are common to all of the surveys. We have developed a multivariate generalized linear geostatistical model to combine data from multiple spatially referenced prevalence surveys so as to address each of two common sources of variation across surveys: variation in prevalence over time; variation in data-quality. In the case of surveys that differ in quality, we assume that at least one of the surveys delivers unbiased gold-standard estimates of prevalence, whilst the others are potentially biased. For example, some surveys might use a random sampling design, the others opportunistic convenience samples. For parameter estimation and spatial predictions, we used Monte Carlo Maximum Likelihood methods. We describe an application to malaria prevalence data from Chikhwawa District, Malawi. The data consist of two Malaria Indicator Surveys (MISs) and an Easy Access Group (EAG) study, conducted over the period 2010-2012. 
In the two MISs, the data were collected by random selection of households in an area of 50 villages within 400 square kilometers, whilst the EAG study enrolled a random selection of children attending the vaccination clinic in Chikhwawa District Hospital. The second sampling strategy is more economical, but the sampling bias inherent to such "convenience" samples needs to be taken into account. |
|
Thu 12 Jun, '14- |
CRiSM Seminar - Ben Graham (Warwick)A1.01Ben Graham (University of Warwick) Handwriting, signatures, and convolutions The 'signature', from the theory of differential equations driven by rough paths, provides a very efficient way of characterizing curves. From a machine learning perspective, the elements of the signature can be used as a set of features for consumption by a classification algorithm. Using datasets of letters, digits, Indian characters and Chinese characters, we see that this improves the accuracy of online character recognition---that is the task of reading characters represented as a collection of pen strokes. |
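For a piecewise-linear path such as a pen stroke, the first two signature levels have a simple closed form via Chen's identity, and the antisymmetric part of level two is the Lévy area. The toy stroke below is invented for illustration; this is a generic signature computation, not the talk's classification pipeline:

```python
import numpy as np

def signature_level2(path):
    """Levels 1 and 2 of the signature of a piecewise-linear path.

    path: (n_points, d) array of positions.
    Returns (S1, S2) with S1[i] = ∫ dx_i and
    S2[i, j] = ∫∫_{s<t} dx_i(s) dx_j(t).
    """
    d = path.shape[1]
    S1 = np.zeros(d)
    S2 = np.zeros((d, d))
    for delta in np.diff(path, axis=0):
        # Chen's identity for concatenating one straight-line segment
        S2 += np.outer(S1, delta) + 0.5 * np.outer(delta, delta)
        S1 += delta
    return S1, S2

# A toy "pen stroke": (0,0) -> (1,0) -> (1,1)
stroke = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
S1, S2 = signature_level2(stroke)

# The antisymmetric part of level 2 is the Levy area of the path
levy_area = 0.5 * (S2[0, 1] - S2[1, 0])
```

Flattening S1, S2 (and higher levels, computed the same way) gives the fixed-length feature vector that a standard classifier can consume, regardless of how many points the stroke has.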
|
Thu 29 May, '14- |
CRiSM Seminar - Randal DoucA1.01Randal Douc (TELECOM SudParis) Identifiability conditions for partially-observed Markov chains By R. Douc, F. Roueff and T. Sim This paper deals with a parametrized family of partially-observed bivariate Markov chains. We establish that the limit of the normalized log-likelihood is maximized when the parameter belongs to the equivalence class of the true parameter, which is a key feature for obtaining consistency of the Maximum Likelihood Estimator (MLE) in well-specified models. This result is obtained in a general framework including both fully dominated and partially dominated models, and thus applies to both Hidden Markov models and Observation-Driven time series. In contrast with previous approaches, identifiability is addressed by relying on the uniqueness of the invariant distribution of the Markov chain associated to the complete data, regardless of its rate of convergence to the equilibrium. We use this approach to obtain a set of easy-to-check conditions which imply the consistency of the MLE for a general observation-driven time series. |
|
Thu 29 May, '14- |
CRiSM Seminar - Rajen Shah (Cambridge)A1.01Rajen Shah (Cambridge) Random Intersection Trees for finding interactions in large, sparse datasets Many large-scale datasets are characterised by a large number (possibly tens of thousands or millions) of sparse variables. Examples range from medical insurance data to text analysis. While estimating main effects in regression problems involving such data is now a reasonably well-studied problem, finding interactions between variables remains a serious computational challenge. As brute force searches through all possible interactions are infeasible, most approaches build up interaction sets incrementally, adding variables in a greedy fashion. The drawback is that potentially informative high-order interactions may be overlooked. Here, we propose an alternative approach for classification problems with binary predictor variables, called Random Intersection Trees. It works by starting with a maximal interaction that includes all variables, and then gradually removing variables if they fail to appear in randomly chosen observations of a class of interest. We show that with this method, under some weak assumptions, interactions can be found with high probability, and that the computational complexity of our procedure is much smaller than for a brute force search. |
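A heavily simplified, single-branch version of the intersection idea can be sketched as follows: start from the maximal interaction and intersect with the active variable sets of randomly chosen class-1 observations, so that variables not involved in the interaction are quickly discarded. The data, depth, and survival thresholds are all invented for illustration and the tree structure of the full method is omitted:

```python
import numpy as np

rng = np.random.default_rng(7)

# Sparse binary design: 50 variables; class 1 is driven by the
# interaction {3, 7} (both variables active)
n, p = 400, 50
X = rng.random((n, p)) < 0.3
y = X[:, 3] & X[:, 7]

def random_intersection_branch(X_class, depth, rng):
    # One branch: begin with all variables, then intersect with the
    # active sets of randomly chosen observations of the class of interest
    s = set(range(X_class.shape[1]))
    for i in rng.choice(len(X_class), size=depth, replace=False):
        s &= set(np.flatnonzero(X_class[i]))
    return s

# Interactions that survive many independent branches become candidates
X1 = X[y]
candidates = [frozenset(random_intersection_branch(X1, depth=8, rng=rng))
              for _ in range(200)]
frequent = {s for s in set(candidates)
            if candidates.count(s) >= 5 and 0 < len(s) <= 3}
```

Each intersection is cheap, so deep interactions can be reached without the exponential cost of enumerating all variable subsets.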
|
Thu 15 May, '14- |
CRiSM Seminar - David Leslie (Bristol)A1.01David Leslie (Bristol) Stochastic approximation was introduced as a tool to find the zeroes of a function under only noisy observations of the function value. A classical statistical example is to find the zeroes of the score function when observations can only be processed sequentially. The method has since been developed and used mainly in the control theory, machine learning and economics literature to analyse iterative learning algorithms, but I contend that it is time for statistics to re-discover the power of stochastic approximation. I will introduce the main ideas of the method, and describe an extension; the parameter of interest is an element of a function space, and we wish to analyse its stochastic evolution through time. This extension allows the analysis of online nonparametric algorithms - we present an analysis of Newton's algorithm to estimate nonparametric mixing distributions. It also allows the investigation of learning in games with a continuous strategy set, where a mixed strategy is an arbitrary distribution on an interval. (Joint work with Steven Perkins) |
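The classical example in the abstract, finding the zero of the score function from sequentially processed observations, reduces to a few lines of Robbins-Monro iteration. The Gaussian model and step sizes below are a standard textbook illustration, not the talk's function-space extension:

```python
import numpy as np

rng = np.random.default_rng(8)

# Robbins-Monro: find the zero of the expected score E[x - theta] for
# x ~ N(2, 1), processing one observation at a time (true zero: theta = 2)
theta = 0.0
for t in range(1, 5001):
    x = rng.normal(2.0, 1.0)           # next observation arrives
    theta += (1.0 / t) * (x - theta)   # step along the noisy score (x - theta)
```

With step sizes 1/t this iteration is exactly the running sample mean, which makes the convergence of the general scheme easy to see in this special case; the talk's extension replaces the scalar theta with an element of a function space.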
|
Thu 15 May, '14- |
CRiSM Seminar - Mark Fiecas (Warwick)A1.01Mark Fiecas (Warwick) In recent years, research into analyzing brain signals has dramatically increased, and these rich data sets require more advanced statistical tools in order to perform proper statistical analyses. Consider an experiment where a stimulus is presented many times, and after each stimulus presentation (trial), time series data is collected. The time series data per trial exhibit nonstationary characteristics. Moreover, across trials the time series are non-identical because their spectral properties change over the course of the experiment. In this talk, we will look at a novel approach for analyzing nonidentical nonstationary time series data. We consider two sources of nonstationarity: 1) within each trial of the experiment and 2) across the trials, so that the spectral properties of the time series data are evolving over time within a trial, and are also evolving over the course of the experiment. We extend the locally stationary time series model to account for nonidentical data. We analyze a local field potential data set to study how the spectral properties of the local field potentials obtained from the nucleus accumbens and the hippocampus of a monkey evolve over the course of a learning association experiment. |
|
Thu 1 May, '14- |
Oxford-Warwick Seminar: David Dunson (Duke) and Eric Moulines (Télécom ParisTech)MS.03David Dunson (Duke University) Robust and scalable Bayes via the median posterior Bayesian methods hold great promise for big data sets, but this promise has not been fully realized due to the lack of scalable computational methods. Usual MCMC and SMC algorithms bog down as the size of the data and the number of parameters increase. For massive data sets, it has become routine to rely on penalized optimization approaches implemented on distributed computing systems. The most popular scalable approximation algorithms rely on variational Bayes, which lacks theoretical guarantees and badly under-estimates posterior covariance. Another problem with Bayesian inference is the lack of robustness; data contamination and corruption are particularly common in large data applications and cannot easily be dealt with using traditional methods. We propose to solve both the robustness and the scalability problem using a new alternative to exact Bayesian inference which we refer to as the median posterior. Data are divided into subsets and stored on different computers prior to analysis. For each subset, we obtain a stochastic approximation to the full data posterior, and run MCMC to generate samples from this approximation. The median posterior is defined as the geometric median of the subset-specific approximations, and can be rapidly approximated. We show several strong theoretical results for the median posterior, including general theorems on concentration rates and robustness. The methods are illustrated through simple examples, including Gaussian process regression with outliers. Eric Moulines (Télécom ParisTech) Proximal Metropolis adjusted Langevin algorithm for sampling sparse distributions over high-dimensional spaces This talk introduces a new Markov chain Monte Carlo method for sampling sparse distributions and performing Bayesian model choice in high-dimensional settings.
The algorithm is a Metropolis-Hastings sampler with a proposal mechanism which combines (i) a Metropolis adjusted Langevin step, proposing local moves associated with the differentiable part of the target density, with (ii) a proximal step based on the non-differentiable part of the target density, which provides sparse solutions by shrinking small components toward zero. Several implementations of the proximal step will be investigated, adapted to different sparsity priors or allowing variable selection in high-dimensional settings. The performance of these new procedures is illustrated on both simulated and real data sets. Preliminary convergence results will also be presented. |
|
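The geometric median at the heart of the median posterior can be computed with Weiszfeld's fixed-point iteration. The sketch below applies it to point summaries of hypothetical subset posteriors rather than to full posterior measures, a deliberate simplification of the method described above:

```python
import math

def geometric_median(points, n_iter=100):
    """Weiszfeld's algorithm: iterate x <- sum(p_i/d_i) / sum(1/d_i),
    where d_i = ||x - p_i||; the fixed point minimises sum_i ||x - p_i||."""
    dim = len(points[0])
    # start from the coordinate-wise mean
    x = [sum(p[j] for p in points) / len(points) for j in range(dim)]
    for _ in range(n_iter):
        num = [0.0] * dim
        den = 0.0
        for p in points:
            d = math.sqrt(sum((x[j] - p[j]) ** 2 for j in range(dim)))
            if d < 1e-12:        # iterate landed exactly on a data point
                return p
            num = [num[j] + p[j] / d for j in range(dim)]
            den += 1.0 / d
        x = [num[j] / den for j in range(dim)]
    return x

# Three subset "posteriors" summarised by their means agree; one is corrupted.
subset_means = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (100.0, 100.0)]
med = geometric_median(subset_means)   # stays near the uncorrupted cluster
```

Unlike the coordinate-wise mean (which lands near (25, 25) here), the geometric median stays with the uncorrupted cluster, which is the robustness property the abstract emphasises.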
Thu 27 Mar, '14- |
CRiSM Seminar - Professor Adrian Raftery (Washington)A1.01Professor Adrian Raftery (Washington) Bayesian Reconstruction of Past Populations for Developing and Developed Countries |
|
Thu 13 Mar, '14- |
CRiSM Seminar - Darren Wilkinson (Newcastle), Richard Everitt (Reading)A1.01Darren Wilkinson (Newcastle) Saccharomyces cerevisiae (often known as budding yeast, or brewer's yeast) is a single-celled micro-organism that is easy to grow and genetically manipulate. As it has a cellular organisation that has much in common with the cells of humans, it is often used as a model organism for studying genetics. High-throughput robotic genetic technologies can be used to study the fitness of many thousands of genetic mutant strains of yeast, and the resulting data can be used to identify novel genetic interactions relevant to a target area of biology. The processed data consists of tens of thousands of growth curves with a complex hierarchical structure requiring sophisticated statistical modelling of genetic independence, genetic interaction (epistasis), and variation at multiple levels of the hierarchy. Starting from simple stochastic differential equation (SDE) modelling of individual growth curves, a Bayesian hierarchical model can be built with variable selection indicators for inferring genetic interaction. The methods will be applied to data from experiments designed to highlight genetic interactions relevant to telomere biology. Richard Everitt (Reading) Inexact approximations for doubly and triply intractable problems Markov random field models are used widely in computer science, statistical physics, spatial statistics and network analysis. However, Bayesian analysis of these models using standard Monte Carlo methods is not possible due to an intractable likelihood function. Several methods have been developed that permit exact, or close to exact, simulation from the posterior distribution. However, estimating the marginal likelihood and Bayes' factors for these models remains challenging in general.
This talk will describe new methods for estimating Bayes' factors that use simulation to circumvent the evaluation of the intractable likelihood, and compare them to approximate Bayesian computation. We will also discuss more generally the idea of "inexact approximations". |
|
Thu 13 Feb, '14- |
CRiSM Seminar - Vasileios Maroulas (Bath/Tennessee)A1.01Vasileios Maroulas (Bath/Tennessee)
|
|
Thu 13 Feb, '14- |
CRiSM Seminar - Amanda Turner (Lancaster)A1.01Amanda Turner (Lancaster) Small particle limits in a regularized Laplacian random growth model
|
|
Thu 30 Jan, '14- |
CRiSM Seminar - Judith Rousseau (Paris Dauphine), Jean-Michel Marin (Université Montpellier)A1.01Jean-Michel Marin Consistency of the Adaptive Multiple Importance Sampling (joint work with Pierre Pudlo and Mohammed Sedki) Among Monte Carlo techniques, importance sampling requires fine tuning of a proposal distribution, which is now routinely resolved through iterative schemes. The Adaptive Multiple Importance Sampling (AMIS) of Cornuet et al. (2012) provides a significant improvement in stability and Effective Sample Size due to the introduction of a recycling procedure. However, the consistency of the AMIS estimator remains largely open. In this work, we prove the convergence of the AMIS, at the cost of a slight modification in the learning process. Numerical experiments exhibit that this modification might even improve the original scheme. Judith Rousseau Asymptotic properties of Empirical Bayes procedures – in parametric and nonparametric models
In this work we investigate frequentist properties of Empirical Bayes procedures. Empirical Bayes procedures are very much used in practice, in more or less formalized ways, as it is common practice to replace some hyperparameter in the prior by a data-dependent quantity. There are typically two ways of constructing these data-dependent quantities: using some kind of moment estimator or a quantity whose behaviour is well understood, or using a maximum marginal likelihood estimator. In this work we first give some general results on how to determine posterior concentration rates under the former setting, which we apply in particular to two types of Dirichlet process mixtures. We shall then discuss more parametric models in the context of maximum marginal likelihood estimation. We will in particular explain why some pathological behaviour can be expected in this case. |
|
Thu 16 Jan, '14- |
CRiSM Seminar - Chenlei Leng (Warwick), John Fox (Oxford & UCL/Royal Free Hospital)A1.01John Fox (Oxford & UCL/Royal Free Hospital) Despite the practical success of argumentation methods in risk management and other kinds of decision making, the main theories ignore quantitative measurement of uncertainty, or combine qualitative reasoning with quantitative uncertainty in ad hoc ways. After a brief introduction to argumentation theory I will demonstrate some medical applications and invite suggestions for ways of incorporating uncertainty probabilistically that are mathematically satisfactory. Chenlei Leng (Warwick)
|
|
Mon 9 Dec, '13- |
Seminar - Professor van ZantenA1.01 |
|
Thu 28 Nov, '13- |
CRiSM Seminar - Christian Robert (Warwick)A1.01Selection of (ABC) summary statistics towards estimation and model choice Abstract: The choice of the summary statistics in Bayesian inference, and in particular in ABC algorithms, is paramount to produce a valid outcome. We derive necessary and sufficient conditions on those statistics for the corresponding Bayes factor to be convergent, namely to asymptotically select the true model. Those conditions, which amount to requiring that the expectations of the summary statistics asymptotically differ under the two models, are then usable in ABC settings to determine which summary statistics are appropriate, via a standard and quick Monte Carlo validation. We also discuss new schemes to automatically select efficient summary statistics from a large collection of those. (Joint work with J.-M. Marin, N. Pillai, P. Pudlo & J. Rousseau) |
|
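The "standard and quick Monte Carlo validation" mentioned above amounts to checking whether the expectation of a candidate summary statistic differs between the two models. A toy sketch with made-up models and statistics, not taken from the talk:

```python
import random

def expected_summary(simulate, summary, n_rep=2000, seed=1):
    """Monte Carlo estimate of the expectation of a summary statistic under a model."""
    rng = random.Random(seed)
    return sum(summary(simulate(rng)) for _ in range(n_rep)) / n_rep

# Two candidate models for a sample of size 50: N(0,1) versus N(0.5,1).
m1 = lambda rng: [rng.gauss(0.0, 1.0) for _ in range(50)]
m2 = lambda rng: [rng.gauss(0.5, 1.0) for _ in range(50)]

sample_mean = lambda xs: sum(xs) / len(xs)
sample_var = lambda xs: sum((x - sum(xs) / len(xs)) ** 2 for x in xs) / len(xs)

# The sample mean separates the models; the sample variance does not, so a
# Bayes factor based on the variance alone could not pick out the true model.
gap_mean = abs(expected_summary(m1, sample_mean) - expected_summary(m2, sample_mean))
gap_var = abs(expected_summary(m1, sample_var) - expected_summary(m2, sample_var))
```

Here `gap_mean` is close to the true mean shift of 0.5 while `gap_var` is essentially zero, so by the criterion of the abstract the sample mean is a usable summary for choosing between these two models and the sample variance is not.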
Thu 28 Nov, '13- |
CRiSM Seminar - Oliver Ratmann (Imperial)A1.01Statistical modelling of summary values leads to accurate Approximate Bayesian Computations Abstract: Approximate Bayesian Computations (ABC) are considered to be noisy. We present a statistical framework for accurate ABC parameter inference that rests on well-established results from indirect inference and decision theory. This framework guarantees that ABC estimates the mode of the true posterior density exactly and that the Kullback-Leibler divergence of the ABC approximation to the true posterior density is minimal, provided that verifiable conditions are met. Our approach requires appropriate statistical modelling of the distribution of "summary values" - data points on a summary level - from which the choice of summary statistics follows implicitly. This places elementary statistical modelling at the heart of ABC analyses, which we illustrate on several examples. |
|
Thu 31 Oct, '13- |
CRiSM Seminar - Ben Francis (Liverpool)A1.01Ben Francis Research has been undertaken to include process noise in the Pharmacokinetic/Pharmacodynamic (PK/PD) response prediction to better simulate the patient response to the dose by allowing values sampled from the individual PK/PD parameter distributions to vary over time. Further work explores different formulations of a cost function by considering probabilities from a Markov model. Using the introduced methodology, the drug dose algorithm is shown to be adaptive to patient needs for imatinib and simvastatin therapy. However, application of the drug dose algorithm in a wide range of clinical dosing decisions is possible. |
|
Thu 31 Oct, '13- |
CRiSM Seminar - Juhyun Park (Lancaster)A1.01Juhyun Park (Lancaster) Numerical examples are used to illustrate the method and to evaluate finite sample performance. |
|
Thu 17 Oct, '13- |
CRiSM Seminar - François Caron (Oxford), Davide Pigoli (Warwick)A1.01François Caron (Oxford) In this talk I will present a novel Bayesian nonparametric model for bipartite graphs, based on the theory of completely random measures. The model is able to handle a potentially infinite number of nodes and has appealing properties; in particular, it may exhibit a power-law behavior for some values of the parameters. I derive a posterior characterization, a generative process for network growth, and a simple Gibbs sampler for posterior simulation. Finally, the model is shown to provide a good fit to several large real-world bipartite social networks. Davide Pigoli (Warwick) Comparative linguistics is concerned with the exploration of the evolution of languages. The traditional way of exploring relationships across languages consists of examining textual similarity. However, this neglects the phonetic characteristics of the languages. Here a novel approach is proposed to incorporate phonetic information, based on the comparison of frequency covariance structures in spoken languages. In particular, the aim is to explore the relationships among Romance languages and how they have developed from their common Latin root. Taking the covariance operator as the statistical unit, a framework is illustrated for inference concerning the covariance operator of a functional random process. First, the problem of the definition of possible metrics for covariance operators is considered. In particular, an infinite dimensional analogue of the Procrustes reflection size and shape distance is developed. Then, distance-based inferential procedures are proposed for estimation and hypothesis testing. Finally, it is shown that the analysis of pairwise distances between phonetic covariance structures can provide insight into the relationships among Romance languages. Some languages also present features that are not completely expected from linguistics theory, indicating new directions for investigation.
|
|
Wed 10 Jul, '13- |
CRiSM Seminar - Prof Donald MartinA1.01Professor Donald Martin (North Carolina State University) Computing probabilities for the discrete scan statistic through slack variables The discrete scan statistic is used in many areas of applied probability and statistics to study local clumping of patterns. Testing based on the statistic requires tail probabilities. Whereas the distribution has been studied extensively, most of the results are approximations, due to the difficulties associated with the computation. Results for exact tail probabilities for the statistic have been given for a binary sequence that is independent or first-order Markovian. We give an algorithm to obtain probabilities for the statistic over multi-state trials that are Markovian of a general order of dependence, and explore the algorithm’s usefulness. |
|
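For intuition, the tail probability of the discrete scan statistic can be computed exactly on tiny examples by brute-force enumeration; the talk's algorithm replaces this exponential computation with an efficient one based on slack variables and higher-order Markov dependence. A sketch of the brute-force baseline, with a binomial consistency check:

```python
from itertools import product
from math import comb

def scan_tail_prob(n, w, k, p):
    """Exact P(S >= k) for the discrete scan statistic S = max, over all
    windows of length w, of the number of successes in n i.i.d.
    Bernoulli(p) trials, by enumeration (feasible only for small n)."""
    total = 0.0
    for seq in product((0, 1), repeat=n):
        ones = sum(seq)
        smax = max(sum(seq[i:i + w]) for i in range(n - w + 1))
        if smax >= k:
            total += (p ** ones) * ((1 - p) ** (n - ones))
    return total

# Consistency check: with w = n the scan statistic is just the binomial count,
# so the tail probability must match the binomial tail.
exact = scan_tail_prob(n=8, w=8, k=3, p=0.5)
binom_tail = sum(comb(8, j) for j in range(3, 9)) / 2 ** 8
clustered = scan_tail_prob(n=10, w=4, k=3, p=0.3)
```

The enumeration costs 2^n sequence evaluations, which is exactly the blow-up that motivates the efficient algorithms the abstract describes.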
Thu 27 Jun, '13- |
CRiSM Seminar - Nicolai MeinshausenA1.01Nicolai Meinshausen (University of Oxford) Min-wise hashing for large-scale regression and classification. We take a look at large-scale regression analysis in a "large p, large n" context for a linear regression or classification model. In a high-dimensional "large p, small n" setting, we can typically only get good estimation if there exists a sparse regression vector that approximates the observations. No such assumptions are required for large-scale regression analysis, where the number of observations n can (but does not have to) exceed the number of variables p. The main difficulty is that computing an OLS or ridge-type estimator is computationally infeasible for n and p in the millions, and we need to find computationally efficient ways to approximate these solutions without increasing the prediction error by a large amount. Trying to find interactions amongst millions of variables seems an even more daunting task. We study a small variation of the b-bit min-wise hashing scheme (Li and Konig, 2011) for sparse datasets and show that the regression problem can be solved in a much lower-dimensional setting as long as the product of the number of non-zero elements in each observation and the l2-norm of a good approximation vector is small. We get finite-sample bounds on the prediction error. The min-wise hashing scheme is also shown to fit interaction models. Fitting interactions does not require an adjustment to the method used to approximate linear models; it just requires a higher-dimensional mapping. |
|
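A minimal sketch of b-bit min-wise hashing for sparse sets, with the standard collision-corrected Jaccard estimator; the linear hash family, parameters and test sets are illustrative assumptions, not the talk's implementation:

```python
import random

PRIME = 2_147_483_647  # large prime for the random linear hash functions

def minhash_signatures(sets, k, b, seed=0):
    """k independent min-wise hashes per set, keeping only the lowest b bits
    of each minimum (the b-bit scheme of Li and Konig that the talk builds on)."""
    rng = random.Random(seed)
    coeffs = [(rng.randrange(1, PRIME), rng.randrange(PRIME)) for _ in range(k)]
    mask = (1 << b) - 1
    sigs = []
    for s in sets:
        sigs.append([min((a * x + c) % PRIME for x in s) & mask for a, c in coeffs])
    return sigs

def jaccard_estimate(sig1, sig2, b):
    """Collision-corrected estimator: E[collision rate] = J + (1 - J) * 2^-b."""
    c = sum(u == v for u, v in zip(sig1, sig2)) / len(sig1)
    r = 2.0 ** (-b)
    return (c - r) / (1.0 - r)

set_a = set(range(0, 100))
set_b = set(range(50, 150))          # true Jaccard similarity = 50/150 = 1/3
sig_a, sig_b = minhash_signatures([set_a, set_b], k=500, b=2, seed=42)
est = jaccard_estimate(sig_a, sig_b, b=2)
```

Each observation of p coordinates is reduced to k b-bit values, and as the abstract notes, regression can then be run in this much lower-dimensional representation.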
Thu 13 Jun, '13- |
CRiSM Seminar - Piotr FryzlewiczA1.01Piotr Fryzlewicz (London School of Economics) Wild Binary Segmentation for multiple change-point detection |
|
Thu 6 Jun, '13- |
CRiSM Seminar - Ajay JasraA1.01Ajay Jasra (National University of Singapore) On the convergence of adaptive sequential Monte Carlo methods In several implementations of sequential Monte Carlo (SMC) methods, it is natural, and important in terms of algorithmic efficiency, to exploit the information in the history of the particles to optimally tune their subsequent propagations. In this talk we provide an asymptotic theory for a class of such adaptive SMC methods. The theoretical framework developed here covers, for instance and under assumptions, the algorithms in Chopin (2002), Jasra et al. (2011), and Schafer & Chopin (2013). There are limited results about the theoretical underpinning of such adaptive methods: we bridge this gap by providing a weak law of large numbers (WLLN) and a central limit theorem (CLT) for some of the algorithms. The latter seems to be the first result of its kind in the literature and provides a formal justification of algorithms that are used in many practical scenarios. This is joint work with Alex Beskos (NUS/UCL). |
|
Thu 30 May, '13- |
CRiSM Seminar - Tom PalmerA1.01Tom Palmer (Warwick Medical School) Topics in instrumental variable estimation: structural mean models and bounds One aim of epidemiological studies is to investigate the effect of a risk factor on a disease outcome. However, these studies are prone to unmeasured confounding and reverse causation. The use of genotypes as instrumental variables, known as Mendelian randomization, is one way to overcome this. In this talk I describe some methods in instrumental variable estimation: structural mean models and nonparametric bounds for the average causal effect. Specifically, I describe how to estimate structural mean models using multiple instrumental variables in the generalized method of moments framework common in econometrics. I describe the nonparametric bounds for the average causal effect of Balke and Pearl (JASA, 1997), which can be applied when each of the three variables (instrument, intermediate, and outcome) is binary. I describe some methodological extensions to these bounds and their limitations. To demonstrate the models I use a Mendelian randomization example investigating the effect of being overweight on the risk of hypertension in the Copenhagen General Population Study. I will also draw some comparisons with the application of instrumental variables to correct for noncompliance in randomized controlled trials. |
|
Thu 23 May, '13- |
CRiSM Seminar - Ioanna ManolopoulouA1.01Ioanna Manolopoulou (University College London) Bayesian observation modeling in presence-only data The prevalence of presence-only samples, e.g. in ecology or criminology, has led to a variety of statistical approaches. Aiming to predict ecological niches, species distribution models provide probability estimates of a binary response (presence/absence) in light of a set of environmental covariates. Similarly, statistical models to predict crime use propensity indicators from observable attributes inferred from incidental data. However, the associated challenges are confounded by non-uniform observation models; even in cases where observation is driven by seemingly irrelevant factors, these may distort estimates about the distribution of occurrences as a function of covariates due to unknown correlations. We present a Bayesian nonparametric approach to addressing sampling bias by carefully incorporating an observation model in a partially identifiable framework with selectively informative priors and linking it to the underlying process. Any available information about the role of various covariates in the observation process can then naturally enter the model. For example, in cases where sampling is driven by the presumed likelihood of detecting an occurrence, the observation model becomes a proxy of the presence/absence model. We illustrate our methods on an example from species distribution modeling and a corporate accounting application. Joint work with Richard Hahn from Chicago Booth. |
|
Thu 2 May, '13- |
CRiSM Seminar - Jon WarrenA1.01Dr Jon Warren (University of Warwick) Random matrices, stochastic growth models and the KPZ equation. I will base this talk on two pieces of joint work: one with Peter Windridge, the other with Neil O'Connell. First I will show how the distribution of the largest eigenvalue of a certain random matrix (in fact one having a Wishart distribution) also arises in a simple stochastic growth model. This growth model belongs to a large universality class, which includes mathematical models for interfaces as diverse as the edge of a burning piece of paper or a colony of bacteria on a petri dish. The KPZ equation is a stochastic partial differential equation that also belongs to this universality class, and in the work with Neil we set out to construct an analogue, for the KPZ equation, of the second, third and subsequent largest eigenvalues of the random matrix. |
|
Thu 25 Apr, '13- |
CRiSM Seminar - Heather BatteyA1.01Heather Battey (University of Bristol) Smooth projected density estimation In this talk I will introduce a new class of estimators for multidimensional density estimation. The estimators are attractive in that they offer both flexibility and the possibility of incorporating structural constraints, whilst possessing a succinct representation that may be stored and evaluated easily. The latter property is of paramount importance when dealing with large datasets, which are now commonplace in many application areas. We show in a simulation study that the approach is consistently competitive across a range of data generating mechanisms and often outperforms popular nonparametric estimators (including the kernel density estimator), even when structural constraints are not utilised. Moreover, its performance is shown to be somewhat robust to the choice of tuning parameters, which is an important practical advantage of our procedure. |
|
Thu 14 Mar, '13- |
CRiSM Seminar - Kevin Korb (Monash)A1.01Kevin Korb (Monash) An Overview of Bayesian Network Research at Monash Recent research on and around Bayesian net (BN) technology at Monash has featured: fog forecasting (the Bureau of Meteorology); environmental management (Victorian gov); biosecurity (ACERA); finding better discretizations, using cost-based data and Bayesian scoring rules; data mining dynamic Bayesian networks; re-activating MML causal discovery of linear models. I'll discuss these and some other BN activities, briefly describe other research happening at Monash FIT, and the opportunities for collaboration. |
|
Thu 28 Feb, '13- |
CRiSM Seminar - Andrew GolightlyA1.01Andrew Golightly (Newcastle University) Auxiliary particle MCMC schemes for partially observed Markov jump processes We consider Bayesian inference for parameters governing Markov jump processes (MJPs) using discretely observed data that may be incomplete and subject to measurement error. We use a recently proposed particle MCMC scheme which jointly updates parameters of interest and the latent process and present a vanilla implementation based on a bootstrap filter before considering improvements based around an auxiliary particle filter. In particular, we focus on a linear noise approximation to the MJP to construct a pre-weighting scheme and couple this with a bridging mechanism. Finally, we embed this approach within a 'delayed acceptance' framework to allow further computational gains. The methods are illustrated with some examples arising in systems biology. |
|
Thu 21 Feb, '13- |
CRiSM Seminar - Philip DawidA1.01Philip Dawid (University of Cambridge) Theory and Applications of Proper Scoring Rules We give an overview of the theory of proper scoring rules, and some recent applications.
We have recently characterised those proper local scoring rules that can be computed without requiring the normalising constant of the density. This property is valuable for many purposes, including Bayesian model selection with improper priors.
|
|
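The defining property of a proper scoring rule is that the expected score is optimised by quoting the true distribution. A toy check for the logarithmic score, which is also local, the class the characterisation above concerns:

```python
import math

def expected_log_score(p, q):
    """Expected negative log score when the event has true probability p
    and the forecaster quotes probability q (smaller is better)."""
    return -(p * math.log(q) + (1 - p) * math.log(1 - q))

# Propriety: over a grid of candidate quotes, the expected score is
# minimised by quoting the truth, q = p.
p_true = 0.7
grid = [i / 100 for i in range(1, 100)]
best_q = min(grid, key=lambda q: expected_log_score(p_true, q))
```

Repeating the search for any other `p_true` again returns the truth, which is precisely what "proper" means; an improper rule (say, scoring by absolute error) would reward quoting 0 or 1 instead.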
Thu 31 Jan, '13- |
CRiSM Seminar - Catriona QueenA1.01Catriona Queen (The Open University) A graphical dynamic approach to forecasting flows in road traffic networks Traffic flow data are routinely collected for many networks worldwide. These invariably large data sets can be used as part of a traffic management system, for which good traffic flow forecasting models are crucial. While statistical flow forecasting models usually base their forecasts on flow data alone, data for other traffic variables are also routinely collected. This talk considers how cubic splines can be used to incorporate information from these extra variables to enhance flow forecasts. The talk also introduces a new type of chain graph model for forecasting traffic flows. The models are applied to the problem of forecasting multivariate road traffic flows at the intersection of three busy motorways near Manchester, UK. |
|
Thu 24 Jan, '13- |
CRiSM Seminar - Evangelos EvangelouA1.01Evangelos Evangelou (University of Bath) Spatial sampling design under cost constraints in the presence of sampling errors A sampling design scheme for spatial models for the prediction of the underlying Gaussian random field will be presented. The optimality criterion is the maximisation of the information about the random field contained in the sample. The model discussed departs from the typical spatial model by assuming measurement error in the observations, varying from location to location, while interest lies in prediction without the error term. In this case multiple samples need to be taken from each sampling location in order to reduce the measurement error. To that end, a hybrid algorithm which combines simulated annealing nested within an exchange algorithm will be presented for obtaining the optimal sampling design. Consideration is made with regard to optimal sampling under budget constraints, accounting for initialisation and sampling costs. Joint work with Zhengyuan Zhu (Iowa State) |
|
Thu 17 Jan, '13- |
CRiSM Seminar - Anastasia Papavasiliou (Warwick)A1.01Dr Anastasia Papavasiliou (University of Warwick) Statistical inference for differential equations driven by rough paths Differential equations driven by rough paths (RDEs for short) generalize SDEs by allowing the equation to be driven by any type of noise, not just Brownian motion. As such, they are a very flexible modelling tool for randomly evolving dynamical systems. So far, however, they have been ignored by the statistics community, in my opinion for the two following reasons: (i) the abstract theory of rough paths is still very young and under development, which makes it very hard to penetrate; (ii) there are no statistical tools available. In this talk, I will give an introduction to the theory and discuss how to approach the problem of statistical inference given a discretely observed solution to an RDE. |
|
Fri 14 Dec, '12- |
CRiSM Seminar - Bjarki EldonA1.01Bjarki Eldon (Institut für Mathematik, TU Berlin) Single- and multiple-locus large offspring number population models For many organisms the usual Wright-Fisher and Moran low offspring number population models are reasonable approximations. For some marine organisms (at least), this may not be the case, and one may need to consider large offspring number models, in which individuals can give rise to very many offspring, even on the order of the population size. The coalescent processes derived from the two classes of population models are very different. Examples of single- and multiple-locus population models will be discussed, together with the implications large offspring number models have for inference of, for example, natural selection. |
|
Thu 29 Nov, '12- |
CRiSM Seminar - Nick WhiteleyA1.01Nick Whiteley (University of Bristol) Twisted Particle Filters This talk reports on an investigation of alternative sampling laws for particle filtering algorithms and the influence of these laws on the efficiency of particle approximations of marginal likelihoods in hidden Markov models. The focus is on the regime where the length of the data record tends to infinity. Amongst a broad class of candidates we characterize the essentially unique family of particle system transition kernels which is optimal with respect to an asymptotic-in-time variance growth rate criterion. The sampling structure of the algorithm defined by these optimal transitions turns out to be only subtly different from that of standard algorithms and yet the fluctuation properties of the estimates it provides are, in some ways, dramatically different. The structure of the optimal transition suggests a new class of algorithms, which we term "twisted" particle filters, and whose properties will be discussed. |
|
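The baseline whose sampling law the "twisted" construction modifies is the standard bootstrap particle filter estimate of the marginal likelihood. A sketch on a scalar linear-Gaussian model, where the exact answer is available from the Kalman filter for comparison; all model parameters are illustrative:

```python
import math
import random

# Linear-Gaussian state space: x_t = a x_{t-1} + N(0, q), y_t = x_t + N(0, r).
a, q, r = 0.9, 1.0, 1.0

def simulate(T, seed=0):
    rng = random.Random(seed)
    x, ys = 0.0, []
    for _ in range(T):
        x = a * x + rng.gauss(0, math.sqrt(q))
        ys.append(x + rng.gauss(0, math.sqrt(r)))
    return ys

def kalman_loglik(ys):
    """Exact log marginal likelihood via the Kalman filter recursions."""
    m, P, ll = 0.0, q, 0.0
    for y in ys:
        S = P + r                                   # predictive variance of y
        ll += -0.5 * (math.log(2 * math.pi * S) + (y - m) ** 2 / S)
        K = P / S                                   # Kalman gain
        m, P = m + K * (y - m), (1 - K) * P         # update
        m, P = a * m, a * a * P + q                 # predict next state
    return ll

def bootstrap_loglik(ys, N=1000, seed=1):
    """Bootstrap particle filter estimate of the same log marginal likelihood."""
    rng = random.Random(seed)
    xs = [rng.gauss(0, math.sqrt(q)) for _ in range(N)]
    ll = 0.0
    for y in ys:
        ws = [math.exp(-0.5 * (y - x) ** 2 / r) / math.sqrt(2 * math.pi * r) for x in xs]
        ll += math.log(sum(ws) / N)                            # likelihood increment
        xs = rng.choices(xs, weights=ws, k=N)                  # multinomial resampling
        xs = [a * x + rng.gauss(0, math.sqrt(q)) for x in xs]  # propagate
    return ll

ys = simulate(T=20)
pf_ll, kf_ll = bootstrap_loglik(ys), kalman_loglik(ys)
```

The quantity whose asymptotic-in-time variance growth the talk optimises is exactly this particle estimate of the log marginal likelihood; the twisted filters change the proposal and resampling law while leaving the estimator unbiased on the likelihood scale.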
Thu 22 Nov, '12- |
CRiSM Seminar - Piotr ZwiernikA1.01Piotr Zwiernik (TU Eindhoven) Group invariance for graphical Gaussian models Graphical models are a popular way of modeling complicated dependencies. In the Gaussian case they have a particularly simple structure. Let G be an undirected graph with n nodes. Then the graphical Gaussian model is parametrized by the set K(G) of all concentration (symmetric, positive definite) matrices with zeros corresponding to non-edges of G. In this talk I describe the maximal subgroup of the general linear group that stabilizes K(G) in the natural action on symmetric matrices. This group gives the representation of graphical Gaussian models as composite transformation families, which has important consequences for the study of this model class. In particular I show how this links to the concept of robustness of covariance matrix estimators and to more classical topics like hypothesis testing. (This is joint work with Jan Draisma and Sonja Kuhnt) |
|
Thu 15 Nov, '12- |
CRiSM Seminar - Chris SherlockA1.01Chris Sherlock (Lancaster University) Inference for reaction networks using the Linear Noise Approximation We consider inference for the reaction rates in discretely observed networks such as those found in models for systems biology, population ecology and epidemics. Most such networks are neither slow enough nor small enough for inference via the true state-dependent Markov jump process to be feasible. Typically, inference is conducted by approximating the dynamics through an ordinary differential equation (ODE), or a stochastic differential equation (SDE). The former ignores the stochasticity in the true model, and can lead to inaccurate inferences. The latter is more accurate but is harder to implement as the transition density of the SDE model is generally unknown. The Linear Noise Approximation (LNA) is a first order Taylor expansion of the approximating SDE about a deterministic solution. It can be viewed as a compromise between the ODE and SDE models. It is a stochastic model, but discrete time transition probabilities for the LNA are available through the solution of a series of ordinary differential equations. We describe how the LNA can be used to perform inference for a general class of reaction networks; evaluate the accuracy of such an approach; and show how and when this approach is either statistically or computationally more efficient than ODE or SDE methods. We apply the method to the Google Flu Trends data for New Zealand, using an SEIR "community" model with separate compartments for North and South Islands. |
|
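A toy instance of the Linear Noise Approximation described above, for the immigration-death process, where the LNA mean and variance ODEs can be integrated by simple Euler stepping and the stationary answer is known exactly (the stationary law is Poisson, so mean and variance both equal c1/c2):

```python
# LNA for the immigration-death process (immigration at rate c1, per-capita
# death at rate c2): the deterministic mean phi and the fluctuation variance V
# solve the coupled ODEs
#   dphi/dt = c1 - c2*phi,      dV/dt = -2*c2*V + c1 + c2*phi,
# which we integrate with a plain Euler scheme.
c1, c2 = 10.0, 1.0
phi, V, dt = 0.0, 0.0, 0.001
for _ in range(50_000):        # integrate to t = 50, well past relaxation
    dphi = c1 - c2 * phi
    dV = -2.0 * c2 * V + c1 + c2 * phi
    phi, V = phi + dt * dphi, V + dt * dV
# At stationarity phi -> c1/c2 = 10 and V -> c1/c2 = 10, matching the
# Poisson(c1/c2) stationary distribution of the exact jump process.
```

Because the transition law of the LNA is Gaussian with mean and variance obtained from ODEs like these, the likelihood of discretely observed data becomes tractable, which is the computational advantage over the SDE approach that the abstract highlights.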
Thu 8 Nov, '12- |
CRiSM Seminar - Robert G. CowellA1.01Robert G. Cowell (City University London) A simple greedy algorithm for reconstructing pedigrees I present a simple greedy-search algorithm for finding high likelihood pedigrees using micro-satellite (STR) genotype information on a complete sample of related individuals. The core idea behind the algorithm is not new, but I believe that putting it into a greedy search setting, and specifically the application to pedigree learning, is novel. The algorithm does not require age or sex information, but this information can be incorporated if desired. Prior information concerning pedigree structure is readily incorporated after the greedy search is completed, on the high-likelihood pedigrees found. The algorithm is applied to human and non-human genetic data and in a simulation study. |
|
Thu 1 Nov, '12- |
CRiSM Seminar - Patrick WolfeA1.01Patrick Wolfe (UCL) Modelling Network Data Networks are fast becoming a primary object of interest in statistical data analysis, with important applications spanning the social, biological, and information sciences. A common aim across these fields is to test for and explain the presence of structure in network data. In this talk we show how characterizing the structural features of a network corresponds to estimating the parameters of various random network models, allowing us to obtain new results for likelihood-based inference and uncertainty quantification in this context. We discuss asymptotics for stochastic blockmodels with growing numbers of classes, the determination of confidence sets for network structure, and a more general point process modeling for network data taking the form of repeated interactions between senders and receivers, where we show consistency and asymptotic normality of partial-likelihood-based estimators related to the Cox proportional hazards model (arXiv:1201.5871, 1105.6245, 1011.4644, 1011.1703). |
|
Thu 25 Oct, '12- |
CRiSM Seminar - Paul JenkinsA1.01Paul Jenkins (University of Warwick) Sequential importance sampling and resampling in population genetic inference Since 2008, the reduction in DNA sequencing costs has far outpaced Moore's law, giving rise to a wealth of data on sequence variation in contemporary populations. The patterns of variation we see are shaped both by biological processes such as mutation, recombination, and natural selection, and by demographic processes such as population expansion, contraction, and migration. In principle the nature of these processes can be inferred from the data, and one powerful approach is to use a stochastic model like Kingman's coalescent for the random genealogical relationships relating the sampled sequences. The problem then is to compute the likelihood under this model, which can be computationally very challenging. In this talk I will describe how we can make progress on this problem by using the Monte Carlo-based approaches of sequential importance sampling and resampling. I will discuss our approach to questions including the design of a suitable proposal distribution, and when and how to resample the particle-based approximation to the posterior distribution of genealogies. |
|
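The generic sequential importance sampling and resampling machinery the abstract builds on can be sketched on a toy problem. This is a bootstrap particle filter on an invented Gaussian random-walk state-space model, not the coalescent setting of the talk, where the design of the proposal distribution is the hard part.

```python
import math, random

# Generic sequential importance sampling with resampling (a bootstrap
# particle filter) on a toy Gaussian random-walk state-space model. This
# only illustrates the SIS/resampling machinery the talk builds on; the
# coalescent proposal distributions discussed there are far more involved.

def particle_filter(obs, n=500, sigma_x=1.0, sigma_y=1.0, seed=1):
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]   # particles for x_0
    log_lik = 0.0
    norm = sigma_y * math.sqrt(2.0 * math.pi)
    for y in obs:
        # propagate each particle through the latent random walk
        xs = [x + rng.gauss(0.0, sigma_x) for x in xs]
        # importance weights from the Gaussian observation density
        ws = [math.exp(-0.5 * ((y - x) / sigma_y) ** 2) / norm for x in xs]
        log_lik += math.log(sum(ws) / n)
        # multinomial resampling to fight weight degeneracy
        xs = rng.choices(xs, weights=ws, k=n)
    return log_lik

ll = particle_filter([0.2, -0.1, 0.4, 0.0])
```

Resampling at every step, as here, is the crudest choice; the talk's question of *when* to resample is usually answered by monitoring the effective sample size of the weights.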
Thu 18 Oct, '12- |
CRiSM Seminar - Ann BerringtonA1.01Ann Berrington (Southampton University) Title: Gender, turning points and boomerangs: longitudinal analyses of returning home in Britain |
|
Thu 4 Oct, '12- |
CRiSM Seminar - Beatrijs MoerkerkeA1.01Beatrijs Moerkerke (Ghent University, Belgium) Estimation of controlled direct effects in the presence of exposure-induced confounding and latent variables Estimation of the direct effect of an exposure on an outcome requires adjustment for confounders of the exposure-outcome and mediator-outcome relationships. When some of these confounders are affected by the exposure, standard regression adjustment is prone to possibly severe bias. The use of inverse probability weighting has recently been suggested as a solution in the psychological literature. In this presentation, we present G-estimation as an alternative. We show that this estimation method can be easily embedded within the structural equation modeling framework and may in particular be used for estimating direct effects in the presence of latent variables. By avoiding inverse probability weighting, it sidesteps the problem of unstable weights. We illustrate the approach both by simulations and by the analysis of an empirical study on the basis of which we explore the effect of age on negativity that is not mediated by mindfulness. |
|
Mon 25 Jun, '12- |
CRiSM Seminar - Pat Carter (WSU, Biology)A1.01Pat Carter (WSU, Biology) |
|
Thu 31 May, '12- |
6th Oxford-Warwick Joint Seminar (at Warwick)MS.0215.00 - 16.00 Simon Tavaré (Cambridge) 16.30 - 17.30 Simon French (Warwick) |
|
Thu 17 May, '12- |
CRiSM Seminar - Michael SørensenA1.01Michael Sørensen (Copenhagen) |
|
Mon 14 May, '12- |
CRiSM Seminar - Anthony Lee (Warwick)A1.01Dr Anthony Lee (Warwick) |
|
Thu 22 Mar, '12- |
CRiSM Seminar - Roberto Leon-GonzalezA1.01Roberto Leon-Gonzalez, National Graduate Institute for Policy Studies, Tokyo Endogeneity and Panel Data in Growth Regressions: A Bayesian Model Averaging Approach |
|
Wed 14 Mar, '12- |
CRiSM Seminar - Heather BatteyA1.01Heather Battey (University of Bristol) Further details to follow |
|
Thu 1 Mar, '12- |
CRiSM Seminar - Graham WoodA1.01Graham Wood (Macquarie University and Warwick Systems Biology) Normalization of ratio data Quantitative mass spectrometry techniques are commonly used for comparative proteomic analysis in order to provide relative quantitation between samples. For example, in attempting to find the proteins expressed in ovarian cancer, the quantities of a given protein are assessed by mass spectrometry in separate samples of both cancerous and healthy cells. To account for the variable “loading” (the total volumes of samples) from one sample to the other, a normalization procedure is required. A common approach to normalization is to use internal standards, proteins that are assumed to display only minimal changes in abundance between the samples under comparison. A normalization procedure then allows adjustment of the data, so enabling true relative quantities to be reported. Normalization is determined by centring the symmetrized ratio (say, cancerous over healthy) internal standards data. This presentation makes two contributions to an understanding of ratio normalization. First, the customary centring of logarithmically transformed ratios (frequently used, for example, in microarray analyses) is shown to attend not only to centring but also to minimisation of the spread of the symmetrized data. Second, the normalization problem is set in a larger context, allowing normalization to be achieved based on a symmetrization which carries the ratios to approximate normality, so increasing the power with which under or over-expressed proteins can be detected. Both simulated and real data will be used to illustrate the new method. |
|
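The customary log-ratio centring that the abstract contrasts with can be sketched directly: log-transform the internal-standard ratios (symmetrizing them), centre at zero, and apply the same offset to every ratio. The fold-change values below are invented; the speaker's normality-based symmetrization is not reproduced here.

```python
import math, statistics

# Sketch of median-centring of log ratios, the customary normalization
# the talk refers to. Internal standards are proteins assumed to be
# roughly unchanged between samples, so their centred log ratio defines
# the loading correction. Data values are invented for illustration.

def normalize_ratios(ratios, standards):
    """ratios, standards: positive fold changes, e.g. cancerous/healthy."""
    offset = statistics.median(math.log2(r) for r in standards)
    return [2.0 ** (math.log2(r) - offset) for r in ratios]

adjusted = normalize_ratios([2.0, 0.5, 1.6], standards=[1.1, 0.9, 1.2])
```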
Thu 1 Mar, '12- |
CRiSM Seminar - Stephen ConnorC1.06Stephen Connor (University of York) State-dependent Foster-Lyapunov criteria |
|
Thu 16 Feb, '12- |
CRiSM Seminar - Yee Whye TehA1.01Yee Whye Teh (Gatsby Computational Neuroscience Unit, UCL) A Bayesian nonparametric model for genetic variations based on fragmentation-coagulation processes Hudson's coalescent with recombination (aka the ancestral recombination graph) |
|
Thu 2 Feb, '12- |
CRiSM Seminar - Theodor StewartA1.01Theodor Stewart (University of Cape Town) Principles and Practice of Multicriteria Decision Analysis The role of multicriteria decision analysis (MCDA) in the broader context of decision science will be discussed. We will review the problem structuring needs of MCDA, and caution against over-simplistic approaches. Different schools of thinking in MCDA, primarily for deterministic problems, will be introduced, to demonstrate that even such problems include many complexities and pitfalls. The practicalities will be illustrated by means of value function methods (and perhaps goal programming if time permits). We will conclude with consideration of the impact of uncertainty on MCDA and the role of scenario planning in this regard. |
|
Mon 16 Jan, '12- |
CRiSM Seminar - Shinto Eguchi (Institute of Statistical Mathematics, Japan)C1.06Shinto Eguchi (Institute of Statistical Mathematics, Japan) Maximization of a generalized t-statistic for linear discrimination in the two group classification problem We discuss a statistical method for the classification problem with two groups labelled 0 and 1. We envisage a situation in which the conditional distribution given label 0 is well specified by a normal distribution, but the conditional distribution given label 1 is not well modelled by any specific distribution. Typically in a case-control study the distribution in the control group can be assumed to be normal, however the distribution in the case group may depart from normality. In this situation the maximum t-statistic for linear discrimination, or equivalently Fisher's linear discriminant function, may not be optimal. We propose a class of generalized t-statistics and study asymptotic consistency and normality. The optimal generalized t-statistic in the sense of asymptotic variance is derived in a semi-parametric manner, and its statistical performance is confirmed in several numerical experiments. |
|
Thu 1 Dec, '11- |
CRiSM Seminar - Mark StrongA1.01Mark Strong (University of Sheffield) Managing Structural Uncertainty in Health Economic Decision Models |
|
Thu 17 Nov, '11- |
CRiSM Seminar - Nick Chater (Warwick Business School)A1.01Nick Chater (Warwick Business School)
Is the brain a Bayesian?
Almost all interesting problems that the brain solves involve probabilistic inference; and the brain is clearly astonishingly effective at solving such problems. A substantial movement in cognitive science, neuroscience and artificial intelligence has suggested that the brain may, to some approximation, be a Bayesian. This talk considers in what sense, if any, this might be true; and asks how it might be that a Bayesian brain might, nonetheless, be so poor at explicit probabilistic reasoning. |
|
Thu 3 Nov, '11- |
CRiSM Seminar - Dave Woods (Southampton)A1.01Dave Woods (University of Southampton) Design of experiments for Generalised Linear (Mixed) Models |
|
Thu 3 Nov, '11- |
CRiSM Seminar - Scott Schmidler (Duke University)MS.01Scott Schmidler (Duke University) Bayesian Shape Matching for Protein Structure Alignment and Phylogeny |
|
Thu 20 Oct, '11- |
Joint CRiSM-Systems Biology SeminarMOAC Seminar Room, Coventry HouseChris Brien (University of South Australia)
Robust Microarray Experiments by Design: A Multiphase Framework
This seminar will outline a statistical approach to the design of microarray experiments, taking account of all the experimental phases involved from initial sample collection to assessment of gene expression. The approach being developed is also highly relevant for other high-throughput technologies. This seminar should be of interest to all those working with experiments using microarray and other high-throughput technologies, as well as to statisticians.
|
|
Mon 17 Oct, '11- |
CRiSM Seminar - Atanu Biswas (Indian Statistical Institute)B1.01Atanu Biswas (Indian Statistical Institute) Comparison of treatments and data-dependent allocation for circular data from a cataract surgery |
|
Thu 6 Oct, '11- |
CRiSM Seminar - Marek Kimmel (Rice University, Houston)A1.01Marek Kimmel, Rice University, Houston Modeling the mortality reduction due to computed tomography screening for lung cancer The efficacy of computed tomography (CT) screening for lung cancer remains controversial despite the fact that encouraging results from the National Lung Screening Trial are now available. In this study, the authors used data from a single-arm CT screening trial to estimate the mortality reduction using a modeling-based approach to construct a control comparison arm. |
|
Fri 8 Jul, '11- |
Prof. Hernando Ombao - CRiSM SeminarA1.01Hernando Ombao Analysis of non-stationary time series |
|
Thu 7 Jul, '11- |
Prof. Hernando Ombao - CRiSM SeminarA1.01Hernando Ombao Special topics on spectral analysis: principal components analysis, clustering and discrimination |
|
Wed 6 Jul, '11- |
Prof. Hernando Ombao - CRiSM SeminarA1.01Hernando Ombao Intro to spectral analysis and coherence |
|
Thu 2 Jun, '11- |
CRiSM Seminar - Evsey MorozovA1.01Evsey Morozov (Karelian Research Centre, Russia) Regenerative queues: stability analysis and simulation We present a general approach to stability of regenerative queueing systems, which is based on the properties of the embedded renewal process of regenerations. Such a process obeys a useful characterization of the limiting remaining renewal time, allowing minimal stability conditions to be established in many cases by a two-step procedure. At the first step, a negative drift condition is used to prove that the basic process does not go to infinity (in probability), and at the second step, the finiteness of the mean regeneration period is proved. This approach has led to the effective stability analysis of some models describing, in particular, such modern telecommunication systems as retrial queues and queues with optical buffers. Moreover, we discuss the regenerative simulation method, including both classical and non-classical (extended) regeneration allowing dependence between regeneration cycles. |
|
Thu 26 May, '11- |
CRiSM Seminar - Postponed due to illness
|
|
Mon 23 May, '11- |
CRiSM PhD TalksMS.03Chris Nam (Warwick) Bryony Hill (Warwick) Ashley Ford (Warwick) |
|
Thu 19 May, '11- |
CRiSM Seminar - Sumeetpal SinghA1.01Sumeetpal Singh (Cambridge) Computing the filter derivative using Sequential Monte Carlo Sequential Monte Carlo (SMC) methods are a widely used set of computational tools for inference in non-linear non-Gaussian state-space models. We propose a SMC algorithm to compute the derivative of the optimal filter in a Hidden Markov Model (HMM) and study its stability both theoretically and with numerical examples. Applications include calibrating the HMM from observed data in an online manner. (Joint work with P. Del Moral and A. Doucet) |
|
Thu 12 May, '11- |
CRiSM Seminar - Alexander GorbanA1.01Alexander Gorban (Leicester) Geometry of Data Sets |
|
Thu 28 Apr, '11- |
CRiSM Seminar - Sofia MassaA1.01Dr Sofia Massa (Oxford) Combining information from graphical Gaussian models In some recent applications, the interest is in combining information about relationships between variables from independent studies performed under partially comparable circumstances. One possible way of formalising this problem is to consider combinations of families of distributions respecting conditional independence constraints with respect to a graph G, i.e., graphical models. In this talk I will introduce some motivating examples of the research question and I will present some relevant types of combinations and associated properties, in particular the relation between the properties of the combination and the structure of the graphs. Finally I will discuss some issues related to the estimation of the parameters of the combination. |
|
Thu 24 Mar, '11- |
CRiSM Seminar - Carlos NavaretteA1.01Carlos Navarette (Universidad de La Serena) Similarity analysis in Bayesian random partition models This work proposes a method to assess the influence of individual observations in the clustering generated by any process that involves random partitions. It is called Similarity Analysis. It basically consists of decomposing the estimated similarity matrix into an intrinsic and an extrinsic part, coupled with a new approach for representing and interpreting partitions. Individual influence is associated with the particular ordering induced by individual covariates, which in turn provides an interpretation of the underlying clustering mechanism. Some applications in the context of Species Sampling Mixture Models will be presented, including Bayesian density estimation, dependent linear regression models and logistic regression for bivariate response. Additionally, an application to time series modelling based on time-dependent Dirichlet processes will be outlined. |
|
Thu 24 Feb, '11- |
CRiSM Seminar - Iain MurrayA1.01Iain Murray (University of Edinburgh) Sampling latent Gaussian models and hierarchical modelling Sometimes hyperparameters of hierarchical probabilistic models are not well-specified enough to be optimized. In some scientific applications inferring their posterior distribution is the objective of learning. Using a simple example, I explain why Markov chain Monte Carlo (MCMC) simulation can be difficult, and offer a solution for latent Gaussian models. |
|
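The abstract does not spell out the offered solution, so purely as an illustration of the kind of MCMC transition latent Gaussian models call for, here is a sketch of elliptical slice sampling, a standard move for a Gaussian prior f ~ N(0, Sigma); it may or may not be the method of the talk.

```python
import math, random

# Hedged illustration for latent Gaussian models: elliptical slice
# sampling, a standard transition when the prior on the latent vector f
# is N(0, Sigma). Shown only as an example of such an MCMC step; the
# abstract does not specify the talk's own solution.

def elliptical_slice(f, prior_sample, loglik, rng):
    nu = prior_sample()                          # auxiliary draw from N(0, Sigma)
    log_y = loglik(f) + math.log(rng.random())   # slice level under current f
    theta = rng.uniform(0.0, 2.0 * math.pi)
    lo, hi = theta - 2.0 * math.pi, theta
    while True:
        fp = [fi * math.cos(theta) + ni * math.sin(theta)
              for fi, ni in zip(f, nu)]
        if loglik(fp) > log_y:
            return fp
        # shrink the angle bracket towards theta = 0 and retry
        if theta < 0.0:
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)

# One update for a 1-D toy model: prior N(0, 1), likelihood N(1 | f, 1).
rng = random.Random(1)
f = elliptical_slice([0.0], lambda: [rng.gauss(0.0, 1.0)],
                     lambda g: -0.5 * (1.0 - g[0]) ** 2, rng)
```

The appeal of the move is that it has no tuning parameters and leaves the Gaussian prior invariant by construction, which is exactly what makes naive Metropolis updates of strongly coupled latent Gaussians difficult.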
Thu 17 Feb, '11- |
CRiSM Seminar - Wicher BergsmaA1.01Wicher Bergsma (LSE) Marcel Croon, Jacques Hagenaars Marginal Models for Dependent, Clustered, and Longitudinal Categorical Data
In the social, behavioural, educational, economic, and biomedical sciences, data are often collected in ways that introduce dependencies in the observations to be compared. For example, the same respondents are interviewed at several occasions, several members of networks or groups are interviewed within the same survey, or, within families, both children and parents are investigated. Statistical methods that take the dependencies in the data into account must then be used, e.g., when observations at time one and time two are compared in longitudinal studies. At present, researchers almost automatically turn to multi-level models or to GEE estimation to deal with these dependencies. Despite the enormous potential and applicability of these recent developments, they require restrictive assumptions on the nature of the dependencies in the data. The marginal models of this talk provide another way of dealing with these dependencies, without the need for such assumptions, and can be used to answer research questions directly at the intended marginal level. The maximum likelihood method, with its attractive statistical properties, is used for fitting the models. This talk is based on a recent book by the authors in the Springer series Statistics for the Social Sciences, see www.cmm.st.
|
|
Thu 3 Feb, '11- |
CRiSM Seminar - Simon SpencerA1.01Simon Spencer (Warwick) Outbreak detection for campylobacteriosis in New Zealand Identifying potential outbreaks of campylobacteriosis from a background of sporadic cases is made more difficult by the large spatial and temporal variation in incidence. One possible approach involves using Bayesian hierarchical models to simultaneously estimate spatial, temporal and spatio-temporal components of the risk of infection. By assuming that outbreaks are characterized by spatially localised periods of increased incidence, it becomes possible to calculate an outbreak probability for each potential disease cluster. The model correctly identifies known outbreaks in data from New Zealand for the period 2001 to 2007. Studies using simulated data have shown that by including epidemiological information in the model construction, this approach can outperform an established method. |
|
Thu 27 Jan, '11- |
CRiSM Seminar - Alberto SorrentinoAlberto Sorrentino (Warwick) Bayesian filtering for estimation of brain activity in magnetoencephalography Magnetoencephalography (MEG) is a sophisticated technique measuring the tiny magnetic fields produced by the brain activity. Relative to other functional neuroimaging techniques MEG recordings feature an outstanding temporal sampling resolution, in principle allowing for a study of the neural dynamics on a millisecond-by-millisecond time scale, but the spatial localization of neural currents from MEG data turns out to be an ill-posed inverse problem, i.e. a problem which has infinitely many solutions. To mitigate ill-posedness, a variety of parametric models of the neural currents are proposed in the burgeoning neuroimaging literature. In particular, under suitable approximations the problem of estimating brain activity from MEG data can be re-phrased as a Bayesian filtering problem with an unknown and time-varying number of sources. In this talk I will first illustrate a statistical model of source localisation for MEG data which builds directly on the well-established Physics of the electro-magnetic brain field. The focus of the talk will then be to describe the application of a recently developed class of sequential Monte Carlo methods (particle filters) for estimation of the model parameters using empirical MEG data. |
|
Thu 20 Jan, '11- |
CRiSM Seminar - Jouni KuhaJouni Kuha (London School of Economics) Sample group means in multilevel models: Sampling error as measurement error Research questions for models for clustered data often concern the effects of cluster-level averages of individual-level variables. For example, data from a social survey might characterise neighbourhoods in |
|
Thu 13 Jan, '11- |
CRiSM Seminar - Tilman DaviesA1.01Tilman Davies (Massey University, NZ) Refining Current Approaches to Spatial and Spatio-Temporal Modelling in Epidemiology It is reasonable to expect both space and time to be important factors when investigating disease in human, animal and even plant populations. A common goal in many studies in geographical epidemiology, for example, is the identification of disease risk 'hotspots', where spatial sub-regions that correspond to a statistically significant increase in the risk of infection are highlighted. More advanced problems involving not just space but space-time data, such as real-time disease surveillance, can be difficult to model due to complex correlation structures and computationally demanding operations. Decisions based on these kinds of analyses can range from the local, to national and even global levels. It is therefore important we continue to improve statistical methodology in this relatively young field, and ensure any theoretical benefits can flow through in practice. This talk aims to give an overview of the PhD research currently underway in an effort to develop and implement refinements to spatial and spatio-temporal modelling. Notable contributions include the use of a spatially adaptive smoothing parameter for estimation of the kernel-smoothed relative-risk function, the development of a novel, computationally inexpensive method for associated spatial tolerance contour calculation, the release of an R package implementing these capabilities, and the scope for improvement to the current marginal minimum-contrast methods for parameter estimation in relevant stochastic models. |
|
Thu 9 Dec, '10- |
CRiSM Seminar - Ayanendranath BasuA1.01Ayanendranath Basu (Indian Statistical Institute) Contamination Envelopes for Statistical Distances with Applications to Power Breakdown |
|
Thu 2 Dec, '10- |
CRiSM Seminar - Matti ViholaMS.01Matti Vihola (University of Jyväskylä) On the stability and convergence of adaptive MCMC Adaptive MCMC algorithms tune the proposal distribution of the Metropolis-Hastings Markov kernel continuously using the simulated history of the chain. There are many applications where AMCMC algorithms are empirically shown to improve over the traditional MCMC methods. Due to the non-Markovian nature of the adaptation, however, the analysis of these algorithms requires more care than the traditional methods. Most results on adaptive MCMC in the literature are based on assumptions that require the adaptation process to be `stable' (in a certain sense). Such stability can rarely be established unless the process is modified by introducing specific stabilisation structures. The most straightforward strategy is to constrain the adaptation within certain pre-defined limits. Such limits may sometimes be difficult to choose in practice, and the algorithms are generally sensitive to these parameters. In the worst case, poor choices can render the algorithms useless. This talk focuses on the recent stability and ergodicity results obtained for adaptive MCMC algorithms without such constraints. The key idea behind the results is that the ergodic averages can converge even if the Markov kernels gradually `lose' their ergodic properties. The new approach makes it possible to establish sufficient conditions for stability and ergodicity of two random walk Metropolis algorithms: the seminal Adaptive Metropolis algorithm and an algorithm adjusting the scale of the proposal distribution based on the observed acceptance probability. The results assume only verifiable conditions on the target distribution and the functional. |
|
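The second algorithm family mentioned in the abstract, a random-walk Metropolis sampler that adapts its proposal scale from the observed acceptance probability, can be sketched as follows. The target acceptance rate 0.44 and decay exponent 0.6 are conventional illustrative choices, not taken from the talk; the diminishing step sizes are what keeps the adaptation from destroying ergodicity.

```python
import math, random

# Sketch of acceptance-rate-based scale adaptation for random-walk
# Metropolis. Constants (target 0.44, decay 0.6) are conventional
# choices for illustration, not taken from the talk.

def adaptive_rwm(logpi, x0, n_iter=20000, target_acc=0.44, seed=0):
    rng = random.Random(seed)
    x, log_scale = x0, 0.0
    samples = []
    for n in range(1, n_iter + 1):
        prop = x + math.exp(log_scale) * rng.gauss(0.0, 1.0)
        acc = min(1.0, math.exp(logpi(prop) - logpi(x)))
        if rng.random() < acc:
            x = prop
        # diminishing Robbins-Monro step: adaptation fades as n grows
        log_scale += n ** -0.6 * (acc - target_acc)
        samples.append(x)
    return samples

# standard normal target; the chain forgets x0 = 3 quickly
samples = adaptive_rwm(lambda x: -0.5 * x * x, x0=3.0)
```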
Thu 25 Nov, '10- |
CRiSM Seminar - Andrew GrieveA1.01Andrew Grieve (King's College London) Recent developments in Bayesian Adaptive design |
|
Thu 18 Nov, '10- |
Joint Oxford-Warwick SeminarOxford: Tsuzuki Lecture Theatre, St Anne's College, Woodstock Road2:30 Philip Dawid (University of Cambridge) Local Proper Scoring Rules A scoring rule S(x, Q) measures the quality of a quoted distribution Q for an uncertain quantity X in the light of the realised value x of X. It is proper when it encourages honesty, i.e., when, if your uncertainty about X is represented by a distribution P, the choice Q = P minimises your expected loss. Traditionally, a scoring rule has been called local if it depends on Q only through q(x), the density of Q at x. The only proper local scoring rule is then the log-score, -log q(x). For the continuous case, we can weaken the definition of locality to allow dependence on a finite number m of derivatives of q at x. A characterisation is given of such order-m local proper scoring rules, and their behaviour under transformations of the outcome space. In particular, any m-local scoring rule with m > 0 can be computed without knowledge of the normalising constant of the density. Parallel results for discrete spaces will be given. 3:30 - 4:00 Tea break 4:00 Christl Donnelly (Imperial College London) Badger culling to control bovine TB: its potential role in a science-led policy 5:00 Reception to be held in Foyer A, Ruth Deech Building |
|
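The propriety of the log-score mentioned in the first abstract admits a one-line check: the excess expected loss of quoting Q when your beliefs are P is a Kullback-Leibler divergence.

```latex
% Excess expected loss of quoting Q when beliefs are P, under the
% log-score S(x, Q) = -\log q(x):
\mathbb{E}_P\!\left[-\log q(X)\right] - \mathbb{E}_P\!\left[-\log p(X)\right]
  \;=\; \int p(x)\,\log\frac{p(x)}{q(x)}\,\mathrm{d}x
  \;=\; \mathrm{KL}(P \,\|\, Q) \;\ge\; 0,
% with equality iff Q = P, so honesty minimises expected loss.
```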
Thu 11 Nov, '10- |
CRiSM Seminar - Richard GillA1.01Richard Gill (Leiden University) Murder by numbers In March 2003, Dutch nurse Lucia de Berk was sentenced to life imprisonment by a court in The Hague for 5 murders and 2 murder attempts of patients in her care at a number of hospitals where she had worked in the Hague between 1996 and 2001. The only hard evidence against her was a statistical analysis resulting in a p-value of 1 in 342 million which purported to show that it could not be chance that so many incidents and deaths occurred on her ward while she was on duty. On appeal in 2003 the life sentence was confirmed, this time for 7 murders and 3 attempts. This time, no statistical evidence was used at all: all the deaths were proven to be unnatural and Lucia shown to have caused them using scientific medical evidence only. However, after growing media attention and pressure by concerned scientists, including many statisticians, new forensic investigations were made which showed that the conviction was unsafe. After a new trial, Lucia was spectacularly and completely exonerated in 2010. I'll discuss the statistical evidence and show how it became converted into incontrovertible medical-scientific proof in order to secure the second, and as far as the Dutch legal system was concerned, definitive conviction. I'll also show how statisticians were instrumental in convincing the legal establishment that Lucia should be given a completely new trial. The history of Lucia de Berk brought a number of deficiencies to light in the way in which scientific evidence is evaluated in criminal courts. Similar cases to that of Lucia occur regularly all over the world. The question of how that kind of data should be statistically analyzed is still problematic.
I believe that there are also important lessons to be learnt by the medical world; however, the Dutch medical community, where most people still believe Lucia is a terrible serial killer, is resisting all attempts to uncover what really happened. |
|
Thu 4 Nov, '10- |
CRiSM Seminar - Jianxin PanA1.01Jianxin Pan (University of Manchester) Joint modelling of mean and covariance structures for longitudinal data When analysing longitudinal/correlated data, misspecification of covariance structures may lead to very inefficient estimators of parameters in the mean. In some circumstances, e.g., when missing data are present, it may result in biased estimators of the mean parameters. Hence, correct models for covariance structures play a very important role. Like the mean, covariance structures can actually be modelled using linear or nonlinear regression model techniques. A number of estimation methods have recently been developed for modelling the mean and covariance structures simultaneously. In this talk, I will review some methods on joint modelling of the mean and covariance structures for longitudinal data, including linear, non-linear regression models and semiparametric models. Real examples and simulation studies will be provided for illustration.
|
|
Thu 28 Oct, '10- |
CRiSM Seminar - Sofia OlhedeA1.01Sofia Olhede (UCL) Estimation of Nonstationary Time Series A time series is usually, unless a specific parametric model is assumed, understood from its first and second moments. If the process is also stationary, i.e. its first and second moments are invariant to time translations, then estimation is a mature and well-developed field. Unfortunately, most observed processes are not stationary, as they are the result of the observation of transient phenomena. Therefore in classical time series analysis a theory has also been developed for the analysis of such processes. There are a number of shortcomings of existing and well-developed methods, in particular in how the processes are allowed to evolve in time. I will discuss how to relax existing assumptions and still be able to develop good inference methods from a single time course of observations. |
|
Thu 21 Oct, '10- |
CRiSM Seminar - Richard BoysA1.01Richard Boys (Newcastle University) Linking systems biology models to data: a stochastic kinetic model of p53 oscillations This talk considers the assessment and refinement of a dynamic stochastic process model of the cellular response to DNA damage. The proposed model is a complex nonlinear continuous time latent stochastic process. It is compared to time course data on the levels of two key proteins involved in this response, captured at the level of individual cells in a human cancer cell line. The primary goal is to "calibrate" the model by finding parameters of the model (kinetic rate constants) that are most consistent with the experimental data. Significant amounts of prior information are available for the model parameters. It is therefore most natural to consider a Bayesian analysis of the problem, using sophisticated MCMC methods to overcome the formidable computational challenges. |
|
Thu 14 Oct, '10- |
CRiSM Seminar - Ajay JasraA1.01Ajay Jasra (Imperial College London) The Time Machine: A Simulation Approach for Stochastic Trees One of the most important areas in computational genetics is the calculation and subsequent maximization of the likelihood function associated with such models. This typically consists of using importance sampling and sequential Monte Carlo techniques. The approach proceeds by simulating, backward in time from observed data, to a most recent common ancestor. However, in many cases, the computational time and variance of estimators are often too high to make standard approaches useful. In this talk I propose to stop the simulation, subsequently yielding biased estimates of the likelihood surface. The bias is investigated from a theoretical point of view. Also, extensive simulation results are given, which justify the loss of accuracy with significant savings in computational time. This is joint work with Maria De Iorio and Marc Chadeau-Hyam. |
|
Thu 23 Sep, '10- |
CRiSM Seminar - Tim JohnsonA1.01Timothy D. Johnson (University of Michigan) Predicting Treatment Efficacy via Quantitative MRI: A Bayesian Joint Model |
|
Thu 23 Sep, '10- |
CRiSM LecturesA1.01Tim Johnson Lecture 3: Simulation and Bayesian Methods
Thursday 23 Sept, 11-noon, A1.01
1. Spatial Birth and Death Process Algorithm and Alternatives
2. Fast Computation for Log-Gaussian Cox Processes
3. Bayesian Methods for
(a) Independent Cluster Processes
(b) Log-Gaussian Cox Processes
|
|
Tue 21 Sep, '10- |
CRiSM LecturesA1.01Tim Johnson Lecture 2: Aggregative, Repulsive and Marked Point Processes
Tuesday 21 Sept, 11-noon, A1.01
1. Cluster Point Processes
(a) Independent Cluster Process
(b) Log-Gaussian Cox Process
2. Markov Point Processes
(a) Hard-Core Process
(b) Strauss Process
3. Marked Point Processes
|
|
Mon 20 Sep, '10- |
CRiSM LecturesA1.01Tim Johnson Lecture 1: Introduction to Spatial Point Processes
Monday 20 Sept, 11-noon, A1.01
1. Introduction
2. Spatial Poisson Process
3. Spatial Cox Processes
|
|
Mon 26 Jul, '10- |
CRiSM Seminar - Andrew GelmanA1.01Andrew Gelman (Columbia University) Nothing is Linear, Nothing is Additive: Bayesian Models for Interactions in Social Science |
|
Tue 13 Jul, '10- |
CRiSM Seminar - Freedom Gumedze (University of Cape Town)A1.01Freedom Gumedze (University of Cape Town)
An alternative approach to outliers in meta-analysis
Meta-analysis involves combining estimates from independent studies on some treatment in order to obtain an overall estimate across studies. However, outliers often occur even under the random effects model. The presence of such outliers could alter the conclusions in a meta-analysis. This paper proposes a methodology that detects and accommodates outliers in a meta-analysis rather than removing them to achieve homogeneity. An outlier is taken as an observation (study result) with inflated random effect variance, with the status of the ith observation as an outlier indicated by the size of the associated shift in the variance. We use the likelihood ratio test statistic as an objective measure for determining whether the ith observation has inflated variance and is therefore an outlier. A parametric bootstrap procedure is proposed to obtain the sampling distribution for the likelihood ratio test and to account for multiple testing. We illustrate the methodology and its usefulness using three meta-analysis data sets from the Cochrane Collaboration.
|
|
Fri 25 Jun, '10- |
Applied Maths & Stats SeminarB3.02 (Maths)Jim Nolen (Duke)
|
|
Thu 24 Jun, '10- |
CRiSM Seminar - Sujit Sahu (Southampton)A1.01Sujit Sahu (Southampton) High Resolution Bayesian Space-Time Modelling for Ozone Concentration Levels Ground-level ozone is a pollutant that is a significant health risk, especially for children with asthma. It also damages crops, trees and other vegetation. It is a main ingredient of urban smog. To evaluate exposure to ozone levels, the United States Environmental Protection Agency (USEPA) has developed a primary and a secondary air quality standard. To assess compliance to these standards, the USEPA collects ozone concentration data continuously from several networks of sparsely and irregularly spaced monitoring sites throughout the US. Data obtained from these sparse networks must be processed using spatial and spatio-temporal methods to check compliance to the ozone standards at an unmonitored site in the vast continental land mass of the US.
This talk will first discuss the two air quality standards for ozone levels and then will develop high resolution Bayesian space-time models which can be used to assess compliance. Predictive inference properties of several rival modelling strategies for both spatial interpolation and temporal forecasting will be compared and illustrated with simulation and real data examples. A number of large real life ozone concentration data sets observed over the eastern United States will also be used to illustrate the Bayesian space-time models. Several prediction maps from these models for the eastern US, published and used by the USEPA, will be discussed. |
|
Fri 18 Jun, '10- |
Applied Maths & Stats SeminarB3.02 (Maths)Informal Group Meeting
|
|
Thu 17 Jun, '10- |
CRiSM Seminar - Adrian Bowman (Glasgow)A1.01Prof Adrian Bowman, University of Glasgow Surfaces, shapes and anatomy
Three-dimensional surface imaging, through laser-scanning or stereo-photogrammetry, provides high resolution data defining the shape of objects. In an anatomical setting this can provide invaluable quantitative information, for example on the success of surgery. Two particular applications are in the success of breast reconstruction and in facial surgery following conditions such as cleft lip and palate. An initial challenge is to extract suitable information from these images, to characterise the surface shape in an informative manner. Landmarks are traditionally used to good effect, but these clearly do not adequately represent the much richer information present in each digitised image. Curves with clear anatomical meaning provide a good compromise between informative representations of shape and simplicity of structure. Some of the issues involved in analysing data of this type will be discussed and illustrated. Modelling issues include the measurement of asymmetry and longitudinal patterns of growth.
A second form of surface data arises in the analysis of MEG data which is collected from the head surface of patients and gives information on underlying brain activity. In this case, spatiotemporal smoothing offers a route to a flexible model for the spatial and temporal locations of stimulated brain activity.
|
|
Fri 11 Jun, '10- |
Applied Maths & Stats SeminarB3.02 (Maths)John Aston (Warwick)
|
|
Fri 4 Jun, '10- |
Applied Maths & Stats SeminarB3.02 (Maths)Informal Group Meeting
|
|
Thu 3 Jun, '10- |
CRiSM Seminar - Idris Eckley (Lancaster)A1.01Idris Eckley (Lancaster)
Wavelets - the secret to great looking hair? Texture is the visual character of an image region whose structure is, in some sense, regular, for example the appearance of a woven material. The perceived texture of an image depends on the scale at which it is observed. In this talk we show how wavelet processes can be used to model and analyse texture structure. Our wavelet texture models permit the classification of images based on texture and reveal important information on differences between subtly different texture types. We provide examples, taken from industry, where wavelet methods have enhanced the classification of images of hair and fabrics. |
|
Fri 28 May, '10- |
Applied Maths & Stats SeminarB3.02 (Maths)Mahadevan Ganesh (Edinburgh)
|
|
Thu 27 May, '10- |
CRiSM Seminar - William Astle (Imperial)A1.01William Astle (Imperial)
A Bayesian model of NMR spectra for the deconvolution and quantification of metabolites in complex biological mixtures |
|
Wed 26 May, '10- |
Seminar from Warwick back to MelbourneDigital Laboratory AuditoriumAnn Nicholson (Monash University) Bayesian networks (BNs) are rapidly becoming a tool of choice for ecological and environmental modelling and decision making. By combining a graphical representation of the dependencies between variables with probability theory and efficient inference algorithms, BNs provide a powerful and flexible tool for reasoning under uncertainty. The popularity of BNs is based on their ability to reason both diagnostically and predictively, and to explicitly model causal interventions and cost-benefit trade-offs. |
|
Fri 21 May, '10- |
Applied Maths & Stats SeminarB3.02 (Maths)Omiros Papaspiliopoulos (Universitat Pompeu Fabra)
|
|
Thu 20 May, '10- |
CRiSM Seminar - Claudia Kirch (Karlsruhe)A1.01Claudia Kirch (Karlsruhe)
Resampling Methods in Change-Point Analysis
Real life data series are frequently not stable but exhibit changes in parameters at unknown time points. We encounter changes (or the possibility thereof) every day in such diverse fields as economics, finance, medicine, geology, physics and so on. Therefore the detection, location and investigation of changes is of special interest. Change-point analysis provides the statistical tools (tests, estimators, confidence intervals). Most of the procedures are based on distributional asymptotics; however, convergence is often slow, or the asymptotics do not sufficiently reflect the dependency structure. Using resampling procedures we obtain better approximations for small samples which take possible dependency structures into account more efficiently.
In this talk we give a short introduction into change-point analysis. Then we investigate more closely how resampling procedures can be applied in this context. We have a closer look at a classical location model with dependent data as well as a sequential location test, which has become of special interest in recent years.
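As a toy illustration of the resampling idea (not the specific procedures from the talk), the sketch below computes a classical CUSUM statistic for a change in mean and calibrates it by permutation resampling. All names and parameters are made up for illustration, and permuting is only valid for independent errors; the refinements needed for dependent data are exactly what the talk addresses.

```python
import numpy as np

rng = np.random.default_rng(3)

def cusum_stat(x):
    """Classical CUSUM for a change in mean: max_k |S_k - (k/n) S_n| / (sd * sqrt(n))."""
    n = len(x)
    s = np.cumsum(x)
    k = np.arange(1, n + 1)
    return float(np.max(np.abs(s - k / n * s[-1])) / (x.std() * np.sqrt(n)))

def permutation_pvalue(x, B=999):
    """Resampling calibration: permuting the series destroys any change-point,
    so permuted statistics approximate the null distribution (i.i.d. errors only)."""
    t0 = cusum_stat(x)
    t = np.array([cusum_stat(rng.permutation(x)) for _ in range(B)])
    return (1 + np.sum(t >= t0)) / (B + 1)

# A series with a mean shift halfway through: the test should reject
x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.5, 1.0, 100)])
p_value = permutation_pvalue(x)
```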
|
|
Wed 19 May, '10- |
CRiSM Seminar - Petros Dellaportas (Athens University)A1.01Petros Dellaportas (Athens University of Economics and Business)
Control variates for reversible MCMC samplers A general methodology is presented for the construction and effective use of control variates for reversible MCMC samplers. The values of the coefficients of the optimal linear combination of the control variates are computed, and adaptive, consistent MCMC estimators are derived for these optimal coefficients. All methodological and asymptotic arguments are rigorously justified. Numerous MCMC simulation examples from Bayesian inference applications demonstrate that the resulting variance reduction can be quite dramatic. |
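A minimal sketch of the underlying idea, not the paper's construction: for a chain reversible with respect to its stationary law, functions of the form U = G - PG (with P the transition kernel) have zero stationary mean and can serve as control variates. The AR(1) kernel below is chosen only because PG is available in closed form; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
rho, n = 0.8, 20_000

# AR(1) chain, reversible with stationary distribution N(0, 1)
x = np.empty(n)
x[0] = rng.normal()
for i in range(1, n):
    x[i] = rho * x[i - 1] + np.sqrt(1 - rho**2) * rng.normal()

f = x**2                              # estimand: E[X^2] = 1

# Control variate U = G - PG with G(x) = x^2; for this kernel
# (PG)(x) = rho^2 x^2 + 1 - rho^2, so U has mean zero at stationarity
u = f - (rho**2 * f + 1 - rho**2)

# Plug-in (adaptive) estimate of the optimal linear coefficient
fc, uc = f - f.mean(), u - u.mean()
theta = (fc @ uc) / (uc @ uc)

plain = f.mean()                      # ordinary ergodic average
cv = (f - theta * u).mean()           # control-variate estimator
```

For this deliberately simple example U is proportional to f - 1, so the control-variate estimator removes essentially all the Monte Carlo variance; in realistic models PG must itself be estimated, which is where the paper's methodology comes in.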
|
Fri 14 May, '10- |
Applied Maths & Stats SeminarB3.02 (Maths)Informal Group Meeting
|
|
Thu 13 May, '10- |
CRiSM Seminar - Federico Turkheimer (Imperial)A1.01Federico Turkheimer (Imperial)
Title: Higher Mental Ability: A Matter of Persistence? Abstract: Executive function is thought to originate in the dynamics of frontal cortical networks of the human brain. We examined the dynamic properties of the blood oxygen level-dependent (BOLD) time-series measured with fMRI within the prefrontal cortex to test the hypothesis that temporally persistent neural activity underlies executive performance in normal controls doing executive tasks. A numerical estimate of signal persistence, derived from wavelet scalograms of the BOLD time-series and postulated to represent the coherent firing of cortical networks, was determined and correlated with task performance. We further tested our hypothesis on traumatic brain injury subjects who present with mild diffuse heterogeneous injury but common executive dysfunction, this time using a resting-state experimental condition.
|
|
Thu 13 May, '10- |
Ann Nicholson - Workshop 2C1.06Applications of Bayesian Networks
|
|
Fri 7 May, '10- |
Applied Maths & Stats SeminarB3.02 (Maths)Informal Group Meeting
|
|
Fri 30 Apr, '10- |
Applied Maths & Stats SeminarB3.02 (Maths)David White (Warwick)
|
|
Thu 29 Apr, '10- |
CRiSM Seminar - Ann Nicholson (Monash)A1.01Ann Nicholson (Monash) Incorporating expert knowledge when learning Bayesian network structure: Heart failure as a case study. Bayesian networks (BNs) are rapidly becoming a leading technology in applied Artificial Intelligence (AI), with medicine one of its most popular application areas. Both automated learning of BNs and expert elicitation have been used to build these networks, but the potentially more useful combination of these two methods remains underexplored. In this seminar, I will present a case study of this combination using public-domain data for heart failure. We run an automated causal discovery system (CaMML), which allows the incorporation of multiple kinds of prior expert knowledge into its search, to test and compare unbiased discovery with discovery biased with different kinds of expert opinion. We use adjacency matrices enhanced with numerical and colour labels to assist with the interpretation of the results. These techniques are presented within a wider context of knowledge engineering with Bayesian networks (KEBN). |
|
Fri 26 Mar, '10- |
CRiSM Seminar - David Findley (US Census Bureau)A1.01David Findley (US Census Bureau) Two improved Diebold-Mariano test statistics for comparing the forecasting ability of incorrect time series models We present and show applications of two new test statistics for deciding if one ARIMA model provides significantly better h-step-ahead forecasts than another, as measured by the difference of approximations to their asymptotic mean square forecast errors. The two statistics differ in the variance estimate whose square root is the statistic's denominator. Both variance estimates are consistent even when the ARMA components of the models considered are incorrect. Our principal statistic's variance estimate accounts for parameter estimation. Our simpler statistic's variance estimate treats parameters as fixed. The broad consistency properties of these estimates yield improvements to what are known as tests of Diebold and Mariano (1995) type. These are tests whose variance estimates treat parameters as fixed and are generally not consistent in our context. We describe how the new test statistics can be calculated algebraically for any pair of ARIMA models with the same differencing operator. Our size and power studies demonstrate their superiority over the Diebold-Mariano statistic. The power study and the empirical study also reveal that, in comparison to treating estimated parameters as fixed, accounting for parameter estimation can increase power and can yield more plausible model selections for some time series in standard textbooks. (Joint work with Tucker McElroy) |
|
Thu 18 Mar, '10- |
CRiSM Seminar - Prakash Patil (Birmingham)A1.01Prakash Patil (University of Birmingham) |
|
Thu 4 Mar, '10- |
CRiSM Seminar - Jeremy Taylor (Michigan)A1.01Jeremy Taylor, University of Michigan
Individualized predictions of prostate cancer recurrence following radiation therapy
In this talk I will present a joint longitudinal-survival statistical model for the pattern of PSA values and clinical recurrence in data from patients following radiation therapy for prostate cancer. A random effects model is used for the longitudinal PSA data and a time-dependent proportional hazards model is used for clinical recurrence of prostate cancer. The model is implemented on a website, psacalc.sph.umich.edu, where patients or doctors can enter a series of PSA values and obtain a prediction of future disease progression. Details of the model estimation and validation will be described and the website calculator demonstrated.
|
|
Thu 25 Feb, '10- |
CRiSM Seminar - Vincent Macaulay (Glasgow)A1.01Vincent Macaulay, Dept of Statistics, University of Glasgow
Inference of migration episodes from modern DNA sequence variation One view of human prehistory is of a set of punctuated migration events across space and time, associated with settlement, resettlement and discrete phases of immigration. It is pertinent to ask whether the variability that exists in the DNA sequences of samples of people living now, something which can be relatively easily measured, can be used to fit and test such models. Population genetics theory already makes predictions of patterns of genetic variation under certain very simple models of prehistoric demography. In this presentation I will describe an alternative, but still quite simple, model designed to capture more aspects of human prehistory of interest to the archaeologist, show how it can be rephrased as a mixture model, and illustrate the kinds of inferences that can be made on a real data set, taking a Bayesian approach. |
|
Thu 18 Feb, '10- |
CRiSM Seminar - Theo Kypraios (Nottingham)A1.01Theo Kypraios (Nottingham)
A novel class of semi-parametric time series models: Construction and Bayesian Inference Abstract: In this talk a novel class of semi-parametric time series models will be presented, for which we can specify in advance the marginal distribution of the observations and then build the dependence structure of the observations around it by introducing an underlying stochastic process termed a 'latent branching tree'. It will be demonstrated how we can draw Bayesian inference for the model parameters using Markov chain Monte Carlo methods as well as Approximate Bayesian Computation methodology. Finally, these models will be fitted to a real dataset on genome scheme data, and we will also discuss how this kind of model can be used in modelling Internet traffic. |
|
Thu 11 Feb, '10- |
CRiSM Seminar - Alexander Schied (Mannheim)A1.01Alexander Schied (Mannheim)
Mathematical aspects of market impact modeling Abstract: In this talk, we discuss the problem of executing large orders in illiquid markets so as to optimize the resulting liquidity costs. There are several reasons why this problem is relevant. On the mathematical side, it leads to interesting nonlinearity effects that arise from the price feedback of strategies. On the economic side, it helps in understanding which market impact models are viable, because the analysis of order execution provides a test for the existence of undesirable properties of a model. In the first part of the talk, we present market impact models with transient price impact, modeling the resilience of electronic limit order books. In the second part of the talk, we consider the Almgren-Chriss market impact model and analyze the effects of risk aversion on optimal strategies by using stochastic control methods. In the final part, we discuss effects that occur in a multi-player equilibrium. |
|
Thu 28 Jan, '10- |
CRiSM Seminar - Jan PalczewskiA1.01Dr Jan Palczewski (University of Leeds)
Why are Markowitz portfolio weights so volatile?
Markowitz's theory of asset allocation is one of very few research ideas that have made it into practical finance. Yet its investment recommendations exhibit incredible sensitivity to even the smallest variations in the estimation horizon or estimation technique. Scientists as well as practitioners have put enormous effort into stabilizing the portfolio estimators (with moderate success, according to some). However, there seems to be no simple quantitative method to measure portfolio stability. In this talk, I will derive analytical formulas that relate the mean and the covariance matrix of asset returns to the stability of the portfolio composition. These formulas allow for the identification of the main culprits of the worse-than-expected performance of the Markowitz framework. In particular, I will question the common wisdom that puts the main responsibility on estimation errors of the mean. This research is a spin-off of a consultancy project at the University of Warsaw regarding the allocation of the foreign reserves of the Polish Central Bank. |
|
Thu 10 Dec, '09- |
CRiSM/Stats Seminar - Siem Jan KoopmanA1.01Siem Jan Koopman
Free University Amsterdam
Title: Dynamic factor analysis and the dynamic modelling of the yield curve of interest rates.
Abstract:
A new approach to dynamic factor analysis by imposing smoothness restrictions on the factor loadings is proposed. A statistical procedure based on Wald tests that can be used to find a suitable set of such restrictions is presented. These developments are presented in the context of maximum likelihood estimation. The empirical illustration concerns term structure models but the methodology is also applicable in other settings. An empirical study using a data set of unsmoothed Fama-Bliss zero yields for US treasuries of different maturities is performed. The general dynamic factor model with and without smooth loadings is considered in this study together with models that are associated with Nelson-Siegel and arbitrage-free frameworks. These existing models can be regarded as special cases of the dynamic factor model with restrictions on the model parameters. Statistical hypothesis tests are performed in order to verify whether the restrictions imposed by the models are supported by the data. The main conclusion is that smoothness restrictions can be imposed on the loadings of dynamic factor models for the term structure of US interest rates. [Joint work with Borus Jungbacker and Michel van der Wel]
|
|
Thu 3 Dec, '09- |
CRiSM Seminar - Serge Guillas (UCL)A1.01Serge Guillas (UCL) |
|
Fri 27 Nov, '09- |
CRiSM Seminar - Peter MullerA1.01Peter Müller
A Dependent Polya Tree Model with Lorenzo Trippa and Wes Johnson We propose a probability model for a family of unknown distributions indexed with covariates. The marginal model for each distribution is a Polya tree prior. The proposed model introduces the desired dependence across the marginal Polya tree models by defining dependent random branching probabilities of the unknown distributions. An important feature of the proposed model is the easy centering of the nonparametric model around any parametric regression model. This is important for the motivating application to the proportional hazards (PH) model. We use the proposed model to implement nonparametric inference for survival regression. The proposed model allows us to center the nonparametric prior around parametric PH structures. In contrast to many available models that restrict the non-parametric extension of the PH model to the baseline hazard, the proposed model defines a family of random probability measures that are a priori centered around the PH model but allow any other structure. This includes, for example, crossing hazards, additive hazards, or any other structure as supported by the data. |
|
Thu 19 Nov, '09- |
CRiSM Seminar - Anna GottardA1.01Anna Gottard (Florence) |
|
Tue 17 Nov, '09- |
CRiSM Seminar - James CurranD1.07 Complexity Seminar RmJames Curran (Auckland) Some issues in modern forensic DNA evidence interpretation The forensic biology community has adopted new DNA typing technology relatively quickly as it has evolved over the last twenty years. However, the adoption of the statistical methodology used for the interpretation of this evidence has not been as fast. In this talk I will discuss classical forensic DNA interpretation and how changes in technology and thinking have led to challenges to the way we interpret evidence. I will present some relatively new models for evidence interpretation, and discuss future directions. This talk is aimed at a general audience. |
|
Wed 11 Nov, '09- |
CRiSM Seminar - Tomasz SchreiberA1.01Professor Tomasz Schreiber (Nicolaus Copernicus University) |
|
Thu 5 Nov, '09- |
Warwick Oxford Joint Seminar (at Warwick)PS1.28Oxford-Warwick Joint Seminar (2 talks) Speaker 1: Andrew Stuart (University of Warwick) Title: MCMC in High Dimensions Abstract: Metropolis based MCMC methods are a flexible tool for sampling a wide variety of complex probability distributions. Nonetheless, their effective use depends very much on careful tuning of parameters, choice of proposal distribution and so forth. A thorough understanding of these issues in high dimensional problems is particularly desirable as they can be critical to the construction of a practical sampler.
In this talk we study MCMC methods based on random walk, Langevin and Hybrid Monte Carlo proposals, all of which are based on the discretization of a (sometimes stochastic) differential equation. We describe how to scale the time-step in this discretization to achieve optimal efficiency, and compare the resulting computational cost of the different methods. We initially confine our study to target distributions with a product structure but then show how the ideas may be extended to a wide class of non-product measures arising in applications; these arise from measures on a Hilbert space which are absolutely continuous with respect to a product measure. We illustrate the ideas through application to a range of problems arising in molecular dynamics and in data assimilation in the ocean-atmosphere sciences.
The talk will touch on various collaborations with Alex Beskos (UCL), Jonathan Mattingly (Duke), Gareth Roberts (Warwick), Natesh Pillai (Warwick) and Chus Sanz-Serna (Valladolid).
|
|
Thu 22 Oct, '09- |
CRiSM Seminar - Roman BelavkinA1.01Roman Belavkin (Middlesex University) The effect of information constraints on decision-making and economic behaviour Economic theory is based on the idea of rational agents acting according to their preferences. Mathematically, this is represented by maximisation of some utility or the expected utility function, if choices are made under uncertainty. Although this formalism has become dominant in optimisation, game theories and even AI, there is a degree of scepticism about the expected utility representation, especially among behavioural economists who often use paradoxical counter-examples dating back as far as Aristotle. I will try to convince you that many of these paradoxes can be avoided if the problem is treated from a learning theory point of view, where information constraints are explicitly taken into account. I will use methods of functional and convex analyses to give a geometric interpretation of the solution of an abstract optimal learning problem, and demonstrate how this solution explains the mismatch between the normative and behavioural theories of decision-making. |
|
Fri 16 Oct, '09- |
CRiSM Seminar - Arnaud DoucetA1.01Arnaud Doucet (ISM) Forward Smoothing using Sequential Monte Carlo with Application to Recursive Parameter Estimation Sequential Monte Carlo (SMC) methods are a widely used set of computational tools for inference in non-linear non-Gaussian state-space models. We propose new SMC algorithms to compute the expectation of additive functionals recursively. Compared to the standard path space SMC estimator whose asymptotic variance increases quadratically with time even under favourable mixing assumptions, the asymptotic variance of the proposed SMC estimates only increases linearly with time. We show how this allows us to perform recursive parameter estimation using SMC algorithms which do not suffer from the particle path degeneracy problem.
Joint work with P. Del Moral (INRIA Bordeaux) & S.S. Singh (Cambridge University). |
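For background on the standard SMC setting of the talk (not the proposed forward-smoothing recursion), the sketch below runs a bootstrap particle filter on a linear-Gaussian state-space model, where the exact filtering means from the Kalman filter are available for comparison. The model and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
a, sv, sw, T, N = 0.9, 1.0, 1.0, 50, 5000

# Simulate a linear-Gaussian state-space model: x_t = a x_{t-1} + v_t, y_t = x_t + w_t
x = np.zeros(T)
x[0] = rng.normal()
for t in range(1, T):
    x[t] = a * x[t - 1] + sv * rng.normal()
y = x + sw * rng.normal(size=T)

# Exact filtering means via the Kalman filter (predict, then update)
mean, var = 0.0, 1.0
kal = []
for t in range(T):
    if t > 0:
        mean, var = a * mean, a * a * var + sv * sv
    k = var / (var + sw * sw)
    mean, var = mean + k * (y[t] - mean), (1 - k) * var
    kal.append(mean)

# Bootstrap particle filter: propagate from the prior, weight by the
# observation likelihood, then multinomial resampling
p = rng.normal(size=N)
pf = []
for t in range(T):
    if t > 0:
        p = a * p + sv * rng.normal(size=N)
    logw = -0.5 * (y[t] - p) ** 2 / (sw * sw)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    pf.append(float(w @ p))
    p = p[rng.choice(N, size=N, p=w)]

err = float(np.mean(np.abs(np.array(pf) - np.array(kal))))
```

With a few thousand particles the particle-filter means track the exact Kalman means closely; it is the smoothing of path functionals, not filtering, where the quadratic-variance degeneracy discussed in the abstract bites.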
|
Thu 8 Oct, '09- |
CRiSM Seminar - Peter DiggleA1.01Peter Diggle (Lancaster University and Johns Hopkins University) Statistical Modelling for Real-time Epidemiology |
|
Thu 25 Jun, '09- |
CRiSM Seminar - Dr Frederic FerratyA1.01Dr Frederic Ferraty, University of Toulouse, France Most-predictive design points for functional data predictors (In coll. with Peter Hall and Phillipe Vieu) Functional data analysis (FDA) has found application in a great many fields, including biology, chemometrics, econometrics, geophysics, medical sciences, pattern recognition, and so on. For instance a sample of curves or a sample of surfaces is a special case of functional data. In the example of near infrared (NIR) spectroscopy, X(t) denotes the absorbance of the NIR spectrum at wavelength t. The observation of X(t) for a discrete but large set of values (or design points) t produces what is called a spectrometric curve. A standard chemometrical dataset is that where X(t) corresponds to the NIR spectrum of a piece of meat and where a scalar response Y denotes a constituent of the piece of meat (eg, fat or moisture). Here, we are interested in regressing a scalar response Y on a functional predictor X(t) where t belongs to a discrete but large set I of "design points" (hundreds or thousands). From now on, one sets X:={X(t); t in I}. It is of practical interest to know which design points t have the greatest influence on the response, Y. In this situation, we propose a method for choosing a small subset of design points to optimize prediction of a response variable, Y. The selected design points are referred to as the most predictive design points, or covariates, and are computed using information contained in a set of independent observations (X_i,Y_i) of (X,Y). The algorithm is based on local linear regression, and calculations can be accelerated using linear regression to preselect design points. Boosting can be employed to further improve predictive performance. We illustrate the usefulness of our ideas through examples drawn from chemometrics, and we develop theoretical arguments showing that the methodology can be applied successfully in a range of settings. |
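A much-simplified sketch of the design-point selection idea: greedy forward selection by ordinary least squares on simulated curves, standing in for the local-linear machinery and boosting described in the abstract. The "influential" design points and all parameters are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 200, 100

# Simulated 'curves': n random-walk trajectories observed at p design points;
# the response depends only on design points 20 and 70 (made up for illustration)
X = np.cumsum(rng.normal(size=(n, p)), axis=1)
Y = X[:, 20] - 0.5 * X[:, 70] + 0.1 * rng.normal(size=n)

def rss(cols):
    """Residual sum of squares of an OLS fit using the chosen design points."""
    A = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(A, Y, rcond=None)
    return float(np.sum((Y - A @ beta) ** 2))

# Greedy forward selection of the 'most predictive' design points
chosen = []
for _ in range(2):
    j_best = min((j for j in range(p) if j not in chosen),
                 key=lambda j: rss(chosen + [j]))
    chosen.append(j_best)
```

Because neighbouring design points on a curve are highly correlated, the selected points may land near, rather than exactly on, the truly influential ones; handling that correlation well is part of what the proposed methodology is for.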
|
Thu 11 Jun, '09- |
CRiSM Seminar - Dr Daniel JacksonA1.01Dr Daniel Jackson, MRC Biostatistics Unit How much can we learn about missing data? An exploration of a clinical trial in psychiatry by Dan Jackson, Ian R White and Morven Leese When a randomised controlled trial has missing outcome data, any analysis is based on untestable assumptions, for example that the data are missing at random, or less commonly on other assumptions about the missing data mechanism. Given such assumptions, there is an extensive literature on suitable analysis methods. However, little is known about what assumptions are appropriate. We use two sources of ancillary data to explore the missing data mechanism in a trial of adherence therapy in patients with schizophrenia: carer-reported (proxy) outcomes and the number of contact attempts. This requires making additional assumptions whose plausibility we discuss. We also perform sensitivity analyses to departures from missing at random. Wider use of techniques such as these will help to inform the choice of suitable assumptions for the analysis of randomised controlled trials. |
|
Tue 9 Jun, '09- |
CRiSM Seminar - Dr Pulak GhoshA1.01Dr Pulak Ghosh, Georgia State University, USA Joint Modelling of Multivariate Longitudinal Data for Mixed Responses with Application to Multiple Sclerosis Data Multiple sclerosis (MS) is one of the most common chronic neurological diseases in young adults, with around 2.5 million affected individuals worldwide (Compston 2006). The most common presenting symptoms are inflammation of the optic nerve, weakness, sensory disturbances, gait disturbances and bladder dysfunction. So far only standard analysis methodology to estimate risks for relapse occurrence has been used. This includes mostly single endpoint survival analysis in which MRI information is shrunken to baseline values or aggregated measures such as means. In the present analysis we aim to establish a model that allows the description and prediction of the occurrence of relapses by considering processes in the brain (visualized on T1- and T2-weighted MRI) simultaneously. These complex processes, together with clinical baseline information, have never been considered in one model so far. We will use our model to evaluate the strength of dependencies of multivariate longitudinal MRI measures with the occurrence of MS relapses. |
|
Thu 4 Jun, '09- |
CRiSM Seminar - Dr Chendi ZhangA1.01Dr Chendi Zhang, Warwick Business School Information Salience, Investor Sentiment and Stock Returns: The Case of British Soccer Betting Soccer clubs listed on the London Stock Exchange provide a unique way of testing stock price reactions to different types of news. For each firm, two pieces of information are released on a weekly basis: experts' expectations about game outcomes through the betting odds, and the game outcomes themselves. The stock market reacts strongly to news about game results, generating significant abnormal returns and trading volumes. We find evidence that the abnormal returns for the winning teams do not reflect rational expectations but are high due to overreactions induced by investor sentiment. This is not the case for losing teams. There is no market reaction to the release of new betting information although these betting odds are excellent predictors of the game outcomes. This discrepancy between the strong market reaction to game results and the lack of reaction to betting odds may not only result from overreaction to game results but also from the lack of informational content or information salience of the betting information. Therefore, we also examine whether betting information can be used to predict short-run stock returns subsequent to the games. We reach mixed results: we conclude that investors ignore some non-salient public information such as betting odds, and betting information predicts a stock price overreaction to game results which is influenced by investors' mood (especially when the teams are strongly expected to win). |
|
Thu 14 May, '09- |
CRiSM Seminar - Prof Howell TongA1.01Prof Howell Tong, LSE & Hong Kong University Time Reversibility of Multivariate Linear Time Series In the time series literature, time reversibility is often assumed either explicitly (if honest) or implicitly (if less so). In reality, time reversibility is the exception rather than the rule. The situation with multivariate time series is much more exotic as this seminar will explore. |
|
Thu 7 May, '09- |
CRiSM Seminar - Prof T BandyopadhyayA1.01Professor Tathagata Bandyopadhyay, Indian Institute of Management, Ahmedabad, India Testing Equality of Means from Paired Data When The Labels Are Missing Suppose (Xi, Yi), i=1,...,n, represent a random sample of size n from a bivariate normal population. Suppose for some reason the labels in each pair are missing. We consider the problem of testing the null hypothesis of equality of means based on such a messy data set. We will cite a few practical instances where such a situation may arise. Naturally, the standard t-test cannot be applied here since one cannot label the components of each pair as 'X' or 'Y'. Instead, the observable pairs are (Mi, mi), i=1,...,n, where Mi = max(Xi, Yi) and mi = min(Xi, Yi). We will talk about a number of large sample tests based on (Mi, mi) for testing the above hypothesis.
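One simple moment-based illustration (not necessarily among the tests of the talk): under the null hypothesis the observable difference W = M - m = |X - Y| is half-normal, so the ratio mean(W)/sd(W) equals the known constant sqrt(2/pi)/sqrt(1 - 2/pi) regardless of the scale of X - Y, and a large-sample test could compare the sample ratio against it. The simulation below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def unlabeled_pairs(x, y):
    """What remains observable once the labels are lost: (max, min) per pair."""
    return np.maximum(x, y), np.minimum(x, y)

# Under H0 (equal means), W = M - m = |X - Y| is half-normal, so
# mean(W) / sd(W) is a fixed, known constant whatever the scale
c0 = np.sqrt(2 / np.pi) / np.sqrt(1 - 2 / np.pi)

n = 100_000
x = rng.normal(0.0, 1.0, size=n)
y = rng.normal(0.0, 1.0, size=n)      # H0 holds here
M, m = unlabeled_pairs(x, y)
w = M - m
ratio = w.mean() / w.std()
```

Under a mean difference the ratio drifts above c0, which is what such a test would detect.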
|
|
Thu 30 Apr, '09- |
CRiSM Seminar - Dr David MacKayMS.03David MacKay, University of Cambridge Hands-free Writing Keyboards are inefficient for two reasons: they do not exploit the predictability of normal language; and they waste the fine analogue capabilities of the user's muscles. I describe Dasher, a communications system designed using information theory. A person's gestures are a source of bits, and the sentences they wish to communicate are the sink. We aim to maximise the number of bits per second conveyed from user into text. Users can achieve single-finger writing speeds of 35 words per minute and hands-free writing speeds of 25 words per minute. Dasher is free software, and it works in over one hundred languages. |
|
Thu 23 Apr, '09- |
CRiSM Seminar - Prof Hans KuenschA1.01Prof Hans Kuensch, ETH Zurich Ensemble Kalman filter versus particle filters In high-dimensional state spaces, particle filters usually behave badly. In atmospheric science, the ensemble Kalman filter is used instead, which assumes a linear Gaussian observation and a Gaussian prediction distribution. I will discuss some open problems from a statistical perspective. |
|
Tue 17 Mar, '09- |
CRiSM Seminar - Dr Cristiano VarinA1.01Dr Cristiano Varin, Ca' Foscari University, Venice Marginal Regression Models with Stationary Time Series Errors This talk is concerned with regression analysis of serially dependent non-normal observations. Stemming from traditional linear regression models with stationary time series errors, a class of marginal models that can accommodate responses of any type is presented. Model fitting is performed either by exact or simulated maximum likelihood, depending on whether the response is continuous or not. Computational aspects are described in detail. Real data applications to time series of counts are considered for illustration. The talk is based on collaborative work with Guido Masarotto, University of Padova. |
|
Thu 12 Mar, '09- |
CRiSM Seminar - Dr Peter CraigA1.01Dr Peter Craig, University of Durham Multivariate normal orthant probabilities - geometry, computation and application to statistics The multivariate normal distribution is the basic model for multivariate continuous variability and uncertainty and its properties are intrinsically interesting. The orthant probability (OP) is the probability that each component is positive and is of practical importance both as the generalisation of tail probability and as the likelihood function for multivariate probit models. Efficient quasi-Monte Carlo methods are available for approximation of OPs but are unsuitable for high-precision calculations. However, accurate calculations are relatively straightforward for some covariance structures other than independence. I shall present the geometry of two ways to express general OPs in terms of these simpler OPs, discuss the computational consequences and briefly illustrate the application of these methods to a classic application of multivariate probit modelling. |
|
Thu 5 Mar, '09- |
CRiSM Seminar - Dr Catherine HurleyA1.01Dr Catherine Hurley, National University of Ireland, Maynooth Composition of statistical graphics: a graph-theoretic perspective Visualization methods are crucial in data exploration, presentation and modelling. A carefully chosen graphic reveals information about the data, assisting the viewer in comparing and relating variables, cases, groups, clusters or model fits. A statistical graphic is composed of display components whose arrangement (in space or time) facilitates these comparisons. In this presentation, we take a graph-theoretic perspective on the graphical layout problem. The basic idea is that graphical layouts are essentially traversals of appropriately constructed mathematical graphs. We explore the construction of familiar scatterplot matrices and parallel coordinate displays from this perspective. We present graph traversal algorithms tailored to the graphical layout problem. Novel applications range from a new display for pairwise comparison of treatment groups, to a guided parallel coordinate display and on to a road map for dynamic exploration of high-dimensional data. |
|
Fri 27 Feb, '09- |
CRiSM Seminar - Victor M PanaretosA1.01Victor M Panaretos, Institute of Mathematics, Ecole Polytechnique Federale de Lausanne Modular Statistical Inference in Single Particle Tomography What can be said about an unknown density function on $\mathbb{R}^n$ given a finite collection of projections onto random and unknown hyperplanes? This question arises in single particle electron microscopy, a powerful method that biophysicists employ to learn about the structure of biological macromolecules. The method images unconstrained particles, as opposed to particles fixed on a lattice (crystallography) and poses a variety of problems. We formulate and study statistically the problem of determining a structural model for a biological particle given random projections of its Coulomb potential density, observed through the electron microscope. Although unidentifiable (ill-posed), this problem can be seen to be amenable to a consistent modular statistical solution once it is viewed through the prism of shape theory. |
|
Thu 26 Feb, '09- |
CRiSM Seminar - Dr Abhir BhaleraoA1.01Dr Abhir Bhalerao, University of Warwick Automatic Screening for Microaneurysms in Digital Images Diabetic retinopathy is one of the major causes of blindness. However, diabetic retinopathy does not usually cause a loss of sight until it has reached an advanced stage. The earliest signs of the disease are microaneurysms (MA) which appear as small red dots on retinal fundus images. Various screening programmes have been established in the UK and other countries to collect and assess images on a regular basis, especially in the diabetic population. A considerable amount of time and money is spent in manually grading these images, a large percentage of which are normal. By automatically identifying the normal images, the manual workload and costs could be reduced greatly while increasing the effectiveness of the screening programmes. A novel method of microaneurysm detection from digital retinal screening images is presented. It is based on image filtering using complex-valued circular-symmetric filters, and an eigen-image, morphological analysis of the candidate regions to reduce the false-positive rate. The image processing algorithms will be presented with evaluation on a typical set of 89 images from a published database. The resulting method is shown to have a best operating sensitivity of 82.6% at a specificity of 80.2%, which makes it useful for screening. The results are discussed in the context of a model of visual search and the ROC curves that it can predict. |
|
Mon 23 Feb, '09- |
CRiSM Seminar - Dr Yvonne HoA1.01Dr Yvonne Ho, Imperial College London Conditional accuracy and robustness We shall review the classical conditionality principle in statistics and study conditional inference procedures under nonparametric regression models, conditional on the observed residuals. As revealed from the title of the talk, we shall consider two issues: inference accuracy and robustness. An innovative procedure using a smoothing technique in conjunction with configural polysampling will be presented. |
|
Thu 19 Feb, '09- |
CRiSM Seminar - Dr Jian Qing ShiA1.01Dr Jian Qing Shi, University of Newcastle Curve Prediction and Clustering using Mixtures of Gaussian Process Functional Regression Models The problem of large data is one of the major statistical challenges: for example, in one of our biomechanical projects, as in many other settings, more and more data are generated from subjects with different backgrounds at an incredible rate. For such functional/longitudinal data, it is more flexible and efficient to treat them as curves, and flexible mixture models are capable of capturing variation and biases for data generated from different sources. Mixture models are also applicable to classification and clustering of the data. In this talk, I will first introduce a nonparametric Gaussian process functional regression (GPFR) model, and then discuss how to extend it to a mixture model to address the problem of heterogeneity with multiple data types. A new method will be presented for modelling functional data with 'spatially' indexed data, i.e., where the heterogeneity depends on factors such as region and individual patient information. Nonparametric and functional mixture models have also been developed for curve clustering in some very complex systems in which the response curves may depend on a number of functional and non-functional covariates. Some numerical results from simulation studies and real applications will also be presented. |
|
Thu 29 Jan, '09- |
CRiSM Seminar - Dr Igor PruensterA1.01 - Zeeman BuildingDe Finetti Afternoon, two seminars on Non-Parametric Bayesian Analysis, 2-5pm Dr Igor Pruenster (Torino, Italy) Asymptotics for posterior hazards A popular Bayesian nonparametric approach to survival analysis consists in modelling hazard rates as kernel mixtures driven by a completely random measure. A comprehensive analysis of the asymptotic behaviour of such models is provided. Consistency of the posterior distribution is investigated and central limit theorems for both linear and quadratic functionals of the posterior hazard rate are derived. The general results are then specialized to various specific kernels and mixing measures, thus yielding consistency under minimal conditions and near central limit theorems for the distribution of functionals. Joint work with P. De Blasi and G. Peccati. |
|
Thu 29 Jan, '09- |
CRiSM Seminar - Dr Antonio LijoiA1.01 - Zeeman BuildingDe Finetti Afternoon, two seminars on Non-Parametric Bayesian Analysis, 2-5pm Dr Antonio Lijoi (Pavia, Italy) Priors for vectors of probability distributions In this talk we describe the construction of a nonparametric prior for vectors of probability distributions obtained by a suitable transformation of completely random measures. The dependence between the random probability distributions is described by a Levy copula. A first example that will be presented concerns a prior for a pair of survival functions and it will be used to model two-sample data: a posterior characterization, conditionally on possibly right-censored data, will be provided. Then, a vector of random probabilities, with two-parameter Poisson-Dirichlet marginals, is introduced. |
|
Thu 22 Jan, '09- |
CRiSM SeminarRoom A1.01, Zeeman BuildingDr Laura Sangalli, Politecnico di Milano Title: Efficient estimation of curves in more than one dimension by free-knot regression splines, with applications to the analysis of 3D cerebral vascular geometries. Abstract: We deal with the problem of efficiently estimating a 3D curve and its derivatives, starting from a discrete and noisy observation of the curve. We develop a regression technique based on free-knot splines, i.e. regression splines where the number and position of knots are not fixed in advance but chosen so as to minimize a penalized sum of squared errors criterion. We thoroughly compare this technique to a classical regression method, local polynomial smoothing, via simulation studies and application to the analysis of inner carotid artery centerlines (AneuRisk Project dataset). We show that 3D free-knot regression splines yield more accurate and efficient estimates. Joint work with Piercesare Secchi and Simone Vantini. |
|
Fri 12 Dec, '08- |
Joint CRiSM/Applied Maths/Stats SeminarA1.01Professor Jeff Rosenthal, University of Toronto
Adaptive MCMC |
|
Thu 27 Nov, '08- |
CRiSM Seminar - Anthony ReveillacA1.01Anthony Reveillac
Humboldt University - Berlin
Stein estimators and SURE shrinkage estimation for Gaussian processes using the Malliavin calculus |
|
Wed 12 Nov, '08- |
Joint Stats/Econometrics Seminar: Sylvia Fruehwirth-SchnatterA1.01, Zeeman BuildingProf Sylvia Fruehwirth-Schnatter, Johannes Kepler University, Austria
Latent Variable models are widely used in applied statistics and econometrics to deal with data where the underlying processes change either over time or between units. Whereas estimation of these models is well understood, model selection problems are rarely studied, because such an issue usually leads to a non-regular testing problem. Bayesian statistics offers in principle a framework for model selection even for non-regular problems, as is briefly discussed in the first part of the talk. The practical application of the Bayesian approach, however, proves to be challenging, and numerical techniques like marginal likelihoods, RJMCMC or the variable selection approach have to be used. The main contribution of this talk is to demonstrate that the Bayesian variable selection approach is useful far beyond the common problem of selecting covariates in a classical regression model and may be extended to deal with model selection problems in various latent variable models. First, it is extended to testing for the presence of unobserved heterogeneity in random effects models. Second, dynamic regression models are considered, where one has to choose between fixed and random coefficients. Finally, the variable selection approach is extended to state space models, where testing problems like discriminating between models with a stochastic trend, a deterministic trend and a model without trend arise. Case studies from marketing, economics and finance will be considered for illustration. |
|
Wed 12 Nov, '08- |
CRiSM Seminar - James CussensA1.01, Zeeman BuildingJames Cussens, University of York Model Selection using weighted MAX-SAT Solvers This talk concerns encoding problems of statistical model selection in such a way that "weighted MAX-SAT solvers" can be used to search for the 'best' model. In this approach each model is (implicitly) encoded as a joint instantiation of n binary variables. Each of these binary variables encodes the truth/falsity of a logical proposition and weighted logical formulae are used to represent the model selection problem. Once encoded in this way we can tap into years of research and use any of the state-of-the-art solvers to conduct the search. In the talk I will show how to use this approach when the model class is that of Bayesian networks, and also for clustering. I will briefly touch on related methods which permit the calculation of marginal probabilities in discrete distributions. |
|
Thu 6 Nov, '08- |
CRiSM Seminar: Ming-Yen ChengA1.01Prof Ming-Yen Cheng UCL |
|
Thu 30 Oct, '08- |
CRiSM Seminar: Christopher SherlockA1.01Dr Christopher Sherlock, University of Lancaster
Optimal scaling of the random walk Metropolis
Abstract: The random walk Metropolis (RWM) is one of the most commonly used Metropolis-Hastings algorithms, and choosing the appropriate scaling for the proposal is an important practical problem. Previous theoretical approaches have focussed on high-dimensional algorithms and have revolved around a diffusion approximation of the trajectory. For certain specific classes of targets it has been possible to show that the algorithm is optimal when the acceptance rate is approximately 0.234. We develop a novel approach which avoids the need for diffusion limits. Focussing on spherically symmetric targets, it is possible to derive simple exact formulae for efficiency and acceptance rate for a "real" RWM algorithm, as opposed to a limit process. The limiting behaviour of these formulae can then be explored. This in some sense "simpler" approach allows important general intuitions as to when and why the 0.234 rule holds, when the rule fails, and what may happen when it does fail. By extending the theory to include elliptically symmetric targets we obtain further intuitions about the role of the proposal's shape. |
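A quick empirical companion to the 0.234 rule (an illustrative sketch of standard RWM tuning, not the speaker's exact formulae; dimension and run length are my own choices): on a spherically symmetric Gaussian target, the classical proposal scaling 2.38/sqrt(d) yields an acceptance rate in the neighbourhood of 0.234.

```python
import numpy as np

def rwm_acceptance(scale, d=20, n_iter=20000, seed=1):
    """Empirical acceptance rate of random walk Metropolis on a
    standard d-dimensional Gaussian target, for a given proposal scale."""
    rng = np.random.default_rng(seed)
    log_pi = lambda v: -0.5 * v @ v  # log-density up to an additive constant
    x = np.zeros(d)
    accepted = 0
    for _ in range(n_iter):
        prop = x + scale * rng.standard_normal(d)
        if np.log(rng.random()) < log_pi(prop) - log_pi(x):
            x, accepted = prop, accepted + 1
    return accepted / n_iter

d = 20
rate = rwm_acceptance(2.38 / np.sqrt(d), d=d)
print(0.15 < rate < 0.40)  # in the vicinity of the asymptotic 0.234
```

Sweeping `scale` up or down from this value shows the acceptance rate falling towards 0 or rising towards 1, which is the tuning trade-off the talk analyses exactly.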
|
Thu 23 Oct, '08- |
Oxford-Warwick Joint SeminarOxford
3rd JOINT WARWICK-OXFORD STATISTICS SEMINAR 2:30 – 5:00 pm at The Mary Ogilvie Lecture Theatre, St. Anne’s College, University of Oxford 2:30 p.m. Speaker 1: Julian Besag (University of Bath, University of Washington, Seattle) Title: Continuum limits of Gaussian Markov random fields: resolving the conflict with geostatistics
MRFs refer to fixed regular or irregular discrete lattices or arrays and questions arise regarding inconsistencies between MRFs specified at differing scales, especially for regional data. Ideally, one would often prefer an underlying continuum formulation, as in geostatistics, which can then be integrated to the regions of interest. However, limiting continuum versions of MRFs, as lattice spacing decreases, proved elusive until recently. This talk briefly presents some motivating examples and shows that limiting processes indeed exist but are defined on arbitrary regions of the plane rather than pointwise. Especially common is the conformally invariant de Wijs process, which coincidentally was used originally by mining engineers but which became unfashionable as geostatistics developed. Convergence is generally very fast. The de Wijs process is also shown to be a natural extension of Brownian motion to the plane. Other processes, including the thin-plate spline, can be derived as limits of MRFs. The talk closes by briefly discussing data analysis.
3:30 to 4.00 - Tea, Coffee and biscuits in foyer outside lecture theatre
4:00 p.m. Speaker 2: Susan Lewis (University of Southampton, UK) Title:
Screening experiments |
|
Thu 2 Oct, '08- |
CRiSM Seminar: Martin BaxterA1.01Dr Martin Baxter, Nomura International
Levy Modelling of Credit This talk will start with some simple models of credit dynamics, and embed them in a general Levy process framework. A particular instance, the Gamma process, will then be studied with reference to both its theoretical and practical properties. A brief analysis of the ongoing credit crisis in terms of Levy modelling will also be given. Time permitting, we will also look at some other applications.
|
|
Mon 22 Sep, '08- |
CRiSM SeminarA1.01Dr Jonathan Evans, Institute of Linguistics, Academia Sinica
Statistical Modelling in Linguistics: Approaches and Challenges in pitch analysis
This talk introduces the use of Linear Mixed Effects (LME) analysis to model f0 (pitch) production in a language with two tones, and demonstrates the advantages of using such a method of analysis. LME can be used to weigh the impact of a large number of effects, it can demonstrate the interaction among those effects, and can also show how both fixed and random effects contribute to the model. Unlike previous analytical methods for modeling f0 in tone languages, LME analysis allows researchers to have more freedom in designing experiments, and to have sufficient variety in the dataset without having to rely on nonsense words and phrases to fill out a data matrix. LME makes it possible to put a multitude of effects and interactions into a single comprehensive model of f0. The ensuing model is easy to interpret and straightforward to compare crosslinguistically. LME analysis makes possible a quantitative typology that shows clearly how linguistic and nonlinguistic factors combine in the production of f0 for each language thus analyzed. The talk will also veer into discussion of how to model f0 based on the pitch curve of each syllable. Although each curve contains an infinite number of points, there is striking similarity between the curve-based model and the point-based model. |
|
Tue 9 Sep, '08- |
CRiSM SeminarA1.01Professor P Cheng, Academia Sinica, Taipei, Republic of China
Linear Information Models and Applications Log-likelihood information identities and Venn diagrams for categorical data exhibit fundamental differences from those of continuous variables. This presentation will start with three-way contingency tables and the associated likelihood ratio tests. It will introduce linear information models that deviate from hierarchical log-linear models, beginning with three-way tables. A connection to latent class analysis with two-way tables and the geometry of the one-degree-of-freedom chi-square test and exact test for two-way independence is also investigated. Key Names: Pearson; Fisher; Neyman and Pearson; Kullback and Leibler; Cochran, Mantel, and Haenszel; Goodman. Co-authors: John A. D. Aston, Jiun W. Liou, and Michelle Liou. |
|
Mon 25 Aug, '08- |
Economics/Stats SeminarS2.79Donald Rubin (Harvard)
For Objective Causal Inference, Design Trumps Analysis
For obtaining causal inferences that are objective, and therefore have the best chance of revealing scientific truths, carefully designed and executed randomized experiments are generally considered to be the gold standard. Observational studies, in contrast, are generally fraught with problems that compromise any claim for objectivity of the resulting causal inferences. The thesis here is that observational studies have to be carefully designed to approximate randomized experiments, in particular, without examining any final outcome data. Often a candidate data set will have to be rejected as inadequate because of lack of data on key covariates, or because of lack of overlap in the distributions of key covariates between treatment and control groups, often revealed by careful propensity score analyses. Sometimes the template for the approximating randomized experiment will have to be altered, and the use of principal stratification can be helpful in doing this. These issues are discussed and illustrated using the framework of potential outcomes to define causal effects, which greatly clarifies critical issues. |
|
Mon 14 Jul, '08- |
CRiSM SeminarA1.01Prof Donald Martin, North Carolina State University
Markov chain pattern distributions We give a method for predicting statistics of hidden state sequences, where the conditional distribution of states given observations is modeled by a factor graph with factors that depend on past states but not future ones. Model structure is exploited to develop a deterministic finite automaton and an associated Markov chain that facilitates efficient computation of the distributions. Examples of applications of the methodology are the computation of distributions of patterns and statistics in a discrete hidden state sequence perturbed by noise and/or missing values, and patterns in a state sequence that serves to classify the observations. Two detailed examples are given to illustrate the computational procedure. |
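A toy instance of the automaton-plus-Markov-chain idea (my own minimal example, far simpler than the hidden-sequence setting of the talk): the distribution of the number of occurrences of the pattern "11" in an i.i.d. Bernoulli string, computed by embedding a two-state DFA into a Markov chain and propagating the joint law of (state, count).

```python
import numpy as np

p, n = 0.5, 10           # Bernoulli(p) string of length n
# DFA state: 0 if the last symbol was 0, 1 if it was 1.
# dist[s, k] = P(current state = s, k occurrences of "11" so far)
dist = np.zeros((2, n + 1))
dist[0, 0] = 1 - p       # first symbol is 0
dist[1, 0] = p           # first symbol is 1
for _ in range(n - 1):
    new = np.zeros_like(dist)
    new[0, :] += (1 - p) * (dist[0, :] + dist[1, :])  # emit 0: reset
    new[1, :] += p * dist[0, :]        # emit 1 after 0: no new match
    new[1, 1:] += p * dist[1, :-1]     # emit 1 after 1: one new "11"
    dist = new

pmf = dist.sum(axis=0)   # distribution of the (overlapping) pattern count
mean = (pmf * np.arange(n + 1)).sum()
print(round(pmf.sum(), 6), round(mean, 6))
```

The mean agrees with the direct calculation (n-1)p^2 = 2.25, a simple sanity check on the chain construction; the same propagation scheme scales to richer automata and noisy hidden sequences.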
|
Thu 26 Jun, '08- |
CRiSM SeminarA1.01Gersende Fort, ENST (Ecole Nationale Superieure Des Telecommunications, France
Stability of Markov Chains based on fluid limit techniques. Applications to MCMC We propose a transformation of some Markov chains which allows us to define their fluid limit: by renormalizing the chain in time, space, and initial value, we exhibit a continuous-time process which governs the dynamics of the initial chain. The goal is to identify the quantities that govern the ergodic behavior of the Markov chain, by showing their impact on the dynamics of the associated fluid process which, by definition, gives information on the transient steps of the chain. We will consider applications of these techniques to the choice of the design parameters of some MCMC samplers. |
|
Thu 19 Jun, '08- |
CRiSM SeminarA1.01Ian Dryden, University of Nottingham |
|
Thu 12 Jun, '08- |
CRiSM SeminarA1.01Jonathan Dark, University of Melbourne
Dynamic hedging with futures that are subject to price limits The standard approaches to estimating minimum variance hedge ratios (MVHRs) are mis-specified when futures prices are subject to price limits. This paper proposes a bivariate tobit-FIGARCH model with maturity effects to estimate dynamic MVHRs using single and multiple period approaches. Simulations and an application to a commodity futures hedge support the proposed approach and highlight the importance of allowing for price limits when hedging. |
|
Thu 5 Jun, '08- |
CRiSM SeminarA1.01Thomas Nichols, GlaxoSmithKline Clinical Imaging Centre |
|
Thu 29 May, '08- |
CRiSM SeminarA1.01Geoff McLachlan, University of Queensland An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. As there are usually thousands of genes to be considered simultaneously, one encounters high-dimensional testing problems. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null (not differentially expressed). The problem can be expressed in a two-component mixture framework. Current methods of implementing this approach either have limitations due to the minimal assumptions made, or are computationally intensive owing to more specific assumptions. By converting to a z-score the value of the test statistic used to test the significance of each gene, we propose a simple two-component normal mixture that models adequately the distribution of this score. The approach provides an estimate of the local false discovery rate (FDR) for each gene, which is taken to be the posterior probability that the gene is null. Genes with the local FDR less than a specified threshold C are taken to be differentially expressed. For a given C, this approach also provides estimates of the implied overall errors such as the (global) FDR and the false negative/positive rates. |
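A stripped-down sketch of the two-component approach (simulated z-scores and a null component fixed at the theoretical N(0,1) are both simplifying assumptions of mine, not the talk's full method): fit the mixture by EM and read off the local FDR as the posterior probability of the null.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
# Simulated z-scores: 90% null N(0,1), 10% differentially expressed N(3,1)
z = np.concatenate([rng.normal(0.0, 1.0, 900), rng.normal(3.0, 1.0, 100)])

# EM for f(z) = pi0*N(0,1) + (1-pi0)*N(mu1, sig1^2)
pi0, mu1, sig1 = 0.8, 2.0, 1.0
for _ in range(200):
    f0 = pi0 * norm.pdf(z)                   # null component
    f1 = (1 - pi0) * norm.pdf(z, mu1, sig1)  # non-null component
    tau = f0 / (f0 + f1)                     # E-step: posterior prob. of null
    pi0 = tau.mean()                         # M-step updates
    w = 1 - tau
    mu1 = (w * z).sum() / w.sum()
    sig1 = np.sqrt((w * (z - mu1) ** 2).sum() / w.sum())

local_fdr = pi0 * norm.pdf(z) / (pi0 * norm.pdf(z) + (1 - pi0) * norm.pdf(z, mu1, sig1))
n_called = int((local_fdr < 0.2).sum())  # genes declared differentially expressed
print(round(float(pi0), 2), n_called)
```

Thresholding `local_fdr` at C = 0.2 plays the role of the cutoff in the abstract; averaging `local_fdr` over the called genes gives an estimate of the corresponding global FDR.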
|
Thu 22 May, '08- |
CRiSM SeminarA1.01Thomas Richardson, University of Washington I will first review well-known differences between odds ratios, relative risks and risk differences. These results motivate the development of methods, analogous to logistic regression, for estimating the latter two quantities. I will then describe simple parametrizations that facilitate maximum-likelihood estimation of the relative risk and risk-difference. Further, these parametrizations allow for doubly-robust g-estimation of the relative risk and risk difference. (Joint work with James Robins, Harvard School of Public Health). |
|
Thu 8 May, '08- |
CRiSM SeminarA1.01Cees Diks, University of Amsterdam
Linear and Nonlinear Causal Relations in Exchange Rates and Oil Spot and Futures Prices Various tests have been proposed recently in the literature for detecting causal relationships between time series. I will briefly review the traditional linear methods and some more recent contributions on testing for nonlinear Granger causality. The relative benefits and limitations of these methods are then compared in two different case studies with real data. In the first case study causal relations between six main currency exchange rates are considered. After correcting for linear causal dependence using VAR models there is still evidence of the presence of nonlinear causal relations between these currencies. ARCH and GARCH effects are insufficient to fully account for the nonlinear causality found. The second case study focuses on nonlinear causal linkages between daily spot and futures prices at different maturities of West Texas Intermediate crude oil. The results indicate that after correcting for possible cointegration, linear dependence and multivariate GARCH effects, some causal relations are still statistically significant. In both case studies the conclusion is that non-standard models need to be developed to fully capture the higher-order nonlinear dependence in the data. |
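The linear (Granger) side of the analysis can be sketched in a few lines (an illustrative F-test on simulated series; the variable names, lag order and parameters are invented here, not taken from the case studies): x Granger-causes y when lagged values of x significantly improve the autoregressive fit of y.

```python
import numpy as np

def granger_f(y, x, p=2):
    """F-statistic for linear Granger causality from x to y with p lags:
    compare a restricted AR(p) model for y with the same model
    augmented by p lags of x."""
    n = len(y)
    Y = y[p:]
    lags_y = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    ones = np.ones((n - p, 1))
    Xr = np.hstack([ones, lags_y])           # restricted design
    Xu = np.hstack([ones, lags_y, lags_x])   # unrestricted design
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    df_u = n - p - Xu.shape[1]
    return ((rss_r - rss_u) / p) / (rss_u / df_u)

rng = np.random.default_rng(3)
n = 500
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):                # x drives y with one lag
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.3 * rng.standard_normal()
F_xy = granger_f(y, x)               # causality present: large F
F_yx = granger_f(x, y)               # no reverse causality: small F
print(F_xy > F_yx)
```

The nonlinear tests reviewed in the talk work on exactly the residuals such a linear fit leaves behind, which is why linear pre-whitening comes first in both case studies.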
|
Thu 1 May, '08- |
CRiSM SeminarA1.01Alastair Young, Imperial College London |
|
Thu 3 Apr, '08- |
CRiSM SeminarA1.01Professor Jay Kadane, Carnegie Mellon University
Driving While Black: Statisticians Measure Discriminatory Law Enforcement (joint work with John Lamberth) The US Constitution guarantees "equal protection under the law" regardless of race, but sometimes law enforcement practices have failed to adhere to this standard. In the 1990s, a suit was brought alleging that the New Jersey State Police were stopping Blacks at disproportionately high rates in the southern end of the New Jersey Turnpike. In this talk I * review the evidence in that case, the decision, and its immediate aftermath * discuss criticisms of that decision * examine new evidence that rebuts those criticisms * comment on the extent to which the Constitutional standard is now being met. |
|
Thu 13 Mar, '08- |
CRiSM SeminarA1.01Prof Antony Pettitt, Lancaster University
|
|
Thu 6 Mar, '08- |
CRiSM SeminarA1.01Dr Cliona Golden, UCD, Dublin
On the validity of ICA for fMRI data Functional Magnetic Resonance Imaging (fMRI) is a brain-imaging technique which, over time, records changes in blood oxygenation level that can be associated with underlying neural activity. However, fMRI images are very noisy and extracting useful information from them calls for a variety of methods of analysis. I will discuss the validity of the use of two popular Independent Component Analysis (ICA) algorithms, InfoMax and FastICA, which are commonly used for fMRI data analysis. Tests of the two algorithms on simulated, as well as real, fMRI data, suggest that their successes are related to their ability to detect "sparsity" rather than the independence which ICA is designed to seek. |
|
Thu 28 Feb, '08- |
CRiSM SeminarA1.01Alexey Koloydenko & Juri Lember (Joint Talk), University of Nottingham |
|
Mon 18 Feb, '08- |
CRiSM SeminarA1.01Terry Speed, University of California, Berkeley |
|
Thu 14 Feb, '08- |
CRiSM SeminarA1.01Professor Simon Wood, University of Bath |
|
Thu 31 Jan, '08- |
CRiSM SeminarA1.01Dr Robert Gramacy, Statistical Laboratory Cambridge |
|
Thu 24 Jan, '08- |
CRiSM SeminarA1.01Dr Richard Samworth, Statistical Laboratory, Cambridge We show that if $X_1,...,X_n$ are a random sample from a log-concave density $f$ in $\mathbb{R}^d$, then with probability one there exists a unique maximum likelihood estimator $\hat{f}_n$ of $f$. The use of this estimator is attractive because, unlike kernel density estimation, the estimator is fully automatic, with no smoothing parameters to choose. The existence proof is non-constructive, however, and in practice we require an iterative algorithm that converges to the estimator. By reformulating the problem as one of non-differentiable convex optimisation, we are able to exhibit such an algorithm. We will also show how the method can be combined with the EM algorithm to fit finite mixtures of log-concave densities. The talk will be illustrated with pictures from the R package LogConcDEAD. |
|
Thu 17 Jan, '08- |
CRiSM SeminarDr Elena Kulinskaya, Statistical Advisory Service, Imperial College |
|
Thu 13 Dec, '07- |
CRiSM SeminarA1.01Professor Malcolm Faddy, Queensland University of Technology, Australia Hospital length of stay data typically show a distribution with a mode near zero and a long right tail, and can be hard to model adequately. Traditional models include the gamma and log-normal distributions, both with a quadratic variance-mean relationship. Phase-type distributions, which describe the length of time to absorption of a Markov chain with a single absorbing state, also have a quadratic variance-mean relationship. Covariates of interest include an estimate of the length of stay for an uncomplicated admission, with excess length of stay modelled relative to this quantity either multiplicatively or additively. A number of different models can therefore be constructed, and the results of fitting these models will be discussed in terms of goodness of fit, significance of covariate effects and estimation of quantities of interest to health economists. |
|
Thu 29 Nov, '07- |
CRiSM SeminarA1.01Dr Sumeet Singh, Signal Processing Laboratory, Cambridge We consider the inference of a hidden spatial Point Process (PP) X on a CSMS (complete separable metric space) X, from a noisy observation y modeled as the realisation of another spatial PP Y on a CSMS Y. We consider a general model for the observed process Y which includes thinning and displacement and characterise the posterior distribution of X for a Poisson and Gauss-Poisson prior. These results are then applied in a filtering context when the hidden process evolves in discrete time in a Markovian fashion. The dynamics of X considered are general enough for many target tracking applications, which is an important study area in Engineering. Accompanying numerical implementations based on Sequential Monte Carlo will be presented. |
|
Thu 22 Nov, '07- |
CRiSM SeminarA1.01Prof Gareth Roberts, University of Warwick (Joint Statistics/Econometrics Seminar) |
|
Thu 15 Nov, '07- |
CRiSM SeminarA1.01Dr Daniel Farewell, Cardiff University Models for longitudinal measurements truncated by possibly informative dropout have tended to be either mathematically complex or computationally demanding. I will review an alternative recently proposed in our RSS discussion paper (Diggle et al. 2007), using simple ideas from event-history analysis (where censoring is commonplace) to yield moment-based estimators for balanced, continuous longitudinal data. I shall then discuss some work in progress: extending these ideas to more general longitudinal data, while maintaining simplicity of understanding and implementation. |
|
Thu 1 Nov, '07- |
CRiSM SeminarA1.01Prof Valentine Genon-Catalot, Paris 5 Consider a pair signal-observation ((x_n, y_n), n > 0) where the unobserved signal (x_n) is a Markov chain and the observed component is such that, given the whole sequence (x_n), the random variables (y_n) are independent and the conditional distribution of y_n only depends on the corresponding state variable x_n. Concrete problems raised by these observations are the prediction, filtering or smoothing of (x_n). This requires the computation of the conditional distributions of x_l given y_n, ..., y_1, y_0 for all l, n. We introduce sufficient conditions allowing to obtain explicit formulae for these conditional distributions and extend the notion of finite dimensional filters using mixtures of distributions. The method is applied to the case where the signal x_n = X_{nΔ} is a discrete sampling of a one-dimensional diffusion process: concrete models are shown to satisfy our conditions. Moreover, for these models, exact likelihood inference based on the observation (y_0, ..., y_n) is feasible.
|
|
Wed 31 Oct, '07- |
CRiSM SeminarA1.01Dr Alex Schmidt, Instituto de Matematica - UFRJ, Brazil |
|
Thu 25 Oct, '07- |
CRiSM SeminarA1.01Prof Wilfrid Kendall, University of Warwick How efficiently can one move about in a network linking a configuration of n cities? Here the notion of "efficient" has to balance (a) total network length against (b) short network distances between cities. My talk will explain how to use Poisson line processes to produce networks which are nearly of shortest total length and which make the average inter-city distance almost Euclidean. |
|
Thu 11 Oct, '07- |
CRiSM seminarOXFORD-WARWICK JOINT SEMINAR Prof Odd Aalen, Dept of Biostatistics, University of Oslo (3-4pm Rm L4, Science Concourse Main Level) Prof Geert Molenberghs, Centre for Statistics, Hasselt University, Belgium (4.30-5.30 Rm LIB1, Library) |