CRiSM Seminars
Seminars take place in Room A1.01, Dept of Statistics, University of Warwick at 4pm, unless otherwise stated. There will be tea, coffee and biscuits in the Statistics Common Room (Room C0.06) at 3.30pm. After the seminar there will usually be wine and snacks. 
In particular, we ask all postgraduate students to attend the seminars. Please come and join us for a glass of wine afterwards. For more information on the CRiSM seminar series please contact Dr Julia Brettschneider, email Julia dot Brettschneider at warwick dot ac dot uk. 
Event Diary
Thu 17 Jan, '19 
CRiSM Seminar, MSB2.23
Prof. Galin Jones, School of Statistics, University of Minnesota (14:00–15:00)
Bayesian Spatiotemporal Modeling Using Hierarchical Spatial Priors, with Applications to Functional Magnetic Resonance Imaging
We propose a spatiotemporal Bayesian variable selection model for detecting activation in functional magnetic resonance imaging (fMRI) settings. Following recent research in this area, we use binary indicator variables for classifying active voxels. We assume that the spatial dependence in the images can be accommodated by applying an areal model to parcels of voxels. The use of parcellation and a spatial hierarchical prior (instead of the popular Ising prior) results in a posterior distribution amenable to exploration with an efficient Markov chain Monte Carlo (MCMC) algorithm. We study the properties of our approach by applying it to simulated data and an fMRI data set.
Dr. Flavio Goncalves, Universidade Federal de Minas Gerais, Brazil (15:00–16:00)
Exact Bayesian inference in spatiotemporal Cox processes driven by multivariate Gaussian processes
In this talk we present a novel methodology to perform exact Bayesian inference for spatiotemporal Cox processes where the intensity function depends on a multivariate Gaussian process. Dynamic Gaussian processes are introduced to allow for evolution of the intensity function over discrete time. The novelty of the method lies in the fact that no discretisation error is involved, despite the intractability of the likelihood function and the infinite dimensionality of the problem. The method is based on a Markov chain Monte Carlo algorithm that samples from the joint posterior distribution of the parameters and latent variables of the model. The models are defined in a general and flexible way, but they are amenable to direct sampling from the relevant distributions, due to careful characterisation of their components.
The models also allow for the inclusion of regression covariates and/or temporal components to explain the variability of the intensity function. These components may be subject to relevant interaction with space and/or time. Real and simulated examples illustrate the methodology, followed by concluding remarks. 
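The discretisation-free inference described above rests on treating the point process exactly rather than on a time grid. As a loosely related illustration, here is a minimal Python sketch of exact simulation of an inhomogeneous Poisson process by thinning, with a toy fixed intensity standing in for a draw of the latent Gaussian process (all names and values are illustrative, not from the talk):

```python
# Hedged sketch: exact simulation of an inhomogeneous Poisson process on
# [0, T] by thinning; a Cox process would draw intensity() at random first.
import math
import random

def simulate_by_thinning(intensity, lam_max, T, seed=7):
    """Simulate a Poisson process with intensity(t) <= lam_max on [0, T],
    by thinning a rate-lam_max homogeneous Poisson process (no time grid,
    hence no discretisation error)."""
    random.seed(seed)
    points, t = [], 0.0
    while True:
        t += random.expovariate(lam_max)      # next candidate arrival
        if t > T:
            return points
        if random.random() <= intensity(t) / lam_max:
            points.append(t)                  # keep with prob intensity/bound

# Toy smooth intensity; in the talk it would be driven by a Gaussian process.
intensity = lambda t: 5.0 * math.exp(math.sin(t))
pts = simulate_by_thinning(intensity, lam_max=5.0 * math.e, T=50.0)
print(len(pts))
```

The bound `lam_max` must dominate the intensity everywhere on [0, T]; here it is simply 5e, the maximum of the toy intensity.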

Thu 31 Jan, '19 
CRiSM Seminar, MSB2.23
Prof. Paul Fearnhead, Lancaster University (14:00–15:00)
Efficient Approaches to Changepoint Problems with Dependence Across Segments
Changepoint detection is an increasingly important problem across a range of applications. It is most commonly encountered when analysing time-series data, where changepoints correspond to points in time where some feature of the data, for example its mean, changes abruptly. Often there are important computational constraints when analysing such data, with the number of data sequences and their lengths meaning that only very efficient methods for detecting changepoints are practically feasible. A natural way of estimating the number and location of changepoints is to minimise a cost that trades off a measure of fit to the data against the number of changepoints fitted. There are now some efficient algorithms that can exactly solve the resulting optimisation problem, but they are only applicable in situations where there is no dependence of the mean of the data across segments. Using such methods can lead to a loss of statistical efficiency in situations where, for example, it is known that the change in mean must be positive. This talk will present a new class of efficient algorithms that can exactly minimise our cost whilst imposing certain constraints on the relationship of the mean before and after a change. These algorithms have links to recursions that are seen for discrete-state hidden Markov models and within sequential Monte Carlo. We demonstrate the usefulness of these algorithms on problems such as detecting spikes in calcium imaging data. Our algorithm can analyse data of length 100,000 in less than a second, and has been used by the Allen Brain Institute to analyse the spike patterns of over 60,000 neurons. (This is joint work with Toby Hocking, Sean Jewell, Guillem Rigaill and Daniela Witten.)
Dr. Sandipan Roy, Department of Mathematical Sciences, University of Bath (15:00–16:00)
Network Heterogeneity and Strength of Connections
Abstract: Detecting the strength of connection in a network is a fundamental problem in understanding the relationships among individuals. Often it is more important to understand how strongly two individuals are connected than the mere presence or absence of an edge. This paper introduces a new concept of strength of connection in a network through a nonparametric object called the "Grafield". A "Grafield" is a piecewise constant bivariate kernel function that compactly represents the affinity or strength of ties (or interactions) between every pair of vertices in the graph. We estimate the "Grafield" function through a spectral analysis of the Laplacian matrix, followed by hard thresholding (Gavish & Donoho, 2014) of the singular values. Our estimation methodology is also valid for asymmetric directed networks. As a by-product, we obtain an efficient procedure for estimating the edge probability matrix as well. We validate our proposed approach with several synthetic experiments and compare it with existing algorithms for edge probability matrix estimation. We also apply our proposed approach to three real datasets, studying the strength of connection in (a) a social messaging network, (b) a network of political parties in the US Senate and (c) a neural network of neurons and synapses in C. elegans, a type of worm.
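The hard-thresholding step cited in Dr. Roy's abstract (Gavish & Donoho, 2014) can be sketched in isolation. This toy example applies the square-matrix threshold, 2.858 times the median singular value, to denoise a noisy low-rank matrix; the full "Grafield" construction in the talk is much richer than this:

```python
# Hedged sketch: hard thresholding of singular values (Gavish & Donoho, 2014).
# The factor 2.858 is their optimal threshold for square matrices with
# unknown noise level.
import numpy as np

def denoise_by_hard_thresholding(M):
    """Zero out all singular values below 2.858 * median singular value."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    tau = 2.858 * np.median(s)
    s_hat = np.where(s > tau, s, 0.0)
    return U @ np.diag(s_hat) @ Vt

rng = np.random.default_rng(0)
n = 200
# Rank-1 signal plus i.i.d. noise scaled so the noise spectrum is O(1)
low_rank = 10.0 * np.outer(rng.standard_normal(n), rng.standard_normal(n)) / np.sqrt(n)
noisy = low_rank + rng.standard_normal((n, n)) / np.sqrt(n)
denoised = denoise_by_hard_thresholding(noisy)
print(np.linalg.norm(denoised - low_rank) < np.linalg.norm(noisy - low_rank))
```

With this scaling the noise singular values stay below about 2, well under the threshold, so only the signal direction survives.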
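The penalised-cost formulation in Prof. Fearnhead's abstract can be illustrated with the classic optimal-partitioning recursion for a change in mean. This is a minimal, unconstrained sketch; the talk's contribution, imposing constraints across segments, is not implemented here:

```python
# Hedged sketch: optimal partitioning for change-in-mean detection.
# Exactly minimises sum of squared-error segment costs plus beta per changepoint.
import numpy as np

def segment_cost(csum, csum2, s, t):
    """Squared-error cost of fitting one mean to data[s:t] (t exclusive)."""
    n = t - s
    total = csum[t] - csum[s]
    return (csum2[t] - csum2[s]) - total * total / n

def optimal_partitioning(data, beta):
    """Return changepoint locations minimising cost + beta * (#changepoints)."""
    data = np.asarray(data, dtype=float)
    n = len(data)
    csum = np.concatenate([[0.0], np.cumsum(data)])
    csum2 = np.concatenate([[0.0], np.cumsum(data ** 2)])
    F = np.full(n + 1, np.inf)   # F[t]: optimal penalised cost of data[:t]
    F[0] = -beta                 # so a single segment incurs no penalty
    last = np.zeros(n + 1, dtype=int)
    for t in range(1, n + 1):
        cands = [F[s] + segment_cost(csum, csum2, s, t) + beta for s in range(t)]
        s_star = int(np.argmin(cands))
        F[t], last[t] = cands[s_star], s_star
    cps, t = [], n               # backtrack the optimal segmentation
    while t > 0:
        t = last[t]
        if t > 0:
            cps.append(t)
    return sorted(cps)

data = np.concatenate([np.zeros(50), 5 * np.ones(50)])
print(optimal_partitioning(data, beta=2 * np.log(100)))  # expect [50]
```

This naive recursion is O(n^2); the pruning ideas behind algorithms like PELT, and the constrained recursions of the talk, are what make analysing sequences of length 100,000 in under a second possible.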

Thu 14 Feb, '19 
CRiSM Seminar, MSB2.23
Speaker: Prof. Ingo Scholtes, Department of Informatics, University of Zurich, Switzerland

Thu 28 Feb, '19 
CRiSM Seminar, MSB2.23
Prof. Valerie Isham, Department of Statistical Science, University College London, UK (15:00–16:00)

Thu 14 Mar, '19 
CRiSM Seminar, A1.01
Dr. Spencer Wheatley, ETH Zurich, Switzerland (15:00–16:00)
The "endo-exo" problem in financial market price fluctuations, and the ARMA point process
The "endo-exo" problem, i.e. decomposing system activity into exogenous and endogenous parts, lies at the heart of statistical identification in many fields of science. Consider, for example, the problem of determining whether an earthquake is a mainshock or an aftershock, or whether a surge in the popularity of a YouTube video is due to it "going viral", or simply due to high activity across the platform. Solution of this problem is often plagued by spurious inference (namely, falsely detecting strong interaction) due to neglect of trends, shocks and shifts in the data. The predominant point process model for endo-exo analysis in the field of quantitative finance is the Hawkes process. A comparison of this field with the relatively mature fields of econometrics and time series identifies the need to control more rigorously for trends and shocks. Doing so allows us to test the hypothesis that the market is "critical", analogous to a unit-root test commonly done in economic time series, and to challenge earlier results. Continuing the "lessons learned" from the time-series field, it is argued that the Hawkes point process is analogous to integer-valued AR time series. Following this analogy, we introduce the ARMA point process, which flexibly combines exogenous background activity (Poisson), shot-noise bursty dynamics and self-exciting (Hawkes) endogenous activity. We illustrate a connection to ARMA time-series models, as well as derive a Monte Carlo expectation-maximisation (MCEM) algorithm to enable maximum likelihood estimation for this process, and assess consistency by simulation study. Remaining challenges in estimation and model selection, as well as possible solutions, are discussed.
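For readers unfamiliar with the Hawkes process mentioned above, here is a minimal simulation by Ogata-style thinning with an exponential kernel. It is purely illustrative; the ARMA point process of the talk is a different, richer model, and all parameter values here are invented:

```python
# Hedged sketch: self-exciting Hawkes process with exponential kernel,
# intensity(t) = mu + sum over past events t_i of alpha*beta*exp(-beta*(t - t_i)),
# simulated by thinning. alpha is the branching ratio (alpha < 1: subcritical).
import math
import random

def simulate_hawkes(mu, alpha, beta, T, seed=1):
    random.seed(seed)
    events, t = [], 0.0
    while t < T:
        # Intensity at current time bounds the intensity until the next event
        lam_bar = mu + alpha * beta * sum(math.exp(-beta * (t - ti)) for ti in events)
        t += random.expovariate(lam_bar)       # candidate next event time
        if t >= T:
            break
        lam_t = mu + alpha * beta * sum(math.exp(-beta * (t - ti)) for ti in events)
        if random.random() <= lam_t / lam_bar: # accept with prob lam(t)/lam_bar
            events.append(t)
    return events

ev = simulate_hawkes(mu=0.5, alpha=0.5, beta=1.0, T=100.0)
# Expected count is roughly mu*T/(1 - alpha) = 100 here
print(len(ev))
```

The "criticality" tested in the talk corresponds to the branching ratio alpha approaching 1, where the endogenous feedback dominates, analogous to a unit root in AR time series.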


Thu 2 May, '19 
CRiSM Seminar, A1.01
Speaker: Dr. Ben Calderhead, Department of Mathematics, Imperial College London
Abstract: Quasi-Monte Carlo (QMC) methods for estimating integrals are attractive since the resulting estimators typically converge at a faster rate than pseudo-random Monte Carlo. However, they can be difficult to set up on arbitrary posterior densities within the Bayesian framework, in particular for inverse problems. We introduce a general parallel Markov chain Monte Carlo (MCMC) framework, for which we prove a law of large numbers and a central limit theorem. In that context, non-reversible transitions are investigated. We then extend this approach to the use of adaptive kernels and state conditions under which ergodicity holds. As a further extension, an importance sampling estimator is derived, for which asymptotic unbiasedness is proven. We consider the use of completely uniformly distributed (CUD) numbers within the above algorithms, which leads to a general parallel quasi-MCMC (QMCMC) methodology. We prove consistency of the resulting estimators and demonstrate numerically that the error of this approach scales close to n^{-2} as we increase parallelisation, instead of the usual n^{-1} that is typical of standard MCMC algorithms. In practical statistical models we observe multiple orders of magnitude improvement compared with pseudo-random methods.
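The faster QMC convergence mentioned at the start of the abstract is easy to see on a toy integral. This sketch compares pseudo-random points with a base-2 van der Corput low-discrepancy sequence, used here as a simple stand-in for the CUD sequences of the talk:

```python
# Hedged illustration (not the quasi-MCMC method itself): estimating
# the integral of x^2 on [0,1] (true value 1/3) with pseudo-random
# points versus a low-discrepancy sequence.
import random

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput sequence (digit reversal)."""
    seq = []
    for i in range(1, n + 1):
        x, denom, k = 0.0, 1.0, i
        while k:
            denom *= base
            k, rem = divmod(k, base)
            x += rem / denom
        seq.append(x)
    return seq

def estimate(points):
    """Equal-weight quadrature estimate of the integral of x^2."""
    return sum(x * x for x in points) / len(points)

random.seed(0)
n = 4096
mc_err = abs(estimate([random.random() for _ in range(n)]) - 1 / 3)
qmc_err = abs(estimate(van_der_corput(n)) - 1 / 3)
print(mc_err, qmc_err)  # QMC error is typically far smaller at the same n
```

For smooth integrands the low-discrepancy error shrinks close to 1/n, versus the 1/sqrt(n) Monte Carlo rate; the parallel QMCMC method of the talk pursues an analogous gain for posterior expectations.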

Mon 13 May, '19 
CRiSM Seminar, MB0.07
Prof. Renaud Lambiotte, University of Oxford, UK (15:00–16:00)

Thu 30 May, '19 
CRiSM Seminar, A1.01
Dr. Yoav Zemel, University of Göttingen, Germany (15:00–16:00)

Thu 13 Jun, '19 
CRiSM Seminar, MSB2.22
Prof. Karla Hemming, University of Birmingham, UK (15:00–16:00)
Speaker: Clair Barnes, University College London, UK
Death & the Spider: post-processing multi-ensemble weather forecasts with uncertainty quantification
Ensemble weather forecasts often under-represent uncertainty, leading to overconfidence in their predictions. Multi-model forecasts combining several individual ensembles have been shown to display greater skill than single-ensemble forecasts in predicting temperatures, but tend to retain some bias in their joint predictions. Established post-processing techniques are able to correct bias and calibration issues in univariate forecasts, but are generally not designed to handle multivariate forecasts (of several variables, or at several locations, say) without separate specification of the structure of the inter-variable dependence. We propose a flexible multivariate Bayesian post-processing framework, developed around a directed acyclic graph representing the relationships between the ensembles and the observed weather. The posterior forecast is inferred from the ensemble forecasts and an estimate of their shared discrepancy, which is obtained from a collection of past forecast-observation pairs. The approach is illustrated with an application to forecasts of UK surface temperatures during the winter periods from 2007 to 2013.
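As a toy illustration of estimating a shared discrepancy from past forecast-observation pairs and folding it into a posterior forecast, here is a one-dimensional Gaussian sketch. All numbers are invented, and the talk's DAG-based multivariate framework is far more general than this:

```python
# Hedged toy sketch: one-dimensional Bayesian post-processing of an
# ensemble forecast, with a bias ("discrepancy") estimated from past data.
import statistics

# Past forecast-observation pairs (invented) give the shared discrepancy
past_forecasts = [2.1, 3.4, 0.8, 5.0, 4.2]
past_observations = [1.5, 2.6, 0.1, 4.1, 3.5]
bias = statistics.mean(f - o for f, o in zip(past_forecasts, past_observations))

# Today's ensemble members (invented); debias, then combine with a
# climatological prior via a conjugate Gaussian precision weighting.
ensemble = [6.2, 5.8, 6.5, 6.0]
debiased_mean = statistics.mean(ensemble) - bias
prior_mean, prior_var = 4.0, 9.0           # climatology (assumed values)
forecast_var = statistics.variance(ensemble)  # spread as error proxy (assumed)
w = prior_var / (prior_var + forecast_var / len(ensemble))
posterior_mean = (1 - w) * prior_mean + w * debiased_mean
print(round(posterior_mean, 2))
```

Using the ensemble spread as the forecast error variance is exactly the assumption that fails when ensembles under-represent uncertainty, which is one motivation for learning the discrepancy from data instead.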


Tue 25 Jun, '19 
CRiSM Seminar, MS.05
Prof. Malgorzata Bogdan, University of Wroclaw, Poland (15:00–16:00)