Currently confirmed plenary speakers are:
- John Aston - University of Cambridge
- Title: Functional Data in Constrained Spaces for Understanding Language
- Abstract: Functional Data Analysis is concerned with the statistical analysis of data which are curves or surfaces. There has been considerable progress made in this area over the last 20-30 years, but most of this work has focused on 1-dimensional curves living in a standard space such as the space of square integrable functions. However, many real data applications, such as those from linguistics, involve data that are not simple curves and come with considerable constraints. These constraints are integral to understanding the application, but are also pivotal to how the statistical analysis must be performed. Considering constrained functional data will allow us to think about how we might generate ancient sounds from the past.
[Joint work with Davide Pigoli, Shahin Tavakoli and John Coleman]
- Bärbel Finkenstädt
- Title: Hidden Markov Modelling for Digital Circadian and Sleep Health
- Abstract: Telemonitoring of circadian rhythmicity in physical activity (PA), sleep, and body temperature could identify individuals at increased risk of poor health, including cancer and cardiovascular diseases, and could be used to support chronomodulated therapies and personalized prevention. Among the numerous application fields of hidden Markov models (HMMs), there is growing interest in e-Health in gaining insight into an individual’s health status from relevant biomarker data such as PA, which can be measured easily, objectively and non-obtrusively under normal living conditions using accelerometry or actigraphy with wearable sensing devices. We will present novel HMM approaches with circadian-clock-driven transition probabilities, which give rise to model-derived and interpretable "circadian parameters" for monitoring and quantifying a subject’s circadian rhythm, as well as the quality and quantity of sleep. Results of applications to individuals at risk, such as night-shift workers and cancer patients, will be shown, and an outlook towards statistical telemonitoring will be discussed.
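The idea of circadian-clock-driven transition probabilities can be illustrated with a toy simulation. Everything below (the two states, the sinusoidal switching rate, the function names and parameter values) is a hypothetical sketch for intuition, not the speaker's model:

```python
import numpy as np

def circadian_transition(t_hours, base=0.05, amp=0.04, phase=0.0):
    """Hypothetical switching probability driven by a 24-hour clock:
    a sinusoid modulating a baseline rate, kept within (0, 1)."""
    return base + amp * np.sin(2 * np.pi * (t_hours - phase) / 24)

def simulate_states(n_hours, rng):
    """Simulate a two-state chain (0 = rest, 1 = active) whose transition
    probabilities vary with time of day; the rest-to-active rate is
    phase-shifted by 12 hours relative to active-to-rest."""
    state, states = 1, []
    for t in range(n_hours):
        if state == 1:
            p_switch = circadian_transition(t)
        else:
            p_switch = circadian_transition(t, phase=12)
        if rng.random() < p_switch:
            state = 1 - state
        states.append(state)
    return states

rng = np.random.default_rng(0)
print(simulate_states(48, rng))  # two simulated days of hourly rest/active states
```

In a real analysis the states would be hidden and inferred from accelerometry via the HMM machinery; here they are simulated directly just to show how a clock-driven transition probability behaves.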
- Saul Jacka
- Title: Derivatives, Risk and Regulation
- Abstract: The inception of Mathematical Finance (MF) is normally dated from the publication of Black and Scholes in 1973, but the roots in insurance, finance and probability go back a lot further.
Modern developments are very broad and bring in techniques from functional analysis, statistics, dynamical systems, stochastic control and machine learning, while a major emphasis is now on regulation and reserving (in the old-fashioned insurance sense).
We’ll talk about a little MF history, removing some of the artificial assumptions underlying “classical” MF, and where this leads in (some) modern developments.
- Wilfrid Kendall
- Title: From Buffon's needle to random spatial networks
- Abstract: A classic theme of stochastic geometry started 300 years ago, when Buffon described how to estimate pi by examining how randomly thrown baguettes landed on a ruled floor. The talk will connect this to modern work on random spatial transportation networks: how can they be made efficient? What might one say about traffic in network models? And to what extent can one construct scale-invariant random spatial networks that are flexible enough to permit statistical modelling?
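Buffon's classic experiment is easy to reproduce in a few lines. This is a minimal Monte Carlo sketch (not from the talk): a needle of length at most the line spacing crosses a ruled line with probability 2l/(πd), which can be inverted to estimate pi:

```python
import random
import math

def buffon_pi(n_throws, needle_len=1.0, line_gap=1.0):
    """Estimate pi via Buffon's needle (requires needle_len <= line_gap).
    Note: sampling the angle uses math.pi, so this demo is slightly
    circular; a fully pi-free variant samples a random unit direction
    by rejection instead."""
    crossings = 0
    for _ in range(n_throws):
        # Distance from the needle's centre to the nearest ruled line,
        # and the acute angle between needle and lines.
        x = random.uniform(0, line_gap / 2)
        theta = random.uniform(0, math.pi / 2)
        if x <= (needle_len / 2) * math.sin(theta):
            crossings += 1
    # P(cross) = 2 * needle_len / (pi * line_gap); solve for pi.
    return 2 * needle_len * n_throws / (line_gap * crossings)

random.seed(0)
print(buffon_pi(100_000))  # close to 3.14 for large n_throws
```

The standard error of the estimate shrinks like 1/sqrt(n_throws), so roughly 100x more throws are needed per extra digit of accuracy.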
- Silvia Liverani
- Title: Bayesian modelling for spatially misaligned areal data
- Abstract: I will present a method for spatially misaligned areal data using the multiple membership principle and a weighted average of conditional autoregressive spatial random effects. This allows us to embed spatial dependence to model misaligned outcome variables and estimate relative risks. I will also discuss the parametrisation and identifiability of this model. The methods are illustrated with an application of this modelling strategy to diabetes prevalence data in South London.
- Tony O'Hagan
- Title: Neutron overpower protection trip setpoint for CANDU nuclear reactors
- Abstract: One of several safety systems in a CANDU (Canada Deuterium Uranium) nuclear reactor is called the NOP (Neutron Overpower Protection) system. Several detectors in the core monitor neutron flux, and an emergency shutdown is triggered if the flux reading in one (or more, depending on configuration) detectors exceeds a level known as the TSP (Trip Setpoint). Several factors make determination of a suitable TSP complex.
- A unique feature of the CANDU design of nuclear reactors is that individual fuel channels are refuelled without shutting the reactor down. This means that the distribution and balance of flux across the core is constantly changing.
- The principal role of the NOP TSP is to protect against anticipated failures or malfunctions of the reactor regulatory system, together with other design basis events. The TSP therefore has to operate effectively for a wide variety of events that may arise outside normal operations.
- Flux detectors only monitor a few points in the core (and are subject to measurement error and calibration drift), and it is generally not possible to obtain direct and detailed measurements of processes in the core. Most of the data used for analysing a TSP comes from complex computer codes that model the nuclear physics and the thermalhydraulics.
- Gareth Roberts
- Title: Retrospective Simulation
- Abstract: This presentation will describe a collection of stochastic simulation techniques known as retrospective simulation methods. These simple techniques subvert the normal order of simulation operations within an algorithm often leading to striking efficiency gains. These methods were designed as essential tools for simulation and inference from statistical models involving diffusions and related stochastic processes. They have also been applied to Bayesian inference for various Dirichlet mixture models without the need for truncation/approximation. The methods are frequently employed as important components within MCMC algorithms.
The main example presented in this talk, however, will be a pure simulation problem. The FIFA World Cup draw allocates 32 teams into 8 groups of 4, but many configurations are excluded by geographical and seeding constraints. The problem is how this can be done in a sequential fashion (as required for the entertainment value of the draw) while achieving a draw that is distributed uniformly among all possible valid configurations. The procedure adopted by FIFA (for example in the 2022 draw) can be shown to be biased. However, retrospective simulation methods can be used to provide a practical and completely unbiased alternative.
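To see why uniformity over constrained configurations is non-trivial, here is the simplest exactly-uniform baseline: plain rejection sampling on a toy draw. The teams, constraint, and scale are invented for illustration; this is not the retrospective method of the talk (which achieves the same uniform law sequentially, without restarting the whole draw):

```python
import random

def draw_groups(teams, n_groups, constraint, rng=random):
    """Rejection sampler: propose a uniformly random allocation into
    equal-sized groups and accept only if every group satisfies the
    constraint. Accepted draws are exactly uniform over all valid
    configurations, but the whole draw restarts on each rejection."""
    size = len(teams) // n_groups
    while True:
        pool = teams[:]
        rng.shuffle(pool)
        groups = [pool[i * size:(i + 1) * size] for i in range(n_groups)]
        if all(constraint(g) for g in groups):
            return groups

# Toy data: 8 teams, two per confederation A-D; constraint (as in the
# real draw's geographical rules, simplified): at most one team from
# each confederation per group.
teams = [(f"T{i}", conf) for i, conf in enumerate("AABBCCDD")]

def distinct_confederations(group):
    return len({conf for _, conf in group}) == len(group)

random.seed(1)
print(draw_groups(teams, 2, distinct_confederations))
```

At World Cup scale the acceptance probability can be tiny and the restarts clash with a live televised sequential reveal, which is exactly the gap the retrospective simulation approach addresses.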
- Jim Smith
- Title: Dynamic Subjective Bayes - Managing Massive Data through the Language of Probability
- Abstract: Once upon a time, foundational Bayes was generally perceived as, at best, an unrealisable ideal applicable only to toy problems in applied statistics. Over my lifetime this perception has been stood on its head. It is now common to apply Bayesian computational methodologies to most of the traditional domains addressed by statisticians - often using so-called objective methods.
However, over the last 10 years, dynamic streaming (albeit patchy) data sets have become massive, and parameter spaces are then of extremely high dimension. Even specifying credible full joint distributions over these spaces is a severe challenge, and we find in practice that the vast data sets available do not really cover the areas of the problem of interest. Therefore prior structural information - elicited using natural language - together with a utility focus, only then embellished by probabilistic judgements, is required before reliable inferences become feasible.
In this talk I will demonstrate how sound inferential principles and sophisticated mathematics can be used to guide dynamic decisions in one such complex domain to provide new tools for real time decision support. I will argue that if statisticians are prepared to use probability models as a language around which to explore and develop hypotheses then the scope of the problems addressed by statisticians is excitingly expanded and urge that we do this.
- Dootika Vats
- Title: Comparing apples to oranges: a universal effective sample size
- Abstract: Effective sample size (ESS) is a popular, powerful, and practical numerical summary for assessing the performance of a Markov chain Monte Carlo (MCMC) sampler. ESS estimates the number of iid samples that would return the same variance of an estimator as a given MCMC sample. The idea of an ESS is also used in importance sampling to determine the quality of the proposal, although it is often remarked that comparing the ESS in importance sampling to the ESS in MCMC would be like comparing apples to oranges. In this talk, I will compare apples to oranges! I present a unifying framework for ESS that allows users to compare MCMC and importance sampling for a given estimation problem. I further discuss how the ESS can be employed to arrive at principled stopping rules for simulations. Some open problems and practical concerns will be presented in addition to a few examples.
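For reference, the quantity usually meant by "ESS in importance sampling" is the Kish-style summary below; this is the standard formula often said to be incomparable to MCMC's autocorrelation-based ESS, not the unified framework the talk introduces:

```python
import numpy as np

def importance_ess(weights):
    """Kish effective sample size for importance sampling:
    ESS = (sum of weights)^2 / (sum of squared weights).
    Equals n for equal weights and degrades as weights concentrate."""
    w = np.asarray(weights, dtype=float)
    return w.sum() ** 2 / (w ** 2).sum()

print(importance_ess(np.ones(100)))        # equal weights recover n: 100.0
print(importance_ess([10.0, 1.0, 1.0]))    # skewed weights: well below 3
```

By contrast, the MCMC ESS deflates the run length n by the chain's integrated autocorrelation time, which is why naively comparing the two numbers across methods is the "apples to oranges" problem of the title.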
- Mike West
- Title: Bayesian Forecasting and Decisions – with some attention to a 50+ year perspective
- Abstract: Bayesian analysis, and its roles in time series forecasting, prediction more broadly, and as the foundation for coherent decision analysis, is and has been a central pillar of the success and development of our discipline for the last several decades. Bayesian modelling and subjective reasoning for prediction and decisions also defined the foundation – and continues as a core theme and central focus of the intellectual community – of and for the Department of Statistics at Warwick.
Continuing in this tradition, I will discuss some recent developments in Bayesian thinking addressing questions of statistical model assessment, calibration, comparison and combination that define continuing conceptual and practical challenges to all areas of quantitative analysis. Bayesian predictive synthesis (BPS) has evolved to advance methodology in areas including forecast distribution combination with defined predictive goals, while also highlighting foundational questions on the scope of Bayesian model uncertainty analysis as it is traditionally understood. More recent evolution of these core foundational trends in thinking and resulting methodology have broadened perspective to remind and refocus attention on decision analysis, and to emphasise both predictive and decision analytic goals in the model uncertainty, evaluation and synthesis enterprise. The broader foundational framework and theory of Bayesian predictive decision synthesis (BPDS) emerges. Applied contexts defining examples for this event and talk link to areas of applied focus that have been central in the annals of statistics at Warwick – including optimal design for regression prediction, and sequential time series forecasting for financial portfolio decisions.