
Abstracts


Talks

  • Paul Embrechts (ETH) - Topics in Quantitative Risk Management Short Course
    • Abstract: In this short course I will discuss three problems from the realm of Quantitative Risk Management.
      These topics are based on the following recent papers:
      • Topic 1: Embrechts, P., Wang, B., Wang, R. (2015): Aggregation-robustness and model uncertainty of regulatory risk measures. Finance and Stochastics 19(4), 763-790.
      • Topic 2: Chavez-Demoulin, V., Embrechts, P., Hofert, M. (2016): An extreme value approach for modeling Operational Risk losses depending on covariates. Journal of Risk and Insurance 83(3), 735-776.
      • Topic 3: Embrechts, P., Hofert, M., Wang, R. (2016): Bernoulli and tail-dependence compatibility. Annals of Applied Probability 26(3), 1636-1658.
  • Vicky Henderson (Warwick) - Prospect Theory in a Dynamic Context
    • Abstract: Prospect theory is one of the most popular non-expected utility models, designed to capture biases in behaviour that are well documented in experimental studies. In fact, Kahneman and Tversky's (1979, 1992) papers on prospect theory are among the most highly cited papers in economics. One important component of prospect theory is the overweighting of unlikely events, captured by probability weighting. For example, this has been used to explain why people both buy insurance and lottery tickets. In this talk I will present some recent progress on dynamic prospect theory models with probability weighting. We will find that people can change their minds, that in some settings they might "gamble until the bitter end" and find randomization attractive, but that, fortunately, in other settings we can better capture features of investor behaviour, including the desire for stop-loss strategies, right-skewed payoffs, and the disposition effect. (A short numerical sketch of probability weighting follows this list of talks.)
  • Alexander McNeil (York) - More Powerful Backtests for Market Risk Models Using Realized p-Values
    • Abstract: We present an overarching approach to the statistical backtesting of market risk models over daily or other short time horizons using weighted realized p-values. This approach subsumes VaR exception tests at one or more levels as well as other tests that have been proposed in the recent academic literature. It allows new, more powerful tests to be devised that are effective at exposing deficiencies of the predictive models that are used to compute measures of tail risk for capital adequacy purposes. Under the Fundamental Review of the Trading Book (FRTB), banks are required to implement backtesting programmes that go beyond the basic VaR exception tests that are necessary for internal model approval. Our approach sheds light on the kind of tests that banks and regulators should consider to satisfy this requirement. We also show how the methodology can be generalized to construct simultaneous backtests of multiple desks. (A toy sketch of exception counts and realized p-values follows this list of talks.)
  • Ioannis Papastathopoulos (Edinburgh) - Modelling Clusters of Extreme Values
    • Abstract: It is well known that short-range dependence of random processes leads to clusters of extreme values. In particular, the generating mechanism of extreme events in time or space has an inhomogeneous Poisson process character that describes the peaks of the events, together with a cluster distribution that can vary spatio-temporally. This representation provides an insightful modelling basis according to which asymptotically motivated statistical models can be constructed. Such models are typically fitted to observed data in order to infer summaries of the cluster distribution such as the extent and severity of the event. In this talk, I will present a broad framework for modelling clusters of extreme events based on limiting forms of stationary Markov processes with higher order memory, with special emphasis placed on asymptotically independent stochastic processes. I will present strong solutions of limiting stochastic difference equations that give rise to a new class of tail chains that encompasses a very rich structure indispensable for statistical inference. Lastly, I will discuss extensions to non-stationary processes and statistical estimation, together with an application to extremes of North Sea ocean waves. (A simple declustering sketch follows this list of talks.)
  • Jon Tawn (Lancaster) - Extreme Value Methods and their Applications Short Course
    • Abstract: This series of talks will introduce the strategy and associated methods of using asymptotically motivated extreme value models for univariate and multivariate extreme value problems, and will provide illustrations of how these are used to address substantive problems in environmental risk assessment. In the univariate case I will cover maxima and threshold methods and discuss how to account for non-identically distributed and non-independent variables. In the multivariate case the definition of an extreme is not unique. I will give an overview of the different types of asymptotic approach in terms of the direction of the extrapolation and show different ways of measuring extremal dependence. I will illustrate these methods in their use for the prevention of coastal flooding, for identifying the cause of the sinking of the MV Derbyshire, for setting new global shipping safety standards, for assessing the risk of heatwaves, for finding out how often the UK gets a 1-in-100-years flood event, and for deriving the estimated distribution of flood insurance claims. (A brief peaks-over-threshold sketch follows this list of talks.)
  • Jenny Wadsworth (Lancaster) - Non-limiting spatial extremes
    • Abstract: Many questions concerning environmental risk can be phrased as spatial extreme value problems. Classical extreme value theory provides limiting models for maxima or threshold exceedances of a wide class of underlying spatial processes. These models can then be fitted to suitably defined extremes of spatial datasets and used, for example, to estimate the probability of events more extreme than we have observed to date. However, a major practical problem is that frequently the data do not appear to follow these limiting models at observable levels, and assuming otherwise leads to bias in estimation of rare event probabilities. To deal with this we require models that allow flexibility both in what the limit should be and in the mode of convergence towards it. I will present a construction for such a model and discuss its application to some wave height data from the North Sea.
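As a pointer for the probability weighting mentioned in Vicky Henderson's abstract above, the sketch below evaluates the Tversky and Kahneman (1992) weighting function w(p) = p^γ / (p^γ + (1 − p)^γ)^(1/γ). The parameter value γ = 0.61 is only an illustrative choice and is not taken from the talk.

```python
# Tversky-Kahneman (1992) probability weighting: small probabilities are
# overweighted (w(p) > p) and moderate-to-large ones underweighted (w(p) < p).
def tk_weight(p, gamma=0.61):
    """Return w(p) = p^gamma / (p^gamma + (1 - p)^gamma)^(1/gamma)."""
    num = p ** gamma
    den = (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)
    return num / den

if __name__ == "__main__":
    for p in (0.001, 0.01, 0.1, 0.5, 0.9, 0.99):
        print(f"p = {p:5.3f}  ->  w(p) = {tk_weight(p):.3f}")
    # e.g. w(0.01) is roughly 0.05, so a 1% chance is treated like a 5% chance,
    # which is one way to rationalise buying both insurance and lottery tickets.
```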
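The following toy sketch relates to Alexander McNeil's talk above. It shows two basic ingredients only — realized p-values (probability integral transform values of observed losses under the predictive distribution) and a 99% VaR exception count — for a deliberately misspecified forecaster. It is not the weighted realized p-value methodology of the talk; the data, the forecasting model, and the scipy-based tests are all illustrative assumptions.

```python
# Toy backtest ingredients: realized p-values u_t = F_t(x_t) under the
# predictive distribution F_t, a 99% VaR exception count, and a
# Kolmogorov-Smirnov check of PIT uniformity.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 500

# Hypothetical forecaster: predicts N(0, 1) every day, but the true losses are
# heavier-tailed (Student t with 4 df), so tail risk is understated.
losses = stats.t.rvs(df=4, size=n, random_state=rng)
u = stats.norm.cdf(losses)                # realized p-values (PIT values)

exceptions = int(np.sum(u > 0.99))        # 99% VaR exceptions
p_exc = stats.binomtest(exceptions, n, 0.01).pvalue
p_ks = stats.kstest(u, "uniform").pvalue  # uniformity of the PIT values

print(f"VaR99 exceptions: {exceptions} observed vs {0.01 * n:.1f} expected "
      f"(binomial p-value {p_exc:.3f})")
print(f"KS test of PIT uniformity: p-value {p_ks:.3f}")
```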
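To make "clusters of extreme values" concrete for Ioannis Papastathopoulos's talk above, the sketch below simulates a short-range dependent AR(1) series and groups exceedances of a high threshold into clusters by simple runs declustering. This is a standard textbook device used purely for illustration; it is not the tail-chain methodology of the talk, and the AR(1) model, threshold, and run length are assumptions.

```python
# Runs declustering of threshold exceedances (illustration only).
import numpy as np

rng = np.random.default_rng(0)
n, phi = 10_000, 0.8
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):                       # AR(1): short-range dependence
    x[t] = phi * x[t - 1] + eps[t]

u = np.quantile(x, 0.98)                    # high threshold
exc = np.flatnonzero(x > u)                 # indices of exceedances

r = 5                                       # run length separating clusters
clusters, current = [], [exc[0]]
for i, j in zip(exc[:-1], exc[1:]):
    if j - i > r:                           # long enough gap: close the cluster
        clusters.append(current)
        current = []
    current.append(j)
clusters.append(current)

sizes = np.array([len(c) for c in clusters])
theta_hat = len(clusters) / len(exc)        # crude extremal index: 1 / mean cluster size
print(f"{len(exc)} exceedances in {len(clusters)} clusters, "
      f"mean cluster size {sizes.mean():.2f}, extremal index ~ {theta_hat:.2f}")
```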
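A minimal sketch of the univariate threshold method covered in Jon Tawn's short course above: exceedances of a high threshold are fitted with a generalised Pareto distribution and converted into a return level. The simulated data, the 95% threshold choice, and the use of scipy's genpareto.fit are illustrative assumptions, not prescriptions from the course.

```python
# Peaks-over-threshold sketch: fit a GPD to exceedances and estimate the
# m-observation return level x_m = u + (sigma/xi) * ((m * zeta_u)^xi - 1).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = stats.genpareto.rvs(c=0.2, scale=1.0, size=5_000, random_state=rng)

u = np.quantile(x, 0.95)                    # illustrative threshold choice
exceed = x[x > u] - u
zeta_u = np.mean(x > u)                     # probability of exceeding u

xi, _, sigma = stats.genpareto.fit(exceed, floc=0)   # MLE, location fixed at 0

m = 10_000                                  # "1 in 10,000 observations" level
x_m = u + (sigma / xi) * ((m * zeta_u) ** xi - 1)
print(f"threshold u = {u:.2f}, xi = {xi:.2f}, sigma = {sigma:.2f}, "
      f"{m}-observation return level = {x_m:.2f}")
```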

Posters

  • Nanda Aryal (Melbourne) - Fitting the Bartlett-Lewis rainfall model using Approximate Bayesian Computation
    • Abstract: Presently, cluster rainfall models are usually fitted using the Generalized Method of Moments (GMM). GMM compares empirical and theoretical moments using a weighted least squares criterion. Complex stochastic models may not have tractable theoretical moments. Approximate Bayesian Computation (ABC) fills this gap using simulation, and is a new approach in the Poisson cluster rainfall modeling regime. ABC compares the observed data with the simulated data through summary statistics, replacing the theoretical moments by simulated data moments. This simulation study shows that ABC is a better option for parameter estimation of the Bartlett-Lewis rainfall model. (A toy rejection-ABC sketch follows this list of posters.)
  • Alejandra Avalos Pacheco (Warwick) - Batch effect adjustment using Bayesian factor analysis
    • Abstract: With the rapidly increasing volume of heterogeneous biological data available from high-throughput technologies, it is becoming more and more difficult to integrate and summarise these data for exploratory analyses. Experimental variation, such as “batch effects”, is present in most large datasets and adds to this difficulty. To keep up with this influx, new integrative dimensionality reduction methods are needed that can handle heterogeneous data while preventing these technical biases from dominating the results. We provide a model based on factor analysis and latent factor regression, which incorporates a novel adjustment for the variance batch effects often observed in bioinformatics data. Our model is directly applicable as a data integration technique via concatenation of heterogeneous datasets, resulting in common latent factors across the datasets.
  • Cyril Chimisov (Warwick) - Adaptive MCMC
    • Abstract: The popularity of Adaptive MCMC has been fueled on the one hand by its success in applications and, on the other hand, by mathematically appealing and computationally straightforward optimisation criteria for the Metropolis algorithm acceptance rate (and, equivalently, proposal scale). Similarly principled and operational criteria for optimising the selection probabilities of the Random Scan Gibbs Sampler have not been devised to date. In the present work we close this gap and develop a general-purpose Adaptive Random Scan Gibbs Sampler that adapts the selection probabilities. The adaptation is guided by optimising the L2-spectral gap for the target’s Gaussian analogue, gradually, as the target’s global covariance is learned by the sampler. The additional computational cost of the adaptation represents a small fraction of the total simulation effort. We present a number of moderately- and high-dimensional examples, including Truncated Normals, Bayesian Hierarchical Models and Hidden Markov Models, where significant computational gains are empirically observed for both the Adaptive Gibbs and the Adaptive Metropolis-within-Adaptive-Gibbs versions of the algorithm. We argue that Adaptive Random Scan Gibbs Samplers can be routinely implemented and that substantial computational gains will be observed across many typical Gibbs sampling problems. (A simplified random scan Gibbs sketch with adapted selection probabilities follows this list of posters.)
  • Mathias Christensen Cronjager (Oxford) - An almost infinite sites model
    • Abstract: When modelling aligned nucleotide sequences using a coalescent subject to mutation, it is common to presume that every mutation will affect a previously unaffected position on the genome; this is known as the infinite sites assumption. In a number of cases (such as when working with viruses with short genomes and high mutation rates) this assumption is violated, which in turn may lead to model misspecification. Our work investigates an alternative to the infinite sites model which admits recurrent mutation, but which still allows for analysing data under an assumption of parsimony (i.e. assuming that the number of mutation events is minimal or near-minimal). We present a recursive characterisation of the likelihood, similar to known formulae for other coalescent-based models, along with a dynamic-programming algorithm for computing likelihoods. This is joint work with Alejandra Avalos Pacheco (Warwick), Paul Jenkins (Warwick), and Jotun Hein (Oxford).
  • Jon Cockayne (Warwick) - Bayesian Probabilistic Numerical Methods
    • Abstract: The emergent field of probabilistic numerics has thus far lacked rigorous statistical foundations. We establish that a class of Bayesian probabilistic numerical methods can be cast as the solution to certain non-standard Bayesian inverse problems. This allows us to establish general conditions under which Bayesian probabilistic numerical methods are well-defined, encompassing both non-linear models and non-Gaussian prior distributions. For general computation, a numerical approximation scheme is developed and its asymptotic convergence is established. The theoretical development is then extended to pipelines of numerical computation, wherein several probabilistic numerical methods are composed to perform more challenging numerical tasks. The contribution highlights an important research frontier at the interface of numerical analysis and uncertainty quantification, with some illustrative applications presented.
  • Nathan Cunningham (Warwick) - Particle Monte Carlo methods to cluster multiple datasets with applications to genomics data
    • Abstract: Cluster analysis is a popular technique for discovering group structure in a dataset. While it has been applied in a number of fields, it is often used to cluster genomics data, where identifying groups of genes can help uncover gene function and discover regulatory networks. Modern high-throughput technologies produce a vast array of data from disparate sources; however, classic clustering algorithms, such as k-means and hierarchical cluster analysis, are typically geared towards continuous data and are not suitable for mixed data types. Previous work on integrating these data employed a Bayesian non-parametric model (MDI), where observations were assigned to clusters using a one-at-a-time approach. This was found to uncover information which would be difficult or impossible to obtain from approaches considering a single dataset; however, it was also found to have slow mixing properties. We present an extension to the original MDI algorithm in which cluster allocations are determined using a particle Gibbs approach, which has been proposed as a means of escaping this problem.
  • Jack Jewson (Warwick) - Robust Bayesian Decisions in the M-open world.
    • Abstract: Given a prior and a model, Bayesian statistics provides the tools to update prior beliefs given some data, and identifies the predictive distribution closest to the true data generating process in terms of Kullback-Leibler divergence. When this model is a correct representation of the true data generating process, Bayesian statistics learns in the optimal way (Zellner, 1988), 'correctly' capturing the information available in the data and finding the true data generating process as n grows. However, in the M-open world, no model is believed to correctly represent the real-world process; the model is instead chosen as a convenient approximation. We show that in this scenario, blindly targeting the predictive distribution minimising the Kullback-Leibler divergence to the truth is not robust to misspecifications in the model. Instead, we consider minimising different divergence measures and appeal to general Bayesian updating (Bissiri, Holmes and Walker, 2016), a method for producing a coherent Bayesian update in the absence of a data generating model, in order to produce more robust Bayesian decisions. (A grid-based sketch of a general Bayesian update follows this list of posters.)
  • Matt Moores (Warwick) - Bayesian modelling and computation for Raman spectroscopy
    • Abstract: Raman spectroscopy is a technique for detecting and identifying molecules such as DNA. It is sensitive at very low concentrations and can accurately quantify the amount of a given molecule in a sample. Raman scattering produces a complex pattern of peaks, which correspond to the vibrational modes of the molecule. Each Raman-active molecule has a unique spectral signature, comprising the locations and amplitudes of the peaks. The shift in frequency of the photons is proportional to the change in energy state, which is reflected in the locations of the peaks. The amplitudes of the peaks increase linearly with the concentration of the molecule. However, the presence of a large, nonuniform background presents a major challenge to analysis of these spectra. The estimated amplitudes are completely dependent on the position of the baseline and vice versa. We introduce a sequential Monte Carlo (SMC) algorithm to separate the observed spectrum into a series of peaks plus a smoothly-varying baseline, corrupted by additive white noise. The peaks are modelled as Lorentzian, Gaussian or Voigt functions, while the baseline is estimated using a penalised cubic spline. This latent continuous representation accounts for differences in resolution between measurements. Our hierarchical model allows for batch effects between technical replicates. We incorporate prior information to improve identifiability and regularise the solution. The posterior distribution can be incrementally updated as more data becomes available, resulting in a scalable algorithm that is robust to local maxima. These methods have been implemented as an R package, using RcppEigen and OpenMP. (A generative sketch of the peaks-plus-baseline model follows this list of posters.)
  • David Selby (Warwick) - PageRank and the Bradley–Terry model
    • Abstract: Network centrality measures are used to compare nodes according to their importance. Applications include ranking sports teams, ordering web pages in search results and estimating the influence of academic journals. Eigenvector-based metrics such as PageRank derive these measures from the stationary distribution of an ergodic Markov chain, whereas techniques such as the Bradley-Terry model treat ranking as a statistical estimation problem. By using a quasi-symmetry representation, we show that the PageRank vector, suitably scaled, is a consistent estimator for the Bradley-Terry model. Scaled PageRanks can therefore be used, for example, to initialise iterative algorithms for Bradley-Terry maximum likelihood estimation, improving their performance on large datasets. We study the variance of scaled PageRank as an estimator, and find full asymptotic efficiency in some balanced situations of practical importance. (A small PageRank versus Bradley-Terry sketch follows this list of posters.)
  • Rachel Wilkerson (Warwick) - Exploring bespoke causal graphs
    • Abstract: The semantics for causal relationships expressed in the study of Bayesian networks are powerful and widely used, but may be ill suited to the dynamics of a problem in a given domain. Extending established causal algebras to new classes of graphical models results in nuanced notions of causation.
  • Daniel Wilson-Nunn (Warwick) - A Rough Path Signature Approach to Arabic Online Handwritten Character Recognition
    • Abstract: Arabic handwriting poses a new set of challenges previously unseen in Latin and Chinese handwriting. These challenges include the strictly cursive nature of the script, the change in shape of each character depending on its position within a word, and the numerous ligatures that occur in handwritten Arabic. Building on recent breakthroughs in Chinese handwriting recognition using tools from Rough Path Theory, alongside state-of-the-art Deep Learning techniques, this work introduces the foundations for a Rough Path Signature approach to Online Arabic character recognition. In the preliminary stages of this work, results are exceedingly promising, with a recognition rate approximately 5% higher than the current state-of-the-art tools on the same data set. Keywords: Online handwriting recognition, Arabic handwriting recognition, rough path theory, rough path signature, random forests, decision trees, neural networks, LSTMs. (A short sketch of low-order path signatures follows this list of posters.)
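Rejection ABC in its simplest form, as referenced in Nanda Aryal's poster above: draw parameters from the prior, simulate data, and keep draws whose summary statistics land close to the observed ones. The toy model below (exponential "storm depths" with unknown rate), the summaries, and the tolerance are assumptions made for illustration; simulating the Bartlett-Lewis model itself is beyond this sketch.

```python
# Rejection ABC on a toy model: accept prior draws whose simulated summary
# statistics are within a tolerance of the observed summaries.
import numpy as np

rng = np.random.default_rng(7)

def simulate(lam, n=200):
    """Toy stochastic model: exponential 'storm depths' with rate lam."""
    return rng.exponential(scale=1.0 / lam, size=n)

def summaries(y):
    return np.array([y.mean(), y.var()])   # stand-ins for rainfall summaries

lam_true = 2.0
s_obs = summaries(simulate(lam_true))

n_draws, tol = 20_000, 0.05
lam_prior = rng.uniform(0.1, 10.0, size=n_draws)     # draws from a flat prior
accepted = []
for lam in lam_prior:
    dist = np.linalg.norm(summaries(simulate(lam)) - s_obs)
    if dist < tol:
        accepted.append(lam)

accepted = np.array(accepted)
print(f"accepted {accepted.size} draws; "
      f"approximate posterior mean ~ {accepted.mean():.2f} (true value {lam_true})")
```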
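The sketch below illustrates the mechanism in Cyril Chimisov's poster above: a random scan Gibbs sampler whose coordinate-selection probabilities are adapted from what has been learned about the target so far. The adaptation rule used here (probabilities proportional to estimated marginal standard deviations) is a crude stand-in for the spectral-gap criterion of the poster, and the three-dimensional Gaussian target is an assumption; this shows the mechanics, not the actual method.

```python
# Random-scan Gibbs for a correlated Gaussian target with adapted
# coordinate-selection probabilities (heuristic adaptation, for illustration).
import numpy as np

rng = np.random.default_rng(3)

Sigma = np.array([[1.0, 0.9, 0.0],
                  [0.9, 4.0, 0.5],
                  [0.0, 0.5, 0.25]])        # target: N(0, Sigma)
Q = np.linalg.inv(Sigma)                    # precision matrix
d = Sigma.shape[0]

def gibbs_step(x, i):
    """Update coordinate i from its full conditional under N(0, Sigma)."""
    others = [j for j in range(d) if j != i]
    cond_mean = -np.dot(Q[i, others], x[others]) / Q[i, i]
    cond_sd = 1.0 / np.sqrt(Q[i, i])
    x[i] = cond_mean + cond_sd * rng.standard_normal()

n_iter, burn = 20_000, 1_000
x = np.zeros(d)
probs = np.full(d, 1.0 / d)                 # start from uniform selection
samples = np.empty((n_iter, d))

for t in range(n_iter):
    i = rng.choice(d, p=probs)              # pick a coordinate to update
    gibbs_step(x, i)
    samples[t] = x
    if t > burn and t % 500 == 0:           # adapt every 500 iterations
        sd = samples[:t].std(axis=0) + 1e-6
        probs = sd / sd.sum()               # heuristic: favour wide coordinates

print("adapted selection probabilities:", np.round(probs, 2))
print("estimated covariance:\n", np.round(np.cov(samples[burn:].T), 2))
```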
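The general Bayesian update of Bissiri, Holmes and Walker mentioned in Jack Jewson's poster above replaces the negative log-likelihood in Bayes' rule with a general loss, giving π(θ | x) ∝ π(θ) exp{−w Σ_i ℓ(θ, x_i)}. The grid-based sketch below contrasts the usual log-score update with an update under a density-power (beta) divergence loss on data containing outliers; the particular loss, the weight w = 1, and the toy data are illustrative assumptions rather than the poster's choices.

```python
# General Bayesian update on a grid: posterior ∝ prior × exp(-w × total loss).
# The negative log-likelihood loss recovers standard Bayes; the beta-divergence
# loss is one robust alternative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
x = np.concatenate([rng.normal(0.0, 1.0, 95),       # well-specified observations
                    rng.normal(8.0, 1.0, 5)])       # a few gross outliers

theta = np.linspace(-2.0, 3.0, 1_001)               # grid for the location parameter
dtheta = theta[1] - theta[0]
log_prior = stats.norm.logpdf(theta, 0.0, 10.0)

def gen_bayes_posterior(total_loss, w=1.0):
    logp = log_prior - w * total_loss
    p = np.exp(logp - logp.max())
    return p / (p.sum() * dtheta)

# Loss 1: negative log-likelihood of N(theta, 1)  ->  standard Bayes.
nll = np.array([-stats.norm.logpdf(x, t, 1.0).sum() for t in theta])

# Loss 2: beta-divergence loss for N(theta, 1) with beta = 0.5,
#   l(theta, x) = -(1/b) f(x)^b + (1/(1+b)) * integral of f^(1+b) dy,
# where the integral equals (2*pi)^(-b/2) / sqrt(1+b) for unit variance.
b = 0.5
const = (2 * np.pi) ** (-b / 2) / np.sqrt(1 + b) / (1 + b)
beta_loss = np.array([(-stats.norm.pdf(x, t, 1.0) ** b / b + const).sum()
                      for t in theta])

for name, loss in [("log score (standard Bayes)", nll),
                   ("beta-divergence loss      ", beta_loss)]:
    p = gen_bayes_posterior(loss)
    print(f"{name}: posterior mean of theta = {(theta * p).sum() * dtheta:.2f}")
```

The outliers pull the standard Bayes posterior away from zero, while the bounded beta-divergence loss largely ignores them, which is the kind of robustness the poster is concerned with.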
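A generative sketch of the signal model described in Matt Moores's abstract above: a sum of Lorentzian peaks on top of a smoothly varying baseline, observed with additive white noise. The peak locations, amplitudes, baseline shape, and noise level are made-up values, and no SMC fitting is attempted here.

```python
# Peaks-plus-baseline model for a Raman spectrum (simulation only).
import numpy as np

def lorentzian(nu, location, scale, amplitude):
    """Lorentzian (Cauchy-shaped) peak as a function of wavenumber nu."""
    return amplitude * scale**2 / ((nu - location) ** 2 + scale**2)

rng = np.random.default_rng(5)
nu = np.linspace(200, 1800, 1_600)                 # wavenumber grid (1/cm)

peaks = [(520, 8, 120.0), (1001, 5, 300.0), (1450, 12, 80.0)]   # (loc, scale, amp)
signal = sum(lorentzian(nu, *p) for p in peaks)

baseline = 50 + 0.04 * nu + 30 * np.sin(nu / 400)  # smooth, slowly varying background
noise = rng.normal(0.0, 5.0, size=nu.size)
spectrum = signal + baseline + noise

# Concentration enters linearly through the amplitudes, so doubling the
# concentration doubles `signal` but leaves `baseline` unchanged -- hence the
# need to separate the two before quantifying the molecule.
print("observed intensity at the 1001/cm peak:",
      round(spectrum[np.argmin(np.abs(nu - 1001))], 1))
```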
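A small numerical illustration of the connection in David Selby's poster above: pairwise comparison data are simulated from a Bradley-Terry model, PageRank is computed on the "loser points to winner" graph, and a Bradley-Terry fit is obtained with the standard MM iteration. The quasi-symmetry scaling of the poster is not reproduced; the sketch only shows that the two rankings track each other, and the number of players and comparisons per pair are arbitrary choices.

```python
# PageRank on a pairwise-comparison graph versus Bradley-Terry strengths.
import numpy as np

rng = np.random.default_rng(9)
k = 6
strength = np.exp(rng.normal(0.0, 1.0, k))          # true Bradley-Terry strengths

wins = np.zeros((k, k))                              # wins[i, j] = times i beat j
for i in range(k):
    for j in range(i + 1, k):
        n_ij = 200                                   # comparisons per pair
        w = rng.binomial(n_ij, strength[i] / (strength[i] + strength[j]))
        wins[i, j], wins[j, i] = w, n_ij - w

# PageRank: every loss is a directed edge from loser to winner.
losses_to = wins.T                                   # row j: whom j lost to, how often
P = losses_to / losses_to.sum(axis=1, keepdims=True) # row-stochastic transition matrix
d, r = 0.85, np.full(k, 1.0 / k)
for _ in range(200):
    r = (1 - d) / k + d * (P.T @ r)                  # damped power iteration

# Bradley-Terry MLE via the standard MM iteration:
#   pi_i <- total_wins_i / sum_j n_ij / (pi_i + pi_j).
n_games = wins + wins.T
total_wins = wins.sum(axis=1)
pi = np.ones(k)
for _ in range(200):
    pi = total_wins / np.array([(n_games[i] / (pi[i] + pi)).sum() for i in range(k)])
    pi /= pi.sum()

print("ranking by PageRank:      ", np.argsort(-r))
print("ranking by Bradley-Terry: ", np.argsort(-pi))
print("ranking by true strength: ", np.argsort(-strength))
```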
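To give a flavour of the rough path signature features behind Daniel Wilson-Nunn's poster above, the sketch below computes the level-1 and level-2 signature terms of a piecewise-linear 2-D pen stroke, combining segments with Chen's identity. The stroke coordinates are made up, and a real recognition pipeline would use higher truncation levels and a dedicated signature library.

```python
# Level-1 and level-2 signature terms of a piecewise-linear 2-D path.
import numpy as np

def signature_level2(path):
    """path: (n, 2) array of pen positions. Returns (level1, level2)."""
    level1 = np.zeros(2)
    level2 = np.zeros((2, 2))
    for a, b in zip(path[:-1], path[1:]):
        delta = b - a                                  # straight-line segment
        seg2 = 0.5 * np.outer(delta, delta)            # its own level-2 signature
        # Chen's identity for concatenating two paths:
        # S(XY)^{ij} = S(X)^{ij} + S(Y)^{ij} + S(X)^i S(Y)^j
        level2 = level2 + seg2 + np.outer(level1, delta)
        level1 = level1 + delta
    return level1, level2

# A hypothetical pen stroke (time-ordered x, y samples).
stroke = np.array([[0.0, 0.0], [1.0, 0.2], [1.5, 1.0], [0.8, 1.4]])
s1, s2 = signature_level2(stroke)
levy_area = 0.5 * (s2[0, 1] - s2[1, 0])               # signed-area feature

print("level-1 terms (total increments):", np.round(s1, 3))
print("level-2 terms:\n", np.round(s2, 3))
print("Levy area:", round(levy_area, 3))
```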