
List of posters

  • Helen Webster (Met Office/University of Exeter, UK)

Title: Accounting for meteorological errors in a Bayesian inverse modelling system for estimating volcanic ash emissions

Abstract: Volcanic ash in the atmosphere poses a significant hazard to aviation. To minimise risk, atmospheric dispersion models are used to predict the transport of ash clouds. Accurate ash cloud forecasts require a good estimate of mass eruption rates and ash injection heights. These parameters are, however, highly uncertain and inversion techniques, fusing model forecasts and observations, have been developed to better constrain the emission source term and improve ash cloud forecasts. The inversion modelling system InTEM for volcanic ash uses a Bayesian approach to obtain a best estimate of height- and time-varying ash emission rates. It combines satellite observations of the ash cloud, prior estimates of the ash emissions and an atmospheric dispersion model. Gaussian probability distributions are assumed and estimates of uncertainty in the observations and in the prior emission estimate are considered. Uncertainties in the atmospheric dispersion model, including uncertainty in the driving meteorological data, are not currently represented and this limits the success of the method when such errors are significant. This poster presents InTEM for volcanic ash and explores using ensembles of numerical weather prediction forecasts to account for meteorological errors. Discrepancies identified between modelled and observed ash clouds from the 2011 eruption of the Icelandic volcano Grímsvötn were previously attributed to errors in the input meteorological data. We use this eruption as a case study to illustrate an iterative method to identify a best meteorological dataset from within the ensemble, leading to improvements in ash cloud forecasting in an operational setting.
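
As a hedged illustration of the linear-Gaussian estimate underlying inversion systems of this kind (a textbook sketch, not InTEM's actual implementation; the variable names and the linearised source-receptor matrix H are assumptions made for illustration):

    import numpy as np

    def gaussian_inversion(x_prior, B, y_obs, R, H):
        """Posterior-mean emission estimate under linear-Gaussian assumptions.

        x_prior : prior height- and time-varying emission estimate
        B, R    : prior and observation error covariance matrices
        H       : source-receptor matrix from the dispersion model
        y_obs   : satellite observations of ash column load
        """
        Binv = np.linalg.inv(B)
        Rinv = np.linalg.inv(R)
        A = Binv + H.T @ Rinv @ H                  # posterior precision
        b = H.T @ Rinv @ (y_obs - H @ x_prior)
        return x_prior + np.linalg.solve(A, b)     # posterior mean/mode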

  • Simon Driscoll (University of Reading, UK)

Title: Machine Learning and Parameter Sensitivity Analysis of Sea Ice Thermodynamics

Abstract: Sea ice plays an essential role in global ocean circulation and in regulating Earth's climate and weather. Melt ponds that form on the ice have a profound impact on the Arctic's climate, and their evolution is one of the main factors affecting sea-ice albedo and hence the polar climate system. Parametrisations of these physical processes are based on a number of assumptions and can include many uncertain parameters that have a substantial effect on the simulated evolution of the melt ponds. We perform perturbed parameter ensembles on the state-of-the-art sea ice column physics model, Icepack. These ensembles, with Icepack's melt pond parameters perturbed within their known ranges of uncertainty, show that predictions of key variables such as sea ice thickness vary by a few metres after only a decade of simulation. Sobol sensitivity analysis, a form of variance-based global sensitivity analysis, is performed on this advanced melt pond parametrisation. Results show that model sensitivity to these uncertain parameter values varies both spatially and temporally. Given this uncertainty and source of prediction error, we propose to replace the melt pond parametrisation with a data-driven emulator. As a first step in assessing the viability of this approach, neural networks are shown to be capable of learning this parametrisation and replacing it in the Icepack model without causing instability or drift. Secondly, we train neural networks on observational and reanalysis data of both atmospheric and sea ice variables, targeting melt pond fraction and broadband albedo as outputs. We show that neural networks can learn to predict these targets, and discuss how data-driven neural networks can replace the 'parametric' parametrisation approach not only in sea ice modelling but also more broadly in climate modelling.
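
A minimal sketch of how such a Sobol analysis can be set up with the SALib library (the parameter names, bounds and stand-in model below are illustrative assumptions, not Icepack's actual melt pond parameters):

    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    # Hypothetical melt pond parameters and uncertainty ranges (illustrative).
    problem = {
        "num_vars": 3,
        "names": ["pond_aspect_ratio", "pond_albedo", "drainage_rate"],
        "bounds": [[0.4, 1.2], [0.1, 0.3], [0.0, 0.01]],
    }

    def toy_model(theta):
        # Stand-in for an Icepack run returning, say, decade-end ice thickness.
        aspect, albedo, drainage = theta
        return 2.0 - 3.0 * albedo + 0.5 * aspect * albedo + 50.0 * drainage

    X = saltelli.sample(problem, 1024)      # Saltelli's Sobol sampling design
    Y = np.array([toy_model(x) for x in X])
    Si = sobol.analyze(problem, Y)          # first-order and total-order indices
    print(Si["S1"], Si["ST"])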

  • Kamran Pentland (University of Warwick, UK)

Title: LinearParareal: accelerating the time-parallel simulation of linear IVPs

Abstract: Parareal is a well-studied time-parallel algorithm designed to integrate initial value problems (IVPs) by iteratively combining solutions from cheap (coarse) and expensive (fine) numerical integrators using a predictor-corrector (PC) scheme. To obtain high parallel speedup, a solution must be found in as few iterations as possible, something that Parareal struggles to do for many IVPs. This is because it uses each fine and coarse solution only once before discarding it. In this poster, we present LinearParareal, an easy-to-use time-parallel algorithm that instead uses all of the solution data to construct a linear operator which models the difference between the fine and coarse solutions within the PC scheme. By using this operator, we demonstrate that one can simulate (in parallel) a linear IVP in one iteration, obtaining parallel efficiencies above 80% (whereas Parareal rarely achieves 50%). In addition, solutions are shown to be accurate to (almost) within machine precision of the serially obtained fine solution, i.e. the high-accuracy IVP solution.
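
For reference, a compact sketch of the classic Parareal predictor-corrector iteration that LinearParareal builds on (the propagator interface F(u, t0, t1) is an assumption made for illustration):

    def parareal(u0, t, F, G, K):
        """Classic Parareal over time slices t[0] < t[1] < ... < t[N].

        F, G : fine and coarse propagators, each mapping u at t0 to u at t1.
        K    : number of predictor-corrector iterations.
        """
        N = len(t) - 1
        U = [u0] * (N + 1)
        for n in range(N):                  # initial serial coarse sweep
            U[n + 1] = G(U[n], t[n], t[n + 1])
        for _ in range(K):
            # Fine solves are independent across slices: parallel in practice.
            Fk = [F(U[n], t[n], t[n + 1]) for n in range(N)]
            Gk = [G(U[n], t[n], t[n + 1]) for n in range(N)]
            U_new = [u0]
            for n in range(N):              # serial correction sweep
                pred = G(U_new[n], t[n], t[n + 1])
                U_new.append(pred + Fk[n] - Gk[n])
            U = U_new
        return U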

  • Anna-Louise Ellis (Met Office, UK)

Title: Enabling Multi-Fidelity, Multi-Modal Environment Data: Probabilistic Machine Learning with Neural Processes ‘LITE’ and Resilient Kernels

Abstract: The use of multimodal datasets in deep learning is becoming increasingly common in an attempt to synthesise nonlinear interactions that may be present, but not yet fully understood, in some phenomena. Generally, significant volumes of high-fidelity data are required to model complex physical systems using these deep learning techniques. Indeed, where these approaches have demonstrated adequate success, they have typically relied on datasets such as the quality-assured ERA5 reanalysis. Such datasets are scarce, and those that do exist are frequently not of the volumes necessary for statistically robust deep learning.

  • Raiha Browning (University of Warwick, UK)

Title: AMISforInfectiousDiseases: an R package to fit a transmission model to a prevalence map

Abstract: We present an R package that integrates a geostatistical prevalence map with infectious disease transmission models, using an adaptation of adaptive multiple importance sampling (AMIS) to obtain parameter estimates at a subnational level. For a particular infectious disease, the main function in the package takes as input a disease transmission model and a map of the disease prevalence, and weights samples generated from the transmission model according to their similarity to the disease prevalence at each location. This weighting step is conducted via AMIS. The function outputs a weighted sample of parameters from the transmission model that can be used for forecasting and further analysis. It also accommodates multiple time points when prevalence maps are available at different times, in which case the AMIS algorithm attempts to find suitable parameter combinations across all time slices. We also present a case study using the package in practice.
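
To illustrate the flavour of the weighting step, here is a Python sketch under assumed names and an assumed Gaussian similarity kernel (the package itself is written in R, and its internals may differ):

    import numpy as np

    def location_weights(sim_prev, map_prev_samples, bandwidth=0.02):
        """Weight transmission-model samples at one location.

        sim_prev         : (n_params,) prevalence simulated per parameter sample
        map_prev_samples : (n_map,) prevalence draws for this location's map pixel
        """
        diffs = sim_prev[:, None] - map_prev_samples[None, :]
        kern = np.exp(-0.5 * (diffs / bandwidth) ** 2)   # similarity kernel
        w = kern.mean(axis=1)                            # average over map draws
        return w / w.sum()                               # normalised weights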

  • Francesca Basini (University of Warwick, UK)

Title: Trajectory Inference with neural networks for applications in cell differentiation

Abstract: In developmental biology, one of the main interests is to gain insight into the mechanisms that characterise the differentiation process of cells as they evolve from young, unspecialised cells into mature, specialised ones. Gene expression plays a crucial role in this, determining the characteristics of the cell and ultimately its cell type. In this context, scRNA-seq data represent partial, noisy measurements over multiple time points (i.e. developmental days) with high-dimensional features (i.e. genes) on each cell unit. Our proposed approach models the transition between time points as a series of conditional diffusion processes using a neural SDE. In particular, we learn the potential function associated with the system by using the energy score as the objective loss function, and show its performance in a few scenarios.
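
A minimal sketch of a Monte Carlo energy score usable as a training loss (a simple estimator that keeps the zero diagonal in the second term, so it is slightly biased; variable names are assumptions):

    import torch

    def energy_score(samples, y, beta=1.0):
        """Energy score for one observation.

        samples : (m, d) draws from the model (e.g. simulated cell states)
        y       : (d,) observed cell state
        ES = E||X - y||^beta - 0.5 E||X - X'||^beta; lower is better,
        and it is a proper scoring rule for 0 < beta < 2.
        """
        term1 = torch.cdist(samples, y.unsqueeze(0)).pow(beta).mean()
        term2 = torch.cdist(samples, samples).pow(beta).mean()
        return term1 - 0.5 * term2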

  • David Huk (University of Warwick, UK)

Title: Censored Spatial Copulas via Scoring Rule Minimisation

Abstract: This work develops a novel method for generating conditional probabilistic rainfall forecasts with temporal and spatial dependence. A two-step procedure is employed. Firstly, marginal location-specific distributions are modelled independently of one another. Secondly, a spatial dependency structure is learned in order to make these marginal distributions spatially coherent. To learn marginal distributions over rainfall values, we propose a class of models termed Joint Generalised Neural Models (JGNMs). These models expand the linear part of generalised linear models with a deep neural network, allowing them to capture non-linear trends in the data while learning the parameters of a distribution over the outcome space. To capture the spatial dependency structure of the data, a model based on censored copulas is presented; it is designed for the particularities of rainfall data and incorporates the spatial aspect into our approach. In addition, we introduce a novel method for optimising probabilistic models based on scoring rules and apply it to our censored copula. Uniting our two contributions, the JGNM and the censored spatial copula, into a single model, we obtain a probabilistic model capable of generating possible scenarios on short- to long-term timescales and able to be evaluated at any given location, seen or unseen. We apply it to a precipitation downscaling problem on a large UK rainfall dataset and compare it to existing methods.
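
To make the two-step construction concrete, here is a toy Gaussian-copula rainfall generator with censoring at zero (an illustrative sketch only: the paper's JGNM marginals and the exact form of its censored copula are not reproduced here):

    import numpy as np
    from scipy import stats

    def sample_rainfall(corr, marginals, p_dry, n=1000):
        """Generate spatially coherent rainfall fields at s sites.

        corr      : (s, s) spatial correlation matrix for the Gaussian copula
        marginals : list of s frozen scipy distributions for wet-day rainfall
        p_dry     : (s,) probability of zero rainfall at each site
        """
        s = len(marginals)
        z = np.random.multivariate_normal(np.zeros(s), corr, size=n)
        u = stats.norm.cdf(z)                    # copula uniforms, spatially dependent
        rain = np.empty_like(u)
        for j, (dist, p0) in enumerate(zip(marginals, p_dry)):
            wet = u[:, j] > p0                   # censor: below threshold -> dry
            rain[~wet, j] = 0.0
            # rescale the wet tail to (0, 1) before the wet-day marginal
            rain[wet, j] = dist.ppf((u[wet, j] - p0) / (1.0 - p0))
        return rain

For instance, calling this with marginals = [stats.gamma(2.0, scale=3.0)] * s reproduces the characteristic mix of dry sites and positively skewed wet-site rainfall.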

  • Rilwan Adewoyin (University of Warwick, UK)

Title: TRU-NET: a deep learning approach to high resolution prediction of rainfall

Abstract: Climate models (CM) are used to evaluate the impact of climate change on the risk of floods and heavy precipitation events. However, these numerical simulators produce outputs with low spatial resolution that have difficulty representing precipitation events accurately. This is mainly due to computational limitations on the spatial resolution used when simulating multi-scale weather dynamics in the atmosphere. To improve the prediction of high resolution precipitation, we apply a deep learning (DL) approach using input data from a reanalysis product that is comparable to a climate model's output but can be directly related to precipitation observations at a given time and location. Further, our input excludes local precipitation but includes model fields (weather variables) that are more predictable and generalizable than local precipitation. To this end, we present TRU-NET (Temporal Recurrent U-Net), an encoder-decoder model featuring a novel 2D cross attention mechanism between contiguous convolutional-recurrent layers to effectively model multi-scale spatio-temporal weather processes. We also propose a non-stochastic variant of the conditional-continuous (CC) loss function to capture the zero-skewed patterns of rainfall. Experiments show that our models, trained with our CC loss, consistently attain lower RMSE and MAE scores than a DL model prevalent in precipitation downscaling, and outperform a state-of-the-art dynamical weather model. Moreover, by evaluating the performance of our model under various data formulation strategies for the training and test sets, we show that there is enough data for our deep learning approach to produce robust, high-quality results across seasons and varying regions.
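
As a hedged sketch of the general idea behind a conditional-continuous rainfall loss (one reading of the concept, not TRU-NET's exact CC loss; all names below are assumptions): one head predicts rain occurrence, another predicts intensity, and the intensity term is only applied to wet cells.

    import torch
    import torch.nn.functional as F

    def cc_style_loss(p_rain_logit, intensity_pred, rain_obs, wet_threshold=0.1):
        """Occurrence (binary) + masked intensity (continuous) loss for rainfall."""
        wet = (rain_obs > wet_threshold).float()
        occ_loss = F.binary_cross_entropy_with_logits(p_rain_logit, wet)
        # Squared error on wet cells only, normalised by the number of wet cells.
        int_loss = (wet * (intensity_pred - rain_obs) ** 2).sum() \
            / wet.sum().clamp(min=1.0)
        return occ_loss + int_loss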

  • Hussain Abass (WMG, University of Warwick, UK)

Title: Multi-fidelity Bayesian optimisation of discontinuous thermoplastic composite tapes

Abstract: Discontinuous long fibre (DLF) thermoplastic composites are produced by chopping unidirectional (UD) tape into ‘flakes’ or ‘chips’ of a prescribed length. These materials offer attractive mechanical properties, low cycle times and high formability compared to continuous fibre composites, providing a promising lightweight alternative for metal replacement in high-volume applications. DLFs provide a trade-off between the excellent mechanical properties of continuous fibre composites and the processability of short fibre composites. The current work presents a novel method to maximise the mechanical properties of two subset architectures of DLFs: perfectly aligned DLFs and DLFs with variable orientation. Significant versatility is attainable in the performance of DLFs due to the variability of the microstructure, meaning performance can be tailored by controlling fibre orientation and discontinuities through varying manufacturing methods and parameters. A novel Bayesian optimisation (BO) framework is proposed to maximise the properties of the DLFs. In this work, a previously developed finite element analysis (FEA) progressive damage model is combined with a data-driven BO routine to maximise the mechanical properties of DLFs for both single- and multi-objective design cases. One manufacturing method for aligned DLFs, intentionally inducing discontinuities in pre-impregnated material, is optimised over two variables: the geometric pattern for inducing discontinuities across the laminate, and the distance between discontinuities in adjacent plies. The theoretical performance of variable orientation DLFs is optimised over the probabilistic orientation distribution used when placing the fibres of a compression moulding charge. Optimising the aligned DLFs yields an increase in strength of ~50% over an unoptimised baseline structure. In the case of variable orientation DLFs, Pareto fronts are provided for multiple performance objectives, demonstrating the trade-offs in performance attainable under different loading conditions. The optimisation framework uses Gaussian processes (GPs) as surrogate models for FEA models at several fidelities for various objectives. An expected hypervolume improvement acquisition function is used to select the next candidate evaluations based upon maximising the information gained per unit resource cost. This ensures efficient evaluation of maxima and Pareto fronts for single- and multi-objective optimisation respectively. The performance of this framework is compared to single-fidelity optimisation, showing a significant improvement in convergence costs. The framework is the first to combine multi-fidelity damage modelling with a data-driven BO routine for composites.
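
As a stripped-down illustration of the surrogate/acquisition cycle at the heart of such a framework (single-objective, single-fidelity, with an expected improvement acquisition; the poster's framework instead uses multi-fidelity GPs with expected hypervolume improvement, and the candidate set here is a crude random search):

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def expected_improvement(mu, sigma, best):
        # Minimisation convention: improvement over the best value seen so far.
        z = (best - mu) / np.maximum(sigma, 1e-9)
        return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    def bo_minimise(f, bounds, n_init=5, n_iter=20):
        """Minimise a 1D black-box f over bounds = (low, high)."""
        X = np.random.uniform(*bounds, size=(n_init, 1))
        y = np.array([f(x) for x in X]).ravel()
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        for _ in range(n_iter):
            gp.fit(X, y)                                        # refit surrogate
            cand = np.random.uniform(*bounds, size=(512, 1))    # candidate set
            mu, sd = gp.predict(cand, return_std=True)
            x_next = cand[np.argmax(expected_improvement(mu, sd, y.min()))]
            X = np.vstack([X, x_next])
            y = np.append(y, f(x_next))                         # evaluate objective
        return X[np.argmin(y)], y.min()

    # Example: recovers the minimiser of a simple quadratic near x = 0.3.
    x_best, y_best = bo_minimise(lambda x: float((x - 0.3) ** 2), (0.0, 1.0))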