This field studies large systems of interacting particles, using statistical mechanics to investigate how the interactions shape the system's collective behaviour. Research topics include phase transitions, the fundamental laws governing atomic and molecular behaviour, and large-scale phenomena such as clustering, mean-field behaviour and stability. Examples include the Ising model, voter models, exclusion processes, percolation, and many more. Models with random environments can also be considered; these introduce an additional layer of randomness, which influences the rates at which events or interactions occur, and present demanding mathematical challenges.
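As a toy illustration of one of the models mentioned above, the following sketch runs Metropolis dynamics for the two-dimensional Ising model on a small torus; the function name, lattice size, inverse temperature and step count are illustrative choices, not canonical ones.

```python
import math
import random

def ising_metropolis(n=10, beta=0.4, steps=20000, seed=0):
    """Metropolis dynamics for the 2D Ising model on an n x n torus.

    Returns the final spin configuration and the magnetisation per site.
    Parameter values are illustrative only.
    """
    rng = random.Random(seed)
    spins = [[rng.choice([-1, 1]) for _ in range(n)] for _ in range(n)]
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        # Sum of the four nearest-neighbour spins (periodic boundary).
        nb = (spins[(i - 1) % n][j] + spins[(i + 1) % n][j]
              + spins[i][(j - 1) % n] + spins[i][(j + 1) % n])
        dE = 2 * spins[i][j] * nb  # energy change if spin (i, j) is flipped
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] *= -1
    m = sum(sum(row) for row in spins) / n ** 2
    return spins, m
```

Near the critical inverse temperature (about 0.44 on the square lattice) such local dynamics slow down dramatically, which is one reason questions of mixing and metastability for these models attract so much attention.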
Integrable probability is a research area that focuses on the study of probability models that arise in integrable systems, such as random matrix theory, interacting particle systems, and stochastic partial differential equations. The main goal is to understand the behavior of these models and derive exact solutions using techniques from integrable systems theory, such as the Bethe ansatz and Riemann-Hilbert methods. Integrable probability has applications in physics, statistics, and computer science.
Particle processes in which particles are assigned a mass, move randomly in space and undergo random splitting are called branching processes. More generally, one may think of random tree-like structures evolving in space and time. Examples include branching Markov processes, continuous-state branching processes, superprocesses, coalescent and (growth-)fragmentation processes, Fleming-Viot processes and continuous random trees. One is interested in both the propagation of mass and the evolution of population density.
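The simplest branching process is the discrete-time Galton-Watson process; the sketch below simulates its generation sizes for a user-supplied offspring distribution (the function name and the distribution used in the example are illustrative).

```python
import random

def galton_watson(offspring_probs, generations, seed=0):
    """Generation sizes of a Galton-Watson process started from one ancestor.

    offspring_probs[k] is the probability that an individual has k children.
    """
    rng = random.Random(seed)
    sizes = [1]
    for _ in range(generations):
        children = 0
        for _ in range(sizes[-1]):
            # Sample an offspring count by inverting the distribution function.
            u, k = rng.random(), 0
            while u > offspring_probs[k]:
                u -= offspring_probs[k]
                k += 1
            children += k
        sizes.append(children)
        if children == 0:  # extinction: the process is absorbed at zero
            break
    return sizes
```

When the mean offspring number is at most one the population dies out almost surely (outside the degenerate case), while above one it survives with positive probability; this dichotomy is already visible in simulations such as `galton_watson([0.25, 0.5, 0.25], 20)`.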
In recent years, there has been a focus on Markov processes with path discontinuities. Many such processes are, at their heart, built around an underlying Poisson-type point process on various structural spaces. These include Lévy processes, self-similar Markov processes and Markov additive processes. Of particular interest are path decompositions that break the path of the process into smaller correlated constituent parts, thus allowing a characterisation of large- and small-scale behaviour, in space and time.
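The simplest Lévy process with jumps is the compound Poisson process, whose path is built directly from a Poisson point process of jump times; a minimal sketch (the function name and parameters are illustrative):

```python
import random

def compound_poisson_path(rate, T, jump_sampler, seed=0):
    """Jump times and running values of a compound Poisson process on [0, T].

    Inter-arrival times are exponential with the given rate; jump sizes are
    drawn by jump_sampler, a function of the random generator.
    """
    rng = random.Random(seed)
    t, x = 0.0, 0.0
    times, values = [0.0], [0.0]
    while True:
        t += rng.expovariate(rate)  # next jump after an exponential wait
        if t > T:
            break
        x += jump_sampler(rng)
        times.append(t)
        values.append(x)
    return times, values
```

For example, `compound_poisson_path(2.0, 10.0, lambda r: r.gauss(0.0, 1.0))` gives a path with Gaussian jumps at rate 2; between the recorded times the path is constant.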
Stochastic models of evolution predict patterns of genetic diversity due to evolutionary forces such as random mating, mutation, natural selection, and fluctuations in population size. They form the foundations of statistical inference methods for DNA sequence data, and are also drivers of research in probability and stochastic processes. Examples include the Wright-Fisher diffusion and the Kingman coalescent, as well as more general (potentially measure-valued) jump diffusions and branching-coalescing random graphs.
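As a minimal sketch of the simplest such model, the discrete Wright-Fisher dynamics below track an allele frequency in a finite population; the Wright-Fisher diffusion mentioned above arises as a scaling limit as the population size grows. Names and parameter choices are illustrative.

```python
import random

def wright_fisher(N, p0, generations, seed=0):
    """Allele-frequency trajectory in a haploid Wright-Fisher population of size N."""
    rng = random.Random(seed)
    p, freqs = p0, [p0]
    for _ in range(generations):
        # Each of the N offspring picks a parent uniformly at random, so the
        # new number of A-alleles is Binomial(N, p).
        count = sum(1 for _ in range(N) if rng.random() < p)
        p = count / N
        freqs.append(p)
        if p in (0.0, 1.0):  # fixation or loss: the boundary is absorbing
            break
    return freqs
```

Tracing ancestral lineages backwards in time through this same model leads to the Kingman coalescent.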
Stochastic analysis broadly includes the study of the properties and behaviour of continuous stochastic processes. The theory of rough paths provides a path-wise framework for the study of stochastic differential equations. It builds on concepts such as the signature of a path, which are not only of paramount mathematical interest, but have far-reaching implications in statistics and machine learning. Particular interest also lies in the study of hypoelliptic diffusions, which sits at the interface between stochastic analysis and differential geometry, revealing links between geometric features of the state space and probabilistic aspects.
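To make the notion of a path signature concrete, the sketch below computes its first two levels (the increment and the iterated integrals, whose antisymmetric part is the Lévy area) for a piecewise-linear path, building up segment by segment with Chen's identity; the function name is illustrative.

```python
def signature_level2(points):
    """First two signature levels of the piecewise-linear path through `points`.

    Returns the total increment s1[i] and the iterated integrals s2[i][j].
    """
    d = len(points[0])
    s1 = [0.0] * d
    s2 = [[0.0] * d for _ in range(d)]
    for p, q in zip(points, points[1:]):
        inc = [q[k] - p[k] for k in range(d)]
        # Chen's identity for appending one linear segment:
        # s2 <- s2 + s1 (x) inc + (inc (x) inc) / 2
        for i in range(d):
            for j in range(d):
                s2[i][j] += s1[i] * inc[j] + inc[i] * inc[j] / 2.0
        for k in range(d):
            s1[k] += inc[k]
    return s1, s2
```

For the L-shaped path (0,0) → (1,0) → (1,1), the off-diagonal entries of the second level differ, recording the area enclosed between the path and its chord; this is the kind of geometric information the signature retains and a straight-line interpolation forgets.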
Stochastic partial differential equations (SPDEs) are mathematical models used to describe physical phenomena that exhibit random behavior over time and space. In particular, critical equations are at the heart of the research at Warwick. These are SPDEs where the balance between the nonlinear and linear terms in the equation is delicate, leading to critical phenomena such as power-law behavior and scaling limits. The study of SPDEs is an active area of research in mathematics and has applications in physics, biology and finance.
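A standard introductory SPDE is the stochastic heat equation with additive space-time white noise; the sketch below applies an explicit finite-difference scheme on [0, 1] with zero boundary values. Grid sizes and the noise discretisation are illustrative choices, and the explicit scheme is only stable when nu * dt / dx^2 is at most 1/2.

```python
import math
import random

def stochastic_heat(n=50, dt=1e-4, steps=200, nu=1.0, seed=0):
    """Explicit Euler scheme for du = nu * u_xx dt + dW on [0, 1],
    with Dirichlet (zero) boundary conditions.

    Space-time white noise is approximated on the grid by independent
    Gaussians of variance dt / dx per cell per step.
    """
    rng = random.Random(seed)
    dx = 1.0 / n
    u = [0.0] * (n + 1)
    for _ in range(steps):
        new = [0.0] * (n + 1)  # boundary entries stay zero
        for i in range(1, n):
            lap = (u[i - 1] - 2.0 * u[i] + u[i + 1]) / dx ** 2
            noise = rng.gauss(0.0, 1.0) * math.sqrt(dt / dx)
            new[i] = u[i] + nu * dt * lap + noise
        u = new
    return u
```

Refining the grid reveals the roughness of the solution in one space dimension, and in higher dimensions the equation becomes singular, which is where renormalisation and criticality enter.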
Monte Carlo methods are ubiquitous in Statistics, Machine Learning and Computer Science more broadly, as well as in Physics, Biology and many other areas. The main aim is to analyse the convergence properties of general simulation algorithms (such as Markov chain Monte Carlo) that are widely used in practice, and to design new approximate and exact simulation algorithms, in order to develop Monte Carlo methods with superior convergence properties.
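The canonical example of such an algorithm is random-walk Metropolis; the sketch below targets a standard normal distribution (the function name, step size and chain length are illustrative choices).

```python
import math
import random

def metropolis_normal(steps=50000, step_size=1.0, seed=0):
    """Random-walk Metropolis chain targeting the standard normal density."""
    rng = random.Random(seed)

    def log_density(z):
        return -0.5 * z * z  # standard normal, up to an additive constant

    x, samples = 0.0, []
    for _ in range(steps):
        proposal = x + rng.uniform(-step_size, step_size)
        # Accept with probability min(1, pi(proposal) / pi(x)).
        if math.log(rng.random()) < log_density(proposal) - log_density(x):
            x = proposal
        samples.append(x)
    return samples
```

Only ratios of the target density appear, so the normalising constant is never needed; the quality of the resulting estimates hinges on how quickly the chain mixes, which is exactly the kind of convergence property studied here.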
The main aim of this core area in applied probability is to ascertain long term behaviour of stochastic systems. In the Markovian setting, a fundamental result concerns convergence in law to a unique equilibrium distribution, under ergodicity. The study of mixing times quantifies the rate of this convergence, and determines the first time at which the law is close to equilibrium. Examples include random walks on random or dynamic graphs, card shuffles and interacting particle systems; applications are found in randomised algorithms and beyond. Questions concerning convergence and rates of convergence extend to non-Markovian stochastic evolutions as well.
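For small chains this convergence can be tracked exactly rather than by simulation; the sketch below evolves the law of a lazy simple random walk on the n-cycle and records its total-variation distance to the uniform stationary distribution (the function name and the default sizes are illustrative).

```python
def tv_to_uniform(n=8, steps=60):
    """Total-variation distance to uniform, step by step, for a lazy simple
    random walk on the n-cycle started from a single vertex.

    The law of the walk is evolved exactly, with no random sampling.
    """
    dist = [0.0] * n
    dist[0] = 1.0
    tvs = []
    for _ in range(steps):
        new = [0.0] * n
        for i, p in enumerate(dist):
            new[i] += p / 2.0            # lazy: stay put with probability 1/2
            new[(i - 1) % n] += p / 4.0  # step left
            new[(i + 1) % n] += p / 4.0  # step right
        dist = new
        tvs.append(0.5 * sum(abs(p - 1.0 / n) for p in dist))
    return tvs
```

The mixing time is the first step at which this distance drops below a fixed threshold such as 1/4; for the cycle it grows like n squared, while other examples (card shuffles, expanders) mix far faster relative to their size.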
Stochastic control appears in many areas of engineering, biology and economics in which an agent, either by choice or by the action of inherent laws, aims to optimise the trajectory of a random dynamical system in order to minimise a target objective functional of its path. Strategies may include stopping optimally at a random time, or adjusting the trajectory of the random dynamical system. In some instances there may be more than one agent, each minimising its own target functional, in which case the control problem becomes a stochastic game.
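A classical finite-horizon optimal stopping problem can be solved by backward induction; the sketch below values an American put on a binomial tree (all names and parameters are illustrative), comparing at each node the payoff from stopping immediately with the discounted expected value of continuing.

```python
def american_put_binomial(S0=100.0, K=100.0, r=0.01, u=1.1, d=0.9, T=3):
    """Value of an American put (an optimal stopping problem) on a T-step
    binomial tree, solved by backward induction."""
    q = ((1.0 + r) - d) / (u - d)  # risk-neutral up-probability
    # Payoffs at maturity, indexed by the number of up-moves j.
    values = [max(K - S0 * u ** j * d ** (T - j), 0.0) for j in range(T + 1)]
    for t in range(T - 1, -1, -1):
        values = [
            max(
                max(K - S0 * u ** j * d ** (t - j), 0.0),                # stop now
                (q * values[j + 1] + (1.0 - q) * values[j]) / (1.0 + r)  # continue
            )
            for j in range(t + 1)
        ]
    return values[0]
```

The continuous-time analogue replaces this recursion with a free-boundary problem for the value function, and adding a second optimising agent turns the same structure into a stochastic game.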
Counting and understanding how many structures exist with a particular collection of properties has many applications to discrete geometric structures such as graphs and networks. Adding in the extra feature of randomness leads us to probabilistic combinatorics, which has many applications in physics, biology and computer science. Mathematical questions centre on the large-scale behaviour of random graphs and tree-like structures. The inherent combinatorial structure informs how to characterise geometric limits.
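A central example of such large-scale behaviour is the emergence of a giant component in the Erdős-Rényi random graph G(n, p) once the mean degree np exceeds one; the sketch below estimates the largest component size using a union-find structure (the function name and parameters are illustrative).

```python
import random
from collections import Counter

def largest_component(n, p, seed=0):
    """Size of the largest connected component of an Erdos-Renyi G(n, p) graph."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:  # each edge is present independently
                parent[find(u)] = find(v)
    return max(Counter(find(v) for v in range(n)).values())
```

Below the threshold all components are logarithmically small; above it, a unique component of linear size appears, occupying the fraction x of the vertices solving x = 1 - exp(-npx).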