
Abstracts

Keynote Talks

Livia Bartok-Partay (University of Warwick) Computational thermodynamics: what can we learn about an interatomic potential?

In recent years we have been working on adapting the Bayesian statistical approach of nested sampling to the study of atomistic systems. Nested sampling automatically generates all the relevant atomic configurations, unhindered by high barriers, and one of its most appealing advantages is that the global partition function can be calculated very easily, so that thermodynamic properties such as the heat capacity or the compressibility become accessible. Nested sampling provides an unbiased and exhaustive exploration, starting from the high-energy region of the potential energy landscape (gas-phase configurations) and progressing towards the ground-state structure (crystalline solid) through a series of nested energy levels, estimating the corresponding phase-space volume of each. In this way the method samples the different basins in proportion to their volume and, instead of providing an exhaustive list of local minima, it identifies the thermodynamically most relevant states without any prior knowledge of the structures or phase transitions.
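
As an illustration of the post-processing step (a minimal sketch assuming a sequence of nested-sampling energies and the standard geometric estimate of the shrinking phase-space volume; the function name and values are illustrative, not taken from this work):

    import numpy as np

    # Hedged sketch: estimate thermodynamic averages and the configurational
    # heat capacity from a nested-sampling run with K walkers. `energies` is
    # the sequence of energies removed at each iteration; the remaining
    # phase-space volume is assumed to shrink geometrically, Gamma_i ~ (K/(K+1))**i.
    def thermodynamics(energies, K, temperatures, k_B=1.0):
        energies = np.asarray(energies, dtype=float)
        i = np.arange(len(energies))
        weights0 = (K / (K + 1.0)) ** i / (K + 1.0)   # Gamma_{i-1} - Gamma_i
        out = []
        for T in temperatures:
            beta = 1.0 / (k_B * T)
            w = weights0 * np.exp(-beta * (energies - energies.min()))  # shifted for stability
            Z = w.sum()
            E = np.sum(w * energies) / Z
            E2 = np.sum(w * energies ** 2) / Z
            Cv = (E2 - E ** 2) / (k_B * T ** 2)       # peaks signal phase transitions
            out.append((T, E, Cv))
        return out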

The use and advantages of the nested sampling method have been demonstrated by sampling the potential energy landscapes of several model systems, allowing us to calculate their complete pressure-temperature phase diagrams. These calculations often result in the discovery of previously unknown thermodynamically stable solid phases, highlighting that the macroscopic properties of interatomic potential models can be very different from what is expected or intended. For example, in recent work we have shown how nested sampling can be used to identify weaknesses of a machine-learned model and then guide the choice of training configurations for improving the model's performance.

Alex Robertson (University of Warwick)

Partnering modelling and experiment to yield fundamental insights into defects in 2D materials

Defects in materials strongly influence their properties, a fact that can be used to tailor them to fit a desired application. This is used to great effect by the semiconductor industry, with atomic dopants allowing for the manipulation of the electronic properties in microelectronics. Understanding the relationship between defects and properties requires the coordination of both experimental and modelling approaches, ideally harnessing the mutually complementary advantages of each. I will present some of my research into atomic defects in two-dimensional materials, where I have used atomic-resolution imaging via transmission electron microscopy (TEM) to resolve the nature of defects. This experimental dataset can then serve as a foundation for the modelling to be reliably built upon, with the modelling in turn allowing us to understand the influence these defects have on the material's properties. I will then show some research where we have used this approach to understand water desalination and the development of new catalysts.

Presentations from HetSys Cohort 2

Connor Allen

Development of workflows for modelling bulk materials


Sampling the potential energy surface to create ab initio datasets for machine-learned interatomic potentials (MLIPs) is an expensive process. A framework has been developed using non-diagonal supercells (Lloyd-Williams and Monserrat 2015 Phys. Rev. B 92 184301), allowing one to efficiently sample perturbative properties in the small-displacement regime. A workflow has also been produced for calculating the melting point using interface pinning (Pedersen et al. 2013 Phys. Rev. B 88 094101), which will achieve near-quantum accuracy when combined with MLIPs. This work is ultimately directed towards building an accurate multiphase MLIP for Ti.
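
A minimal sketch of the final step of such a melting-point workflow (assuming interface-pinning runs at a few temperatures each return an estimate of the solid-liquid chemical-potential difference; all numbers are illustrative, not Ti results):

    import numpy as np

    # dmu(T): solid-liquid chemical potential difference estimated from the
    # average bias force in pinned two-phase runs (Pedersen et al. 2013).
    T = np.array([1900.0, 1940.0, 1980.0, 2020.0])   # K, illustrative
    dmu = np.array([0.021, 0.009, -0.004, -0.017])   # eV/atom, illustrative

    slope, intercept = np.polyfit(T, dmu, 1)
    T_melt = -intercept / slope                      # temperature where dmu = 0
    print(f"Estimated melting point: {T_melt:.0f} K")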

Adam Fisher

First-Principles Validation of Energy Barriers in Ni3Al

Precipitates in Nickel-based superalloys form during heat treatment on a time scale inaccessible to direct Molecular Dynamics simulation, but can be studied using kinetic Monte Carlo (KMC).
This requires reliable values for the barrier energies separating distinct configurations over the trajectory of the system.
In this study, we validate barriers in Ni3Al, found with the activation relaxation technique nouveau (ARTn) using a published potential for the atomic interactions, against first-principles methods. For this we use the density functional theory (DFT) code CASTEP. We thus create a continuous validation chain from first principles to large-scale KMC.
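
For context, a minimal sketch of a single rejection-free KMC step built on such barriers (the barrier values and attempt frequency are illustrative, not ARTn output):

    import numpy as np

    rng = np.random.default_rng(0)
    k_B = 8.617e-5                                   # eV/K
    T = 1000.0                                       # K
    nu = 1.0e13                                      # attempt frequency, 1/s (assumed)
    barriers = np.array([0.45, 0.62, 0.71])          # eV, illustrative barrier energies

    rates = nu * np.exp(-barriers / (k_B * T))       # harmonic TST rates
    total = rates.sum()
    event = rng.choice(len(rates), p=rates / total)  # pick an event proportional to its rate
    dt = -np.log(rng.random()) / total               # advance the clock
    print(f"chose event {event}, time advanced by {dt:.3e} s")
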
Peter Lewin-Jones

Computational Modelling to Predict Transitions in Drop Impact


Collisions and impacts of drops are critical to numerous processes, including raindrop formation, inkjet printing and spray cooling. Whether drops bounce or make contact depends in a complex way on the properties of the drops and of the gas layer between them. We have developed a novel computational model for the collision and impact of drops, which uses a lubrication framework incorporating gas-kinetic effects. Our simulations show strong agreement with experiments on impacts and collisions. Our model enables us to explore the parameter space, with the aim of predicting the minimum gas-film thickness and the critical impact speed for contact to occur.
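
For orientation, models of this type evolve the gas-film thickness h with a Reynolds lubrication equation carrying a gas-kinetic correction; a generic form (not necessarily the exact formulation used in this work) is

    \frac{\partial h}{\partial t} = \nabla \cdot \left( \frac{h^{3}}{12\,\mu_g}\, f(\mathrm{Kn})\, \nabla p \right), \qquad \mathrm{Kn} = \frac{\lambda}{h},

where f(Kn) -> 1 recovers the continuum limit and f(Kn) ≈ 1 + 6 Kn for a first-order slip model, so rarefaction matters most in the thinnest parts of the film.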

Charlotte Rogerson

Framework for optimizing free parameters in ICF Hydrodynamic simulations using Gaussian Process Surrogate models

Nuclear fusion provides access to a clean, unlimited and reliable power source. One of the ways to achieve fusion is through Inertial Confinement Fusion (ICF). The design and interpretation of ICF experiments depend on accurate, predictive numerical calculations with well-defined uncertainties. The success of these calculations depends on high-quality equation of state (EoS) data. Of the several competing EoS models that currently exist, none matches all the experimental data well; each performs well only in a subset of the relevant parameter space. The hydrodynamic codes used to simulate ICF experiments also have many free parameters which need to be optimised against experimental data, with the associated uncertainties quantified. The work presented here details the development of a framework which implements a Gaussian Process (GP) surrogate model in order to speed up the parameter optimisation used in ICF ensemble simulations. The method has been applied to experimental data from the OMEGA laser facility, and example applications from spherical [1] and planar [2] shock-timing data will be presented. These shock-timing experiments are simulated using the 1D hydrodynamics code Freyja, but the method is equally applicable to multidimensional codes, where the computational speed-up of using a GP becomes more apparent. We will discuss possibilities for combining 1D and multidimensional training sets. As the propagation of shocks in the early stages of an ICF implosion is sensitive to the EoS model used, the framework has also been applied to a variety of different EoS models to assess how well each fits the experimental data. The GP surrogate model is trained on the resulting shock-merger time, and the resulting fit to the experimental shock-velocity profile will be assessed.
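
A minimal sketch of the surrogate step (one free parameter mapped to a simulated shock-merger time; the training data are illustrative stand-ins, not Freyja output):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    # Hypothetical training set: a hydrocode free parameter vs shock-merger time.
    theta_train = np.array([[0.04], [0.06], [0.08], [0.10], [0.12]])
    t_merge = np.array([2.95, 2.80, 2.68, 2.59, 2.52])      # ns, illustrative
    t_exp = 2.65                                             # ns, "measured" value

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
    gp.fit(theta_train, t_merge)

    theta_grid = np.linspace(0.04, 0.12, 200).reshape(-1, 1)
    mean, std = gp.predict(theta_grid, return_std=True)
    best = theta_grid[np.argmin((mean - t_exp) ** 2)]        # cheap optimisation on the surrogate
    print(f"best-fit parameter ~ {best[0]:.3f}")
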
[1] D. Cao et al., Phys. Plasmas 25, 052705 (2018)
[2] V. N. Goncharov et al., Phys. Plasmas 13, 012702 (2006)

Iain Best

Uncertainty Quantification in Atomistic Simulations using Interatomic Potentials

Atomistic simulations in several fields frequently rely on interatomic potentials (IPs) to predict physical output quantities of interest (QoIs). This is due to the vast decrease in the cost of simulation when compared to ab initio approaches such as DFT, and the correspondingly far greater accessibility of more complex QoIs at larger time and length scales. However, any move away from first-principles methods incurs a loss in accuracy and an increase in the uncertainty of QoIs, since we cannot expect a parameterised IP to reproduce the true potential energy surface of a given system. This uncertainty due to utilising an IP can arise from several sources: the parametric uncertainty from not knowing the ‘true’ parameters, uncertainty from incomplete or noisy training data in the case of machine learning IPs, or model-form error arising from limitations in the functional form of the model, amongst others.
A complete method of placing meaningful error bars on output QoIs, by propagating uncertainties in a model through simulation to output(s), remains an important goal in multiscale materials modelling. Previous work towards this has been focused on ensemble/committee methods [1], producing a set of likely parameters by minimisation of a loss function aimed at matching DFT forces and energies.
We approach this problem from a robust statistical viewpoint by recasting model calibration as a Bayesian inverse problem; assuming prior distributions for IP parameters, formulating our likelihood based on observed data and model parameters, and finally forming a posterior distribution of coefficients. Sampling from this distribution, we form ensembles of possible models, perform simulations for each model and in this way, estimate the uncertainty in output QoIs. Furthermore, for models/quantities which do not easily admit a Bayesian approach, we can ensure calibrated uncertainties in QoIs via conformal prediction [2].
This approach can be utilised for both fixed-form IPs and machine learning IPs, for a wide range of possible QoIs; recent focus, however, has been on relatively simple quantities such as elastic constants, vacancy formation energies and bulk moduli of silicon, using different atomic cluster expansion (ACE) [3,4] potentials.
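
As a concrete illustration of the conformal-prediction step mentioned above (split conformal with absolute-error scores; the calibration data are synthetic):

    import numpy as np

    rng = np.random.default_rng(1)
    true_cal = rng.normal(100.0, 5.0, size=50)          # reference QoI values (synthetic)
    pred_cal = true_cal + rng.normal(0.0, 2.0, size=50) # IP-based predictions (synthetic)

    alpha = 0.1                                          # target 90% coverage
    scores = np.sort(np.abs(true_cal - pred_cal))        # nonconformity scores
    n = len(scores)
    q = scores[int(np.ceil((n + 1) * (1 - alpha))) - 1]  # conformal quantile

    new_prediction = 103.2                               # a fresh model prediction
    print(f"90% prediction interval: [{new_prediction - q:.2f}, {new_prediction + q:.2f}]")
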
[1] S. Longbottom and P. Brommer, "Uncertainty quantification for classical effective potentials: an extension to potfit", arXiv (2018)
[2] A. N. Angelopoulos and S. Bates, "A gentle introduction to conformal prediction and distribution-free uncertainty quantification", arXiv (2021)
[3] R. Drautz, "Atomic cluster expansion for accurate and transferable interatomic potentials", Phys. Rev. B 99, 014104 (2019)

Alisdair Soppitt

Model for the simulation of reaction-mixing processes at boundary layers

We present a computational algorithm to model the turbulent mixing and transport of scalars through a channel in the presence of reactive boundary conditions, and to obtain a PDF of their concentrations through time and space. A Lagrangian stochastic particle technique is used to model the scalar transport. We consider a system with a partially adsorbing boundary condition, modelled through the coarse-graining of small-scale linear reactions. We present a nonlinear dependence of the boundary reaction rate (modelling sorption) on the turbulent frequency, and a comparison with DNS data produced by a lattice Boltzmann method.
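
A minimal sketch of the particle step with a partially adsorbing wall (pure diffusion, with an absorption probability per wall hit; all parameters are illustrative rather than calibrated to DNS):

    import numpy as np

    rng = np.random.default_rng(2)
    n_particles, n_steps = 10_000, 2_000
    dt, D, p_absorb = 1.0e-4, 1.0, 0.2       # time step, diffusivity, wall absorption probability

    y = rng.uniform(0.0, 1.0, n_particles)   # channel occupies 0 < y < 1
    alive = np.ones(n_particles, dtype=bool)

    for _ in range(n_steps):
        y[alive] += np.sqrt(2.0 * D * dt) * rng.standard_normal(alive.sum())
        y = np.where(y > 1.0, 2.0 - y, y)                       # top wall: reflective
        hit = alive & (y < 0.0)                                  # bottom wall: partially adsorbing
        absorbed = hit & (rng.random(n_particles) < p_absorb)
        alive &= ~absorbed
        y = np.where(y < 0.0, -y, y)                             # survivors are reflected

    print(f"fraction of scalar adsorbed: {1.0 - alive.mean():.3f}")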

Lakshmi Shenoy

Modelling Fracture and Defects in Steel

Irradiation and extreme temperatures in nuclear reactor pressure vessels (RPVs) lead to ageing and embrittlement of their structural steels. As it is challenging to study ageing processes inside RPVs experimentally, there is interest in modelling them atomistically. While ab initio calculations can give us high-accuracy predictions, using DFT directly to simulate large-scale phenomena like segregation of point defects or brittle fracture is prohibitively expensive. Machine-learned interatomic potentials such as the Gaussian approximation potential (GAP) and the atomic cluster expansion (ACE) allow us to access larger length scales with close to ab initio accuracy at a fraction of the computational cost. The first half of the talk will be on developing a GAP for the prototypical austenitic steel Fe70Cr20Ni10 and using it to study bulk and point defects in this concentrated alloy. The second half of the talk will be on using an ACE for α-iron to study crack propagation along different orientations. The stability range of fracture will be mapped via the numerical-continuation technique (NCFlex) proposed by Buze et al. [1]. Preliminary tests on incorporating crack-dislocation interaction into NCFlex will be discussed. Both case studies are steps towards atomistically modelling the roots of ageing phenomena in RPV steels with higher accuracy.

Omar-Farouk Adesida

Exploring the phase behaviour of hard sphere dimers: a nested sampling approach

Models of simple organic molecules, such as alkanes, have been shown to display a rich variety of phase behaviour despite their relatively simple chemistry. However, elucidating phase diagrams and thermodynamic properties from these models can be quite tedious, requiring several different techniques to resolve them.

Even simple models employing hard spheres still display intricate behaviour. In this work, we demonstrate an approach employing the nested sampling algorithm to quickly generate phase diagrams for these molecules, integrating over the partition function without the need for prior knowledge of the system.

As a preliminary approach to this problem, we take the simplest possible model for an alkane, a hard-sphere dimer consisting of two spheres at a fixed distance from each other, and resolve the thermodynamic properties of this system while varying the separation between the two spheres.

Joe Gilkes

Advances in automated long-timescale chemical breakdown simulations

Computational kinetic modelling of long-term chemical breakdown processes is a valuable tool in materials design. It requires the construction of large networks of kinetically relevant chemical reactions, each requiring accurate kinetic data that is challenging to obtain. We showcase an automated iterative network-exploration algorithm coupled to a machine learning model for predicting activation energies, and show how symbolic-numeric modelling of the generated networks allows flexible, efficient computation of kinetic profiles over continuously variable temperature regimes. We discuss the problems encountered with this approach and demonstrate a discrete kinetic approximation that greatly reduces the computational cost of long-timescale kinetic modelling.
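
A minimal sketch of the kinetic-profile step for a toy two-reaction network (Arrhenius rates from assumed activation energies, integrated with a stiff ODE solver; not the network-exploration code itself):

    import numpy as np
    from scipy.integrate import solve_ivp

    k_B = 8.617e-5                   # eV/K
    A_fac = 1.0e13                   # pre-exponential factor, 1/s (assumed)
    Ea = {"A->B": 1.1, "B->C": 1.3}  # eV, e.g. ML-predicted activation energies (illustrative)

    def rhs(t, c, T):
        k = {r: A_fac * np.exp(-E / (k_B * T)) for r, E in Ea.items()}
        cA, cB, cC = c
        return [-k["A->B"] * cA,
                k["A->B"] * cA - k["B->C"] * cB,
                k["B->C"] * cB]

    sol = solve_ivp(rhs, (0.0, 1.0e6), [1.0, 0.0, 0.0], args=(500.0,),
                    method="LSODA", rtol=1e-8, atol=1e-12)
    print("final concentrations:", sol.y[:, -1])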

Steven Tseng

A unified approach to solubility prediction: combining graph representations with molecular descriptors

Graph neural networks (GNNs) that learn representations of a molecule from its structure have shown great potential for molecular property prediction. This presentation will demonstrate how we have harnessed GNNs to encode the structural information of molecules and combined them with molecular descriptors that capture physicochemical properties for the purpose of solubility prediction. We will provide a summary of the theory and techniques employed and then highlight results from computational experiments. We hope our exploration will illustrate the effectiveness of this framework for solubility prediction and facilitate more accurate models in the future.
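
A minimal sketch of the combination idea (an untrained, single-pass message-passing embedding pooled over atoms and concatenated with precomputed descriptors; a real model would use a trained GNN, and every value here is illustrative):

    import numpy as np

    def combined_features(adj, atom_feats, descriptors, rng):
        W = rng.standard_normal((atom_feats.shape[1], 16))  # random stand-in for GNN weights
        node_emb = np.tanh(adj @ atom_feats @ W)            # one round of neighbour aggregation
        graph_emb = node_emb.mean(axis=0)                   # mean pooling over atoms
        return np.concatenate([graph_emb, descriptors])     # structure + physicochemistry

    rng = np.random.default_rng(3)
    adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # toy 3-atom chain
    atom_feats = rng.standard_normal((3, 8))                        # toy atom features
    descriptors = np.array([1.2, 0.3, 187.0])                       # e.g. logP, TPSA, MW (illustrative)
    x = combined_features(adj, atom_feats, descriptors, rng)
    print("combined feature vector length:", x.shape[0])            # 16 + 3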

Presentations from HetSys Cohort 3

Matt Nutter

TBC

The plasticity of tungsten is largely determined by the movement of screw dislocations through the crystal. Under typical experimental conditions, the dislocations propagate by thermally activated nucleation and propagation of kink-pairs, which requires simulation cells well beyond the limits of DFT. We are building upon an existing GAP, with the aim of obtaining near-quantum-accurate results at a reasonable cost. Under milder experimental conditions, the rare-event nature of the nucleation prohibits the use of direct MD, necessitating a higher-scale model which is parameterised (mostly) on the results of NEB calculations.
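
For context, kink-pair-controlled screw-dislocation motion is commonly described by a thermally activated rate of the form (a standard parameterisation, not necessarily the one adopted here)

    v(\tau, T) \propto \exp\left(-\frac{\Delta H_{\mathrm{kp}}(\tau)}{k_B T}\right), \qquad \Delta H_{\mathrm{kp}}(\tau) = \Delta H_0 \left[ 1 - \left(\frac{\tau}{\tau_P}\right)^{p} \right]^{q},

where \tau_P is the Peierls stress and the enthalpy parameters are precisely the quantities that NEB calculations of kink-pair formation can inform.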

Geraldine Anis

 

Dislocation dynamics in Ni-based superalloys from atomistic simulations

Ni-based superalloys are important materials for high-temperature applications. Nanoscale precipitates in their microstructure hinder dislocation motion, which results in their extraordinary strengthening at elevated temperatures. In the present work, we study the motion of dislocations in these materials using molecular dynamics (MD) simulations. From our simulations, we extract the positions of edge dislocations moving under shear in pure Ni and pure Ni3Al. These are used to fit the parameters of an equation of motion using Differential Evolution Monte Carlo (DE-MC). This is the first step towards building a surrogate model to study dislocation-precipitate interactions.
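
A minimal sketch of the fitting step (a damped equation of motion fitted to a synthetic position-time trace; scipy's plain differential evolution stands in for DE-MC, and all values are illustrative):

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import differential_evolution

    b, tau = 2.5e-10, 50.0e6                 # Burgers vector (m), applied shear stress (Pa)
    t_obs = np.linspace(0.0, 1.0e-10, 50)    # s
    m_true, B_true = 1.0e-16, 1.0e-5         # effective mass and drag per unit length (illustrative)

    def trajectory(m, B):
        # m x'' + B x' = tau * b  (force per unit dislocation length)
        rhs = lambda t, y: [y[1], (tau * b - B * y[1]) / m]
        return solve_ivp(rhs, (t_obs[0], t_obs[-1]), [0.0, 0.0], t_eval=t_obs).y[0]

    rng = np.random.default_rng(4)
    x_obs = trajectory(m_true, B_true) + 1.0e-12 * rng.standard_normal(len(t_obs))  # "MD" data

    loss = lambda p: np.sum((trajectory(*p) - x_obs) ** 2)
    result = differential_evolution(loss, bounds=[(1e-17, 1e-15), (1e-6, 1e-4)], seed=4)
    print("fitted (m, B):", result.x)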

Thomas Rocke

Atomistic failure in III-V semiconductors

III-V alloys are common in semiconductor optoelectronic devices. The high energy conditions during device operation lead to growth of crystal dislocations, which cause loss of efficiency and eventual device failure. Using GAP, we hope to atomistically model absorption of point defects into the dislocation core at near-quantum accuracy in order to better understand the mechanisms behind dislocation growth in these devices.

Ben Gosling

Determining the presence of laser-plasma instabilities in particle-in-cell simulations

Using high-intensity lasers in Inertial Confinement Fusion (ICF) schemes leads to the presence of laser-plasma instabilities (LPI) in the form of three-wave coupling interactions. Particle-in-cell (PIC) methods simulate the physics inside the coronal plasma where these LPI arise. We will briefly discuss how we have identified the presence of particular instabilities, such as Stimulated Raman Scattering (SRS) and Two-Plasmon Decay (TPD), using the resonance matching conditions and fluid theory. We will then look at some of the large-scale simulations we have performed to examine the role of LPI in generating the 3/2-harmonic signal, as seen in the PALS laser experiment.
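
For reference, the resonance matching conditions used to identify a three-wave decay of the pump (\omega_0, \mathbf{k}_0) are

    \omega_0 = \omega_1 + \omega_2, \qquad \mathbf{k}_0 = \mathbf{k}_1 + \mathbf{k}_2,

with each daughter wave on its own dispersion relation, e.g. \omega^2 = \omega_{pe}^2 + c^2 k^2 for the scattered light wave in SRS and the Bohm-Gross relation \omega^2 = \omega_{pe}^2 + 3 k^2 v_{te}^2 for the electron plasma waves (both daughters in the case of TPD).
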
Oscar Holroyd

Linear quadratic regulation control for falling liquid films

We propose a new framework based on linear-quadratic regulation (LQR) for stabilising falling liquid films via injecting and removing fluid from the base at discrete locations. Our methodology bridges the gap between the reduced-order models accessible to the LQR controls and the full, nonlinear Navier-Stokes system describing the fluid flow. We find that not only is this technique successful, but that it works far beyond the anticipated range of validity of the reduced order models. The proposed methodology increases the feasibility of transferring robust control techniques towards real-world systems, and is also generalisable to other forms of actuation.
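
A minimal sketch of the LQR step on a generic reduced-order model x' = Ax + Bu (the matrices are random stand-ins, not a discretised film model):

    import numpy as np
    from scipy.linalg import solve_continuous_are

    rng = np.random.default_rng(5)
    n, m = 6, 2                       # number of film modes, number of base actuators
    A = rng.standard_normal((n, n))   # reduced-order dynamics (stand-in)
    B = rng.standard_normal((n, m))   # actuation via injection/suction slots (stand-in)
    Q = np.eye(n)                     # penalise deviation from the flat film
    R = 0.1 * np.eye(m)               # penalise actuation cost

    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)   # feedback gain, u = -K x
    print("closed-loop eigenvalue real parts:", np.sort(np.linalg.eigvals(A - B @ K).real))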

Ziad Fakhoury

Generating protein folding trajectories using contact-map-driven walks

Recent advances in machine learning methods have had a significant impact on protein structure prediction, but accurate generation and characterization of protein-folding pathways remains intractable. Here, we demonstrate how protein folding trajectories can be generated using a directed walk strategy operating in the space defined by the residue-level contact-map. This double-ended strategy views protein folding as a series of discrete transitions between connected minima on the potential energy surface. Subsequent reaction-path analysis for each transition enables thermodynamic and kinetic characterization of each protein-folding path. We validate the protein-folding paths generated by our discretized-walk strategy against direct molecular dynamics simulations for a series of model coarse-grained proteins constructed from hydrophobic and polar residues. This comparison demonstrates that ranking discretized paths based on the intermediate energy barriers provides a convenient route to generating physically-sensible folding ensembles. Importantly, by using directed walks in the protein contact-map space, we circumvent several of the traditional challenges associated with protein-folding studies, namely long time-scales required and unknown order parameters. As such, our approach offers a useful new route for studying the protein-folding problem.
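
A minimal sketch of the underlying representation (a binary residue contact map from C-alpha coordinates; the cutoff and coordinates are illustrative):

    import numpy as np

    def contact_map(coords, cutoff=8.0):
        diff = coords[:, None, :] - coords[None, :, :]
        dist = np.linalg.norm(diff, axis=-1)
        contacts = (dist < cutoff).astype(int)
        np.fill_diagonal(contacts, 0)            # ignore self-contacts
        return contacts

    coords = np.random.default_rng(6).uniform(0.0, 20.0, size=(30, 3))  # 30 toy residues
    cmap = contact_map(coords)
    print("number of contacts:", cmap.sum() // 2)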

Matyas Parrag

MemPrO: Membrane Protein Orientation in lipid bilayers

Membrane proteins play an important role in many vital systems of a cell, such as the transport of ions and raw materials, communication between adjacent cells, and antibiotic-resistance behaviours. Correctly orienting membrane proteins is almost always the first step in the molecular simulation and analysis of membrane-protein systems. The method presented aims to orient a wide range of proteins that interact with the membrane. This method enables the analysis of properties such as the width of intramembrane spaces, whilst also streamlining the setup of MD simulations for double-membrane-spanning proteins.

Anas Siddiqui

Machine-learned interatomic potentials for transition metal dichalcogenide Mo(1-x)W(x)S(2-2y)Se(2y) alloys

Machine-learned interatomic potentials (MLIPs) combine the predictive power of Density Functional Theory (DFT) with the speed and scaling of interatomic potentials, enabling theoretical spectroscopy to be applied to larger and more complex systems than is possible with DFT. In this work, we train an MLIP for quaternary transition metal dichalcogenide (TMDC) alloy systems of the form Mo1−xWxS2−2ySe2y, using the equivariant neural network (NN) MACE [1]. We demonstrate the ability of this potential to calculate vibrational properties of alloy TMDCs, including phonon spectra for pure monolayers, and VDOS and Raman spectra for alloys, retaining DFT-level accuracy while greatly extending the feasible system size and the degree of sampling over alloy configurations. We are able to characterise the Raman-active modes across the whole concentration range, particularly the “disorder-induced” modes. This potential can serve as a tool to aid experimentalists in studying and designing TMDC alloys for future applications.
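
A minimal sketch of one of the vibrational post-processing steps (VDOS as the Fourier transform of the velocity autocorrelation function from an MLIP-driven MD run; the velocities here are random noise standing in for a real trajectory):

    import numpy as np

    rng = np.random.default_rng(7)
    n_steps, n_atoms, dt_fs = 2048, 32, 1.0
    velocities = rng.standard_normal((n_steps, n_atoms, 3))   # placeholder MD velocities

    v = velocities.reshape(n_steps, -1)
    vacf = np.array([np.mean(np.sum(v[:n_steps - lag] * v[lag:], axis=1))
                     for lag in range(n_steps // 2)])
    vacf /= vacf[0]                                           # normalise

    vdos = np.abs(np.fft.rfft(vacf * np.hanning(len(vacf))))  # windowed spectrum
    freq_THz = np.fft.rfftfreq(len(vacf), d=dt_fs * 1e-3)     # fs -> ps gives THz
    print("VDOS evaluated on", len(freq_THz), "frequency points")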

Jeremy Thorn

Modelling Disorder in Amorphous Pharmaceuticals with NMR

Nuclear Magnetic Resonance (NMR) experiments provide an exquisite, experimentally obtainable fingerprint of the local structure and dynamics of an atomic system. NMR practitioners use Gauge Including Projector Augmented Wave (GIPAW) DFT calculations regularly to assist in the interpretation, refinement, and prediction of NMR spectra [1]. In particular, GIPAW calculated spectra can be used to refine partially resolved model structures. Recently, machine learned (ML) surrogate models have been developed for the GIPAW code [2]. These models use the Smooth Overlap of Atomic Positions (SOAP) [3] descriptor to describe the local atomic environment of an atom and exploit this locality to enable them to make predictions for very large systems that would have previously been too computationally intensive for plane-wave DFT codes. One such class of systems is that of amorphous materials, which have traditionally been a challenge to model due to the large simulation cells required to capture their intrinsically disordered nature. Amorphous phases of molecular solids are of key interest in the field of pharmaceuticals. This is because such phases are usually much more efficacious than their crystalline counterparts but are typically high free-energy states [4].
These new surrogate ML models open up a new world of modelling techniques for amorphous systems. We propose a molecular-dynamics-inspired method to sample ensembles of models corresponding to a set of experimental or high-accuracy computational features, such as the hydrogen NMR spectrum or the radial distribution function (RDF). By using the gradients of these features with respect to the atomic positions, we are able to obtain computational savings over traditional inverse Monte Carlo techniques, savings that are predicted to grow as the complexity of the systems of interest increases. I will present the results of the method in its current state, and discuss the challenges we face along with our proposed solutions.
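
A minimal sketch of the gradient-driven idea with a deliberately simple, differentiable feature (the mean pairwise distance standing in for an RDF or NMR feature; everything here is illustrative, not the proposed method itself):

    import numpy as np

    def feature_and_grad(x):
        n = len(x)
        diff = x[:, None, :] - x[None, :, :]
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, np.inf)                   # exclude self-pairs
        f = dist[np.isfinite(dist)].mean()               # mean pairwise distance
        grad = 2.0 * (diff / dist[..., None]).sum(axis=1) / (n * (n - 1))
        return f, grad

    rng = np.random.default_rng(8)
    x = rng.uniform(0.0, 10.0, size=(50, 3))             # toy atomic positions
    f_target = 6.0                                       # "experimental" feature value
    for _ in range(500):
        f, grad_f = feature_and_grad(x)
        x -= 1.0 * 2.0 * (f - f_target) * grad_f         # gradient step on (f - f_target)**2
    print("final feature value:", feature_and_grad(x)[0])
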
[1] J. R. Yates, C. J. Pickard and F. Mauri, Phys. Rev. B 76, 024401 (2007)
[2] M. Cordova, E. A. Engel, A. Stefaniuk, F. Paruzzo, A. Hofstetter, M. Ceriotti and L. Emsley, J. Phys. Chem. C 126 (39), 16710-16720 (2022)
[3] A. P. Bartók, R. Kondor and G. Csányi, Phys. Rev. B 87 (18), 184115 (2013)
[4] E. Vranic, Bosn. J. Basic Med. Sci. 4 (3), 35-39 (2004)

Dylan Morgan

Simulation of X-Ray Spectroscopy for Condensed Matter Systems

First principles simulations of x-ray photoemission spectroscopy (XPS) and near-edge x-ray absorption fine-structure (NEXAFS) crucially support the assignment of surface spectra composed of many overlapping signatures. Core-level constrained Density Functional Theory calculations based on the ΔSCF method are commonly used to predict relative XPS binding energy (BE) shifts but often fail to predict absolute BEs. The all-electron numeric atomic orbital code FHI-aims enables an accurate prediction of absolute BEs. However, the legacy code lacked computational scalability to address large systems and robustness concerning localisation of the core hole. We have since redesigned the legacy code, removing over 3000 lines of redundant code, and demonstrated massive improvements in the scaling whilst retaining the same functionality. Refactoring the code has allowed us to begin simulating core-level spectroscopic fingerprints of graphene moiré superstructures and provided an extensible platform to continue development for improved localisation techniques. Future plans involve adsorbing single atoms and nanoclusters of Ni and Pt at the defect sites. As spin-orbit coupling and relativistic effects are especially prevalent in these systems, we intend to use and continue developing the quasi-four-component method in FHI-aims to account for these effects in these systems. The ultimate long-term goal for this project is to create an accurate black box simulation toolkit, where the user can input a system, select a specific electronic orbital to eject or excite and produce a spectrum for XPS or NEXAFS complete with values for calculated binding energies.
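
For reference, the ΔSCF binding energy is a total-energy difference between the core-ionised and ground-state systems,

    \mathrm{BE} = E_{\mathrm{tot}}(N-1;\ \text{core hole}) - E_{\mathrm{tot}}(N),

so final-state relaxation is captured explicitly rather than estimated from ground-state orbital eigenvalues (a Koopmans-type approximation).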