Probert, WJM; Lakkur, S; Fonnesbeck, CJ; Shea, K; Runge, MC; Tildesley, MJ; Ferrari, MJ (2019) Context matters: using reinforcement learning to develop human-readable, state-dependent outbreak response policies. Phil. Trans. R. Soc. B 374. doi:10.1098/rstb.2018.0277
The number of possible epidemics of a given infectious disease that could occur on a given landscape is large for systems of real-world complexity. Furthermore, there is no guarantee that the control actions that are optimal, on average, over all possible epidemics are also best for each individual epidemic. Reinforcement learning (RL) and Monte Carlo control have been used to develop machine-readable, context-dependent solutions for complex problems with many possible realizations, ranging from video games to the game of Go. RL could be a valuable tool for generating context-dependent policies for outbreak response, though translating the resulting policies into simple rules that can be read and interpreted by human decision-makers remains a challenge. Here we illustrate the application of RL to the development of context-dependent outbreak response policies to minimize outbreaks of foot-and-mouth disease. We show that control based on the resulting context-dependent policies, which adapt interventions to the specific outbreak, results in smaller outbreaks than control based on static policies. We further illustrate two approaches for translating the complex machine-readable policies into simple heuristics that can be evaluated by human decision-makers.
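The Monte Carlo control approach named in the abstract can be sketched in miniature. The following is a hypothetical illustration only, not the paper's foot-and-mouth disease model: a toy epidemic MDP in which the state is the number of infected units (an assumption), the two actions are "wait" and "cull" (assumptions), and first-visit Monte Carlo control with an epsilon-greedy behavior policy learns a state-dependent, human-readable rule mapping each infection level to an action.

```python
# Hedged toy sketch of first-visit Monte Carlo control on an assumed
# epidemic MDP; the state space, actions, dynamics, and costs below are
# illustrative assumptions, not taken from the paper.
import random

random.seed(0)

ACTIONS = ("wait", "cull")
MAX_I = 5  # infection levels 1..MAX_I; reaching 0 ends the outbreak


def step(state, action):
    """One transition: culling tends to shrink the outbreak, waiting
    lets it drift. Reward is -state (ongoing infection cost), with an
    extra -1 intervention cost for culling."""
    if action == "cull":
        nxt = max(0, state - random.choice((1, 2)))
        reward = -state - 1
    else:
        nxt = min(MAX_I, state + random.choice((-1, 0, 1)))
        reward = -state
    return nxt, reward


def episode(Q, eps):
    """Roll out one outbreak from a random start, epsilon-greedy in Q."""
    s = random.randint(1, MAX_I)
    traj = []
    for _ in range(30):  # cap episode length
        if s == 0:
            break
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r = step(s, a)
        traj.append((s, a, r))
        s = s2
    return traj


def mc_control(n_episodes=5000, eps=0.1, gamma=0.95):
    Q = {(s, a): 0.0 for s in range(1, MAX_I + 1) for a in ACTIONS}
    counts = {k: 0 for k in Q}
    for _ in range(n_episodes):
        traj = episode(Q, eps)
        # accumulate discounted returns backwards through the trajectory
        G, returns = 0.0, []
        for (s, a, r) in reversed(traj):
            G = gamma * G + r
            returns.append(((s, a), G))
        # first-visit update: use the return at the earliest occurrence
        seen = set()
        for (sa, G) in reversed(returns):
            if sa in seen:
                continue
            seen.add(sa)
            counts[sa] += 1
            Q[sa] += (G - Q[sa]) / counts[sa]  # incremental mean
    # greedy, human-readable policy: infection level -> action
    return {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(1, MAX_I + 1)}


policy = mc_control()
print(policy)
```

The final `policy` dictionary is the kind of simple state-dependent lookup table the abstract contrasts with a static policy: rather than one fixed action, the recommended intervention depends on the current outbreak state.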