JQ Smith and L Dodd
Regulating Autonomous Agents Facing Conflicting Objectives: A Command and Control Example
Abstract: UK military commanders have a degree of devolved decision authority delegated from command and control (C2) regulators, and they are trained and expected to act rationally and accountably. From a Bayesian perspective they should therefore be subjective expected utility maximizers, and in practice they largely appear to be so. However, when current tactical objectives conflict with broader campaign objectives there is a strong risk that fielded commanders will lose rationality and coherence. By systematically analysing the geometry of the expected utilities arising from a utility function with two attributes, we demonstrate in this paper that even when a remote C2 regulator can predict only the likely broad shape of her agents' marginal utility functions, it is still often possible for her to identify robustly those settings where a commander is at risk of making inappropriate decisions.
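As a hedged illustration of the kind of two-attribute conflict the abstract describes (this sketch is not taken from the paper; the attribute names, values, and linear utility form are all illustrative assumptions), the following shows how a subjective expected utility maximizer's preferred action can flip as the relative weight on a tactical attribute shifts against a campaign attribute:

```python
# Illustrative only: a decision-maker scores each action on two attributes,
# tactical gain (x1) and campaign alignment (x2), and maximizes a weighted
# linear combination of the two marginal utilities. The weight w is the
# (assumed) relative importance of the tactical attribute.

def expected_utility(x1, x2, w):
    """Linear two-attribute utility: w weights tactical gain against campaign alignment."""
    return w * x1 + (1 - w) * x2

# Hypothetical actions:
# Action A: high tactical gain, poor campaign alignment.
# Action B: modest tactical gain, good campaign alignment.
A = (0.9, 0.2)
B = (0.4, 0.8)

for w in (0.2, 0.5, 0.8):
    best = "A" if expected_utility(*A, w) > expected_utility(*B, w) else "B"
    print(f"w={w}: prefer {best}")
```

Under these made-up numbers the preferred action switches from B to A as w grows, which is the sort of regime change a remote regulator would want to anticipate even without knowing the agent's utility function exactly.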