
Scaling Trust: An Anthropology of Cyber Security

A project supported by a Future Leaders Fellowship from UK Research and Innovation
Duration: Oct 2019 - Sept 2026
PI: Matt Spencer


With growing dependency on digital infrastructure, vulnerability to cyber disaster becomes a defining context for social life. In 2017, the WannaCry ransomware infected computers across large parts of the UK's National Health Service, leading to thousands of cancelled medical appointments; weeks later, the NotPetya malware caused chaos across many industries and continents. Later that year, the Equifax breach compromised the details of more than 140 million people, and in 2018, an outage at the UK bank TSB left thousands of customers defrauded. Behind each failure, whether to patch systems, to secure networks, or to implement good governance, is a problem of scales: the smallest “weak link” can end up compromising the security of the whole system. And because complete security is unattainable in practice, living well with infrastructures has become a question of trust.

It is the premise of this project that trust is not a “user’s problem”. Behind the services and utilities that we rely on in daily life, we can find an array of professional cyber security practices aiming to win and maintain trust, to question it and manage it across scales. Understanding how they go about doing that, their successes and failures, is the purpose of this study.

The Fellowship

Through interviews, ethnographic fieldwork and participatory workshops, the project examines the social processes through which knowledge and trust are negotiated in the security profession: how practitioners imagine the trust implicit in their cyber security evaluations, the ways in which they make trust explicit, or call things into question as technologies and processes demanding further evaluation.

The project examines the nature of assurance in cyber security, its history and the contemporary policy landscape. Assurance is examined as a problem explicitly formulated by practitioners, but also as an implicit, situated aspect of cyber security knowledge, this latter dimension being explored through the Trust Mapping participatory workshop methodology developed as part of the project.

The Trust Mapping methodology is designed to help participants visualise their perspective on the trustworthiness of technology, by mapping out the space of agents, flows of information and forms of knowledge in which they find themselves. In addition to serving as a research methodology, Trust Mapping is intended to be of wider use to the professional community, and resources will be made freely available on this page in the future.

A parallel strand of research questions the nature of security models through a systematic, empirical, comparative study of the 'Zero Trust' and 'Distributed Trust' paradigms. In collaboration with Dr Daniele Pizio, Postdoctoral Research Fellow on the project, we examine the history and social conditions of these models, and, engaging with the philosophy of science, we interrogate what a security model might be supposed to be.

Future research under the fellowship will focus on deception, trust in complex systems, and the production of the research monograph, Scaling Trust.

Outputs


Creative Malfunction: Finding Fault with Rowhammer

Cyber security aims to make technical systems responsive to an uncertain environment of new and previously unanticipated forms of malfunction, new kinds of vulnerability and techniques for exploiting them. This paper analyses security vulnerability research, working from a close reading of the Rowhammer problem with Dynamic Random Access Memory (DRAM). The history of Rowhammer's discovery and subsequent research provides an exceptionally clear case study for exploring the historicity of vulnerability: the very nature of the problem, and how it might be fixed, remained uncertain and provisional for many years as security practitioners explored its implications. From a philosophical point of view, these pragmatic challenges generate insights into the nature of technical function and normativity, and thus what it means for things to malfunction and to be repaired.

http://computationalculture.net/creative-malfunction-finding-fault-with-rowhammer/


Engines, Puppets, Promises: The Figurations of Configuration Management

One of the principal challenges for managing complex technical architectures is configuration: ensuring component parts are in their appropriate states. In this paper I examine the history and philosophy of the discipline of IT configuration management. Since the 1990s, configuration management has grappled with the problem of configuration at a fundamental level, reimagining not just what state things should be in but what kind of relation pertains between a source of truth and a recipient system. The need to address infrastructures at scale led not only to the development of decentralised systems for automated configuration management, but also to creative thinking about the nature of human-machine and machine-machine relations, most notably in the notion of 'smart intentional infrastructures' elaborated in Mark Burgess's Promise Theory. The essay draws on theories of figuration in order to bring the technical philosophy of configuration into dialogue with social science of infrastructures.
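The desired-state relation between a "source of truth" and a recipient system that the paper discusses can be sketched in a few lines. This is a minimal illustration of the general convergent-configuration idea, not the implementation of any particular tool (CFEngine, Puppet, or otherwise); all names and states below are hypothetical.

```python
# Minimal sketch of convergent, desired-state configuration management:
# the desired state acts as the "source of truth"; an agent repeatedly
# compares it with the actual state and applies only the differences.
# All service names and states here are hypothetical illustrations.

desired = {                 # source of truth (e.g. a declarative manifest)
    "ntp": "running",
    "telnet": "absent",
    "sshd": "running",
}

actual = {                  # current state of the recipient system
    "ntp": "stopped",
    "telnet": "running",
    "sshd": "running",
}

def converge(desired, actual):
    """Return the corrective actions needed, and apply them to `actual`."""
    actions = []
    for service, want in desired.items():
        have = actual.get(service)
        if have != want:
            actions.append((service, have, want))
            actual[service] = want  # idempotent: re-running changes nothing
    return actions

print(converge(desired, actual))  # first run repairs the drift
print(converge(desired, actual))  # second run: already converged
```

The point of the sketch is the relation it encodes: the agent does not execute a one-off script of commands, but repeatedly re-evaluates the gap between intended and actual state, so that repeated runs are harmless once the system has converged.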

Forthcoming


Characterising Assurance: Mistrust and Narrative in Cyber Security

This paper presents an analysis of recent transformations in cyber security assurance, a field of evaluation that aims to establish whether technical products are secure. Cyber security assurance has a history dating back to the 1970s, but has been subject to regular initiatives of reform. The paper examines current transformations of assurance in the UK context through an analysis of practitioners’ discourse: the stories told and retold about what the problems are that define the field. Such stories not only describe the problem, but also challenge the capacity of assurance certifications to be interpreted as objective assessments of security.

Mistrust, it is argued, can be understood in terms of the capacities of sceptical narratives to efface the power of certifications to be taken at ‘face value.’ A communication-centred view of mistrust is thus developed that is distinct from the conventional disposition-centred view (Carey 2017, Mühlfried 2018, 2019). The paper develops this point through the analysis of a series of narrative excerpts from interviews with cyber security practitioners, examining how assurance is characterised in them: the kinds of agents that we find within it and their relations to the production of objective evaluations of security.

An analysis of characterisation draws attention to the limitations of the palette of characters featuring in cyber security discourse, something that bolsters notions of a pressing need for experts and a ‘cyber skills gap’. Examining characterisation offers the possibility of drawing out what I call ‘counter-characterisation’, rendering problems in terms of characters that would otherwise be absent, and in closing the paper, I offer a comment on the critical potential of characterising assurance in terms of ‘caring’ characters.

Under review


In Fruitless Pursuit of Lemons: A critical perspective on information asymmetry in security economics

(with Madhav Raghavan)

This paper presents a critical perspective on information asymmetry in security economics. We examine what we call the ‘cyber lemons claim’, a commonly made claim that security problems can be understood as resulting from a particular kind of market failure. Drawing on George Akerlof’s well-known model, proponents of the cyber lemons claim argue that various technology markets are dominated by insecure products due to the effects of information asymmetry: a ‘market for lemons’ emerging as a result of the difficulties buyers have in judging product quality and the resulting incentives of sellers to misrepresent the quality of their products. Characterising cybersecurity in this way leads to a range of policy recommendations designed to correct the information asymmetry. However, we suggest that the claim is not nearly so well justified as its proponents assume. We examine how authors in security economics justify the claim, and, drawing on the philosophy of economics, we argue that it is, at best, an abductive inference to a possible explanation, and that even this rests upon non-trivial and insufficiently examined assumptions. We conclude that security economics should pay greater attention to the practices of modelling at the heart of the epistemic tradition of microeconomics. A more nuanced understanding of model-based inference would support more carefully justified policy recommendations, and in avoiding blanket explanations, it would help to open up the field of cybersecurity to diverse interdisciplinary perspectives.
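The unravelling dynamic behind the 'lemons' model that the paper critiques can be sketched numerically. This is a stylised textbook version of Akerlof's logic, not the paper's own analysis, and its parameters are illustrative assumptions only: quality is uniform on [0, 1], a seller values a product at its quality q, and a buyer at 1.5q but cannot observe q.

```python
# Stylised sketch of the Akerlof "market for lemons" dynamic.
# Assumptions (for illustration only): quality q ~ uniform on [0, 1];
# seller's reservation value is q; buyer's valuation is 1.5 * q, but
# quality is unobservable, so buyers offer the expected value of
# whatever is actually for sale at the current price.

def market_price_sequence(p0=1.0, rounds=30):
    """Iterate the buyer's offer. Only sellers with quality <= price
    sell, so average quality on offer is price / 2 (uniform case), and
    the buyer's revised offer is 1.5 * (price / 2) = 0.75 * price."""
    prices = [p0]
    for _ in range(rounds):
        avg_quality_on_offer = prices[-1] / 2       # adverse selection
        prices.append(1.5 * avg_quality_on_offer)   # revised offer
    return prices

prices = market_price_sequence()
# Each round, better-quality sellers exit and the offer falls, so the
# price unravels toward zero, even though trades at 1.5x seller value
# would benefit both sides if quality were observable.
print(prices[0], prices[1], prices[-1])
```

Whether this elegant dynamic actually describes any given technology market is precisely the inferential step the paper questions: the model shows that a lemons market is possible under these assumptions, not that one obtains.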

Under review