
INFAIME Workshop

Ethical futures: Preparing for the impact of AI on the values and practice of medicine

25-26 September 2019, University of Warwick.

This workshop will address the ethical and societal implications of AI disruption to the work of doctors. We will investigate how AI assistance may change our understanding of the practice of medicine and the potential impact of AI on the core professional values that protect patients in their interactions with health services.

Work previously undertaken by doctors is now performed by other practitioners (nurse consultants, physician assistants, etc.) and by patients themselves (e.g. through online research). This trend aimed to release medical time for the very tasks now targeted by AI. Moreover, AI is outperforming doctors in diagnosis and prognosis, and is enabling increasingly autonomous surgical robots. AI assistance is likely to expand rapidly, bringing significant changes to existing medical roles and generating a need for new kinds of healthcare workers. Our first task will be to address the timely question of what it is doctors uniquely do that could (or should) not be AI-assisted or replaced, and why.

Medicine is defined both by the skills doctors possess and their practice of these skills according to established medical ethics. The prerequisite skills and ethics are derived from a shared, though sometimes contested, understanding of the scope and purpose of medical practice. AI threatens to disrupt these fundamental components.

A change of this potential magnitude requires ethical preparedness.

The workshop will set the necessary research agenda to ensure that any AI-related paradigm shift in medicine has strong ethical foundations.

Furthermore, AI assistance in healthcare has the potential to create a new schism between the regulation, practice and delivery of medical services in technologically endowed and non-technologically endowed countries. The workshop will explore how this divide could affect international professional guidelines and conventions, and what consequences it might have for professional practices that transcend national boundaries, such as the conduct and regulation of clinical trials and the governance of humanitarian medical intervention.

Ethical scrutiny of the implications of AI assistance in healthcare often focuses on two areas of concern: the use of personal data and the need to maintain public and professional trust. Our concern is the latter, noting that trustworthiness is a specifically human, relational quality, not applicable to machines or algorithms (which are better characterised in terms of reliability than trust). Accordingly, we will ask whether there is anything about the work doctors do that could or, importantly, should only be performed by a human, and explore the impact of potential answers on the values that should guide how AI-assisted healthcare is delivered. For example, the very concept of being a ‘patient’ implies a human-human relationship that is maintained by, and regulated according to, the norms and values that condition the expectations of both parties; such relationships may not exist in a recognisable form as healthcare is increasingly delivered with AI assistance.

Our overarching aim is, therefore, to set a new research agenda regarding the ethical implications of the integration of AI into medicine, focusing on the impact of AI on the values that inform and shape medical practice, professional roles and relationships.


Programme

Background material

Plenary abstracts

Venue at the University of Warwick

Venue FAQs