
Commercial Evidence of the Limitations of AI in Studying Medical Notes

Early artificial intelligence (AI) was based on simple Boolean rules. For example, if a pregnant woman’s previous baby weighed over 4.5 kg, then offer a glucose tolerance test or equivalent. But with the increasing success of AI, for example in chess and then in the game of Go,[1] there has been a flurry of excitement concerning the use of AI in medicine. There were some early successes, particularly with respect to image analysis,[2] but this was low-hanging fruit. It was soon realised that many medical outcomes are highly ambiguous; sepsis, for example. People became aware that AI could simply replicate the psychological or moral biases prevalent in society.[3] Most difficult of all is the disorganised and variable nature of the medical record itself.
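
By way of illustration, a rule of this kind amounts to little more than a few lines of code. The sketch below (in Python) shows the idea; the function name and the 4.5 kg threshold are taken only from the example above, not from any clinical guideline or real system.

    # Minimal illustration of an early rule-based ("Boolean") clinical decision rule.
    # The name and the 4.5 kg threshold mirror the example in the text above;
    # they are illustrative, not a real guideline or system.

    def should_offer_glucose_tolerance_test(previous_baby_weight_kg: float) -> bool:
        """Return True when the rule fires: the previous baby weighed over 4.5 kg."""
        return previous_baby_weight_kg > 4.5

    # Example usage
    if should_offer_glucose_tolerance_test(4.8):
        print("Offer a glucose tolerance test or equivalent.")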

At first, people thought that it would be possible to get around the problem by defining ontologies. That is to say, all data items of any importance would be classified by the context in which they arise. A great deal of activity took place, and the famous AI firm DeepMind was conscripted into medical service and fed with routine NHS data.[4]
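
To give a flavour of what ‘coding by ontology’ means in practice, the sketch below maps free-text phrases from a note onto structured concept codes. The terms and codes are invented for illustration and are not drawn from SNOMED CT or any real terminology.

    # Illustrative sketch of ontology-style coding: mapping free-text terms from a
    # clinical note onto structured concept codes. The codes below are invented for
    # illustration; real systems use terminologies such as SNOMED CT.

    CONCEPT_MAP = {
        "high blood pressure": "CONCEPT:HYPERTENSION",
        "hypertension": "CONCEPT:HYPERTENSION",
        "sugar diabetes": "CONCEPT:DIABETES_MELLITUS",
        "diabetes": "CONCEPT:DIABETES_MELLITUS",
    }

    def code_note(note_text: str) -> list[str]:
        """Return the concept codes whose terms appear anywhere in the note text."""
        text = note_text.lower()
        return sorted({code for term, code in CONCEPT_MAP.items() if term in text})

    print(code_note("Known hypertension; mother had sugar diabetes."))
    # ['CONCEPT:DIABETES_MELLITUS', 'CONCEPT:HYPERTENSION']

Even this toy example hints at why the approach struggles: negation, misspelling and context-dependent phrasing in real notes quickly defeat simple term matching.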

As careful academic studies revealed, AI does not overcome the problem of scrambled data. Doctors thought that the ‘magic’ of AI could untangle the non-coded parts of the medical record where the clinical logic unfolds. Data specialists naively thought that these diffuse records could be coded by means of ‘ontologies’. Neither side understood the complexity of the problem. Careful studies eventually revealed the underlying difficulty: something much more laborious than ontologies is required to ‘wrangle’ information from the medical record.[5] This will be a long-term undertaking, proceeding in short steps, and we must be patient and accept incremental gains. So excitement waned in the academic world, as reflected in previous articles in this news blog.[6-7] This was mirrored commercially. DeepMind was sold to Google in 2014 and, more recently, the firm’s AI assistant for doctors, Streams, has been discontinued. AI is suddenly not the quick fix that some people thought it would be.

Richard Lilford, ARC WM Director


References:

  1. Lilford RJ. Computer Beats Champion Player at Go – What Does This Mean For Medical Diagnosis? NIHR CLAHRC West Midlands News Blog. 8 April 2016.
  2. Becker AS, et al. Deep Learning in Mammography: Diagnostic Accuracy of a Multipurpose Image Analysis Software in the Detection of Breast Cancer. Invest Radiol. 2017;52(7):434-40.
  3. Lilford RJ. How Accurate Are Computer Algorithms Really? NIHR CLAHRC West Midlands News Blog. 26 January 2018.
  4. Powles J, Hodson H. Google DeepMind and healthcare in an age of algorithms. Health Technol (Berl). 2017;7(4):351-67.
  5. Gokhale KM, Chandan JS, Toulis K, et al. Data extraction for epidemiological research (DExtER): a novel tool for automated clinical epidemiology studies. Eur J Epidemiol. 2021;36:165-78.
  6. Lilford RJ. We Have Frequently Argued Against Excessive Hype Regarding Artificial Intelligence in Health Care. NIHR CLAHRC West Midlands News Blog. 21 June 2019.
  7. Lilford RJ. More On Why AI Cannot Displace Your Doctor Anytime Soon. NIHR CLAHRC West Midlands News Blog. 15 June 2018.
Fri 19 Aug 2022, 13:00 | Tags: Artificial intelligence, Machine learning, Richard Lilford