
Poster Competition Abstracts

Ryan Samarakoon

Interventions to improve efficiency and patient outcomes in emergency medicine

Large language models (LLMs) are neural networks that have learned the structure of a language. Examples of LLM applications include virtual mobile assistants, autocomplete, content generators, chatbots and automated translation services. Although these models are engineered by researchers, they resemble a black box: it is often not possible to interpret a model's reasoning. My aim is to investigate whether specific representational shortcuts observed in image recognition models generalise beyond purely image recognition tasks. Polysemanticity is the phenomenon, first described in image recognition models, whereby unrelated and distinct concepts become represented by a single neuron within the network, rather than one neuron mapping cleanly to a single concept. Following a guide to building the popular language model GPT-2 from scratch, I aim to isolate a specific portion of the network for inspection and show that the same phenomenon can also occur within language models. Ideally, my findings would show that polysemanticity is not isolated to image recognition but is a broader attribute of neural networks; if so, it would represent a hurdle to the overall interpretability of machine learning models. Future research may consider which steps could diminish polysemanticity to improve the interpretability and transparency of models, to what extent polysemanticity can be reduced efficiently, and whether reducing it involves trade-offs worth considering. Practically, this research would aid in regulating the deployment of these learning systems in sensitive contexts (healthcare, education, defence, etc.) where transparency and scrutability are crucial.
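The phenomenon can be illustrated with a hand-built toy (the weights below are chosen for illustration and are not taken from GPT-2 or any trained model): four distinct "concept" features are squeezed through a hidden layer of only two neurons, so a single neuron ends up responding to more than one unrelated feature.

```python
# Toy illustration of polysemanticity (superposition): four one-hot
# "concept" features share a 2-neuron hidden layer, so each neuron
# responds to two unrelated concepts. Hand-chosen weights, not learned.

# Each column of W maps one of four features into the 2-neuron layer;
# features 0 and 2 share neuron 0, features 1 and 3 share neuron 1.
W = [
    [1.0, 0.0, 0.9, 0.0],   # hidden neuron 0
    [0.0, 1.0, 0.0, 0.9],   # hidden neuron 1
]

def hidden(feature_index):
    """Hidden activations for a one-hot input activating one feature."""
    x = [0.0] * 4
    x[feature_index] = 1.0
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]
```

Here neuron 0 fires strongly for both feature 0 and the unrelated feature 2, which is exactly the entanglement that makes per-neuron interpretation difficult.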



Victoria Yardley

AI-based detection of bowel sounds for monitoring of postsurgical recovery

As it performs its function of peristalsis, i.e. muscle contractions and relaxations to move ingested food through the digestive system, the bowel generates sounds. Colorectal surgery, for example to treat bowel cancer, causes a temporary cessation of this function. In uncomplicated recovery, normal bowel function returns quickly and the patient can be discharged 3-5 days after surgery. However, around 20% of patients experience significant delays in return of normal function; this can be accompanied by distressing symptoms including nausea and bloating. Previous research indicates a lower rate of bowel sounds accompanying the reduced bowel function.

We have developed an acoustic sensor that can be attached to the abdomen using a skin-safe patch to monitor bowel sounds during post-surgical recovery. To analyse the bowel activity from the digital audio obtained using the sensor, we have trained a convolutional neural network model to automatically detect the characteristic sounds emitted by the bowel, which can then be analysed statistically. In the poster, we present the sensor and model, and demonstrate how the model compares to the use of traditional digital signal processing, particularly in noisy environments.
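The kind of traditional digital-signal-processing baseline the model is compared against can be sketched as a short-time energy detector that flags audio frames whose energy exceeds a threshold (frame length and threshold below are illustrative choices, not the values used in the study):

```python
import math

# Illustrative DSP baseline: flag frames of the audio whose mean energy
# exceeds a threshold. Frame length and threshold are example values.

def short_time_energy(signal, frame_len=64):
    """Mean energy of consecutive non-overlapping frames."""
    return [
        sum(s * s for s in signal[i:i + frame_len]) / frame_len
        for i in range(0, len(signal) - frame_len + 1, frame_len)
    ]

def detect_events(signal, frame_len=64, threshold=0.01):
    """Indices of frames whose mean energy exceeds the threshold."""
    return [i for i, e in enumerate(short_time_energy(signal, frame_len))
            if e > threshold]

# Synthetic example: near-silence with one short sound burst in the middle.
quiet = [0.001 * math.sin(0.1 * n) for n in range(256)]
burst = [0.5 * math.sin(0.8 * n) for n in range(64)]
audio = quiet + burst + quiet
```

Such fixed-threshold detectors are simple but degrade in noisy wards, which is where a learned model can offer an advantage.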


Josh Hill and Luke Johnson

UK Graduate Entry Medical Students’ (GEMS) Attitudes and Perceptions Towards Artificial Intelligence in Health Care: A Mixed-Method Study

Purpose: The use of artificial intelligence (AI) in medicine is rapidly increasing, leading to trepidation among medical students regarding their eventual careers. This study aims to determine how graduate entry medical students (GEMS) perceive AI in health care.

Methodology: GEMS of Warwick and Swansea Medical Schools were recruited to complete a survey on Qualtrics. Willing students then returned for focus groups. Thematic and statistical analyses were carried out on the qualitative and quantitative data, respectively.


Results: The survey was completed by 42 students. Seven of these returned for focus groups. Students are aware of the many uses of AI and expect it to improve efficiency and patient care while reducing human error. However, students have concerns regarding inbuilt biases in AI and the data harvesting of medical information. Students disagreed on how job prospects would be impacted but were in agreement that specialities predominantly involving data processing and administrative tasks would be disproportionately affected by AI.


Conclusion: GEMS have strong but mixed opinions on the impact of AI. They also have legitimate concerns about perpetuating health inequities, reducing patient contact, and the legal ramifications of AI. Students may benefit from additional teaching on AI to ensure that they are sufficiently prepared.


Linh Tran

The Challenges of Autonomous Surgical Robots

Surgical robots have already demonstrated the enhancements made possible across a variety of procedures, minimising invasiveness and allowing a higher degree of fidelity. In combination with Artificial Intelligence (AI), these enhancements can be expanded further in the operating theatre, as well as in diagnostics and preoperative planning, improving safety, accuracy and efficiency while reducing reliance on the surgeon. However, challenges and limitations remain before Autonomous Surgical Robots (ASRs) can become more widespread in surgical practice. These limitations include an ASR's adaptability to a dynamic in vivo environment, its level of autonomy, and ethical challenges. This poster aims to assess and summarise the currently available ASR technologies and their successes, the outstanding challenges and their potential solutions, and the future of ASRs. Robots such as the Smart Tissue Autonomous Robot (STAR) have already conducted successful in vivo laparoscopic anastomoses with some surgeon intervention. However, there is ongoing debate as to whether the aim of further developing ASRs should be full automation or enhancing current surgical practice without removing the surgeon. Further development should emphasise improving current ASR technologies and implementing them within modern surgical practice, with appropriate regulation to tackle ethical issues.



Abdullah Alsalemi

Early Detection of Head & Neck Pre-Cancerous Conditions using Artificial Intelligence

Head and neck cancer is among the most common cancers globally and has a high prevalence (8th) in the UK, with rising incidence coupled with poor prognosis. According to Cancer Research UK, over 12,000 cases were recorded yearly between 2016 and 2018, a quarter of which ended in mortality. However, between 46% and 88% of those cases are preventable, partly through early detection of potentially malignant conditions. Oral Epithelial Dysplasia (OED) is a common pre-malignant condition characterised by harmful mutations in the lining cells (epithelium) of the mouth. OED lesions can be visually documented via clinical images and nasoendoscope videos, where early identification can lead to better patient outcomes. Hence, Artificial Intelligence (AI) can be used to develop an automated detection pipeline in which OED severity is autonomously graded through the classification of clinical images. In collaboration with the University of Sheffield, this work aims to develop AI systems that can produce reliable predictions of head and neck malignancy through OED grading. Current research focuses on establishing an image classification process that takes advantage of transfer learning to train a model to recognise the visual features of various lesions, including colour, texture and shape, using minimal data. Initial results include an early stage of the data pipeline with promising OED detection performance. In future, we will obtain further data for model refinement and implement multi-class grading of mild, moderate and severe OED, which in turn can be a step towards more accurate and earlier head and neck cancer diagnosis.
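The transfer-learning step can be sketched in miniature (a hypothetical illustration, not the study's model: the frozen "extractor" is a fixed linear map standing in for a pretrained CNN, and only a small logistic-regression head is trained on a handful of labelled points):

```python
import math

# Hypothetical miniature of transfer learning: the feature extractor is
# frozen (never updated) and only a small classification head is trained
# on minimal labelled data. All names and numbers are illustrative.

W_frozen = [[0.8, -0.3], [0.2, 0.9]]  # frozen "pretrained" weights

def extract(x):
    """Frozen feature extractor: stands in for a pretrained CNN."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W_frozen]

# Tiny labelled set (e.g. colour/texture scores): 1 = lesion, 0 = normal.
data = [([0.9, 0.8], 1), ([0.8, 1.0], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]

# Trainable head: logistic regression fitted by gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(200):
    for x, y in data:
        f = extract(x)
        p = 1.0 / (1.0 + math.exp(-(w[0] * f[0] + w[1] * f[1] + b)))
        g = p - y  # gradient of the log-loss w.r.t. the logit
        w = [wi - lr * g * fi for wi, fi in zip(w, f)]
        b -= lr * g

def predict(x):
    f = extract(x)
    return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0
```

Freezing the extractor is what lets the approach work with minimal data: only the small head's parameters have to be estimated from the new dataset.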


    Peter Woods

    ChatGPT for Mutual Praise: A Resource for Parent-Child Relationships

    This poster introduces an application of ChatGPT, OpenAI's conversational AI model, to produce personalized praise for both parents and children. This tool strives to support families during challenging periods following instances of negative behavior by offering specific, positive feedback based on age, interests, and actions. The poster explains the mechanism of the AI model, showing how it combines understanding of developmental stages and individual interests to generate praise that is meaningful and impactful for both parents and children. It presents case studies to highlight how this approach can contribute to a healthier and more harmonious atmosphere following behavioral disruptions. Moreover, the poster discusses the broader potential of this tool for enhancing parenting practices and parent-child relationships. It proposes that the application of AI to generate mutual praise can be a valuable tool for navigating difficult situations and promoting a positive family dynamic. By introducing this AI-driven approach to mutual praise, the poster suggests a new, beneficial pathway for strengthening the bond between parents and children.


    Muhammad Dawood

    Cancer drug sensitivity prediction from routine histology images

    Drug sensitivity prediction models can aid in personalising cancer therapy, biomarker discovery, and drug design. Such models require survival data from randomized controlled trials, which are time-consuming and expensive to obtain. In this proof-of-concept study, we demonstrate for the first time that deep learning can link histological patterns in whole slide images (WSIs) of Haematoxylin & Eosin (H&E) stained breast cancer sections with drug sensitivities inferred from cell lines. We employ patient-wise drug sensitivities imputed from gene-expression-based mapping of drug effects on cancer cell lines to train a deep learning model that predicts sensitivity to multiple drugs from WSIs. We show that it is possible to use routine WSIs to predict the drug sensitivity profile of a cancer patient for a number of approved and experimental drugs. We also show that the proposed approach can identify cellular and histological patterns associated with the drug sensitivity profiles of cancer patients.



    Johnathan Pocock

    OpenHistologyMap – The OpenStreetMap of Histology

    TIAToolbox is a Python library for tissue image analysis that provides a comprehensive set of tools for tasks such as stain normalization, segmentation, feature extraction, classification and visualization. In this poster, we present the main features and functionalities of TIAToolbox and show examples of its application in full histopathology research pipelines, including whole slide classification and visualization of results. We also provide short example code snippets linked to extensive online documentation, which includes a full API reference, example notebooks, and instructions for users to integrate their own PyTorch models with the toolbox. TIAToolbox is an open-source project that aims to facilitate the development and deployment of tissue image analysis solutions for computer science researchers and pathologists alike.
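As an illustration of one such task, the Reinhard-style idea behind stain normalization, matching each colour channel's statistics to those of a target image, can be sketched as follows (a minimal stand-alone sketch, not TIAToolbox's implementation; real usage would call the toolbox's stain-normalization classes on RGB slide images):

```python
import statistics

# Reinhard-style stain normalization in miniature: shift and scale each
# colour channel of a source image so its mean and standard deviation
# match a target image. Channels are flat lists of pixel values here.

def channel_stats(channel):
    return statistics.mean(channel), statistics.pstdev(channel)

def normalize_channel(source, target):
    """Remap source values to the target channel's mean and std."""
    s_mean, s_std = channel_stats(source)
    t_mean, t_std = channel_stats(target)
    scale = t_std / s_std if s_std else 1.0
    return [(v - s_mean) * scale + t_mean for v in source]

# Example: a dim source channel remapped toward a brighter target.
source = [0.2, 0.3, 0.25, 0.35]
target = [0.6, 0.8, 0.7, 0.9]
normalized = normalize_channel(source, target)
```

Normalizing stain appearance in this way reduces scanner- and lab-specific colour variation before downstream segmentation or classification.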


    Matthew Macpherson

    Maybe? Visual AI explainability to address clinician, regulator and patient concerns.

    AI systems can derive a surprising amount of patient information from a chest x-ray with high accuracy, including patient age, sex, ethnicity, and patient matches in public databases. This has potential implications for patient confidentiality, particularly in the public datasets relied on by AI researchers. In our work we use a visual 'explainable AI' solution to show the main features used by the model to predict patient identity, age and ethnicity; this gives intuitive insight into the apparently super-human abilities of the model. We show that changes in a patient's model-perceived identity over time give a useful longitudinal signal for abnormality emergence which can improve predictive performance in a multi-factor model.


    Kesi Xu

    Auto-NuClick: A dual-stage neural network for nuclear instance segmentation

    In computational pathology, cell-based features are often extracted from digital Haematoxylin and Eosin (H&E) stained histology images and used in downstream explainable models. We introduce Auto-NuClick, a lightweight and fast dual-stage neural network for automatic nuclear instance segmentation, to address the challenge posed by the time-consuming and expensive manual dotting of nuclei in histology images. It shows promising results on the largest publicly available dataset.



    Mir Omer Ali

    Introduction: Osteoporosis is a bone disorder affecting 1 in every 3 women at menopause. Osteoporosis weakens bone by causing loss of bone mass and architecture; this loss is due to reduced estrogen levels in menopausal women. Hormone replacement therapy has been shown to improve osteoporotic bone changes but is also reported to have deleterious effects such as deep vein thrombosis and endometriosis. Coumestrol is a naturally occurring phytoestrogen that mimics the biological activity of estrogen. Studies have shown that Coumestrol is beneficial to patients with bone resorption disorders. However, the mechanisms underlying the action of Coumestrol are not clarified.

    Methods: In this study, Coumestrol was compared to 17β-oestradiol (E2, positive control). Two weeks after ovariectomy, rats were randomly divided into 5 groups (n=6). The treatment groups received Coumestrol at two doses (10 and 20 mg/kg/day) or 17β-oestradiol at 0.2 μg/kg/day subcutaneously for 2 weeks. At the end of treatment, the rats were sacrificed, and samples from the woven bones were immediately collected and preserved for histopathological studies. Bone tissue was decalcified in 10% (w/v) ethylenediaminetetraacetic acid (EDTA; pH 7.4) for 3 weeks and then embedded in paraffin.

    Results: Treatment with Coumestrol at 20 mg/kg/day improved bone trabeculae and collagen deposition. Coumestrol at 20 mg/kg/day also decreased protein localization of RANKL, with an associated increase in localization of OPG and RANK, in the bones of estrogen-deficient ovariectomized osteoporotic rats compared to non-treated ovariectomized rats. Additionally, expression of ER-α was increased in ovariectomized rats treated with Coumestrol at 20 mg/kg/day.

    Conclusion: Phytoestrogen Coumestrol can reduce osteoclast differentiation via the downregulation of RANKL protein in the bones of estrogen-deficient rats. Moreover, it promotes osteoblast differentiation via the upregulation of OPG and RANK expression in the bones of estrogen-deficient ovariectomized rats. Coumestrol can be used as an adjunct in preventing osteoporosis-related complications in estrogen-deficient states.


    Manuela Trejo

    The study aimed to explore the feasibility of employing neural networks to recognize finger grasps during activities of daily living. The ability to accurately identify and quantify grasps is crucial for designing effective finger prosthetics that can enhance the quality of life of individuals with upper limb impairments.


      Nathan Hodson

      INTRODUCTION: Artificial intelligence (AI) large language models (LLMs), including ChatGPT-4 and Bard, have recently emerged. Some suggest LLMs could replace existing relationships, including psychotherapeutic relationships. Cognitive behavioural therapy (CBT) contributes to treating common childhood mental health disorders, including anxiety and depression, by helping clients understand, identify and reframe unhelpful thoughts. As demand outstrips therapist availability, AI that could help children understand and challenge cognitive biases would be valuable. We aimed to assess whether ChatGPT-4 and Bard could a) generate clear examples of cognitive distortions, b) identify cognitive distortions, and c) reframe cognitive distortions.


      METHODS: We piloted prompts and identified an independent CBT therapist. In stage 1 we prompted both LLMs to generate a list of examples of 10 common cognitive biases and asked the therapist to identify which bias each referred to. In stage 2 the LLM was prompted to identify biases in 10 examples produced by the therapist. In stage 3 the LLM was prompted to reframe the 10 examples and the therapist appraised whether the reframing was successful.


      RESULTS: 8/10 ChatGPT-4-generated examples and 7/10 Bard-generated examples were accurately identified. Both ChatGPT-4 and Bard identified 7/10 cognitive biases correctly. Both ChatGPT-4 and Bard reframed all 10 biases effectively.


      DISCUSSION: Existing LLMs did not reliably give clear illustrations of common cognitive biases or identify cognitive biases. However, they were effective at reframing harmful thoughts. These findings indicate that AI LLMs may be able to contribute to CBT.



      Owain Cisuelo

      Hyperglycaemia Detection for Paediatric Type-1 Diabetes via Wearable Sensors and Deep Learning

      Type-1 diabetes (T1D) is a chronic autoimmune disorder characterized by elevated blood glucose levels. There is no cure for T1D, therefore, the development of tools that enable effective management can be pivotal in reducing the risk of adverse events. This study focuses on detecting hyperglycaemic events using continuous electrocardiogram (ECG) signals acquired through wearable sensors. Previous studies employed traditional machine learning techniques to extract limited features from the ECG signal, predominantly in the time domain. In contrast, we propose a novel approach utilizing deep learning and the short-time Fourier transform (STFT) to transform the one-dimensional time-series ECG signal into a two-dimensional time-frequency representation. This approach allows for the exploration of complex, high-dimensional properties of physiological signals. We developed subject-specific multi-layer convolutional neural networks using an original dataset obtained from children with T1D in real-life conditions. The networks were trained at the beat level to discriminate based on glycaemic status. An overall hyperglycaemia detection accuracy of 96.9% was achieved for 8 subjects. By leveraging deep learning and time-frequency analysis, our model aims to enhance real-time detection of glycaemic events, potentially improving the management of T1D and reducing associated risks.
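The STFT step can be sketched in pure Python (window length, hop size and the naive O(N²) DFT below are illustrative simplifications, not the study's parameters): the one-dimensional signal is sliced into windowed frames and each frame is mapped to its magnitude spectrum, producing the two-dimensional time-frequency array a CNN can consume.

```python
import cmath
import math

# Illustrative STFT: Hann-windowed frames, one-sided DFT magnitudes.
# Frame length, hop and the naive DFT are simplifications for clarity.

def stft_magnitude(signal, frame_len=32, hop=16):
    window = [0.5 - 0.5 * math.cos(2 * math.pi * n / (frame_len - 1))
              for n in range(frame_len)]          # Hann window
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = [s * w for s, w in
                 zip(signal[start:start + frame_len], window)]
        spectrum = [
            abs(sum(x * cmath.exp(-2j * math.pi * k * n / frame_len)
                    for n, x in enumerate(frame)))
            for k in range(frame_len // 2 + 1)    # one-sided spectrum
        ]
        frames.append(spectrum)
    return frames  # shape: (num_frames, frame_len // 2 + 1)

# A pure tone at bin 4 of a 32-point frame concentrates energy there.
tone = [math.sin(2 * math.pi * 4 * n / 32) for n in range(128)]
spec = stft_magnitude(tone)
```

Stacking these spectra over time yields the 2-D representation whose local patterns a convolutional network can exploit, in contrast to hand-picked time-domain features.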

      Wanzi Su

      A personalized toolbox for ophthalmological and neurological research

      Eye movement disorders (e.g., affecting pupil contraction, nystagmus, etc.) are effective proxies of neurodegenerative disease onset, severity and progression. Recent literature has also investigated and found links between eye movement disorders and long COVID.
      The status of neurodegeneration can be assessed in different ways, including traditional subjective assessments, invasive biomarker measurements (e.g., of blood or cerebrospinal fluid), and novel automated eye feature evaluation. The latter is a more objective and non-invasive method which has great sensitivity and accuracy and can be used in situations where cooperation with the patient is difficult (e.g., patients without intact verbal or motor functions).
      Different approaches to automated eye feature evaluation are present in the literature, from the more traditional fMRI, videotaping or electromyography to the latest approaches relying only on smartphones and their high-quality cameras. The latter have several advantages, including their ease of use, the limited number of components, and their resilience to harsh environments.
      A previous study of ours demonstrated the feasibility of using smartphones as an alternative to commercial-grade pupillometers. We are currently aiming to develop a personalized toolbox for ophthalmological and neurological research and eHealth. A two-phased trial is currently underway, collecting eye signals from healthy subjects (Phase 1) and neurological disorder patients (Phase 2). These data will be processed via ad-hoc image processing algorithms and artificial intelligence, allowing different eye features (e.g., saccade latency) to be assessed to evaluate the presence, severity and progression of neurodegenerative diseases.