
Department Events

The department runs a variety of seminars, workshops and colloquia. See upcoming events below. You are also welcome to sign up to the seminar mailing list.

If you are visiting the department, see the campus map, directions, and accommodation recommendations.
(Note that, perhaps surprisingly, the University of Warwick is not located in the town of Warwick.)

Mon 17 Feb, '25
TIA Centre Seminar Series: Lucy Godson (National Pathology Imaging Co-operative, Leeds)
MB 2.23

Title: Predicting melanoma patient outcomes using digital pathology

Abstract: Melanoma is the most aggressive form of skin cancer and the fifth most common cancer in the UK. Identifying novel early-stage prognostic biomarkers and determining effective treatments are two key challenges in improving outcomes for melanoma patients. Previous studies have analysed genetic data from tumours to stratify patients into immune subgroups, which were associated with differential melanoma-specific survival and potential predictive biomarkers. However, this genetic analysis is not carried out in current clinical workflows, whereas haematoxylin and eosin (H&E) stained slides are routinely used in patient diagnosis. This talk will present our work on how deep learning models can be used to classify whole slide images (WSIs) into these molecular immune subgroups. I will discuss the application of different multiple instance learning (MIL) frameworks and examine how image resolution, feature extraction methods and aggregation strategies affect model performance. I will also argue that graph representations can be used to encode spatial and contextual information within WSIs to improve immune subtype classification. Finally, I will present our work on survival graph neural networks for discovering new patient risk groups based on melanoma-specific survival.
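
For readers unfamiliar with the MIL setup mentioned in the abstract, the sketch below shows a generic attention-based MIL pooling layer in PyTorch (the widely used attention-pooling formulation). It is illustrative only, not the speaker's model; the feature dimension, hidden size and class count are arbitrary assumptions.

    # Minimal attention-based MIL pooling over pre-extracted WSI patch
    # features (a generic sketch, not the speaker's implementation).
    import torch
    import torch.nn as nn

    class AttentionMIL(nn.Module):
        def __init__(self, feat_dim=512, hidden_dim=128, n_classes=4):
            super().__init__()
            # Small attention network scores each patch embedding.
            self.attention = nn.Sequential(
                nn.Linear(feat_dim, hidden_dim),
                nn.Tanh(),
                nn.Linear(hidden_dim, 1),
            )
            self.classifier = nn.Linear(feat_dim, n_classes)

        def forward(self, patch_feats):  # patch_feats: (n_patches, feat_dim)
            scores = self.attention(patch_feats)             # (n_patches, 1)
            weights = torch.softmax(scores, dim=0)           # normalise over patches
            slide_feat = (weights * patch_feats).sum(dim=0)  # weighted mean -> (feat_dim,)
            return self.classifier(slide_feat), weights

    # Usage: one "bag" of 1,000 patch embeddings from a single slide.
    logits, attn = AttentionMIL()(torch.randn(1000, 512))

The attention weights make the aggregation interpretable: patches with high weights are the ones the model relied on for the slide-level prediction.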

Bio: Lucy currently works as a Digital Pathology AI Scientist at the National Pathology Imaging Co-operative (NPIC). Her work focuses on developing advanced AI tools for better understanding melanoma patient outcomes. This involves creating image analysis pipelines and collaborating closely with pathologists to design tools that can improve melanoma treatment and patient care. Before starting her role at NPIC, Lucy completed her PhD with the Centre for Doctoral Training (CDT) for Artificial Intelligence in Medical Diagnosis and Care at the University of Leeds. Her research, titled “Predicting melanoma patient outcomes using digital pathology”, investigated the use of multiple instance learning, graph neural networks and survival analysis techniques to classify whole slide images.

How to attend: Either turn up to the event on the day or, if you want to attend online, contact Adam Shephard (adam.shephard@warwick.ac.uk) for details.

Mon 3 Mar, '25
TIA Centre Seminar Series: Zhilong Weng (University Hospital Cologne)
CS 1.04

Title: GrandQC: A comprehensive solution to quality control problem in digital pathology

Abstract: Histological slides contain numerous artifacts that can significantly deteriorate the performance of image analysis algorithms. Here we develop the GrandQC tool for tissue and multi-class artifact segmentation. GrandQC allows for high-precision tissue segmentation (Dice score 0.957) and segmentation of tissue without artifacts (Dice score 0.919–0.938, depending on magnification). Slides from 19 international pathology departments digitized with the most common scanning systems and from The Cancer Genome Atlas dataset were used to establish a QC benchmark, analyzing inter-institutional, intra-institutional, temporal, and inter-scanner slide quality variations. GrandQC improves the performance of downstream image analysis algorithms. We open-source the GrandQC tool, our large manually annotated test dataset, and all QC masks for the entire TCGA cohort to address the problem of QC in digital/computational pathology. GrandQC can be used as a tool to monitor sample preparation and scanning quality in pathology departments and help to track and eliminate major artifact sources.
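
For reference, the Dice scores quoted above measure the overlap between a predicted segmentation mask and a reference mask. Below is a minimal NumPy implementation of the standard definition, assuming binary masks of equal shape; this is not code from the GrandQC repository.

    # Dice similarity coefficient between two binary masks
    # (standard definition; not taken from the GrandQC codebase).
    import numpy as np

    def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
        pred, target = pred.astype(bool), target.astype(bool)
        intersection = np.logical_and(pred, target).sum()
        # 2|A ∩ B| / (|A| + |B|); eps guards against two empty masks.
        return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

    # Two identical masks give a Dice score of 1.0.
    mask = np.zeros((256, 256), dtype=bool)
    mask[64:192, 64:192] = True
    assert abs(dice_score(mask, mask) - 1.0) < 1e-6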

Bio: Zhilong Weng is a PhD student at the University of Cologne and the University Hospital Cologne, supervised by Yuri Tolkach. He received his master’s degree in Computational Engineering from the Technical University of Darmstadt in Germany, where he focused on computer vision research, including traditional image processing and deep learning-based image detection. His current research focuses on advancing artificial intelligence applications in computational pathology.

Paper Link: GrandQC: A comprehensive solution to quality control problem in digital pathology | Nature Communications

How to attend: Either turn up to the event on the day or, if you want to attend online, contact Adam Shephard (adam.shephard@warwick.ac.uk) for details.

Mon 10 Mar, '25
TIA Centre Seminar Series: Jinxi Xiang (Stanford University)
MB 2.24

Title: A vision–language foundation model for precision oncology

Abstract: Clinical decision-making is driven by multimodal data, including clinical notes and pathological characteristics. However, the scarcity of well-annotated multimodal datasets in clinical settings has hindered the development of useful models. We developed the Multimodal transformer with Unified maSKed modeling (MUSK), a vision–language foundation model designed to leverage large-scale, unlabelled, unpaired image and text data. MUSK was pretrained on 50 million pathology images from 11,577 patients and one billion pathology-related text tokens using unified masked modelling. It was further pretrained on one million pathology image–text pairs to efficiently align the vision and language features. With minimal or no further training, MUSK was tested in a wide range of applications and demonstrated superior performance across 23 patch-level and slide-level benchmarks, including image-to-text and text-to-image retrieval, visual question answering, image classification and molecular biomarker prediction. Furthermore, MUSK showed strong performance in outcome prediction, including melanoma relapse prediction, pan-cancer prognosis prediction and immunotherapy response prediction in lung and gastro-oesophageal cancers.
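
Once image and text features are aligned in a shared embedding space, the retrieval tasks mentioned above reduce to nearest-neighbour search between embeddings. The sketch below illustrates that final step with random stand-in vectors; MUSK's actual encoders, embedding size and API are not reproduced here.

    # Cross-modal retrieval over a shared embedding space: rank text
    # embeddings by cosine similarity to a query image embedding.
    # Random vectors stand in for real encoder outputs.
    import torch
    import torch.nn.functional as F

    emb_dim = 768  # assumed embedding size, not MUSK's actual dimension
    text_emb = F.normalize(torch.randn(10_000, emb_dim), dim=-1)  # candidate captions
    img_emb = F.normalize(torch.randn(1, emb_dim), dim=-1)        # one query image

    similarity = img_emb @ text_emb.T   # cosine similarity after L2 normalisation
    top5 = similarity.topk(5, dim=-1)   # indices of the 5 best-matching texts
    print(top5.indices.tolist())

Text-to-image retrieval is the same operation with the roles of the two embedding sets swapped.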

Bio: Jinxi Xiang is a multidisciplinary researcher specializing in signal processing and machine learning for healthcare applications. He integrated machine learning with medical imaging during his doctoral studies at Tsinghua University (09/2016-06/2021). At Tencent AI Lab (07/2021-01/2024), he developed AI tools for clinical pathology using image/video coding and multimodal learning techniques. Since January 2024, he has been a postdoctoral researcher at Stanford University, focusing on computational pathology for cancer diagnosis and personalized treatment.

Paper Link: A vision–language foundation model for precision oncology | Nature

How to attend: Either turn up to the event on the day or, if you want to attend online, contact Adam Shephard (adam.shephard@warwick.ac.uk) for details.

Mon 24 Mar, '25
TIA Centre Seminar Series: Theodore Zhao (Microsoft Research)
MB 2.24

Title: A foundation model for joint segmentation, detection and recognition of biomedical objects across nine modalities

Abstract: Biomedical image analysis is fundamental for biomedical discovery. Holistic image analysis comprises interdependent subtasks such as segmentation, detection and recognition, which are tackled separately by traditional approaches. Here, we propose BiomedParse, a biomedical foundation model that can jointly conduct segmentation, detection and recognition across nine imaging modalities. This joint learning improves the accuracy for individual tasks and enables new applications such as segmenting all relevant objects in an image through a textual description. To train BiomedParse, we created a large dataset comprising over 6 million triples of image, segmentation mask and textual description by leveraging natural language labels or descriptions accompanying existing datasets. We showed that BiomedParse outperformed existing methods on image segmentation across nine imaging modalities, with larger improvement on objects with irregular shapes. We further showed that BiomedParse can simultaneously segment and label all objects in an image. In summary, BiomedParse is an all-in-one tool for biomedical image analysis on all major image modalities, paving the way for efficient and accurate image-based biomedical discovery.
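
To make the image-mask-text triples concrete, the sketch below shows one hypothetical way to represent such training examples and to look up masks by a textual prompt. The field names and helper function are illustrative assumptions, not the authors' schema or the BiomedParse API.

    # Hypothetical representation of the (image, mask, text) training
    # triples described in the abstract; field names are illustrative only.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class SegmentationTriple:
        image: np.ndarray   # (H, W, C) biomedical image
        mask: np.ndarray    # (H, W) binary segmentation mask
        description: str    # natural-language label, e.g. "tumor region"
        modality: str       # one of the nine imaging modalities

    def find_by_prompt(triples: list[SegmentationTriple], prompt: str):
        """Return all masks whose description mentions the prompt text."""
        return [t.mask for t in triples if prompt.lower() in t.description.lower()]

    example = SegmentationTriple(
        image=np.zeros((512, 512, 3), dtype=np.uint8),
        mask=np.zeros((512, 512), dtype=bool),
        description="tumor region in H&E-stained tissue",
        modality="pathology",
    )
    print(len(find_by_prompt([example], "tumor")))  # -> 1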

Bio: Theodore Zhao is a Senior Applied Scientist at Microsoft Health and Life Sciences Research, working on multimodal biomedical imaging models as well as biomedical natural language processing. Theodore earned his PhD in Applied Mathematics from the University of Washington, where his research applied machine learning, stochastic modeling and optimization to applications in finance and healthcare. His research interests focus on machine learning, self-supervised learning, multimodal models, and mathematical modeling.

Paper Link: A foundation model for joint segmentation, detection and recognition of biomedical objects across nine modalities | Nature Methods

How to attend: Either turn up to the event on the day or, if you want to attend online, contact Adam Shephard (adam.shephard@warwick.ac.uk) for details.
