Department Events
The department runs a variety of seminars, workshops and colloquia. See upcoming events below. You are also welcome to sign up to the seminar mailing list.
If you are visiting the department, see the campus map, directions, and accommodation recommendations.
(Note that, perhaps surprisingly, the University of Warwick is not located in the town of Warwick.)
Mon 16 Jun '25
TIA Centre Seminar Series: Tapabrata Chakraborty (UCL)
Location: FAB 2.48
Title: Personalised Predictions with Transparent AI on Multimodal Health Data
Abstract: AI-based decision systems have reached near-human performance in a range of unimodal tasks, so the next frontier for AI on the long road towards artificial general intelligence is multimodal AI: AI that can handle multiple input and/or output data types seamlessly. There has been significant progress in this area over the past couple of years, but for such systems to be widely used in high-risk applications like healthcare, they need to be transparent and personalised. This talk will present an overview of recent methods developed in this area at Dr Chakraborty's Transparent and Responsible AI Lab (TRAIL).
Bio: Rohan is a Theme/Group Lead at the Alan Turing Institute and a Principal Research Fellow at UCL Cancer Institute. He leads TRAIL, the Transparent and Responsible AI Lab, which develops multimodal pan-cancer predictive models. Rohan is an invited expert in Responsible AI with the Global Partnership on AI and an Editor with Springer Nature Computer Science. He has a PhD in Computer Science and worked as a postdoctoral researcher at the University of Oxford, where he continues as an Honorary Fellow of Linacre College.
How to attend: Either turn up to the event on the day, or if you want to attend online then please contact Adam Shephard (adam.shephard@warwick.ac.uk) for more details.
Mon 23 Jun '25
TIA Centre Seminar Series: Edwin D. de Jong (Aignostics)
Location: FAB 2.48
Title: Quantifying Pathology Foundation Model Robustness against Medical Center Variation: the Robustness Index
Abstract: Pathology Foundation Models (FMs) hold great promise for healthcare. Their clinical application is challenged by well-documented variations between medical centers, which can lead to biased downstream models. To enable the use of pathology FMs in clinical practice, it is essential to ensure they are robust to such variation, which requires the ability to measure and quantify robustness. We measure how strongly current pathology FMs encode biological features such as tissue and cancer type, and how strongly they encode confounding medical center signatures introduced by staining procedures and other differences. The relation between these two factors defines the Robustness Index, a novel metric that quantifies the degree to which biological features dominate confounding features. We evaluate the robustness of current pathology FMs and find that all of them encode medical centers to some degree, and that medical center variation can dominate the organization of the embedding space, depending on the application domain. The influence of this confounding information on the embedding space varies, and significant differences in the Robustness Index are observed. The Robustness Index provides a critical new benchmark for evaluating and improving pathology FMs, accelerating progress towards their safe and reliable clinical adoption.
Bio: Edwin de Jong has been fascinated by how humans and machines can produce intelligent behavior for over three decades. He studied at Delft University of Technology and received his PhD from the VUB AI Lab in 2000. As a postdoctoral researcher in Prof. Jordan Pollack's DEMO Lab at Brandeis University, he started a research line in representation learning and open-ended self-improvement, which he continued in The Netherlands at Utrecht University. In 2004/2005 he co-founded Adapticon, among the first companies worldwide to apply LSTM in industry. After other roles in the tech industry, since 2018 Edwin's focus has been on helping AI transform healthcare. At ScreenPoint Medical, he contributed to the Transpara algorithm, which was found to substantially reduce radiologist workload and identify 29% more cancer cases in the MASAI randomized controlled trial; a milestone recognized as a Notable Advance by Nature Medicine. His current focus is on developing robust pathology and multimodal foundation models as Principal Machine Learning Scientist at Aignostics, a leading European scaleup in AI-powered pathology, where he works to advance computational diagnostics for drug discovery and precision medicine.
Paper: [2501.18055] Current Pathology Foundation Models are unrobust to Medical Center Differences
How to attend: Either turn up to the event on the day, or if you want to attend online then please contact Adam Shephard (adam.shephard@warwick.ac.uk) for more details.
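For readers curious what a metric like this could look like in practice, the sketch below shows one plausible k-nearest-neighbour formulation of a robustness index: for each embedding, count how many of its nearest neighbours share its biological label versus how many share its medical-center label, and take the ratio. This is an illustrative assumption, not necessarily the paper's exact definition (see the linked preprint for that), and all function and variable names here are hypothetical.

```python
# Illustrative sketch only: a k-NN robustness index in the spirit of the
# abstract above. The paper's exact definition may differ; see arXiv:2501.18055.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def robustness_index(embeddings, bio_labels, center_labels, k=10):
    """Ratio of same-biology neighbours to same-center neighbours.

    embeddings    : (n_samples, dim) array of FM tile/slide embeddings
    bio_labels    : (n_samples,) biological class per sample (e.g. tissue type)
    center_labels : (n_samples,) medical center of origin per sample
    """
    nn = NearestNeighbors(n_neighbors=k + 1).fit(embeddings)
    _, idx = nn.kneighbors(embeddings)   # each point's neighbours, incl. itself
    neigh = idx[:, 1:]                   # drop the self-match in column 0

    same_bio = (bio_labels[neigh] == bio_labels[:, None]).sum()
    same_center = (center_labels[neigh] == center_labels[:, None]).sum()

    # > 1: biological structure dominates the embedding space;
    # < 1: medical-center signatures dominate.
    return same_bio / same_center

# Toy usage with random data standing in for real FM embeddings:
rng = np.random.default_rng(0)
emb = rng.normal(size=(200, 64))
bio = rng.integers(0, 4, size=200)
center = rng.integers(0, 5, size=200)
print(robustness_index(emb, bio, center))
```

On random data as above, the ratio hovers around 1; on real pathology embeddings, a value well below 1 would indicate that center-specific signatures (e.g. staining differences) organize the embedding space more strongly than biology does.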