Explainable, Interpretable AI: The Future of Investment Management
19 November 2021
Machine learning (ML) may be the future of investment management, but most ML approaches suffer from a dangerous affliction: the black-box problem. You may be able to observe the inputs to an ML approach, but how the outputs are reached can be a mystery. If, as an investment manager, you cannot explain your investment decisions to compliance executives, regulators or clients, you will be exposing your firm to unacceptable levels of legal and regulatory risk (CFA Institute 2019, 2020; IOSCO, 2021).
Our webinar brings together world authorities on the two solutions currently proposed for the black-box problem: interpretable AI, where black-box approaches are rejected in favour of simpler, interpretable models; and explainable AI (XAI), where we attempt to explain the inner workings of black-box approaches.
Prof Cynthia Rudin, Prof Artur d'Avila Garcez and Dr Daniele Magazzeni go head-to-head in making a case for each approach, with discussions of the problem as they see it and possible solutions. We follow their insights by spotlighting leading-edge research in the field from the University of Warwick, Warwick Business School, University College London, the Turing Institute and more.
02:00 pm – 02:15 pm | Opening remarks: Prof Ram Gopal (WBS) and Dr Dan Philps introduce one of the hottest topics in finance

Interpret or Explain?
02:15 pm – 02:40 pm | Interpretable AI: Professor Cynthia Rudin, Duke University
02:40 pm – 03:10 pm | Explainable AI: Dr Daniele Magazzeni, JP Morgan XAI Centre of Excellence
03:10 pm – 03:30 pm | Hybrid Systems: Professor Artur d'Avila Garcez, City, University of London

Research Spotlights
03:30 pm – 03:40 pm | Applied Interpretable AI: Dr Timothy Law, Rothko Investment Strategies
03:40 pm – 03:50 pm | XAI: Dr Adriano Koshiyama, UCL
03:50 pm – 03:55 pm | Graph Nets: Dr Pasquale Minervini, UCL
03:55 pm – 04:00 pm | Closing Remarks