Impact of AI on Fair Trial Rights Conference

14th Conference on the Future of Adversarial and Inquisitorial Systems, May 7th–9th, 2025


Early Career Scholars’ Day | May 7th

Kicking off the conference, early career scholars will present their projects during the first day. Panels will address key topics such as algorithmic profiling, automated penal orders or money laundering alerts, and the rationalization of sentencing through AI.


Book Launch | May 7th

A book launch will celebrate the publication of Human-Robot Interaction in Law and Its Narratives and underscore the importance of explaining the law of human-robot interaction. The book explores the legal challenges posed by robots in society through an examination of substantive and procedural law, addressing issues such as criminal liability and evidentiary reliability, while also discussing how legal narratives shape our understanding of human-robot interactions.


Main Conference | May 8–9th

During the main conference, expert panels will analyze how AI may affect our current concept of a fair trial. The presentations will address, for instance, how automated decision-making could determine legal outcomes and why a lack of transparency could make it difficult to challenge decisions effectively, thereby limiting the right to appeal and due process. A special focus will be on AI systems used to obtain or assess evidence in criminal trials and the expected ramifications for defense rights in inquisitorial and adversarial proceedings. Another panel will evaluate AI-based profiling and the discriminatory risks resulting from the collection and analysis of vast amounts of data and the classification of individuals by perceived risk level. Notably, if AI models are trained on biased historical data, they risk perpetuating systemic discrimination. In the US, for instance, algorithms that assess recidivism risk have been criticized for disproportionately labeling defendants from minority backgrounds as high-risk, thereby affecting sentencing severity and parole decisions. This not only undermines the presumption of innocence but also erodes the principle of equality before the law.

The overall aim is to identify novel ways to safeguard a fair trial in the digital era. What would robust safeguards look like? General demands for transparency (requiring AI decision-making processes to be explainable and open to scrutiny) or for judicial authorities to retain meaningful human oversight to prevent over-reliance on automated decisions may fall short when it comes to ensuring that parties to a criminal trial can effectively contest algorithmic assessments and that the public can trust that a fair trial is guaranteed. Balancing innovation with fundamental rights is essential to ensure that AI serves justice rather than undermining it.