
Precedent-Enhanced Legal Judgment Prediction with LLM and Domain-Model Collaboration

Project Overview

The document presents a comprehensive overview of how generative AI, specifically the Precedent-Enhanced Legal Judgment Prediction (PLJP) framework, can improve legal decision-making in education and practice. By combining large language models (LLMs) with specialized domain models, PLJP improves the accuracy of legal judgment prediction through the retrieval and analysis of relevant legal precedents. The document emphasizes the critical role precedents play in shaping legal outcomes and addresses the limitations of existing legal judgment prediction methods. Experiments on real-world datasets demonstrate the effectiveness of the approach and suggest its value for legal education, giving students and practitioners tools that not only predict outcomes but also deepen their understanding of the law through precedent analysis. Overall, the document underscores the potential of generative AI in the legal field to streamline legal processes and enhance educational methodologies.

Key Applications

Precedent-Enhanced Legal Judgment Prediction (PLJP)

Context: The legal domain, particularly for legal professionals and systems involved in predicting court judgments based on case facts.

Implementation: The framework combines LLMs with domain-specific models to retrieve and interpret relevant precedents, improving the accuracy of judgment predictions.

Outcomes: Achieved state-of-the-art performance in legal judgment prediction tasks, demonstrating the effectiveness of incorporating precedents into the prediction process.

Challenges: Existing models struggle with understanding abstract legal labels, and the integration of LLMs with domain-specific models can be complex.
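The collaboration described above can be sketched as a three-stage loop: a domain model narrows the label space to a few candidates, a retriever finds similar precedents, and the LLM makes the final call. The sketch below is purely illustrative; the function names, the keyword-overlap retriever, and the precedent-voting stand-in for the LLM are assumptions of this example, not the authors' implementation.

```python
# Hypothetical sketch of the PLJP collaboration pipeline.
# All three components are simplified stand-ins for the paper's
# domain models, precedent retriever, and LLM.

def domain_model_candidates(facts: str, labels: list[str], top_k: int = 3) -> list[str]:
    """Stand-in domain model: scores each candidate charge label by
    word overlap with the case facts and keeps the top-k."""
    words = set(facts.split())
    scored = sorted(labels, key=lambda lab: -len(words & set(lab.split())))
    return scored[:top_k]

def retrieve_precedents(facts: str, precedents: list[dict], top_k: int = 2) -> list[dict]:
    """Stand-in retriever: ranks precedent cases by shared words with the facts."""
    words = set(facts.split())
    return sorted(precedents,
                  key=lambda p: len(words & set(p["facts"].split())),
                  reverse=True)[:top_k]

def llm_judge(facts: str, candidates: list[str], precedents: list[dict]) -> str:
    """Stand-in for the LLM call: picks the candidate label whose supporting
    precedents share the most words with the current case facts."""
    words = set(facts.split())
    def support(c: str) -> int:
        return sum(len(words & set(p["facts"].split()))
                   for p in precedents if p["label"] == c)
    return max(candidates, key=support)
```

In use, the three stages chain together: `llm_judge(facts, domain_model_candidates(facts, labels), retrieve_precedents(facts, precedents))`. The point of the structure, as in the paper, is that the LLM never sees the full label space or the full precedent corpus, only the narrowed candidates and retrieved cases.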

Implementation Barriers

Technical Barrier

LLMs have limitations in prompt length and struggle with complex legal terminology.

Proposed Solutions: The proposed PLJP framework addresses this by combining LLMs with domain models to provide candidate labels and context.
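One concrete way to work within a prompt-length limit is to let the domain model's candidate labels constrain the output space and append precedent summaries only while they fit a budget. The template, the character-based budget (standing in for tokens), and the truncation rule below are illustrative assumptions of this sketch, not the paper's actual prompt design.

```python
def build_prompt(facts: str, candidates: list[str],
                 precedent_summaries: list[str], max_chars: int = 600) -> str:
    """Assemble a judgment-prediction prompt, dropping precedent summaries
    that would push the header + precedents past the length budget
    (characters stand in for tokens in this sketch)."""
    header = (
        f"Case facts: {facts}\n"
        f"Choose one charge from: {', '.join(candidates)}\n"
    )
    body = []
    used = len(header)
    for summary in precedent_summaries:
        line = f"Precedent: {summary}\n"
        if used + len(line) > max_chars:
            break  # stop before exceeding the budget
        body.append(line)
        used += len(line)
    return header + "".join(body) + "Answer with the single best charge."
```

Because the candidate labels are enumerated in the prompt, the LLM chooses among a handful of options supplied by the domain model rather than recalling abstract legal labels on its own.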

Data Barrier

Potential data leakage during model training due to overlapping datasets.

Proposed Solutions: Creation of a new test set (CJO22) to prevent data leakage and ensure fair evaluation.
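The core idea of a fresh test set is that no evaluation case should already appear in the training data. A simple deduplication pass illustrates the principle; the normalization and hashing below are assumptions of this sketch (CJO22 itself was built from newly published judgments, not by filtering).

```python
import hashlib

def normalize(facts: str) -> str:
    """Crude normalization so trivially re-spaced or re-cased
    duplicates still match."""
    return " ".join(facts.lower().split())

def filter_leaked(test_cases: list[dict], train_cases: list[dict]) -> list[dict]:
    """Drop any test case whose normalized facts already appear in the
    training data, so evaluation cannot reward memorization."""
    train_hashes = {
        hashlib.sha256(normalize(c["facts"]).encode()).hexdigest()
        for c in train_cases
    }
    return [
        c for c in test_cases
        if hashlib.sha256(normalize(c["facts"]).encode()).hexdigest()
        not in train_hashes
    ]
```

Hashing normalized facts keeps the overlap check fast even for large corpora, at the cost of missing paraphrased duplicates, which would need fuzzier matching.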

Project Team

Yiquan Wu

Researcher

Siying Zhou

Researcher

Yifei Liu

Researcher

Weiming Lu

Researcher

Xiaozhong Liu

Researcher

Yating Zhang

Researcher

Changlong Sun

Researcher

Fei Wu

Researcher

Kun Kuang

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Yiquan Wu, Siying Zhou, Yifei Liu, Weiming Lu, Xiaozhong Liu, Yating Zhang, Changlong Sun, Fei Wu, Kun Kuang

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
