Trialling an assessment protocol for LLM-powered careers advice
Careers advice is a critical component of UK education and work transitions, particularly for disadvantaged groups who face uneven support. AI has the potential to enhance access, quality, and sustainability – but only if due care is taken to ensure tools are quality assured and recommended only in appropriate circumstances.
Policy makers, service managers, and practitioners currently have no high-quality evidence on the performance of these tools. Yet research indicates that students already use large language models (LLMs) for personalised career guidance. The careers sector is not equipped to manage this trend, with low awareness of the tools and mixed levels of technical expertise. Without evidence-based assessments, there is a risk that AI tools will be adopted on the basis of cost or convenience rather than efficacy.
The research team aim to address this gap by creating an evidence base to guide the responsible use of AI tools, ensuring they enhance accessibility, quality, and outcomes in careers advice.
The main research questions being addressed are:
- What form of assessment protocol would the careers sector accept for assessing careers advice powered by LLMs?
- Within the protocol, what is the appropriate balance between the evaluation methods: panel review, case-study comparisons, and a randomised field trial? What thresholds are appropriate to inform practical guidelines and specific policies on tool usage?
- Based on pilot studies of a specific set of AI-powered tools, selected according to criteria such as relevance, appropriateness, tone accuracy, and actionability, what are the operational requirements for the evaluations? (An illustrative scoring sketch follows this list.)
- Based on the pilot studies, what hypotheses might be formed about the degree of quantitative performance difference between interventions, and about key performance drivers or subgroup differentials?
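To make the criteria in the third question concrete, the minimal sketch below shows one hypothetical way panel-review scores against those criteria could be recorded, aggregated, and checked against a threshold of the kind the second question asks about. The class, function names, scale, and threshold value are illustrative assumptions, not part of the project's actual protocol.

```python
# A hypothetical rubric-scoring sketch: panel reviewers rate one LLM-generated
# careers-advice response on a 1-5 scale against the four criteria named above.
from dataclasses import dataclass
from statistics import mean

CRITERIA = ("relevance", "appropriateness", "tone_accuracy", "actionability")

@dataclass
class PanelRating:
    """One reviewer's 1-5 scores for a single LLM careers-advice response."""
    reviewer_id: str
    scores: dict[str, int]  # keyed by criterion name

def aggregate(ratings: list[PanelRating]) -> dict[str, float]:
    """Mean score per criterion across the review panel."""
    return {c: mean(r.scores[c] for r in ratings) for c in CRITERIA}

def meets_threshold(summary: dict[str, float], threshold: float = 3.5) -> bool:
    """Illustrative pass rule: every criterion must clear the threshold."""
    return all(score >= threshold for score in summary.values())

if __name__ == "__main__":
    panel = [
        PanelRating("reviewer_a", {"relevance": 4, "appropriateness": 5,
                                   "tone_accuracy": 4, "actionability": 3}),
        PanelRating("reviewer_b", {"relevance": 5, "appropriateness": 4,
                                   "tone_accuracy": 4, "actionability": 4}),
    ]
    summary = aggregate(panel)
    print(summary)
    print("meets threshold:", meets_threshold(summary))
```

Even a toy rule like this surfaces the operational questions the pilots are meant to answer: how many reviewers per response, whether thresholds apply per criterion or overall, and how panel scores should be weighed against case-study and field-trial evidence.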
Project Team:
Peter Dickinson - Principal Investigator
Project Duration:
January 2026 - December 2027
Project Funder: