
Assessing Large Language Models in Agentic Multilingual National Bias

Project Overview

The document examines the role of generative AI, particularly large language models (LLMs), in education, focusing on their applications and the challenges they present. It highlights the use of LLMs for personalized advice on university applications, travel recommendations, and city relocations. However, it identifies a significant concern: in multilingual settings, local language bias can degrade the quality and fairness of AI-generated recommendations. This issue underscores the need for further research on inclusivity and fairness in AI-driven educational applications. The findings suggest that while generative AI holds potential for enhancing personalized learning experiences, addressing these biases is essential to ensure equitable access and support for diverse student populations. Overall, the document calls for a more comprehensive understanding of the implications of LLMs in education, so that their benefits can be maximized while their inherent biases are mitigated.

Key Applications

LLMs for personalized recommendations and advice

Context: Providing personalized advice and recommendations for university applications, travel destinations, and city relocations based on user queries in multiple languages.

Implementation: LLMs are prompted to analyze and evaluate various options (universities, travel destinations, cities) based on user inputs, rating them according to user needs and preferences across different languages.
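This evaluation setup can be sketched in code. The sketch below is illustrative only: the prompt wording, language set, and the `query_llm` stub are assumptions, not the authors' actual protocol, and the stub returns fixed scores in place of a real chat-model call.

```python
def query_llm(prompt: str, language: str) -> float:
    """Stub standing in for a real LLM API call.

    A real implementation would translate `prompt` into `language`,
    send it to a chat model, and parse the numeric rating from the
    response. Here we return fixed scores for demonstration.
    """
    fake_scores = {"en": 8.0, "zh": 6.5, "ja": 7.0}
    return fake_scores[language]


def collect_ratings(option: str, languages: list[str]) -> dict[str, float]:
    """Ask the same rating question in each language and collect scores,
    so the per-language outputs can be compared for bias."""
    prompt = f"Rate {option} as a city to relocate to, on a 1-10 scale."
    return {lang: query_llm(prompt, lang) for lang in languages}


ratings = collect_ratings("Kyoto", ["en", "zh", "ja"])
```

With the stubbed scores above, the divergence between the English and Chinese ratings for the same option is exactly the kind of cross-language inconsistency the study probes.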

Outcomes:
- Identification of biases in recommendations based on language and local contexts.
- Improved understanding of how LLMs rate different options based on user queries.
- Insights into cultural influences affecting the outputs of LLMs.

Challenges:
- Presence of local language bias affecting recommendations.
- Inconsistent outputs across languages potentially reinforcing stereotypes.
- Cultural biases impacting the fairness of advice and recommendations.

Implementation Barriers

Bias and Cultural Sensitivity in AI Systems

Local language bias and cultural insensitivity can make LLM recommendations inconsistent and unfair: users may be treated inequitably depending on the language they query in, and stereotypes may be perpetuated in multilingual contexts.

Proposed Solutions: Research on multilingual bias mitigation strategies; incorporation of diverse cultural perspectives in LLM training; regular audits of LLM outputs for bias; improvements in LLM training and evaluation to minimize biases.
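A regular bias audit could use a simple divergence signal over per-language ratings. The sketch below is an assumption about how such an audit might be structured, not a method from the paper; the 1-point threshold and function names are illustrative.

```python
def rating_gap(ratings: dict[str, float]) -> float:
    """Largest pairwise difference among per-language ratings
    for the same option (assumes a common 1-10 scale)."""
    values = list(ratings.values())
    return max(values) - min(values)


def flag_biased(ratings: dict[str, float], threshold: float = 1.0) -> bool:
    """Flag an option whose ratings diverge across languages by more
    than `threshold` points, as a candidate for manual review."""
    return rating_gap(ratings) > threshold
```

For example, ratings of `{"en": 8.0, "zh": 6.5, "ja": 7.0}` have a gap of 1.5 and would be flagged under the default threshold, while a gap of 0.5 would pass.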

Project Team

Qianying Liu

Researcher

Katrina Qiyao Wang

Researcher

Fei Cheng

Researcher

Sadao Kurohashi

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Qianying Liu, Katrina Qiyao Wang, Fei Cheng, Sadao Kurohashi

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
