
Zero-Shot Recommendations with Pre-Trained Large Language Models for Multimodal Nudging

Project Overview

This project examines how generative AI, and large language models (LLMs) in particular, can deliver zero-shot recommendations for multimodal content in education, with an emphasis on personalized learning experiences. By drawing on diverse inputs, including text, images, and user profiles, the approach aims to increase user engagement and satisfaction. The personalized recommendations act as tailored nudges that encourage positive behavioral change among students, such as promoting offline activities to reduce screen time. The findings suggest that generative AI can improve not only educational content delivery but also the learning environment itself, adapting to individual needs and preferences to support student well-being and learning outcomes.

Key Applications

Zero-shot recommendation system using pre-trained LLMs

Context: Screen time management application targeting users who exceed their screen time limits.

Implementation: Uses generative AI to build a synthetic nudging environment, generating user profiles and tailored notifications that combine messages and images.

Outcomes: Achieved an 83% success rate in producing recommendations that align with user preferences and demographic characteristics.

Challenges: Matching disparate content types (text, images, user data) is difficult because the modalities have no direct common representation; items must be compared in a shared space before they can be ranked against one another.
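The recommendation step above can be sketched in miniature: once every item (user profile, message, image caption) has been embedded into a common vector space, recommending reduces to ranking content by cosine similarity to the user. The toy 3-d vectors below stand in for real model embeddings; the function names are illustrative, not from the paper.

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def recommend(user_vec, content_vecs):
    """Return the index of the content item closest to the user embedding."""
    scores = [cosine(user_vec, v) for v in content_vecs]
    return max(range(len(scores)), key=scores.__getitem__)

# Toy 3-d embeddings standing in for real model outputs.
user = [0.9, 0.1, 0.0]
contents = [[0.0, 1.0, 0.0], [1.0, 0.0, 0.1], [0.5, 0.5, 0.5]]
print(recommend(user, contents))  # -> 1
```

Because ranking needs no labeled training data for the target users, this step is zero-shot: any content whose embedding can be computed can be recommended immediately.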

Implementation Barriers

Technical barrier

The challenge of effectively matching disparate content types due to inherent differences in modalities.

Proposed Solutions: Utilization of zero-shot learning techniques to generalize across different modalities without extensive retraining.
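One way to realize this cross-modal generalization, hinted at by the image-captioning role below, is to first map every modality into text (images via a captioning model) and then embed all text into one shared space. The sketch below illustrates that shape only; `caption_image` and `embed_text` are hypothetical stand-ins for real models, with a toy character-hash used in place of a genuine sentence embedding.

```python
# Sketch: bring every modality into the text domain, then into one
# shared embedding space, so items can be compared without retraining.

def caption_image(image_path: str) -> str:
    # Placeholder: a real system would call an image-captioning model.
    return f"a photo described from {image_path}"

def embed_text(text: str) -> list[float]:
    # Placeholder: a real system would call a sentence-embedding model.
    # This toy version hashes characters into a fixed-size vector.
    vec = [0.0] * 8
    for i, ch in enumerate(text):
        vec[i % 8] += ord(ch) / 1000.0
    return vec

def to_shared_space(item: dict) -> list[float]:
    """Map a text or image item into the shared text-embedding space."""
    if item["kind"] == "image":
        return embed_text(caption_image(item["data"]))
    return embed_text(item["data"])

profile = to_shared_space({"kind": "text", "data": "enjoys hiking and nature"})
poster = to_shared_space({"kind": "image", "data": "trail_poster.png"})
print(len(profile) == len(poster))  # -> True: both live in the same space
```

The design choice here is that captioning, not joint vision-text training, bridges the modality gap, which is what allows pre-trained models to be reused without fine-tuning.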

Bias concern

Potential bias in recommendations if the same LLM is used for generating and matching data.

Proposed Solutions: Employ different LLMs for various tasks (user generation, message generation, image captioning) to ensure non-trivial data matching.
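The separation-of-roles idea above can be expressed as a simple invariant: the model that scores a match must not be one of the models that generated the data being matched. A minimal sketch, with purely illustrative model names:

```python
# Hedged sketch of the bias-mitigation rule: assign distinct models to
# each pipeline role so the matcher never scores its own generations.
# Model names are hypothetical placeholders, not from the paper.
ROLE_MODELS = {
    "user_generation": "llm-a",
    "message_generation": "llm-b",
    "image_captioning": "llm-c",
    "matching": "embedding-model-d",
}

def check_no_shared_model(roles: dict) -> bool:
    """A matcher backed by the same model that produced the data could
    trivially 'recognize' its own outputs; require disjoint models."""
    generators = {model for role, model in roles.items() if role != "matching"}
    return roles["matching"] not in generators

print(check_no_shared_model(ROLE_MODELS))  # -> True
```

Enforcing this check at configuration time keeps the matching task non-trivial, which is the point of the proposed solution.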

Project Team

Rachel M. Harrison

Researcher

Anton Dereventsov

Researcher

Anton Bibin

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Rachel M. Harrison, Anton Dereventsov, Anton Bibin

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
