
Use Me Wisely: AI-Driven Assessment for LLM Prompting Skills Development

Project Overview

The paper examines the role of generative AI, particularly large language models (LLMs), in education, with a focus on developing students' prompting skills. It highlights key applications of LLMs in improving learning assessment and fostering critical thinking through tailored prompting guidelines. The findings also reveal considerable challenges, including the need for domain-specific knowledge and the complexity of prompt engineering, which can hinder effective use. Despite these barriers, integrating LLMs into educational contexts shows promise for enriching the learning experience, provided that educators and learners can navigate the intricacies of generative AI. Realizing this potential, however, depends on careful, strategic implementation that addresses the obstacles above.

Key Applications

AI-Driven Assessment Framework for LLM Prompting Skills

Context: Educational settings for learners with varying backgrounds, particularly those unfamiliar with generative AI.

Implementation: A framework that integrates few-shot learning and LLMs to assess learners' prompting strategies through customized guidelines and feature detection.

Outcomes: Improved learner interaction with LLMs, enhanced understanding of effective prompting, and increased AI literacy.

Challenges: Variability in learner responses, the need for extensive training data, and the unpredictability of LLM behavior and output reliability.
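The framework's two core ideas can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the paper's actual implementation: the feature names, cue words, and example assessments are hypothetical, and a real system would pass the assembled few-shot prompt to an LLM rather than rely on keyword cues alone.

```python
# Hypothetical sketch of (1) keyword-based feature detection over a learner's
# prompt and (2) assembling a few-shot assessment prompt for an LLM grader.
# Feature names and cue words are illustrative, not the paper's rubric.

FEATURES = {
    "role": ("act as", "you are a"),
    "context": ("given", "based on", "context:"),
    "format": ("bullet", "json", "table", "list"),
}

def detect_features(learner_prompt: str) -> dict:
    """Flag which prompting features appear in the learner's prompt."""
    text = learner_prompt.lower()
    return {name: any(cue in text for cue in cues)
            for name, cues in FEATURES.items()}

def build_fewshot_prompt(examples: list, learner_prompt: str) -> str:
    """Prepend graded example prompts (few-shot), then append the learner's
    prompt for the LLM to assess in the same style."""
    shots = "\n\n".join(f"Prompt: {p}\nAssessment: {a}" for p, a in examples)
    return f"{shots}\n\nPrompt: {learner_prompt}\nAssessment:"
```

In practice, the string returned by `build_fewshot_prompt` would be sent to a model such as the one listed below (via a chat-completions API call), with the heuristic feature flags serving as a cheap first-pass check on the learner's prompt.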

Implementation Barriers

Technical Barrier

The inherent unpredictability and complexity of LLM outputs make it difficult for users to anticipate and control responses.

Proposed Solutions: Development of clear prompting guidelines and training resources to help users understand effective prompting techniques.

Educational Barrier

Learners often struggle with crafting effective prompts and understanding the nuances of LLM interactions due to a lack of prior experience.

Proposed Solutions: Implementing structured learning activities and workshops focused on teaching prompting strategies.

Project Team

Dimitri Ognibene

Researcher

Gregor Donabauer

Researcher

Emily Theophilou

Researcher

Cansu Koyuturk

Researcher

Mona Yavari

Researcher

Sathya Bursic

Researcher

Alessia Telari

Researcher

Alessia Testa

Researcher

Raffaele Boiano

Researcher

Davide Taibi

Researcher

Davinia Hernandez-Leo

Researcher

Udo Kruschwitz

Researcher

Martin Ruskov

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Dimitri Ognibene, Gregor Donabauer, Emily Theophilou, Cansu Koyuturk, Mona Yavari, Sathya Bursic, Alessia Telari, Alessia Testa, Raffaele Boiano, Davide Taibi, Davinia Hernandez-Leo, Udo Kruschwitz, Martin Ruskov

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
