
Methodological Foundations for AI-Driven Survey Question Generation

Project Overview

This project explores the application of generative AI (GenAI) in education, centering on a dynamic AI-driven survey tool that uses Large Language Models (LLMs) for adaptive question generation. Integrating LLMs into surveys offers several advantages: survey prompts can be scaled and personalized, data quality improves, and communication and collaboration among students are better supported. By analyzing interactions among students, the tool offers insights that enhance team dynamics and individual contributions. The work also addresses challenges, including bias, data privacy, and the need for iterative prompt design. To evaluate AI-generated questions before they are used with human participants, the project introduces the Synthetic Question-Response Analysis (SQRA) framework. Grounded in activity theory, SQRA helps characterize the interactions between AI tools and participants, underscoring the importance of thoughtful implementation in educational settings. Overall, the findings suggest that generative AI can transform educational research and collaborative learning experiences, provided its ethical implications and design processes receive careful consideration.

Key Applications

AI-driven interaction and survey analysis tool

Context: Applied in various higher education settings, including introductory courses like experimental physics laboratories and upper-level biomedical engineering courses, as well as in team-based project collaborations across disciplines. The tool supports both survey data collection and interaction analysis among students.

Implementation: The tool integrates AI technologies, using the OpenAI API for dynamic question generation during surveys and analyzing AI-to-Human and AI-to-AI interactions to assess communication effectiveness. It adapts in real time based on participant responses and measures team dynamics.
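The paper's prompts and code are not reproduced on this page, so the following is a minimal sketch of what adaptive follow-up generation against the OpenAI Chat Completions API might look like. The system-prompt wording and helper names are illustrative assumptions, not the authors' implementation; only the model version is taken from this page.

```python
# Sketch of adaptive follow-up generation via the OpenAI Chat Completions API.
# Prompt wording and function names are illustrative assumptions.

MODEL = "gpt-4o-mini-2024-07-18"  # model version reported on this page


def build_followup_messages(prior_question: str, prior_answer: str) -> list[dict]:
    """Assemble a chat payload asking the LLM for one adaptive follow-up."""
    system = (
        "You are a survey assistant for a team-based engineering course. "
        "Given a participant's answer, write exactly one concise, neutral "
        "follow-up question. Avoid double-barreled or leading phrasing."
    )
    user = (
        f"Previous question: {prior_question}\n"
        f"Participant answer: {prior_answer}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]


def generate_followup(client, prior_question: str, prior_answer: str) -> str:
    """client is an openai.OpenAI() instance (requires OPENAI_API_KEY)."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=build_followup_messages(prior_question, prior_answer),
    )
    return resp.choices[0].message.content.strip()
```

Keeping message construction separate from the API call lets the survey logic be tested without network access, which also supports the kind of prompt auditing discussed under Implementation Barriers.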

Outcomes: Enhanced participant engagement, personalized data collection, improved communication effectiveness, increased inclusiveness in discussions, and enhanced individual contributions to group projects.

Challenges: Redundant phrasing, double-barreled questions, bias in question generation, unequal workload distribution, domination of discussions by certain individuals, and the need to foster interpersonal rapport.

Implementation Barriers

Ethical

Concerns regarding bias, data privacy, and transparency in AI-generated content.

Proposed Solutions: Implementation of prompt auditing, transparent documentation, and adherence to ethical guidelines such as GDPR and FERPA.

Technical

Challenges related to the accuracy and relevance of AI-generated content, including the 'black-box' nature of LLMs and difficulty in accurately interpreting and analyzing nuanced human interactions.

Proposed Solutions: Development of the SQRA framework for iterative testing and refinement of AI-generated questions before deployment. Refining AI algorithms and incorporating more diverse training data to improve understanding of team dynamics.
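As a concrete illustration of the pre-deployment vetting that SQRA calls for, the sketch below applies simple heuristic checks to candidate questions, flagging likely double-barreled or repetitively phrased items (two of the failure modes noted above). The rules, thresholds, and function names are illustrative assumptions, not the framework's actual criteria.

```python
# Illustrative pre-deployment checks in the spirit of SQRA: screen
# AI-generated questions before any human participant sees them.
# The heuristics below are assumptions for demonstration only.
import re
from collections import Counter

STOPWORDS = {
    "the", "a", "an", "is", "are", "was", "were", "you", "your",
    "with", "and", "or", "to", "of", "in", "how", "what", "why",
}


def audit_question(question: str) -> list[str]:
    """Return a list of heuristic quality flags for a survey question."""
    flags = []
    words = re.findall(r"[a-z']+", question.lower())
    # Coordinating conjunctions often signal two questions fused into one.
    # This deliberately over-flags; a human reviewer makes the final call.
    if "and" in words or "or" in words:
        flags.append("double-barreled")
    # A content word repeated several times suggests redundant phrasing.
    counts = Counter(w for w in words if w not in STOPWORDS)
    if counts and counts.most_common(1)[0][1] >= 3:
        flags.append("redundant phrasing")
    return flags
```

In the full framework, flagged questions would feed back into iterative prompt refinement and be re-tested against synthetic (AI-to-AI) responses before deployment with human participants.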

Social

Challenges related to team member assertiveness and willingness to engage in open discussions.

Proposed Solutions: Training sessions on effective communication and teamwork strategies.

Project Team

Ted K. Mburu

Researcher

Kangxuan Rong

Researcher

Campbell J. McColley

Researcher

Alexandra Werth

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Ted K. Mburu, Kangxuan Rong, Campbell J. McColley, Alexandra Werth

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
