
MAILS -- Meta AI Literacy Scale: Development and Testing of an AI Literacy Questionnaire Based on Well-Founded Competency Models and Psychological Change- and Meta-Competencies

Project Overview

The project addresses the growing role of generative AI in education and the resulting need for AI literacy as a core skill in both academic and professional settings. It presents a newly developed questionnaire, the Meta AI Literacy Scale (MAILS), which assesses not only traditional competencies such as understanding and using AI but also psychological change- and meta-competencies and ethical considerations. The questionnaire's modular design allows a comprehensive yet adaptable evaluation of AI literacy across key areas: AI understanding, application, detection, and ethical implications. The findings underscore the value of measuring AI literacy to support effective integration of and adaptation to AI technologies across diverse fields, particularly in adult education and the workforce, and to prepare individuals to navigate and use these technologies responsibly and effectively.

Key Applications

AI Literacy Questionnaire

Context: Work and adult education settings, targeting professionals adapting to AI technologies.

Implementation: Developed through empirical research involving online surveys and confirmatory factor analyses.

Outcomes: A validated measurement instrument for AI literacy that includes psychological competencies, aiding in the understanding, application, and ethical considerations of AI.

Challenges: Limited existing validated instruments for measuring AI literacy; no comprehensive approach considering psychological competencies.
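The paper validates the questionnaire with confirmatory factor analyses; as a simpler, related illustration of how a questionnaire subscale is checked for internal consistency, the sketch below computes Cronbach's alpha on simulated Likert-scale responses. The subscale name, item count, and data are hypothetical, not taken from the MAILS study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the scale sums
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses for a 4-item "AI understanding" subscale:
# each respondent's latent trait shifts all four of their item responses together.
rng = np.random.default_rng(0)
trait = rng.normal(size=(200, 1))
responses = np.clip(np.round(3 + trait + rng.normal(scale=0.7, size=(200, 4))), 1, 5)

alpha = cronbach_alpha(responses)
print(round(alpha, 2))
```

Because the simulated items share a common latent trait, the resulting alpha is high; with unrelated items it would fall toward zero. Full CFA, as used in the paper, additionally tests whether a hypothesized factor structure fits the item covariances.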

Implementation Barriers

Measurement Limitations

Existing instruments for measuring AI literacy are often context-specific and lack validation across broader applications.

Proposed Solutions: Develop a modular questionnaire that can be adapted for different contexts while ensuring a solid theoretical foundation.

Psychological Factors

Negative emotional responses to AI innovations can hinder perceived behavioral control and adaptation to AI.

Proposed Solutions: Incorporate psychological competencies into AI literacy assessments to help users manage emotions and enhance self-efficacy.

Project Team

Astrid Carolus

Researcher

Martin Koch

Researcher

Samantha Straka

Researcher

Marc Erich Latoschik

Researcher

Carolin Wienrich

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Astrid Carolus, Martin Koch, Samantha Straka, Marc Erich Latoschik, Carolin Wienrich

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
