
The Artificial Intelligence Disclosure (AID) Framework: An Introduction

Project Overview

This document examines the growing integration of generative AI tools in education and research, highlighting their transformative potential and the need for transparency in their use. It presents the Artificial Intelligence Disclosure (AID) Framework, which establishes guidelines for clearly disclosing the use of AI tools in academic writing and research. The framework aims to bring clarity and consistency to reporting AI contributions, thereby promoting academic integrity and addressing shortcomings in traditional citation practices. By adopting the AID Framework, educational institutions can more effectively navigate the complexities of AI usage, ensuring that the benefits of generative AI are realized while ethical standards in research and scholarship are maintained. The findings suggest that, with proper disclosure and understanding, generative AI can significantly enrich educational methodologies and research outcomes, though careful consideration is needed to mitigate the risks associated with its deployment.

Key Applications

Artificial Intelligence Disclosure (AID) Framework

Context: Higher education and academic research, including student research papers and instructional practices within educational settings, targeting both educators and students.

Implementation: The AID Framework is introduced to provide structured disclosure about the use of AI tools in academic writing, research processes, and instructional materials. This includes incorporating AID Statements in student assignments to promote clarity regarding AI tool usage.

Outcomes:

- Increased transparency and clarity in the use of AI tools, promoting academic integrity and ethical standards.
- Facilitates transparency regarding AI contributions in student work, potentially enhancing learning outcomes and ethical considerations.

Challenges:

- The current lack of detailed guidance on how to effectively disclose AI usage in academic contexts.
- The inadequacy of traditional citation methods to capture the dynamic nature of generative AI outputs.
- Adapting AID Statements for educational settings requires balancing detail with simplicity, ensuring they are not burdensome for students.

Implementation Barriers

Guidance and Citation Barrier

There is a gap in guidance on how to disclose AI tool usage effectively in academic and research contexts, and traditional citation practices are insufficient for capturing the unique and variable outputs of generative AI tools.

Proposed Solutions: Develop detailed recommendations for AID Statements that specify what information should be included, and adopt frameworks like AID that provide structured and clear methods for disclosing AI contributions to academic work.

Project Team

Kari D. Weaver

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Kari D. Weaver

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
