
AI Governance in Higher Education: Case Studies of Guidance at Big Ten Universities

Project Overview

Generative AI is increasingly integrated into higher education, offering significant opportunities for personalized learning while posing challenges to academic integrity. Institutions are actively developing governance strategies for responsible AI use, emphasizing tailored approaches for the different units and roles within the educational ecosystem. However, implementing these guidelines often creates complexity and confusion among users, underscoring the need for clear communication and active community engagement. A study of Big Ten universities shows that effective AI governance requires transparency and collaboration to navigate the intricate landscape of AI applications in education. Ultimately, the findings highlight the dual nature of generative AI in education: it can enhance learning experiences, but it must be managed carefully to uphold academic standards and integrity.

Key Applications

Responsible AI Governance and Trustworthy Framework

Context: Higher education institutions (HEIs) targeting faculty, students, and staff, focusing on secure AI tool usage and promoting responsible AI practices.

Implementation: Multiple university units, including IT, Teaching & Learning, and Libraries, develop and issue comprehensive guidelines for responsible AI use. IT departments establish data-sharing policies, recommend 'Trustworthy AI' tools, and educate faculty, staff, and students on AI tools, their responsible usage, and the associated risks.

Outcomes: Improved understanding of AI within the university community, enhanced academic integrity, and increased security awareness regarding AI tool usage.

Challenges: Complex information structures may hinder understanding; faculty workload may increase due to added responsibilities; AI tools require ongoing evaluation; and gaps in user knowledge about AI risks must be addressed.

Implementation Barriers

Organizational

Complexity in governance structure makes it difficult for users to find relevant AI guidelines.

Proposed Solutions: Create a centralized AI center for easier access to guidelines and organize information by user role.

Workload

Increased workload for faculty stemming from their responsibilities for overseeing students' AI usage.

Proposed Solutions: Develop clear roles and responsibilities for guiding AI use to prevent overwhelming faculty.

Knowledge and Training

Lack of understanding of AI implications and effective use among faculty and students.

Proposed Solutions: Implement targeted educational sessions and open forums to enhance AI literacy.

Project Team

Chuhao Wu

Researcher

He Zhang

Researcher

John M. Carroll

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Chuhao Wu, He Zhang, John M. Carroll

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
