Higher education assessment practice in the era of generative AI tools
Project Overview
The document evaluates the role of generative AI (GenAI) tools in transforming assessment practices within higher education, emphasizing their dual potential to enhance learning and to create new challenges. It highlights tools such as ChatGPT and Google Bard (since rebranded as Gemini), which demonstrate impressive subject knowledge and problem-solving capabilities, while warning that improper use can undermine educational outcomes. Through case studies in data science, data analytics, and construction management, the research shows that GenAI effectiveness varies by field, underscoring the need for tailored curricula and assessment methods that integrate these technologies responsibly. Overall, the findings suggest that while GenAI can enrich educational experiences, careful implementation and oversight are essential to maximize benefits and mitigate risks.
Key Applications
ChatGPT and Bard for assessment tasks
Context: Higher education, specifically master's level courses in data science, data analytics, and construction management. The AI tools were employed to perform assessments across various scenarios and subject matters, including data analytics tasks, machine learning algorithms, and construction management scenarios.
Implementation: ChatGPT and Bard were given a range of assessment tasks spanning data analytics, machine learning, and construction management projects, and were asked to generate coherent responses and solutions against specific assessment criteria.
Outcomes: The AI tools demonstrated subject knowledge and problem-solving skills, producing responses that were largely coherent and relevant to the tasks. However, they struggled with complex, project-specific analyses and full contextual understanding, leading to inconsistent performance against the assessment design criteria.
Challenges: Potential for misuse in academic dishonesty, variable performance across disciplines, and difficulty engaging with complex thinking and project-specific requirements.
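As a rough illustration (not taken from the paper), judging AI-generated answers against assessment criteria could be sketched as a simple rubric check; the criteria names and keywords below are hypothetical:

```python
# Illustrative sketch only: a minimal keyword-based rubric check for an
# AI-generated answer. The rubric items below are hypothetical, not from
# the paper, which assessed responses qualitatively.

def score_response(response: str, criteria: dict) -> dict:
    """Mark each rubric criterion as met if any of its keywords appear."""
    text = response.lower()
    return {name: any(kw in text for kw in keywords)
            for name, keywords in criteria.items()}

# Hypothetical rubric for a machine learning assessment task.
rubric = {
    "mentions_overfitting": ["overfitting", "over-fitting"],
    "names_a_model": ["random forest", "regression", "neural network"],
    "discusses_evaluation": ["accuracy", "cross-validation", "rmse"],
}

answer = ("A random forest was trained and checked with cross-validation "
          "to guard against overfitting.")

print(score_response(answer, rubric))
# → {'mentions_overfitting': True, 'names_a_model': True, 'discusses_evaluation': True}
```

A keyword check like this captures only surface coverage of criteria; the study's finding that the tools failed on project-specific analysis is exactly the kind of gap such shallow checks cannot detect.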
Implementation Barriers
Ethical
Potential for students to misuse AI tools for cheating, undermining academic integrity.
Proposed Solutions: Recommendations include redesigning assessments to limit reliance on AI-generated solutions and providing reflective learning opportunities.
Technical
Inconsistencies in AI performance across different disciplines, particularly in complex and project-based assessments.
Proposed Solutions: Incorporating AI tools into teaching with proper supervision and guidance, and adapting curricula to better accommodate AI capabilities.
Project Team
Bayode Ogunleye
Researcher
Kudirat Ibilola Zakariyyah
Researcher
Oluwaseun Ajao
Researcher
Olakunle Olayinka
Researcher
Hemlata Sharma
Researcher
Contact Information
For information about the paper, please contact the authors.
Authors: Bayode Ogunleye, Kudirat Ibilola Zakariyyah, Oluwaseun Ajao, Olakunle Olayinka, Hemlata Sharma
Source Publication: View Original Paper
Project Contact: Dr. Jianhua Yang
LLM Model Version: gpt-4o-mini-2024-07-18
Analysis Provider: OpenAI