The use of AI technologies has ethical implications that should not be ignored. For example, ChatGPT builds economic value by making use of data generated by others, including its own users. OpenAI, the company that makes ChatGPT, has received billions of dollars of investment funding from a range of investors, including Microsoft, Elon Musk and others.
This means that decision-making and design choices are not free from political and economic influence. Moreover, the way ChatGPT works is not fully disclosed to the public, in order to protect OpenAI's intellectual property. Further, running ChatGPT consumes huge amounts of energy and therefore carries a significant environmental cost.
At the same time, there are ethical implications of not using (or stopping others from using) AI technologies such as ChatGPT. These technologies are already being embedded into everyday software and devices (e.g. within Microsoft Office, Google Maps, SMS) and are becoming difficult, if not impossible, to avoid. Educators have a responsibility to support students in learning how to navigate the present and the future.
There are no simple answers to whether and how AI should be dealt with in relation to learning, teaching and assessment. It is therefore important for staff and students to inform themselves about the opportunities and risks of these technologies and, where possible, to discuss them together in relation to their particular unit and educational context.