Using simulations to assess learning
A simulation is a recreation of a real-world situation or task, designed to explore key elements of that situation. Simulations are “interactive events” in which “the environment … is simulated … but the behaviour is real” (Jones 1995, 7). Simulation might include role-play among students or with facilitators, playing games, re-creations, imaginative exercises, etc. There are interesting possibilities for simulations involving computer technology, particularly with regard to augmented and virtual reality (VR). Well-established practices of simulation for learning and assessment include simulated patients, mock trials, mock negotiations and in-tray exercises. Simulation is related in some ways to the use of games and play in learning, and shares some of the same challenges and advantages.
Simulation offers a simplified representation or imitation of an object or process which may not be directly accessible due to issues of scale, time, risk or complexity. Simulations can therefore bridge the gap between classroom learning and real-life experience, offering a safe and repeatable environment for students to learn and to demonstrate competencies. Simulation Based Education (SBE) is used extensively in healthcare and medical education and training to evaluate professional curricula. Game-based simulations have a long history in experiential teaching and learning, used to develop knowledge acquisition, understanding of complex relationships, and behaviours, skills and competencies in social science disciplines, particularly business and management, and politics and international relations, although they are less often used as assessment practices in these fields.
What can simulations assess?
Simulations offer performance-based assessment which can be used to evaluate across all learning dimensions: cognitive, behavioural and affective, depending upon the intended learning outcomes to be measured and the nature of the simulation developed. They lend themselves especially well to assessing professional competencies, communication and interpersonal skills, application of knowledge, and decision-making.
It is crucial to clearly delineate the intended learning outcomes, the purpose of the task and to ensure that these are strongly aligned. It is also important that students understand the process and the benefits of the task.
When students are assessed through simulation the primary consideration is that the simulation itself works. There is a wealth of literature on designing simulations in medical education, political science and international relations, and in business and management upon which to draw.
Simulations may be undertaken in physical or digital spaces, singularly or in combination, and the use of VR in simulations is growing.
Asal and Blake (2006), writing for social science contexts, suggest that the design process should answer the following questions:
- what are your educational goals?
- what kind of time and technological limitations will you face?
- will you use a real or fictional case?
- what is your level of complexity?
- how many participants will you have and how will they be organised?
- what will the decision-making process be (within and between teams)?
- how will you use actions? Will they change the negotiation environment?
- what kinds of outcomes will you have, structured or open-ended?
- will there be any constraints on participants? If so, what kind? (Asal and Blake 2006, 13)
When designing your simulation you will need to strike a balance between the ‘realism’ of the simulation and the requirements of assessment, so that all students have opportunities to demonstrate their skills (Obendorf and Randerson 2013, 359).
There are three stages to a simulation: preparation, interaction and debriefing, and any or all of these might be assessed as discrete elements.
Preparation: might require students to undertake independent research; apply theoretical and content knowledge to the experiential learning of the simulation; or establish a negotiating position/strategy. For example, students may be asked to prepare a policy briefing to distribute in advance of the simulation (Druliolle 2017).
Interaction: you will be assessing how participants in the simulation perform against pre-defined and clearly communicated criteria. You may choose to focus on process [how they work] or on the product [which may be a physical output, or a verdict, resolution, agreement etc.].
Debriefing: offers an opportunity for critical reflection from a position of neutrality outside the simulated scenario. This might require students to demonstrate their understanding of connections between content and experience. A class-based debrief might be used to scaffold engagement with any post-simulation reflective assessment activity.
If you are designing simulation-based training within healthcare settings, the Association for Simulated Practice in Healthcare (ASPiH) has published a Simulation-Based Education Standards Framework with accompanying guidance on best practice and evidence from the literature.
The principles for assessment are:
- the assessment is based on the intended learning outcomes of the exercise, with clarity regarding the knowledge, skills and attitudes to be evaluated, appropriately tailored to professional curricula
- the psychological safety of the learner is considered and is appropriately supported
- faculty have a responsibility for patient safety and to raise concerns regarding learner performance within educational settings, including SBE interventions.
Game-based simulations have a longer history as a teaching method rather than an assessment method. They can be a powerful mechanism to change perspectives, understand complexity, and offer improved understanding of others’ mind-sets. Debriefing sessions and feedback are crucial if these functions are to be fulfilled, and therefore these may work most effectively as formative assessment methods. Gamed simulations can draw upon factual and / or fictional scenarios and cases.
In sum, those who would assess the learning taking place in any given gamed simulation have two questions to answer:
- are participants learning the model of the real world that the game is simulating?
- does the model of the real world have verisimilitude? (Chin et al. 2009, 559)
Simulations can assess individuals or groups.
Assessing a simulation
What you assess will be determined by your intended learning outcomes, the constraints within which you are operating, and which aspects of the simulation you plan to focus on.
Preparation: assessing preparatory materials produced for the simulation [e.g. position paper; briefing memo] will enable you to assess aspects of student research, such as data collection and analysis, synthesis, critical thinking and evaluation. Asking students to submit a portfolio of research resources will enable you to evaluate research processes.
Interaction: the agreed and communicated criteria against which you assess interaction with or within the simulation will depend upon the nature of the simulation and intended learning outcomes. It might include: mastery of technical skills, adherence to rules of procedure, participation in formal debate, communication, effective representation, negotiation and compromise, decision-making, leadership, teamwork and/or inter-professional communication.
Some simulation mechanisms may offer outcomes or feedback, whether technologically or in person [e.g. a standardised patient], which can be used to evaluate performance. Even with clear assessment criteria, evaluating participation may admit a degree of subjectivity, and you will need to be aware of the potential for assessor bias.
Debrief: post-simulation you might assess students’ critical reflections on the experience of the simulation. Petranek, Corey and Black posited a reflective model for simulation based on consideration of the four ‘e’s: events, emotions, empathy, and explanation (1992, 174). Assessing student reflection may be particularly powerful in formatively assessed work.
Diversity & inclusion
Assessment of oral participation and interaction may disadvantage some students, for example those less confident or less able to communicate orally.
Because simulations combine theoretical and content knowledge with experiential learning, the potential for cheating is decreased. Evaluating student performance of technical, professional or clinical skills in a simulation is a reliable method of assessment.
Student and staff experience
Research shows that students respond positively to simulations, finding them enjoyable and stimulating.
The use of simulations in clinical education is well-researched. The ability of simulations to train and evaluate technical, clinical and professional competencies, and to have a positive impact on patient outcomes, is well-established.
Game-based simulations offer student-centred and active learning, and push students into higher order thinking requiring them to apply knowledge, synthesise ideas, and think critically and analytically through a series of decision-making processes.
As they simulate the ‘real world’, they can also promote authentic learning and assessment.
Simulations can also be a source of frustration and anxiety, especially if there are administrative difficulties, and this will be heightened if assessment is summative and high stakes. Because simulations are relatively unusual learning and assessment environments, students can find them confusing, especially in their initial phases, if the purpose of the simulation is difficult to grasp.
If the simulation assesses group performance you will need to be particularly mindful of all aspects of group assessment, especially if your simulated scenario includes competition. A mixture of individual and group marks, perhaps at different stages of the preparation, interaction and debrief, might help to mitigate inequity in the distribution of tasks or perceptions of unfairness.
Assessment of student learning within simulations can be more complex than other methods of assessment. The constraints of space (room availability) and time (timetabling, preparation, multiple iterations for larger cohorts) may be an issue. Access to resources may also present a barrier.
Some simulations may require an appropriate level of conflict in order to be effective, and this will need to be carefully managed by the person running the simulation.
Student absence will be problematic. The absence of any student may compromise the experience of the other students within the simulation. Many simulations will not be replicable, and therefore you will need to consider how to re-assess should a student be absent or need to remedy failure.
There is evidence that the demands of the simulation may consume students’ time and energy outside of class in ways that are not productive for learning; providing students with outlines or instructional supplements can mitigate this (Raymond and Underwood 2013, 163).
The manageability of assessment using simulations will depend upon which element (preparation, interaction, debrief) you are going to assess. Summatively assessing participation during the simulation will require the session to be captured in some way for audit purposes and for the external examiner. The size of the simulation and the number of participants involved may also present logistical issues; each assessor can only watch around five people sufficiently closely, so you may need a number of assessors in the room, or to run the simulation multiple times.