
Basic Tools for CAA


  An entry level CAA tool for Warwick

The e-guide on CAA, 'An introduction to objective testing and computer aided assessment', explores some of the issues in more detail and is aimed at tutors who are considering using CAA at Warwick.

This document on basic tools presents a brief summary of the issues involved in choosing a Computer Assisted Assessment (CAA) system to inform decision-making at Warwick.

The University of Warwick currently supports a commercial package, Question Mark Perception, but this is expensive and its sophistication is offset by a fairly steep learning curve. A practical first step to CAA at Warwick is to use a simpler CAA system offered as a service to all at no direct cost to the users. This would allow those new to objective testing and CAA to explore its use and pilot some simple question types with their students. Such a system might be a lightweight commercial application, a non-commercial application such as CASTLE, or a system developed in-house by the e-lab development wing of IT Services.

  Fitness for purpose

CAA can be used in a range of ways to support learning at a number of points in a course:

  • Diagnostic
  • Self Diagnostic
  • Drilling
  • Formative assessment
  • Interactive assessment
  • Summative assessment
  • Course evaluation

Note that from a course perspective, any of these uses can inform curriculum design. In particular, those aspects of assessment which are embedded in the course for formative feedback can be used to modify the teaching and learning during rather than after the course. Some uses may not impact on the course design at all.

  CAA Software Packages

There are numerous systems for delivering CAA over the Web, ranging from single, simple interactive Web pages that give immediate feedback but store no results (useful for quick self-diagnosis) to sophisticated software systems like Perception that can support large-scale, secure, summative assessment.

Packages that record no data are clearly of no use for summative assessment, but nor can they be used for diagnostic assessment, where data is required in order to analyse a student's starting level, or for 'interactive' assessment, where the purpose is to tailor the course 'on the fly' to a particular cohort. Systems that collect no data can be used in a self-diagnostic, drill or formative assessment mode. Without data on how students perform in the tests, however, the tests themselves cannot be improved and the tutor misses an opportunity to gather valuable feedback on the effectiveness of the course as a whole.

If data is to be collected, it will need to be analysed, so the system must either support extraction of detailed data into external analysis software, contain its own analysis tools, or both.

Conversely, if the purpose is purely self-diagnostic or formative, and the tutor has no interest in the students' performance (or in improving the tests), data capture and analysis features just get in the way of producing quick results.

These simple systems might seem ideal for introducing tutors to CAA. However, most of them are also limited in the range of question types they offer and in the formatting flexibility available to question authors. Many are restricted to multiple-choice and multiple-response questions, which tend to test only factual knowledge. Such systems therefore cannot demonstrate the full potential of objective tests, because questions that test higher levels of learning cannot be constructed.

Some systems offer the option of using multimedia in questions, which can extend the flexibility of question types and allow the testing of higher orders of learning, but other advanced question types, such as assertion-reason or multiple true/false, are purely text based.

A simple system should either be so simple that very little time is invested in learning how to use it, or offer an authoring experience that transfers to the more complex system. If possible, the questions constructed in the simple system should also be transferable to the more complex system; standards to enable this are now becoming more widely used.

I have so far encountered no simple CAA system that supports question types suitable for testing higher orders of learning. This means that simple systems cannot be used to demonstrate the full potential of objective testing, and there is a danger that promoting them will give the message that objective testing is simply about testing factual knowledge.

For this reason, and contrary to my original opinion, I recommend that we do not provide a simple CAA system, but instead design a route by which tutors can get up to speed quickly with Question Mark Perception. Training in objective test design should be provided, and staff proposing to use CAA should be strongly encouraged to take advantage of it. This will require expertise from both ITS and LDC, and I suggest that a group be set up to explore the strategic deployment of CAA at Warwick.

If Perception is to act as an entry-level CAA tool as well as a top-level tool, it must be available to all. A site licence is therefore essential; this is currently being considered in the light of current departmental licence purchases.