A review of methods used by national educational development projects (Wisdom, 2002) suggests a move away from widely distributed questionnaires, which have not always produced high-quality data. Smaller-scale but richer activities have instead been favoured, such as focus groups and semi-structured meetings and interviews. Increasingly, telephone and email are used to extend their range. Observation of staff and students is also a popular technique. Many projects have made use of existing committees and groups to gather professional, critical feedback.
Assessment of student learning is a key component of the evaluation of an e-learning approach used within a course activity or module. Assessment may be wholly tutor-based or include peer or even self-assessment. However, assessment of performance or achievement of learning outcomes is only one of the factors that influence the effectiveness of the intervention. It provides one form of evidence about the outcomes of using a particular educational approach.
Though by no means exhaustive, the prototypical evaluation design suggested by TILT (2001) provides a good approach to combining data-gathering techniques:
- A pre-task questionnaire to discover what each student brings to the session, e.g. prior experience and personal motivation (also: one-minute questionnaires, student profiles)
- Confidence logs after each kind of activity
- A learning test (quiz). Ideally a version of this would be administered at the start of the session, at the end of the session, and after a delay of some weeks.
- Access to subsequent exam (assessment) performance on one or more relevant questions
- A post-task questionnaire to elicit personal reactions to the experience, and to ask about the relative value each individual put on various resources or activities
- Interviews with a sample of students (also: focus group)
- Observation and/or videotaping of one or more individuals (also: peer observation/co-tutoring).
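To illustrate how the pre-, post- and delayed learning tests in a design like this might be summarised, here is a minimal sketch. The scores and scale are entirely hypothetical and not drawn from TILT; the point is simply that the three administrations let you separate the immediate learning gain from retention over the delay.

```python
from statistics import mean

# Hypothetical quiz scores (0-10) for the same five students at three points:
# start of session, end of session, and after a delay of some weeks.
scores = {
    "pre":     [4, 5, 3, 6, 4],
    "post":    [7, 8, 6, 8, 7],
    "delayed": [6, 7, 5, 7, 6],
}

# Immediate learning gain: change from start to end of the session.
gain = mean(scores["post"]) - mean(scores["pre"])

# Retention: change between the end of the session and the delayed test.
retention = mean(scores["delayed"]) - mean(scores["post"])

print(f"mean gain: {gain:.1f}, retention change: {retention:.1f}")
```

A fuller analysis would match scores student by student rather than comparing group means, but even this simple summary shows why the delayed administration matters: a large immediate gain can mask substantial forgetting.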
Quantitative data is gathered by objective methods to provide information about relations, comparisons and predictions. Sampling needs and sizes will be influenced by the questions you have about variables within the group or population, such as different kinds of students (non-native speakers, part-timers, male/female etc.), different kinds of courses (foundation, Masters, work-based module etc.), or different forms of learning resources (web-based, CD-ROM, print-based).
Social and cultural influences will require more qualitative, inquiry-based approaches. Qualitative data can be gathered through open-ended questions that provide direct quotations. For some evaluations, you may be interested in qualitative feedback on students’ feelings and values about using e-learning, but it might also be helpful to quantify certain elements, for example, what proportion and types of students felt a particular way. For others, it might be desirable to gather comparative, quantitative evidence, such as that provided by pre- and post-tests.
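Quantifying qualitative feedback usually means coding the open-ended responses and then counting the codes. As a sketch, with invented student groups and attitude codes (none of these categories come from the source):

```python
from collections import Counter

# Hypothetical coded interview responses: (student group, attitude code).
# In practice the codes would emerge from reading the transcripts.
responses = [
    ("part-time", "positive"), ("part-time", "negative"),
    ("full-time", "positive"), ("full-time", "positive"),
    ("full-time", "mixed"),
]

# Count how many students expressed each attitude, regardless of group.
counts = Counter(attitude for _, attitude in responses)
total = len(responses)

for attitude, n in counts.most_common():
    print(f"{attitude}: {n}/{total} ({100 * n / total:.0f}%)")
```

Cross-tabulating the same codes by student group (part-time vs full-time, for instance) would then show whether a reaction is concentrated in a particular type of student, which is exactly the "what proportion and types of students felt a particular way" question above.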
The LTDI evaluation cookbook is an excellent place to explore the range of methods and consider whether they are appropriate to your needs. Shorter overviews of specific evaluation methods, which may also be helpful, are available on the web: see Compton (1997) for LTDI and the Bristol LTSS evaluation guide in the resources section.