Jackson (1998) suggests there are three stages in innovative projects about which judgements can be made: intentions, implementation and outcomes. He argues that the match between outcomes and intentions provides a measure of the success of the innovation, its “fitness for purpose”, which also reflects whether the initial assumptions were correct and the implementation strategies appropriate. Jackson offers an interesting route through the issues associated with using technology to enhance learning (particularly by encouraging deep learning), based on the ‘SOLO’ taxonomy of Biggs and Collis (1982) – level descriptors for the range of performances produced by learners attempting a particular academic activity. However, he also advocates that the evaluation of e-learning must be situated in the context in which the technology is used: the evaluation of implementation.
A useful distinction is to consider whether the focus of evaluation is on the use of the technology or on the learning that is supported by the technology. Compton (1997) suggests two possible domains for evaluating e-learning implementation:
- (a) the evaluation of the IT intervention in isolation
- (b) the evaluation of the IT intervention within the course itself.
He argues that with (a), evaluating the e-learning tool or resource in isolation will tend to focus inwardly on various aspects of the technology or software itself. You may be interested in the usability of a particular software package or online tool, such as its accessibility, design and navigation. You may be interested in the extent to which students have access to a computer or the network, or develop IT skills through using the e-learning approach. Evaluation might also seek to examine the scope and coverage of the material content.
With (b), Compton suggests that evaluating the course itself allows us to examine other factors associated with the successful integration of the e-learning object within the overall teaching and learning context. He provides a useful list of what an evaluation might take into account:
- Educational setting
- Aims and objectives of the course
- Teaching approach
- Learning strategies
- Assessment methods
- Implementation strategy
Diercks-O’Brien (2000) presents a model that enables micro level evaluations (a specific e-learning intervention) to be located within a wider macro level framework (context of use). She suggests four domains for evaluation:
| Domain | Perspective | Focus |
| --- | --- | --- |
| Instructional context | integrative | all taught activities, self-study & online learning |
| Perceptions | phenomenographic | experience, attitudes, beliefs, expectations & motivations |
| Communication | discursive | conversation models; discourse analysis; psychological & social dimensions |
| Skills and knowledge | constructivist | knowledge construction, learner interactions |
In looking at course evaluation, Garrison and Anderson (2003) make use of a model of proactive assessment developed by Sims (2001), which offers seven areas:
- determining the strategic intent of the e-learning programme
- looking closely at the content of the courses
- examining the interface design
- identifying the amount of interactivity supported by the course
- evaluating the quality, quantity and thoroughness of the assessment of student learning
- measuring the degree of student support during an e-learning based course
- assessing the degree to which outcomes have been met.
A VLE is an integrated software package that provides a ‘virtual’ learning environment, typically incorporating web publishing tools, discussion forums and, in many cases, computerised assessment or quizzes. The evaluation of VLEs specifically has attracted considerable attention in recent years. A number of theoretical models have been applied that describe the learning process and how it should be managed in terms of learners and content:
- Cognitive Apprenticeship Model, based on the work of John Seely Brown and colleagues (Collins et al., 1991; see also the JohnSeelyBrown.com website)
- Conversational model for learning processes, based on Diana Laurillard’s work (1993, 2002)
- Organisational model for learning environments, based on Beer’s Viable System Model (summarised in Schwaninger, 1998)
These are critically reviewed in a comprehensive report by Sandy Britain and Oleg Liber (1999).
Biggs, J.B. and Collis, K.F. (1982) Evaluating the quality of learning: the SOLO Taxonomy. Academic Press: New York.
Britain, S. and Liber, O. (1999) A Framework for Pedagogical Evaluation of Virtual Learning Environments. JISC report available on the web: http://www.jisc.ac.uk/uploaded_documents/jtap-041.doc
Collins, A., Seely Brown, J. and Holum, A. (1991) Cognitive Apprenticeship: Making Thinking Visible. Available in the 21st Century Learning Initiative archive on the web at: http://www.21learn.org/site/archive/cognitive-apprenticeship-making-thinking-visible/
Compton, P. (1997) Evaluation: A practical guide to methods. In LTDI Implementing Learning Technology: http://www.icbl.hw.ac.uk/ltdi/implementing-it/eval.htm
Diercks-O’Brien (2000) Evaluation of Networked Learning. International Journal for Academic Development, pp. 156–165.
Garrison, D.R. and Anderson, T. (2003) E-Learning in the 21st Century: A Framework for Research and Practice. Routledge: London.
Jackson, B. (1998) Evaluation of learning technology implementation. In Mogey, N. (Ed.), Evaluation Studies, LTDI resource: http://www.icbl.hw.ac.uk/ltdi/evalstudies/esevalimp.htm
Laurillard, D. (1993) Rethinking University Teaching: A Framework for the Effective Use of Educational Technology. Routledge: London.
Seely Brown, J. JohnSeelyBrown.com website: http://www.johnseelybrown.com/
Sims, R. (2001) From art to alchemy: Achieving success with online learning, IT Forum, 55. Available on the web: http://it.coe.uga.edu/itforum/paper55/paper55.htm