Evaluation is central to ensuring the quality of academic practice. The University’s Learning and Teaching Strategy refers to the importance of developing effective evaluation practices and “the introduction of robust mechanisms for monitoring and evaluating teaching quality”. All departments have curriculum review and student feedback procedures in place (e.g. SSLC, course evaluation forms) and many have established systems for peer observation.
In addition to these quality issues, e-learning involves high levels of investment that need to be justified. Evaluation has therefore also become important across the sector as a means of demonstrating the effectiveness and cost-effectiveness of learning technologies.
However, there has been a gradual shift towards using evaluation for developmental rather than judgemental purposes. Hounsell (1999: 161) suggests evaluation is best seen not only as a necessary adjunct to accountability, but also as an integral part of good professional practice: contextualisation rather than standardisation. Evaluation can provide a vehicle for reflective practice: thinking critically about curriculum development and encouraging continuous improvement of one’s own teaching performance. Some academic staff have used evaluations of innovative approaches as evidence of teaching achievement in the promotion process.
E-learning, by its nature, is innovative: it introduces new modes of teaching, learning and assessment. As you will be putting considerable effort into such developments, perhaps also developing your own skills and understanding, you will generally wish (or be required by others) to be able to say something about the outcomes. In introducing e-learning, you will be reflecting on whether what you are doing, and how you are doing it, is meeting your intended aims and objectives. A well-designed evaluation should provide evidence of the reasons why, and the extent to which, a particular approach has been (or is likely to be) successful, and possibly of its potential value to others. David Baume (2004) distinguishes between “succeeding”, meaning achieving goals, and “going well”, meaning adopting satisfactory or excellent processes. He argues that good process does not always correlate with attainment of goals, and vice versa.
Although the above is essentially “evaluation for understanding”, there are several distinct stages in designing and developing e-learning that can help define the purpose of an evaluation. As a starting point, therefore, you might consider whether the evaluation is for needs analysis (diagnostic), developmental (formative) or monitoring (summative) purposes:
- diagnostic – learning from the potential users; to inform plans to change practice;
- formative - learning from the process; to inform practice; or
- summative - learning from the product; to inform others about practice.
The audience for whom the evaluation is primarily intended (which may well be yourself) will often determine this, but you might also need to consider the questions and concerns of any other key stakeholders.