Evaluation should ideally be planned at the outset of any new development or modification. The purpose, focus and relevance of your evaluation are worth spending some time on during the planning period, so that you approach the task systematically. Your initial evaluation plan is likely to be a preliminary analysis of aims, questions, tasks, stakeholders, timescales and instruments/methods. It should define “that which you are trying to investigate but also how you are going to go about it” (Crompton, 1997).
Diversity of purpose and audience for evaluation requires a variety of methods and interpretations (Morrison, 1999). Pinning down key stakeholders and your main questions, and identifying a variety of data sources, can help to ensure the credibility of your findings. Romiszowski (1988) suggests differentiating between the scope (levels of evaluation) and the depth (levels of analysis).
In planning an evaluation, you should consider how much time, and possibly budget, you have available; this may determine the kinds of instruments used to collect data. The LTDI Evaluation Cookbook (see the Resources section) provides an excellent table for analysing the resource implications of particular methods.
Although your evaluation should be seen as an evolving strategy, early planning is important. Some tasks may need to be undertaken before you start development or implementation, such as collecting baseline information (pre-test data) against which later conditions can be compared. The purpose of formative evaluation is to inform ongoing processes and practices. It is important therefore that the findings are ready in time for you to make appropriate changes to your approaches or recommendations – perhaps for a subsequent cohort of students or a departmental decision on a related issue.
Even with a well-planned evaluation methodology, you should allow sufficient time to gather your data, collate and analyse it, and prepare materials for feedback, reporting or dissemination. It may be useful to consider what opportunities there are for classroom observation (perhaps by an external evaluator/observer) and for administering evaluation instruments (such as questionnaires, confidence logs, focus groups and interviews). Indeed, your schedule is likely to determine the scale of the evaluation as well as your choice of data-gathering methods. Not everything needs to be evaluated: sampling and randomisation may be an advantage, as may use of existing evaluation data (course feedback forms, SSLC reports and the “writing on the wall” are all generally available). Evaluation should be a planned, systematic but also open process; you should aim to incorporate opportunities for discovering the unexpected!
There may be scope for reducing the workload of evaluation by taking a collaborative approach. Consider whether other colleagues are trying out approaches similar to your own; they may share many of your goals and questions. Consider too whether there are existing methods you can draw upon, or whether you can share an (external) evaluator between projects – this may increase the value of both evaluations.
Bear in mind that evaluation can appear interrogative, judgemental and threatening, rather than the desired opportunity for reflection and review. Introducing it to those involved as seeking to “appreciate” and “illuminate” rather than to “monitor” or “judge” is usually helpful. Some groups will suffer from evaluation fatigue, particularly with surveys and questionnaires. Because your evaluation questions and concerns will change as your development proceeds, you should plan to review your methods and choice of instruments from time to time.