
What are the Right Questions?


For individual innovators, a pragmatic approach is to identify, first and foremost, what you really want to know about your use of e-learning, its impact on students’ learning and their experience of learning. The next step is to formulate the key question(s) that need answering. Your question(s) should relate back to your teaching goals and the overall context of learning, while taking account of others who might learn from your experience. “If you don't have a question you don't know what to do (to observe or measure), if you do then that tells you how to design the study” (Draper, 1996). Some studies focus on the experience and outcomes of using e-learning resources, earlier called CAL (Computer Assisted Learning) or courseware packages (Draper, 1996; Cook, 2002). Nowadays, we are much more likely to be considering the use of both content and communication, and the issues concerned with effectively integrating (or blending) online and offline methods and materials (Draper et al., 1994; 1996).

Key Areas of Enquiry

Similar to the issues discussed above, Ehrmann (1996a) suggests that organisations often misguidedly seek “universal answers about the comparative teaching effectiveness and costs of technology”. He argues that “traditional” methods, materials and motives are neither uniform nor stable, which limits the context for reliable or valid comparison. Perhaps it is for this reason that grounded theory (a qualitative research methodology) is an increasingly popular approach to evaluating e-learning, and for analysing the behaviours of e-learning communities in particular (McConnell, 2002). Grounded theory differs from the more usual objectives-referenced or hypothesis-testing research in that the methodology and theory are allowed to develop gradually as data and interpretations accumulate. It facilitates a reflexive as well as reflective form of evaluation that allows the unexpected and the unquestioned to emerge – ‘looking’ rather than ‘looking for something’.

Laurillard (1993a) outlines the difficulties in answering questions like 'do learning technologies improve learning?'. Evaluation studies in the field tend to sidestep this question, demonstrating instead whether the technologies have the potential to do so. She suggests the only sensible answer is “it depends”. What the answer depends on is, of course, the context of learning.

This gives us several areas to explore in evaluation, in terms of the pedagogical, technical, economic or social worth of an e-learning approach. A critical review of common questions in evaluation studies is offered here for illustrative purposes.

Evaluating the pedagogical effectiveness of technology

Here we get right to values, and the way questions are worded is important. Evaluating “effectiveness” in itself might mean: does it improve test scores, or does it increase efficiency? Pedagogical effectiveness must be evidenced in terms of student learning. Often evaluation will focus on an e-learning resource (content) or tool (communication, collaboration). An area of enormous pedagogical value is evaluating the ‘nature of engagement’ by the students, both with the “object” of e-learning being evaluated and with the context in which it is used within the course overall.

If the focus of attention is on e-learning materials or ‘content’, issues concerning screen design or navigation are probably quite superficial in terms of pedagogical impact. More pertinent questions would concern the ways in which students interact with electronic content and, importantly, how the e-resources are introduced into the learning design. For example, in resource-based work, students can access the Internet, but did they feel overwhelmed by the amount of information there, and how did they filter it? For problem exercises delivered on the web, what other provision did the students find useful (face-to-face workshops, discussion forums, email)?

If the focus is on communication or collaborative tools, you might be interested in the ways in which the technology supports interactions between learners. Evidence might be in the form of communication between students, assessment issues, collaborative working or sharing of resources, depending on the tools being used; if you just look at an online discussion ‘trace’, for example, some aspects are “invisible” and hard to evidence. You might be interested in which (combinations of) teaching methods best support the students in working on group tasks. A student might be able to access a discussion forum, but did they feel that it was a format that enabled them to express themselves in the group, or did they feel they had an opportunity to participate? In videoconferencing, students might feel they did not have a chance to participate, or might feel uncomfortable with "appearing on TV". In this case, it would be worthwhile drawing other participants, such as guest lecturers, technicians or participating colleagues, into the evaluation.

Comparing traditional and e-learning methods

This is shaky ground. While you can compare “traditional” and “new”, interpreting what the results mean is difficult. It is not straightforward to find comparable data. In terms of the students’ perception of the experience, for example, do students like it because it’s new, or hate it because it’s unfamiliar, or is it simply that it’s great/rubbish? You might ask whether students would wish to use the approach/technology again and what improvements they would like to see. In terms of student performance, is it possible to isolate the effect of the new medium, or is any change in scores the result of having a different cohort of students?

You should bear in mind that students will not always have sufficient awareness, or use the same language, to express their feelings, preferences, goals, or any changes in their study behaviours. One group of students may find some key elements not worth mentioning, while others may take some things for granted. Some may not wish to reveal these things, or may wish to say only good things to a lecturer, and much depends on how the student sample is picked (volunteers or random selection). All these factors may skew the evaluation. There may also be cultural or gender issues that influence what students say and how they say it.

Evaluating e-learning against other methods is further complicated by the use of strategies that are no longer appropriate for the different styles and patterns of learning that new technologies facilitate. For example, networked and mobile technologies enable more spontaneous, unpredictable and informal learning to take place, and the resulting pedagogies will sit unevenly alongside classroom-type approaches.

Evaluating technology-supported higher-order learning

Let us work through an example. In higher education, we may particularly wish to encourage creativity and inquiry, developing in students “adaptive”, research-like capabilities (Dempster & Blackmore, 2002). E-learning would aim to support a collaborative and constructivist experience for learners, where understanding is developed within a critical community of inquiry (Garrison and Anderson, 2003). You decide to provide students with an opportunity to work through “seminar” topics in groups online. They submit and comment on each other’s work, e.g. a critique of a research paper on Marx, a piece of translation, a problem-solving exercise. A key evaluation question is whether the approach resulted in students achieving the intended learning outcomes. But there will be lots of things going on here.

At one level, you might start to compare the effectiveness of the approach in terms of learning gains with its efficiency in terms of teaching effort (a form of cost-effectiveness). At another level, you might be interested in whether the availability of other students’ work was the key element, or whether participation in group discussion made the difference; indeed, you might try to tease out how these worked best together and in complement to the face-to-face sessions. You might wish to compare the quality of students’ critical thinking with that of a previous cohort, or with the same group’s performance in a module or activity taught differently. You might wish to see how the experience and achievements correlate with specific variables: e.g. student background, nationality/language, gender, learning styles or disabilities, prior experience with IT or online learning. For instance: how confident were female students in discussing their own ideas online compared to their male counterparts?

Evaluating the use of technology

You might ask “what were the issues in how the students used the technologies?” This could accommodate fairly trivial, low-level issues, such as setting up accounts, ensuring equipment and technical support are available, making sure plug-ins are installed, and managing technological compatibility. More interesting would be to evaluate whether students felt, or were, literate with these technologies – not just in this functional sense but also in the cultural sense of being able to do meaningful things with them.

You might ask “were any students disadvantaged by the use of technology and why?” This addresses usability issues, such as technical standards, the human-computer interface, and accessibility. “Were they” is certainly a good question to ask, but “did they feel” might also be important – they may have overcome a difficulty in spite of the odds, rather than because they were comfortable.

Evaluating cost-effectiveness

A major area of interest over recent years has been finding ways to determine the costs associated with using e-learning (see the work of Doughty, 1996). Studies on cost-benefits and cost-effectiveness have not yielded conclusive evidence, as there are many “hidden” costs involved (Bacsich, 1999). Ehrmann (1996a, b, c) warns against questions about whether technology X (be it old or new) allows something to be “taught more cheaply”. He argues that costs and accounting methods vary by institution and situation, and are further complicated by unpredictable changes in the prices and performance of technology.

It is therefore not straightforward, or necessarily appropriate, to weigh up pedagogical benefits (improved effectiveness of learning or efficiency of teaching) against investment values (cost/benefit, cost-effectiveness, internal rate of return). The benefits of using e-learning are difficult to quantify, but may be of high social value. Tavistock (1998: p27) describes ‘cost-benefit analysis’ as a method to compare costs and outcomes of alternatives, and ‘cost-effectiveness’ as a method to assess outcomes in relation to a goal. The latter suggests a useful socio-economic model, in which the outcomes may include pedagogical, social and economic benefits. This may provide a reasonable starting point when it proves impossible to convert the outcomes into market or monetary terms, as is mostly the case with improvements in teaching and learning. It suggests a means to “trace the translation of pedagogic outcomes [into] organisational outcomes, and then assess these organisational outcomes in economic terms (i.e. in terms of contribution to organisational performance or value added)”.
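By way of a purely illustrative sketch (the formula and figures below are hypothetical and are not drawn from Tavistock or Ehrmann), the distinction can be expressed as a simple ratio in which the denominator stays in its own, non-monetary units:

\[ \text{cost-effectiveness} = \frac{\text{total cost of the intervention}}{\text{outcome gained, in its own units (e.g. mean gain in assessment score)}} \]

So if, hypothetically, an online seminar design were to cost £3,000 more in staff time and tools than its predecessor, and mean module scores were to rise by 5 points, its incremental cost-effectiveness would be £600 per point gained. A cost-benefit analysis would go a step further and attach a monetary value to those 5 points – precisely the conversion that is usually impossible for teaching and learning outcomes.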

Question structures

First-level questions about the use of e-learning might be extremely broad and difficult to answer in isolation. For example, questions 1, 2 and 3 below focus solely on the e-learning object itself (“it”).

  1. Does it work?
  2. Do students like it?
  3. How effective is it?


Questions 4 and 5 start to explore how a particular e-learning approach might be worth repeating or might be transferable outside the initial context.


  4. How cost-effective is the approach?
  5. How scalable is the approach?

More probing questions (an extended version of the Bristol LTSS guidelines) might be:

  • Is there sufficient need for the innovation to make it worth developing?
  • What are the best ways of using e-learning resource X?
  • Will this approach be an effective way to resolve problem X?
  • How does this approach need to be modified to suit its purpose?
  • Do you think it would be better if ... (an alternative mode of engagement) were carried out?
  • What advice would you give another student who was about to get involved in a similar activity?

Second-level questions might try to explore specific issues in more depth. For example (from the Tavistock (1998) evaluation guide):

  • How do students actually use the learning system/materials?
  • How usable is the e-learning tool or material?
  • What needs are being met by them?
  • What types of users find it most useful?
  • Under what conditions is it used most effectively?
  • Which features of the support materials are most useful to students?
  • What changes result in teaching practices and in learning/study practices?
  • What effects does it have on tutor planning, delivery and assessment activities?
  • What are the changes in time spent on learning?