Dr Kevin Moffat, Department of Biological Sciences, University of Warwick
In a three-way collaboration between Leicester, Newcastle and Warwick, a simple assessment method using e-discussion groups has been trialled on a variety of student cohorts. This makes good use of available Virtual Learning Environments (VLEs), promotes deep rather than superficial learning, and encourages higher-level learning competencies and inclusiveness. Based on the experiences described, practical guidelines for a model of online assessment are given.
My Aims and Objectives were:
- To increase student awareness of Developmental Biology
- To improve student performance in a particular part of the assessed examination for the course.
The pedagogic requirement for students to develop a “deep” level of understanding of any subject clearly demands more than an ability to memorise enough information to pass an examination. While there are varied ways to achieve this, it is clear from talking to colleagues that simply using an intranet to give students a virtual filing cabinet of their lecture notes is often considered enough. Evidence, however, shows that just putting notes on the web does not improve student learning (Evans et al., 2004).
To address this, others have tried to use email list servers or bulletin boards to engage students in online academic discussions, with the hope of facilitating subject knowledge and reflection (Cameron, 2002). As a Department we have used these several times, but those efforts were always unsuccessful; indeed, there is little published evidence concerning the outcomes of these approaches in bioscience teaching. We perceived the predominant cause of failure to be the unwillingness of highly goal-directed bioscience students to engage with what is seen as a frivolous activity not directly related to assessment.
As an alternative, in collaboration with Dr Alan Cann (University of Leicester) and Dr Jane Calvert (University of Newcastle), I have used an assessment methodology applied to discussion groups, funded by a grant from the Centre for Bioscience. We have been able to try a variety of techniques on different cohorts, made several mistakes, and learnt a lot about how to achieve the intended outcomes (for a full discussion see Cann et al., 2006).
Alan Cann at the University of Leicester performed an initial study of a model for online discussion group assessment on a group of 34 final-year bioscience students, which was considered highly successful (Cann et al., 2006). Contributions to the discussion boards were explicitly linked to assessment, in this case contributing 15% of the total module marks. Students were told that acceptable contributions included:
- Any original comment or discussion on the topics covered in the relevant lectures.
- A simple question in itself will not be regarded as an acceptable contribution, but a complete (and correct) answer to someone else's question is an acceptable contribution.
- Feel free to cite a relevant publication from WoK [Web of Knowledge] or PubMed, a book from the Library or a web page, but a citation or a url alone will not be regarded as an acceptable contribution unless you also describe in sufficient detail the content of the work and why it is relevant to this discussion.
- Any other original, non-plagiarised contribution relevant to the topics under discussion.
Prior to the commencement of any discussions, the entire class engaged in an online E-tivity, an icebreaker to promote group cohesion; in this case, the construction of a homepage on a VLE to introduce themselves to other module participants (Salmon, 2002).
Assessed online discussions were trialled at Warwick, using Warwick Forums and following the Leicester model, for a larger third-year student cohort taking a course in animal developmental biology in October–December of 2005 and 2006. The differences were that we required three contributions to each discussion and that discussions were run one every two weeks. We also stipulated that each posting to the forum had to end with a question, allowing other members of the forum to address the points the student had raised. However, module essays were still required. A full description of the various models has been published previously (Cann et al., 2006).
The Development course consists of 30 lectures, two tutorials, two assessed essays and a week-long practical, run through the autumn term. It is assessed 40% by coursework and 60% by examination the following April. Students traditionally consider this a tough course, particularly because of the assessed essays, each requiring the reading and interpretation of around 30 cutting-edge scientific papers. Additionally, running from October to December, it is somewhat distant from the examination period. Nevertheless, the numbers taking the course have grown from around 40 to 70. Staff concerns included the possibility that high student numbers could affect marks for assessed essays and examinations. Much credit has been given to the course, internally from our students and externally from examiners, particularly from other institutions where students have gone on to do PhDs in developmental biology. The high level of interaction with staff teaching the course has been facilitated through the small tutorial groups (four students), allowing excellent interaction between staff and students, and between students; the tutorial components require students to demonstrate a fairly in-depth understanding of a large number of areas within the subject. A major concern, therefore, was that as a teaching group we were now triple-teaching this component, leading to “lecturer fatigue” and a consequent lowering of what students could expect from the tutorials.
To this end, it was hoped that the introduction of student-led/staff-moderated discussions would aid their achievement (http://www2.warwick.ac.uk/fac/sci/bio/teach/3rdyr/lectures/bs301/discgroups/).
The students were randomly organised into five discussion groups. They were given permission to view all other groups, but to contribute only to their own discussion. After negotiations with the course convenor, however, the discussion groups were assigned only 3% of the total assessed marks for the course. Judging by the hit statistics for the page, this level of assessment was enough to motivate the students to perform the task; over 90% of the hits during the term were on the discussion pages rather than the lecture notes provided by staff. Across the five groups, close to 17,000 hits were made within the discussions. One group forum, for example, received 3,307 hits over the term, of which just under half (1,618) were from members of other forums, demonstrating that students were reading each other's postings. Monitoring the activity daily, on two occasions we added rules to the boards: first, introducing the requirement to end postings with questions; second, requiring a certain “time-distance” between consecutive posts. Generally students performed very well with these tasks, with a grade average across all students of 72%. A complication was that only two contributions were required for the first discussion, so for that discussion students could score 0, 50 or 100; subsequently, students were scored 0, 33, 66 or 100 for each discussion board. Students were scored only on completion: each contribution was assessed simply on whether it fell within the guidelines above (essentially one-third of the marks per contribution) and was not otherwise graded.
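The completion-based scoring described above can be expressed in a few lines of code. The sketch below is purely illustrative (the function name and its form are my own, not part of the published scheme); it assumes each acceptable contribution earns an equal share of 100%, capped at the required number, with marks truncated to whole percentages, which reproduces the 0/50/100 and 0/33/66/100 bands reported for the two- and three-contribution discussions.

```python
def discussion_score(accepted: int, required: int) -> int:
    """Completion-based mark for one discussion board.

    Each contribution judged acceptable under the posted guidelines
    earns an equal share of 100%; extra contributions beyond the
    required number earn nothing further. Floor division gives the
    truncated whole-percentage bands described in the text.
    """
    return 100 * min(accepted, required) // required

# First discussion: two contributions required -> 0, 50 or 100
print([discussion_score(n, 2) for n in range(3)])   # [0, 50, 100]

# Later discussions: three contributions required -> 0, 33, 66 or 100
print([discussion_score(n, 3) for n in range(4)])   # [0, 33, 66, 100]
```

The simplicity is the point: the assessor decides only whether each posting meets the guidelines, and the mark follows mechanically.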
Generally students either did very well or very poorly. Student feedback suggests that the low level of credit was a factor in some students' decisions not to contribute; indeed, this level of assessment received general criticism from the students.
Reflections and conclusions
Having run this exercise for two years, it is clear that the low level of credit is a major factor influencing certain students. Nonetheless, the students agreed that the exercise made them aware of the examination structure of the course and did increase their awareness of the subject area; 62% agreed with such a questionnaire statement.
While online discussion groups have been used many times before, their use as an assessment tool has been rather varied. In my department their use to facilitate understanding without assessment has always failed, although others (Hartford, 2005) have reported their successful use to facilitate group work within final-year workshop-driven modules. Other authors have proposed complex marking schemes for online discussions, e.g. Kay (2006); these are complex for the student and somewhat time-demanding for staff. The assessment model described here has the advantage of being a very simple “completion-based marking scheme”, with only the requirement to check for plagiarism and to warn and ultimately sanction any offender; in this context, public warnings on the discussion boards themselves suffice. A single assessor was required to monitor two to three discussion boards each day, estimated at ten minutes per day, scoring quickly at the same time. In all the trials at Warwick, Leicester and Newcastle this has proved simple to administer, with feedback to the students easy to provide. While at present we have not replaced the in-depth tutorials, it is clear that as an alternative there would be a significant time saving for the academics: one hundred minutes in total, versus three hundred and sixty minutes (times four, for each academic).
However, deep-level learning is hard to measure with a quantitative metric. Further analysis will be required to assess the impact of online discussion boards on overall learning outcomes within the course; such outcomes are highly multi-factorial and depend more on examination performance than on in-course assessments. Qualitative feedback from module questionnaires shows that the assessed online discussions were not universally popular with students, perhaps reflecting student expectations and how the groups are integrated into the overall structure of the course. In contrast, the academic staff involved found online discussions to be a valuable tool in motivating students to read and reflect on information presented in other formats. We were surprised to find, for example, that several students claimed to have spent over four hours researching one point for the discussions.

Comparing the three trials at Leicester, the two at Warwick and the one at Newcastle, a clear set of guidelines can be drawn up for those who might wish to perform a similar exercise. While, particularly with hindsight, they may feel like statements of the obvious, across the six trials we have run at three institutions in various formats they have emerged as the most significant factors.
Most importantly, it helps to be in control of the course, or at least to have the convenor onside, allowing easier decisions on integration and assessment level.
Recommendations for the Use of Online Discussion:
- Discussion Groups must be integrated correctly
- it mustn’t look like an afterthought
- Promote Group cohesiveness
- an “E-tivity” worked well
- Give a reasonable level of marks
- 10-15% of module mark works well as a motivating factor
- Consider dropping other elements of assessment within the module?
- Group size appears important
- 8 to 15 members appears optimum
- Give immediate feedback
- Weekly on the discussions and daily if required to “steer” discussions
- Seek trained help and advice on e-moderating
- Introduce rules as required
- Decide how to monitor and deal with plagiarism
This work was funded in part by a grant from the Higher Education Academy Centre for Bioscience. I would also like to acknowledge the enthusiasm of Alan Cann in initiating this project, and to thank Alan Cann (again) and Jane Calvert for their continued collaboration.
Citation for this Article
Moffat, K. (2007) Assessing Online Discussions in Biology Education. Warwick Interactions Journal 29. Accessed online at: http://www2.warwick.ac.uk/services/cap/resources/pubs/interactions/current/abmoffat/moffat