Module Evaluation Learning Circle (on-line resources)
Academic literature
Key journals are:
- Assessment and Evaluation in Higher Education (Taylor and Francis)
- Studies in Educational Evaluation (Elsevier)
Sections below group some relevant articles. Links are Warwick-only access links that should allow you to view the full article (you'll be prompted to sign in).
Relevant Special issues
- New Directions for Institutional Research
Spring 2001: "The Student Ratings Debate: Are They Valid? How Can We Best Use Them?"
http://0-onlinelibrary.wiley.com.pugwash.lib.warwick.ac.uk/doi/10.1002/ir.v2001:109/issuetoc
- Studies in Educational Evaluation
"September 2017: Evaluation of teaching: Challenges and promises"
http://www.sciencedirect.com/science/journal/0191491X/54
- More?
Relevant literature
List of relevant academic literature very roughly organised into topic areas.
Recent additions to the topic
(2021) The changing topography of student evaluation in higher education: mapping the contemporary terrain, Higher Education Research & Development, 40:2, 220-233, DOI: 10.1080/07294360.2020.1740183
https://0-www-tandfonline-com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/07294360.2020.1740183
Borch, I., Sandvoll, R., & Risør, T. (2020). Discrepancies in purposes of student course evaluations: what does it mean to be “satisfied”?. Educational Assessment, Evaluation and Accountability, 32(1), 83-102.
https://0-link-springer-com.pugwash.lib.warwick.ac.uk/article/10.1007/s11092-020-09315-x
Bikanga Ada, M., Williamson, J., & Evangelopoulou, M. (2020). Feedback on Teaching: Non-standard Minute Paper Methods.
https://eprints.gla.ac.uk/237311/
General module/course evaluation
TODO. Need to identify more overviews of the state of the art.
- Richardson, J. T. (2005). Instruments for obtaining student feedback: A review of the literature. Assessment & evaluation in higher education, 30(4), 387-415.
http://0-srhe.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/02602930500099193
- Spooren, P., Brockx, B., & Mortelmans, D. (2013). On the validity of student evaluation of teaching: The state of the art. Review of Educational Research, 83(4), 598-642.
http://0-journals.sagepub.com.pugwash.lib.warwick.ac.uk/doi/abs/10.3102/0034654313496870
- Palmer, S. (2012). Student evaluation of teaching: Keeping in touch with reality. Quality in Higher Education, 18(3), 297-311.
http://0-srhe.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/13538322.2012.730336
Modes
Many authors have studied the different modes of evaluation - paper versus on-line. Issues such as selection bias and measurement error arise in the literature covering the shift to on-line delivery, with many authors exploring the potential consequences. There is evidence that response rates fall when evaluations move on-line, but much of it comes from small-scale studies with questionable experimental designs. Even so, this literature contains useful suggestions about managing the transition or dual delivery, e.g. Berk (2013) and Ravenscroft & Enyeart (2009). Nulty (2008) also gives useful practical advice on boosting response rates.
Authors such as Kordts-Freudinger & Geithner (2013) and Treischl & Wolbring (2017) emphasise that most studies confound a change of mode with a change of context (in class/out of class).
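The response-rate comparisons these studies make ultimately reduce to a two-proportion test. A minimal sketch of that calculation (the counts are invented placeholders, not figures from any cited study):

```python
# Minimal sketch: a two-proportion z-test comparing paper vs. online
# response rates. All counts are invented placeholders, not figures
# taken from any of the studies cited in this section.
from math import erf, sqrt

def two_proportion_z(resp_a: int, n_a: int, resp_b: int, n_b: int):
    """Return (z, two-sided p) for H0: the two response rates are equal."""
    p_a, p_b = resp_a / n_a, resp_b / n_b
    pooled = (resp_a + resp_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# Invented example: 180/200 paper responses vs. 110/200 online.
z, p = two_proportion_z(180, 200, 110, 200)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With the small samples typical of these studies the standard error grows and such comparisons quickly lose power, which is one reason to read their conclusions cautiously.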
- Kordts-Freudinger, R., & Geithner, E. (2013). When mode does not matter: Evaluation in class versus out of class. Educational Research and Evaluation, 19(7), 605-614.
http://0-www.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/13803611.2013.834613
- Risquez, A., Vaughan, E., & Murphy, M. (2015). Online student evaluations of teaching: what are we sacrificing for the affordances of technology?. Assessment & Evaluation in Higher Education, 40(1), 120-134.
http://0-srhe.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/02602938.2014.890695
- Treischl, E., & Wolbring, T. (2017). The Causal Effect of Survey Mode on Students’ Evaluations of Teaching: Empirical Evidence from Three Field Experiments. Research in Higher Education, 1-18.
https://0-link-springer-com.pugwash.lib.warwick.ac.uk/article/10.1007/s11162-017-9452-4
- Selwyn, N., Henderson, M., & Chao, S. H. (2016). ‘You need a system’: exploring the role of data in the administration of university students and courses. Journal of Further and Higher Education, 1-11.
http://0-www.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/0309877X.2016.1206852
- Nulty, D. D. (2008). The adequacy of response rates to online and paper surveys: what can be done?. Assessment & Evaluation in Higher Education, 33(3), 301-314.
http://0-srhe.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/02602930701293231
- Champagne, M. V. (2013). Student use of mobile devices in course evaluation: a longitudinal study. Educational Research and Evaluation, 19(7), 636-646.
http://0-www.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/13803611.2013.834618
- Gamliel, E., & Davidovitz, L. (2005). Online versus traditional teaching evaluation: mode can matter. Assessment & Evaluation in Higher Education, 30(6), 581-592.
http://www.tandfonline.com/doi/abs/10.1080/02602930500260647
- Donovan, J., Mader, C. E., & Shinsky, J. (2010). Constructive Student Feedback: Online vs. Traditional Course Evaluations. Journal of Interactive Online Learning, 9(3), 283-296.
https://eric.ed.gov/?id=EJ938846
- Avery, R. J., Bryant, W. K., Mathios, A., Kang, H., & Bell, D. (2006). Electronic course evaluations: Does an online delivery system influence student evaluations?. The Journal of Economic Education, 37(1), 21-37.
http://0-www.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.3200/JECE.37.1.21-37
- Fike, D. S., Doyle, D. J., & Connelly, R. J. (2010). Online vs. Paper Evaluations of Faculty: When Less Is Just as Good. Journal of Effective Teaching, 10(2), 42-54.
https://eric.ed.gov/?id=EJ1092118
- Berk, R. A. (2013). Face-to-face versus online course evaluations: A "consumer's guide" to seven strategies. Journal of Online Learning and Teaching, 9(1), 140.
https://search.proquest.com/openview/79304fb219dbc766a64965b2ba92a8fc/1?pq-origsite=gscholar&cbl=2030650
- Morrison, R. (2011). A comparison of online versus traditional student end‐of‐course critiques in resident courses. Assessment & Evaluation in Higher Education, 36(6), 627-641.
http://0-www.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/02602931003632399
- Perrett, J. J. (2013). Exploring graduate and undergraduate course evaluations administered on paper and online: A case study. Assessment & Evaluation in Higher Education, 38(1), 85-93.
http://0-www.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/02602938.2011.604123
- Stowell, J. R., Addison, W. E., & Smith, J. L. (2012). Comparison of online and classroom-based student evaluations of instruction. Assessment & Evaluation in Higher Education, 37(4), 465-473.
http://srhe.tandfonline.com/doi/abs/10.1080/02602938.2010.545869
- Ravenscroft, M., & Enyeart, C. (2009). Online student course evaluations: Strategies for increasing student participation rates. University Leadership Council, Washington, DC.
https://www.utc.edu/faculty-senate/archives/documents/ProfessionalStudyonOnlineStudentCourseEvaluations.pdf
- Barkhi, R., & Williams, P. (2010). The impact of electronic media on faculty evaluation. Assessment & Evaluation in Higher Education, 35(2), 241-262.
http://0-www.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/02602930902795927
Questionnaires and questions
Several articles listed below, such as Kember and Leung (2008), analyse the questions and types of questions that can or should be asked to ensure validity and reliability. Huxham et al. (2008) compare the questionnaire against other strategies for gathering student feedback, such as focus groups, rapid feedback and reflective diaries.
- Kember, D., & Leung, D. Y. (2008). Establishing the validity and reliability of course evaluation questionnaires. Assessment & Evaluation in Higher Education, 33(4), 341-353.
http://0-www.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/02602930701563070
- Davies, M., Hirschberg, J., Lye, J., & Johnston, C. (2010). A systematic analysis of quality of teaching surveys. Assessment & Evaluation in Higher Education, 35(1), 83-96.
http://0-www.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/02602930802565362
- Huxham, M., Laybourn, P., Cairncross, S., Gray, M., Brown, N., Goldfinch, J., & Earl, S. (2008). Collecting student feedback: a comparison of questionnaire and other methods. Assessment & Evaluation in Higher Education, 33(6), 675-686.
http://0-www.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/02602930701773000
- Stark‐Wroblewski, K., Ahlering, R. F., & Brill, F. M. (2007). Toward a more comprehensive approach to evaluating teaching effectiveness: supplementing student evaluations of teaching with pre–post learning measures. Assessment & Evaluation in Higher Education, 32(4), 403-415.
http://0-srhe.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/02602930600898536
- Zhao, J., & Gallant, D. J. (2012). Student evaluation of instruction in higher education: Exploring issues of validity and reliability. Assessment & Evaluation in Higher Education, 37(2), 227-235.
http://0-srhe.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/02602938.2010.523819
- Spooren, P., Mortelmans, D., & Denekens, J. (2007). Student evaluation of teaching quality in higher education: development of an instrument based on 10 Likert‐scales. Assessment & Evaluation in Higher Education, 32(6), 667-679.
http://0-www.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/02602930601117191
- Griffin, P., Coates, H., Mcinnis, C., & James, R. (2003). The development of an extended course experience questionnaire. Quality in Higher Education, 9(3), 259-266.
http://0-srhe.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/135383203200015111
- Huybers, T. (2014). Student evaluation of teaching: the use of best–worst scaling. Assessment & Evaluation in Higher Education, 39(4), 496-513.
http://0-www.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/02602938.2013.851782
- Richardson, J. T. (2005). Instruments for obtaining student feedback: A review of the literature. Assessment & Evaluation in Higher Education, 30(4), 387-415.
http://0-srhe.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/02602930500099193
- Stark, P. B., & Freishtat, R. (2014). An evaluation of course evaluations. ScienceOpen Research, 9, 2014.
http://www.specs-csn.qc.ca/site-com/qlp/2015-2016/2016-03-30/articles.pdf
Student perspectives and perceptions
Several studies have identified a link between student attitudes towards evaluation and the success of the system. Most find that students are motivated to participate when they expect to be able to provide meaningful feedback and to observe its impact.
Chen and Hoshower (2003) found that students consider an improvement in teaching quality the preferred outcome of an evaluation process, with improvements to course content and format the second most preferred. Using the evaluation to affect tenure, promotion or salary, or making the results available to inform students' choices of course and instructor, were considered less important.
Beran et al. (2009) have a useful discussion, beginning on page 524, of what students consider important when rating courses.
- Spooren, P., & Christiaens, W. (2017). I liked your course because I believe in (the power of) student evaluations of teaching (SET). Students’ perceptions of a teaching evaluation process and their relationships with SET scores. Studies in Educational Evaluation, 54, 43-49.
http://0-www.sciencedirect.com.pugwash.lib.warwick.ac.uk/science/article/pii/S0191491X16300256
- Chen, Y., & Hoshower, L. B. (2003). Student evaluation of teaching effectiveness: An assessment of student perception and motivation. Assessment & Evaluation in Higher Education, 28(1), 71-88.
http://0-srhe.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/02602930301683
- Beran, T., Violato, C., Kline, D., & Frideres, J. (2009). What do students consider useful about student ratings?. Assessment & Evaluation in Higher Education, 34(5), 519-527.
http://0-www.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/02602930802082228
- Ginns, P., Prosser, M., & Barrie, S. (2007). Students’ perceptions of teaching quality in higher education: The perspective of currently enrolled students. Studies in Higher Education, 32(5), 603-615.
http://0-www.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/03075070701573773
- Fisher, R., & Miller, D. (2008). Responding to student expectations: a partnership approach to course evaluation. Assessment & Evaluation in Higher Education, 33(2), 191-202.
http://0-srhe.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/02602930701292514
- McCormick, A. C., Kinzie, J., & Gonyea, R. M. (2013). Student engagement: Bridging research and practice to improve the quality of undergraduate education. In Higher education: Handbook of theory and research (pp. 47-92). Springer Netherlands.
https://0-link-springer-com.pugwash.lib.warwick.ac.uk/chapter/10.1007/978-94-007-5836-0_2
- Marsh, H. W. (1984). Students' evaluations of university teaching: Dimensionality, reliability, validity, potential biases, and utility. Journal of Educational Psychology, 76(5), 707.
http://0-psycnet.apa.org.pugwash.lib.warwick.ac.uk/record/1985-10875-001
- Denson, N., Loveday, T., & Dalton, H. (2010). Student evaluation of courses: what predicts satisfaction?. Higher Education Research & Development, 29(4), 339-356.
http://0-www.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/07294360903394466
- Wiley, C. (2019). Standardised module evaluation surveys in UK higher education: Establishing students’ perspectives. Studies in Educational Evaluation, 61, 55-65.
https://0-www-sciencedirect-com.pugwash.lib.warwick.ac.uk/science/article/pii/S0191491X1830141X
Staff perspectives and perceptions
Rienties (2014) gives a useful perspective on the Open University's transition to an on-line evaluation system, and in particular on staff attitudes. Drawing on a series of interviews with staff, he juxtaposes their cognitive understanding of why evaluation now takes place on-line with their emotional reaction and aversion to the transition.
Edström (2008) discusses the perception of course evaluation as serving a 'fire alarm' function rather than having a course development role.
- Rienties, B. (2014). Understanding academics’ resistance towards (online) student evaluation. Assessment & Evaluation in Higher Education, 39(8), 987-1001.
http://0-www.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/full/10.1080/02602938.2014.880777
- Surgenor, P. W. (2013). Obstacles and opportunities: addressing the growing pains of summative student evaluation of teaching. Assessment & Evaluation in Higher Education, 38(3), 363-376.
http://0-srhe.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/02602938.2011.635247
- Edström, K. (2008). Doing course evaluation as if learning matters most. Higher Education Research & Development, 27(2), 95-106.
http://0-www.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/07294360701805234
Moskal, Stein and Golding (2016) examine the extent to which technology influences staff engagement with evaluation, and show how the practical elements of the solution shape overall engagement.
- Moskal, A. C., Stein, S. J., & Golding, C. (2016). Can you increase teacher engagement with evaluation simply by improving the evaluation system?. Assessment & Evaluation in Higher Education, 41(2), 286-300.
https://0-www-tandfonline-com.pugwash.lib.warwick.ac.uk/doi/full/10.1080/02602938.2015.1007838
- Gravett, K., Kinchin, I. M., & Winstone, N. E. (2019). ‘More than customers’: conceptions of students as partners held by students, staff, and institutional leaders. Studies in Higher Education, 1-14.
https://0-srhe-tandfonline-com.pugwash.lib.warwick.ac.uk/doi/full/10.1080/03075079.2019.1623769
Dimensions
Student evaluation of modules and courses is multidimensional. The SEEQ (Students' Evaluation of Educational Quality) instrument, first published in the early 1980s, has been researched extensively. It aims to measure several dimensions, including learning, enthusiasm, organisation, group interaction, individual rapport and breadth. Much of the published literature refers to SEEQ or to the Experiences of Teaching and Learning Questionnaire.
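As a minimal sketch of how a multidimensional instrument of this kind is typically scored - each dimension reported as the mean of its items - assuming an invented item-to-dimension mapping rather than the published SEEQ key:

```python
# Minimal sketch of scoring a multidimensional instrument: each
# dimension is reported as the mean of its items. The item-to-dimension
# mapping below is an invented example, NOT the published SEEQ key.
from statistics import mean

DIMENSIONS = {
    "learning": ["q1", "q2"],
    "enthusiasm": ["q3", "q4"],
    "organisation": ["q5", "q6"],
}

def score(responses: dict) -> dict:
    """Average one student's item ratings within each dimension."""
    return {dim: mean(responses[item] for item in items)
            for dim, items in DIMENSIONS.items()}

# One student's invented 5-point ratings:
print(score({"q1": 4, "q2": 5, "q3": 3, "q4": 4, "q5": 2, "q6": 3}))
```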
- Marsh, H. W. (2007). Students’ evaluations of university teaching: Dimensionality, reliability, validity, potential biases and usefulness. In The scholarship of teaching and learning in higher education: An evidence-based perspective (pp. 319-383). Springer Netherlands.
https://0-link-springer-com.pugwash.lib.warwick.ac.uk/chapter/10.1007/1-4020-5742-3_9
- Utriainen, J., Tynjälä, P., Kallio, E., & Marttunen, M. (2018). Validation of a modified version of the Experiences of Teaching and Learning Questionnaire. Studies in Educational Evaluation, 56, 133-143.
http://0-www.sciencedirect.com.pugwash.lib.warwick.ac.uk/science/article/pii/S0191491X1730113X
- Burdsal, C. A., & Bardo, J. W. (1986). Measuring student's perceptions of teaching: Dimensions of evaluation. Educational and Psychological Measurement, 46(1), 63-79.
http://0-journals.sagepub.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1177/0013164486461006
- Marsh, H. W. (1982). SEEQ: A reliable, valid, and useful instrument for collecting students' evaluations of university teaching. British Journal of Educational Psychology, 52(1), 77-95.
http://0-onlinelibrary.wiley.com.pugwash.lib.warwick.ac.uk/doi/10.1111/j.2044-8279.1982.tb02505.x/full
- Spooren, P., Mortelmans, D., & Denekens, J. (2007). Student evaluation of teaching quality in higher education: development of an instrument based on 10 Likert‐scales. Assessment & Evaluation in Higher Education, 32(6), 667-679.
http://0-www.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/02602930601117191
Bias
Many authors consider potential sources of bias including gender, expected grades & outcomes, class size, prior knowledge, difficulty and workload. Marsh (2007) has a good summary of findings but also notes "The voluminous literature on potential biases in SETs is frequently atheoretical, methodologically flawed, and not based on well-articulated operational definitions of bias, thus continuing to fuel (and to be fuelled by) myths about bias".
Marsh (1987) identifies four factors important in predicting a student's evaluation: the student's prior interest in the subject, expected grades, perceived workload, and the rationale for selecting the module.
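One way to examine such predictors is to regress overall evaluation scores on them. A minimal sketch with ordinary least squares, using random placeholder data whose columns are merely named after Marsh's four factors:

```python
# Minimal sketch: regressing overall evaluation scores on Marsh's four
# predictors with ordinary least squares. Data are random placeholders;
# only the column names follow the factors listed above.
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.uniform(1, 5, n),    # prior interest in the subject (1-5)
    rng.uniform(40, 90, n),  # expected grade (%)
    rng.uniform(1, 5, n),    # perceived workload (1-5)
    rng.integers(0, 2, n),   # module chosen as an elective? (0/1)
    np.ones(n),              # intercept
])
y = rng.uniform(1, 5, n)     # overall evaluation score (placeholder)

coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
names = ["prior_interest", "expected_grade", "perceived_workload",
         "elective", "intercept"]
for name, b in zip(names, coefs):
    print(f"{name:>20}: {b:+.3f}")
```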
- Marsh, H. W. (2007). Students’ evaluations of university teaching: Dimensionality, reliability, validity, potential biases and usefulness. In The scholarship of teaching and learning in higher education: An evidence-based perspective (pp. 319-383). Springer Netherlands.
https://0-link-springer-com.pugwash.lib.warwick.ac.uk/chapter/10.1007/1-4020-5742-3_9
- Marsh, H. W. (1987). Students' evaluations of university teaching: Research findings, methodological issues, and directions for future research. International Journal of Educational Research, 11(3), 253-388.
http://0-www.sciencedirect.com.pugwash.lib.warwick.ac.uk/science/article/pii/0883035587900012
- Centra, J. A., & Gaubatz, N. B. (2000). Is there gender bias in student evaluations of teaching?. The Journal of Higher Education, 71(1), 17-33.
http://0-www.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/00221546.2000.11780814
- Marsh, H. W., & Roche, L. A. (2000). Effects of grading leniency and low workload on students' evaluations of teaching: Popular myth, bias, validity, or innocent bystanders?. Journal of Educational Psychology, 92(1), 202.
http://psycnet.apa.org/record/2000-03003-018
- Centra, J. A. (2003). Will teachers receive higher student evaluations by giving higher grades and less course work?. Research in Higher Education, 44(5), 495-518.
https://0-link-springer-com.pugwash.lib.warwick.ac.uk/article/10.1023/A:1025492407752
There is a body of evidence claiming bias in, or an impact on, evaluation for a number of factors; some examples follow. Mode of evaluation is also considered relevant but is covered in an earlier section.
Factors related to teachers:
- Seniority:
Feldman, K. A. (1983). Seniority and experience of college teachers as related to evaluations they receive from students. Research in Higher Education, 18(1), 3-124.
https://0-link-springer-com.pugwash.lib.warwick.ac.uk/article/10.1007%2FBF00992080
Uttl, B., White, C. A., & Gonzalez, D. W. (2017). Meta-analysis of faculty's teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Studies in Educational Evaluation, 54, 22-42.
http://0-www.sciencedirect.com.pugwash.lib.warwick.ac.uk/science/article/pii/S0191491X16300323
Uttl et al. (2017) invalidate Feldman's long-standing results by showing that there is no significant correlation between SET ratings and learning; they claim evaluations cannot be used as an indicator of teaching effectiveness.
- Gender:
Bennett, S. K. (1982). Student perceptions of and expectations for male and female instructors: Evidence relating to the question of gender bias in teaching evaluation. Journal of Educational Psychology.
http://0-psycnet.apa.org.pugwash.lib.warwick.ac.uk/record/1982-24396-001
Winocur, S., Schoen, L. G., & Sirowatka, A. H. (1989). Perceptions of male and female academics within a teaching context. Research in Higher Education, 30(3), 317-329.
https://0-link-springer-com.pugwash.lib.warwick.ac.uk/article/10.1007%2FBF00992607
- Teaching models:
Kolitch, E., & Dean, A. V. (1999). Student ratings of instruction in the USA: Hidden assumptions and missing conceptions about good teaching. Studies in Higher Education, 24(1), 27-42.
http://0-www.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/03075079912331380128
The literature on teaching models needs further exploration.
- Reputation:
McNatt, D. B. (2010). Negative reputation and biased student evaluations of teaching: Longitudinal results from a naturally occurring experiment. Academy of Management Learning & Education, 9(2), 225-242.
- Age and personality:
Patrick, C. L. (2011). Student evaluations of teaching: effects of the Big Five personality traits, grades and the validity hypothesis. Assessment & Evaluation in Higher Education, 36(2), 239-249.
http://0-www.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/02602930903308258
- Prominence or attitude to research:
Bak, H. J. (2015). Too much emphasis on research? An empirical examination of the relationship between research and teaching in multitasking environments. Research in Higher Education, 56(8), 843-860.
https://0-link-springer-com.pugwash.lib.warwick.ac.uk/article/10.1007%2Fs11162-015-9372-0
- Cultural values, power distance, and individualism vs collectivism:
Arnold, I. J., & Versluis, I. (2019). The influence of cultural values and nationality on student evaluation of teaching. International Journal of Educational Research, 98, 13-24.
https://0-www-sciencedirect-com.pugwash.lib.warwick.ac.uk/science/article/pii/S0883035519301363
Factors related to students:
- Gender of students:
Badri, M. A., Abdulla, M., Kamali, M. A., & Dodeen, H. (2006). Identifying potential biasing variables in student evaluation of teaching in a newly accredited business program in the UAE. International Journal of Educational Management, 20(1), 43-59
http://0-www.emeraldinsight.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1108/09513540610639585
- Personality traits:
Patrick, C. L. (2011). Student evaluations of teaching: effects of the Big Five personality traits, grades and the validity hypothesis. Assessment & Evaluation in Higher Education, 36(2), 239-249.
http://0-srhe.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/02602930903308258
- Students' attitude to evaluations:
Bassett, J., Cleveland, A., Acorn, D., Nix, M., & Snyder, T. (2017). Are they paying attention? Students’ lack of motivation and attention potentially threaten the utility of course evaluations. Assessment & Evaluation in Higher Education, 42(3), 431-442.
http://0-srhe.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/02602938.2015.1119801
The literature on students' attitudes to evaluation needs further exploration.
- Absenteeism:
Wolbring, T., & Treischl, E. (2016). Selection bias in students’ evaluation of teaching. Research in higher education, 57(1), 51-71.
https://0-link-springer-com.pugwash.lib.warwick.ac.uk/article/10.1007%2Fs11162-015-9378-7
Likert scales
Revilla et al. (2014) test and discuss the impact of additional categories on validity and reliability. Their results show that agree-disagree scales should be offered with 5 options; 7 or 11 options give poorer quality results. Lozano et al. (2008) claim the optimum lies between four and seven options: with fewer than four, validity and reliability decrease, and beyond seven, reliability does not increase significantly.
A general scan of the related research suggests a consensus that four options are the absolute minimum to ensure reliability, with some disagreement over whether five or seven are preferable. In practice, the difference in validity between five and seven options is minimal.
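The reliability statistic these scale-length studies typically compare is Cronbach's alpha. A minimal sketch of the calculation, on invented 5-point responses:

```python
# Minimal sketch: Cronbach's alpha, the reliability statistic these
# scale-length studies compare. `items` is an (N students x k items)
# matrix of Likert responses; all numbers are invented.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
trait = rng.integers(1, 6, size=(100, 1))  # shared 5-point "true" rating
items = np.clip(trait + rng.integers(-1, 2, size=(100, 4)), 1, 5)
print(f"alpha = {cronbach_alpha(items):.2f}")
```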
- Revilla, M. A., Saris, W. E., & Krosnick, J. A. (2014). Choosing the number of categories in agree–disagree scales. Sociological Methods & Research, 43(1), 73-97.
http://0-journals.sagepub.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1177/0049124113509605 - Lozano, L. M., García-Cueto, E., & Muñiz, J. (2008). Effect of the number of response categories on the reliability and validity of rating scales. Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 4(2), 73.
http://0-psycnet.apa.org.pugwash.lib.warwick.ac.uk/record/2008-06441-003
Middle option: Kalton et al. (1980) consider the impact of including a middle response and conclude that its presence reduces the number of extreme responses given. Sturgis et al. (2014) look at whether selecting the middle option represents a neutral response or a lack of cognitive choice (no opinion). Following up with respondents who chose the middle option, they conclude that it most often represents a "don't know" option. Including or excluding middle responses that represent no cognitive choice can therefore significantly affect the analysis of results.
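A toy illustration of that analytical choice: keeping the midpoint of a 5-point scale as a genuine neutral versus dropping it as a disguised "don't know" shifts the summary statistics (the responses are invented):

```python
# Toy illustration: the mean of invented 5-point responses when the
# midpoint (3) is kept as a genuine neutral vs. dropped as a disguised
# "don't know". The gap shows how the coding choice moves the result.
from statistics import mean

responses = [1, 2, 3, 3, 3, 3, 4, 5, 5, 3]

kept = mean(responses)
dropped = mean(r for r in responses if r != 3)

print(f"midpoint kept as neutral:     {kept:.2f}")
print(f"midpoint dropped as unknown:  {dropped:.2f}")
```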
- Kalton, G., Roberts, J., & Holt, D. (1980). The effects of offering a middle response option with opinion questions. The Statistician, 65-78.
http://0-www.jstor.org.pugwash.lib.warwick.ac.uk/stable/2987495?seq=1
- Sturgis, P., Roberts, C., & Smith, P. (2014). Middle alternatives revisited: how the neither/nor response acts as a way of saying “I don’t know”?. Sociological Methods & Research, 43(1), 15-38.
http://0-journals.sagepub.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1177/0049124112452527
Ordering: Hartley and Betts (2010) show how the order of the options impacts the outcome. A descending scale (10 to 0, agree to disagree) results in consistently higher ratings compared to an ascending scale.
- Hartley, J., & Betts, L. R. (2010). Four layouts and a finding: the effects of changes in the order of the verbal labels and numerical values on Likert‐type scales. International Journal of Social Research Methodology, 13(1), 17-27.
http://0-www.tandfonline.com.pugwash.lib.warwick.ac.uk/doi/abs/10.1080/13645570802648077