These notes are based on material from the four journal articles listed in the references section. All four are from the journal ‘Quality Assurance in Education’, and it is a sign of the centrality of this topic that it has its own journal – which is not the only one devoted to the topic, and which has been in existence for over 15 years. There are individuals in every university, and in other educational organisations, whose jobs would not exist were it not for the ‘quality’ imperative, which some people see as an ‘industry’ in its own right.

This module is not intended to be a guide to how to ‘do’ quality – control, assurance, improvement or standards achievement. Rather, this part of the module will encourage you to consider some of the areas of debate about concepts of and related to ‘quality’ in higher education.


Examination of these four journal articles identifies a number of dichotomies or contested issues, which we can use to examine key concepts related to quality and to ways of assessing and enhancing effectiveness.

Incorrect perceptions

Doherty (2008) refers to the ‘pathological aversion’ that some academics have to what they perceive QA (Quality Assurance) to be. He states, however, that this aversion is misplaced as their perception is incorrect.

Houston (2008) states that some academics have been ‘cautiously receptive’ to the idea that quality management could provide academics with a better understanding of teaching and learning.

Industry v Education/Services

‘The quality assurance (QA) methods currently used in education demonstrably derive from industrial applications. To many academics, this is an anathema’ (Doherty 2008, p 255)

He goes on to point out that, while managers are necessary within universities, managing within education – even if learning is defined as the ‘product’ – is very different from managing a factory. It requires imagination to find ways to apply the tools of manufacturing to services in general and to education in particular.

People are not Widgets

It is possible to define criteria for a perfect, or excellent or acceptable widget (a washer, a car part, even a bar of chocolate) or batch of widgets. Even then, not everyone will agree as to what is the minimum acceptable. Quality measures are often applied not to individual items but to a production run. So, for example, a measure might be the number of faulty items in a batch of 1000. If the level rises too high then there will be an investigation into whether the quality of the raw materials has fallen, or the machinery needs to be recalibrated or the operators need further training.
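The batch-level measure described above can be sketched in code. This is a minimal illustration only – the batch data, the threshold of five faulty items, and the function names are all invented for the example, not drawn from any of the articles.

```python
# Illustrative sketch of batch-level quality measurement: count faulty
# items in a production run and flag the batch for investigation
# (raw materials, machinery calibration, operator training) if the
# count exceeds an agreed threshold. All values here are hypothetical.

def faulty_count(batch):
    """Count the items in a batch marked as faulty."""
    return sum(1 for item in batch if item["faulty"])

def needs_investigation(batch, threshold=5):
    """Flag the batch if faulty items exceed the agreed threshold."""
    return faulty_count(batch) > threshold

# A batch of 1000 widgets, 7 of which are faulty.
batch = [{"faulty": i < 7} for i in range(1000)]
print(faulty_count(batch))         # 7
print(needs_investigation(batch))  # True
```

Note that the measure applies to the batch as a whole, not to any individual widget – which is precisely why, as the next paragraph argues, it breaks down for one-off items.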

This form of measurement becomes unrealistic when considering bespoke, one-off items rather than production runs of thousands of identical elements. You can reasonably judge, in these terms, the quality of a batch of identical jumpers produced for a chain store. You cannot use the same terms in judging the quality of a handmade sweater, which is unique.

The issues in applying these kinds of measures in any service setting, whether that is education, healthcare or banking, are similar, because every interaction involving an individual is unique, since no two people are identical.

Critical examination of machine metaphor

It is the attempt to apply manufacturing methodology in an inappropriate manner to services which has caused anger and disillusionment among practitioners.

It is therefore necessary to remind ourselves that the idea of any organisation as a machine is a metaphor. We should critically examine this metaphor and discard it if it does not prove helpful.

Internal v External

Houston (2008) asserts that the concern for quality development in industry arose from within. In contrast, the quality improvement imperative in HE is externally driven from ‘the market’ and government.

Control, Compliance and Assurance v Change, Improvement, Innovation and Development

External agencies tend to define quality in terms of ‘Assurance, Accountability, Audit and Assessment’, in contrast to ‘Enhancement, Empowerment, Enthusiasm and Excellence’.

A quality system should lead stakeholders to change their behaviour and reflect upon their professional values and attitudes. Quality accreditation may hinder such change as it may set certain processes in stone and prevent innovation and development.

Doing the Right Things Right v Doing the Right Things Better

One definition of quality is ‘doing the right things right’. In other words, identifying what needs to be done and carrying out those actions – nothing more, nothing less – to an agreed set of standards. There is no point in doing things to a high standard if it is not necessary – or is positively undesirable – to do them at all.

However, this definition has been challenged on the grounds that it encourages a static, compliance-driven approach. Proponents of an alternative approach look to continuous improvement, so that standards are always rising. This ‘stretching’ approach carries an inherently higher level of risk than simply continuing to follow well-tried processes to maintain known standards with which stakeholders have become comfortable.

Definition of Customer

Students are often considered to be the “primary customers”, as they are the direct recipients of the services provided. Therefore student perceptions have been a key preoccupation of managers. However, the definition of customer-defined quality in HE is not straightforward, as students are clearly not the only customers. Others could include employers, families, the local community, staff, the wider academic community, professional bodies and the government.

Houston (2008) believes that ‘Customer-focused definitions of quality fit the context of HE poorly’ (p63). At best, the identification of a student as a customer is partial.

Role of a University

Houston argues that the purpose of a university is to produce knowledge and capabilities – to bring about learning. Teaching, research and other activities should support this purpose.

Systems Perspective

Understanding systems is a key facet of quality thinking. However, it is necessary to move away from the idea of a university as a ‘productive machine’, as this is alien to how many academics see their organisation.

A system can be seen as a “network of interdependent elements and the relationships between them working together to try to achieve the purpose of the system” (Houston 2008, p68). The whole is greater than the sum of its parts.


Ehlers (2009) believes that there is an emerging culture of quality in HE which is based on “shared values, necessary competencies and new professionalism”. (p343) He contrasts the work of Michael Porter with that of Henry Mintzberg as exemplars of the old and new approaches.

In the past, instruments and tools associated with quality management have been introduced without due regard for the cultural situation.

Top Down and Bottom Up approaches

Ehlers (2009) states that a combination is necessary for a successful development of a quality culture.

Underlying Assumptions

There will be underlying assumptions about what constitutes good teaching and learning. These will vary according to the culture of each University, though there will be commonalities.

Quality is Subjective

There is no single, universally accepted definition of ‘quality’ even in the HE context.

Doherty (2008) reminds us that like beauty, quality is in the ‘eye of the beholder’. It is a subjective matter of personal judgement.

Some would define quality as ‘excellence’, but this just moves the debate one step, as there is no accepted definition of ‘excellence’, which is an equally subjective matter.

Can Quality be inspected?

Self assessment takes place in all successful and well-managed organisations. The quantity and rigour of inspections is no guarantor of quality output. It is necessary for action to take place to rectify defects and/or identify opportunities for improvement.

Activities defined by purpose not vice versa

It is important to acknowledge that the activities carried out should be determined by the purpose of the university. The activities should not define the purpose. Therefore the university should not be defined by activities such as teaching and research. Both of these activities should support the purpose of the university in bringing about knowledge.

Fitness to/for purpose

In order to judge quality or excellence by this means, both the purpose and the criteria for deciding whether that purpose has been met have to be determined. Both of these are more difficult in a service context than manufacturing. Doherty (2008) states that in the ‘fitness to/for purpose’ model, the purpose and the criteria are generally set within the organisation.

It is necessary to decide what is being measured. Some HE systems only consider perceptions of the quality of academic components. Others include aspects of the whole student experience, such as non-academic aspects of student life and the reputation of the university. Another consideration is whether all the aspects being considered are of equal importance or whether some weighting should be applied.
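The weighting question can be made concrete with a small sketch. The aspects, the scores out of 10 and the weights below are all invented for illustration; any real scheme would first need agreement on which aspects to measure and on their relative importance.

```python
# Hypothetical weighted aggregation of aspects of the student
# experience into a single overall score. All aspects, scores and
# weights are invented for illustration only.

scores = {
    "teaching": 8.0,
    "facilities": 6.0,
    "student_life": 7.0,
    "reputation": 9.0,
}
# Weights express relative importance and sum to 1.0.
weights = {
    "teaching": 0.5,
    "facilities": 0.2,
    "student_life": 0.2,
    "reputation": 0.1,
}

overall = sum(scores[a] * weights[a] for a in scores)
print(round(overall, 2))  # 7.5
```

With equal weights the same scores would average 7.5 only by coincidence of these numbers; in general, the choice of weights changes the ranking of institutions, which is exactly why weighting is contested.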

Internal Evaluation

It therefore follows that the assessment of fitness for purpose will be determined by the achievement of purpose and standards in the organisation’s own terms. This is essentially the QAA model, where what is being assessed is whether the university is achieving what it sets out to achieve. There is no intention to judge whether that purpose is in itself realistic or desirable or to compare it with the purposes of other institutions.

Fitness of purpose

According to Doherty (2008) in this case the outcomes, purpose etc have been benchmarked against some standards which have been agreed externally.

External criteria/standards

Although there are plenty of these – set either within education, such as the QAA or Matrix standards, or applying more widely, such as ISO9000 – it is not as easy to compare, for example, a degree in the ‘same’ subject awarded by two universities as it is to compare, say, two brands of washing powder or car parts from two manufacturers.

Criteria may measure efficiency, effectiveness or economy – for example, cost per student, staff/student ratios or retention rates. Other measures, such as student satisfaction, are more difficult to compare between institutions. An overall measurement of added value has long been sought, but this is likely to be unattainable since there are too many variables – not least the diversity of the people involved.

A model developed outside education may have more credibility with external stakeholders, but it may not take account of the unique aspects of HE (Houston 2008). External views may marginalise the views of those within the university.

Represent minimum threshold

Standards such as ISO9000 represent the minimum standard necessary. They consider improvement only insofar as it is needed to achieve the standard against which they are audited.

Comparing within vs. comparing between

Some measures can be used, with caution, to compare one university with another – for example, staff/student ratios or retention rates. Comparing student satisfaction, or any other measure which depends mostly on individual perception, is less useful. However, these measures can be used as an indication of quality improvement; that is, they can be used to compare the performance of a particular university over time – comparing added value within an institution.

Customer expectations

Performance or performance vs. expectations

One way of assessing quality in a service context is to devise a “measure of how well the service level delivered matches the customer’s expectations” (Lewis and Booms, 1983, p100, cited in Brochado 2009). This can be referred to as the ‘gaps model’, and one example of a measurement system is SERVQUAL.

The customer expectations act as reference points for their perceptions of actual service quality experienced.

However, other measures of quality do not consider this aspect; they measure only perceived quality, without comparing it to the ‘reference point’ of expectations. One example is the SERVPERF system.
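The contrast between the two approaches can be sketched as follows. The dimension names come from the SERVQUAL literature, but the 1-to-7 ratings are invented for illustration; this is not the actual scoring procedure of either instrument, only the core idea.

```python
# Illustrative contrast between the gaps model (SERVQUAL) and a
# perceptions-only measure (SERVPERF), using invented 1-7 ratings.
# SERVQUAL scores each dimension as perception minus expectation;
# SERVPERF uses the perception scores alone.

expectations = {"reliability": 6.5, "responsiveness": 6.0, "empathy": 5.5}
perceptions  = {"reliability": 5.5, "responsiveness": 6.0, "empathy": 6.0}

# SERVQUAL-style gap: negative means the service fell short of
# what the customer expected.
servqual_gaps = {d: perceptions[d] - expectations[d] for d in expectations}

# SERVPERF-style score: perceived quality only, no reference point.
servperf = dict(perceptions)

print(servqual_gaps)  # {'reliability': -1.0, 'responsiveness': 0.0, 'empathy': 0.5}
print(servperf)
```

Note how the two instruments can disagree: here ‘empathy’ has a lower raw perception score than ‘reliability’ expectations would suggest matters, yet it shows a positive gap because expectations were modest – which is the substance of the expectations-vs-performance debate above.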


Refers to past

One of the drawbacks of many quality evaluation systems is that they are always backward looking. They are often out of date by the time they are published, and they cannot say what may happen in the future.

Act of reviewing causes improvement

In some cases, the act of reviewing a process will bring about improvement, as outdated ideas and practices are challenged, the views of a wide group of people are sought and best practice is identified and championed.

Ensuring value for customers

Even in HE, especially in these days of tuition fees, customers or stakeholders expect that there will be some mechanism in place to ensure the value of what they are paying for – whether that is at the level of the individual student (or parent!) or government or industrial/business funders.


Brochado, Ana, “Comparing alternative instruments to measure service quality in higher education”, Quality Assurance in Education, 2009, Vol 17 Number 4, pp 174-190

Doherty, Geoffrey D, “On quality in education”, Quality Assurance in Education, 2008, Vol 16 Number 3, pp 255-265

Ehlers, Ulf Daniel, “Understanding quality culture”, Quality Assurance in Education, 2009, Vol 17 Number 4, pp 343-363

Houston, Don, “Rethinking quality and improvement in higher education”, Quality Assurance in Education, 2008, Vol 16 Number 1, pp 61-79