Quality and Impact

Impact is such a high-profile concept in our times that, in any consideration of it, it is helpful to pause and consider what we really mean. The term is used interchangeably to mean quite distinct things, for example:

1. How do we know that our provision is doing what it sets out to do?
2. How do we know our provision adds value?
3. How do we convince our stakeholders that our provision is worthwhile?

These are actually three very different questions. Let's look at each in turn:

1. How do we know that our provision is doing what it sets out to do? Or “Do we fulfil our purpose?”

To answer this question, we first need to know precisely how we see the purpose of our activities. This dovetails with strategic management in that the formulation and communication of mission statements and strategic objectives affect the available clarity and consensus about purpose.

It is also central to assuring the quality of our provision, something we are required to do by a range of organisational and professional drivers.

2. How do we know our provision adds value?

A further layer of complexity is involved in seeking to isolate the impact of our intervention, and it is here that distance travelled measures become significant.

Impact measurement is aimed at establishing whether or not the educational experience or service is making any difference to what users do and how they do it.

Impact is about effecting a change and is consequently difficult to measure, as we don’t know what else is making an impact. Patton and McMahon’s Systems Theory Framework shows the range of factors within the 'organisational system' that are recursively influencing one another. The organisational system as represented here is the context in which your career development services take place, and is relevant beyond the one-to-one setting shown.

[Image: the therapeutic system, from Patton and McMahon's Systems Theory Framework]

In the light of this complexity, the best that can be achieved is to find ‘strong surrogates’ for impact that provide a close approximation (Markless and Streatfield, 2006).

In a classic study of the evaluation of training and development, Kirkpatrick identified four different levels of evaluation.

[Image: Kirkpatrick's four levels of evaluation]

This has been used by Vitae (2012) to develop a researcher development impact framework, which distinguishes between the input activity, the outputs and the outcomes, and maps these onto Kirkpatrick’s four levels.

[Image: the Vitae impact framework, levels 0-4]

Pause now and consider all the ways that your service evaluates impact and map those on to levels 0-4. Are your evaluation activities covering all levels?

For example:

• Number of drop-in appointments offered = level 0
• Student evaluation completed at end of group session = level 1
• Assessment of accredited career management module = level 2
• Number of students applying for placements = level 3
• Destination data = level 4
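If it helps to make the mapping exercise concrete, here is a minimal sketch in Python. The activity names and level assignments simply restate the examples above, and the function name is our own; substitute your own service's evaluation activities.

```python
# A rough sketch (not an official Vitae or Kirkpatrick tool) of mapping
# evaluation activities to levels 0-4. The activities and level assignments
# simply restate the examples above.

EVALUATION_ACTIVITIES = {
    "Number of drop-in appointments offered": 0,              # input / activity
    "Student evaluation at end of group session": 1,          # reaction
    "Assessment of accredited career management module": 2,   # learning
    "Number of students applying for placements": 3,          # behaviour
    "Destination data": 4,                                     # outcomes
}

def uncovered_levels(activities: dict) -> set:
    """Return the levels (0-4) that no evaluation activity currently covers."""
    return set(range(5)) - set(activities.values())

gaps = uncovered_levels(EVALUATION_ACTIVITIES)
if gaps:
    print(f"No evaluation evidence yet at levels: {sorted(gaps)}")
else:
    print("All levels 0-4 are covered by at least one activity.")
```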

If we are interested in the most complex area, level 4, Vitae breaks that down in more detail as follows:

[Image: researcher development impact, levels 0-4]

So, in career development terms, it is interesting to attempt to map level 4 outcomes in the same way.

Boxes A and B represent changes at the individual level, while C and D show changes beyond the individual. It is worth noting that the movement from A to D will be quite a wiggly line.

[Image: understanding level 4 in career development terms]

So how can we understand what would take us on that wiggly line from A to D? That’s where career development theories can help us.

In 2009, CfBT published a review of the evidence on the impact of career development activities and identified the 10 key facts we ‘know for sure’, based on career development theories.

[Image: the 10 key facts we 'know for sure']

This work highlights the inherent difficulty in demonstrating impact and the importance of involving users and practitioners in developing measures. The authors highlight the interplay of theory, research, policy and practice, and note that qualitative (soft) outcomes are needed to reflect longer-term ‘customer’ journeys (Hughes & Gration, 2009).

Moving on then to our third question:

3. How do we convince our stakeholders that our provision is worthwhile?

This is an entirely different question, and depends on each stakeholder’s definition of worthwhile. Answering it is often a finely balanced two-pronged approach that demonstrates:

a) good custodianship of resources by showing that you have identified your purpose and can prove it is fulfilled, as well as

b) that you understand how a stakeholder is judging you and can provide evidence accordingly.

So, we might at the same time choose to highlight:

  • Our impact in league tables to our University governance stakeholders
  • Our good quality students to employers (defining quality as whatever a particular employer requires in terms of skills, knowledge, aspirations)
  • Our satisfied users to non-user students
  • Our effective one-to-one and group work (defining effective in terms of the career development learning)

Key Performance Indicators

Taking all three questions together, we turn to the identification of the right key performance indicators for your service to consider and document.

Neely gives us a useful overview of terminology here:

• Performance measurement can be defined as the process of quantifying the efficiency and effectiveness of action;
• A performance measure can be defined as a metric used to quantify the efficiency and/or effectiveness of an action;
• A performance measurement system can be defined as the set of metrics used to quantify both the efficiency and effectiveness of actions;
• Effectiveness is the extent to which customer requirements are met;
• Efficiency is a measure of how economically a company’s resources are utilised, provided a certain level of customer satisfaction is achieved.
(Neely et al, 1995)
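As a way of making these definitions concrete, the sketch below is a Python illustration of our own; the class and field names are not Neely's. It represents a performance measure as a metric tagged as quantifying either effectiveness or efficiency, and a performance measurement system as the set of such metrics.

```python
# Illustrative sketch of Neely et al.'s terminology; the class and field
# names are our own, not drawn from the paper.
from dataclasses import dataclass, field
from enum import Enum

class Dimension(Enum):
    EFFECTIVENESS = "effectiveness"  # extent to which customer requirements are met
    EFFICIENCY = "efficiency"        # how economically resources are used

@dataclass
class PerformanceMeasure:
    """A metric used to quantify the efficiency and/or effectiveness of an action."""
    name: str
    dimension: Dimension
    value: float
    unit: str

@dataclass
class PerformanceMeasurementSystem:
    """The set of metrics used to quantify both efficiency and effectiveness."""
    measures: list = field(default_factory=list)

    def covers_both_dimensions(self) -> bool:
        dims = {m.dimension for m in self.measures}
        return dims == {Dimension.EFFECTIVENESS, Dimension.EFFICIENCY}

# Hypothetical careers-service figures, purely for illustration:
system = PerformanceMeasurementSystem([
    PerformanceMeasure("Student satisfaction with guidance appointments",
                       Dimension.EFFECTIVENESS, 87.0, "%"),
    PerformanceMeasure("Cost per one-to-one appointment",
                       Dimension.EFFICIENCY, 42.0, "GBP"),
])
print(system.covers_both_dimensions())  # True: both dimensions are quantified
```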

One measure commonly used is planned versus actual expenditure. However, financial measures are limiting and may at worst be misleading. Financial measures are frequently lagging, and they implicitly assume that the lessons learned from studying past outcomes can be applied to current situations or even to predicting the future:

“...Financial accounting, balance sheets, P&L, cost allocations, etc. are an X-ray of the enterprise's skeleton. But just as many of the diseases we commonly die from -heart disease, cancer, Parkinson's - do not show up on an X-ray, so too a loss of market standing or a failure to innovate do not register in the accountant's figures until the damage is done....“ (Drucker, 1998)
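To make the point concrete, the fragment below (with invented figures) computes a simple planned-versus-actual expenditure variance. It is a classic lagging, purely financial measure and, as Drucker's X-ray analogy suggests, it says nothing about whether the spend produced any career development learning.

```python
# Hypothetical planned-versus-actual expenditure check; the figures are
# invented purely for illustration.
planned_expenditure = 120_000.0   # annual budget, GBP
actual_expenditure = 131_500.0    # spend at year end, GBP

variance = actual_expenditure - planned_expenditure
variance_pct = variance / planned_expenditure * 100

print(f"Variance: {variance:+,.0f} GBP ({variance_pct:+.1f}%)")
# Output: Variance: +11,500 GBP (+9.6%) -- a lagging figure that tells us
# we overspent, but nothing about purpose, value added or student outcomes.
```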

To try to overcome the perceived inadequacies of financial metrics, Kaplan and Norton (1996) developed the Balanced Scorecard. Watch this video to understand the Balanced Scorecard and to distinguish between lagging and leading measures.

To see an example at work, try this video:

Some have criticised the Balanced Scorecard in relation to its practical application: achieving balanced performance in practice is very difficult.

Finding the right non-financial measures and then using these measures in combination with financial measures can be challenging. Meyer (2002) argues that Kaplan and Norton’s Balanced Scorecard provides no guidance about how to combine measures, and identifies seven purposes of performance measures.

[Image: Meyer's seven purposes of performance measures]

Mapping our indicators to these seven purposes helps to develop an impact assessment framework appropriate to assuring quality of provision, adding the value that we have identified in our strategy and demonstrating this to our stakeholders.
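As a final sketch, assuming the four standard Balanced Scorecard perspectives from Kaplan and Norton (financial, customer, internal process, learning and growth) and entirely hypothetical careers-service indicators, one simple way of documenting such a framework is to record, for each indicator, its perspective, whether it is leading or lagging, and which of the three questions above it chiefly answers:

```python
# A minimal sketch of an impact-assessment framework entry. The perspectives
# are Kaplan and Norton's four Balanced Scorecard perspectives; the example
# indicators and the question labels are hypothetical illustrations only.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    perspective: str   # "financial" | "customer" | "internal process" | "learning & growth"
    leading: bool      # True = leading measure, False = lagging measure
    question: str      # "purpose" | "added value" | "stakeholder case"

framework = [
    Indicator("Planned vs actual expenditure", "financial", False, "purpose"),
    Indicator("Student satisfaction with group sessions", "customer", True, "purpose"),
    Indicator("Placement applications per cohort", "internal process", True, "added value"),
    Indicator("Graduate destination data", "customer", False, "stakeholder case"),
    Indicator("Staff completing guidance CPD", "learning & growth", True, "added value"),
]

# Quick balance check: is every perspective represented, and do we have a
# mix of leading and lagging measures?
for perspective in ("financial", "customer", "internal process", "learning & growth"):
    covered = any(i.perspective == perspective for i in framework)
    print(f"{perspective:20s} covered: {covered}")
print("Leading measures:", sum(i.leading for i in framework),
      "| Lagging measures:", sum(not i.leading for i in framework))
```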

References

Drucker, P. F. (1998), Managing in a Time of Great Change, Woburn, MA: Butterworth-Heinemann

Hughes, D. & Gration, G. (2009). Literature review of research on the impact of careers & guidance-related interventions. CfBT. Available at: http://www.cfbt.com/evidenceforeducation/pdf/Literature%20Review.pdf

Kirkpatrick, D.L., & Kirkpatrick, J.D. (1994). Evaluating Training Programs, Berrett-Koehler Publishers

Kaplan, R.S. & Norton, D.P. (1996) The Balanced Scorecard: Translating Strategy into Action. Boston, MA: Harvard Business School Press

Markless, S. & Streatfield, D.R. (2006) Evaluating the impact of your library service: the quest for evidence. London: Facet

Meyer, M. W. (2002), Rethinking Performance Measurement—Beyond the Balanced Scorecard, Cambridge: Cambridge University Press

Neely, A., Adams, C. and Kennerley, M. (2002), The Performance Prism, London: FT Prentice Hall

Neely, A., Gregory, M., Platts, K. (1995) Performance measurement system design: A literature review and research agenda, International Journal of Operations & Production Management, Vol.15, No.4, pp. 80 – 116

Patton, W. & McMahon, M. (1999). Career development and systems theory: A new relationship. Pacific Grove, CA: Brooks/Cole.

Vitae (2012) The Impact Framework 2012: Revisiting the Rugby Team Impact Framework. Cambridge: Careers Research and Advisory Centre (CRAC) Limited

Further reading

AMOSSHE (2011) Value and Impact Toolkit: Assessing the value and impact of services that support students, London: AMOSSHE

Bimrose, J et al (2006) Quality assurance mechanisms for Information, Advice and Guidance: A critical review, Coventry, Institute for Employment Research

Barham, J.D. and Scott, J.H. (2006). Increasing accountability in student affairs through a new comprehensive assessment model. College Student Affairs Journal, 25(2), 209-219.

CHERI, (2008), Counting What is Measured or Measuring What Counts? League tables and their impact on higher education institutions in England, London, HEFCE.

Centre for Higher Education Research and Information (2010). Understanding and Measuring the Value and Impact of Services in Higher Education that Support Students: A Literature Review. London: AMOSSHE.

Centre for Higher Education Research and Information (2011). Value and Impact Toolkit: Assessing the Value & Impact of Services that Support Students. London: AMOSSHE

HESA (2010) Benchmarking to Improve Efficiency: Status Report. Cheltenham: HESA

Hughes, D, Gration, G, (2006), Key performance indicators and Benchmarks in Career Guidance in the United Kingdom, Derby, CeGS.

International Centre for Guidance Studies and The Progression Trust (2013). Higher education outreach to widen participation: toolkits for practitioners. Evaluation. Bristol: HEFCE

Jackson, N. and Lund, H. (eds) (2000) Benchmarking for Higher Education. Buckingham

Jackson, N. (2001) Benchmarking in UK HE: An Overview. Quality Assurance in Education, Vol. 9, pp. 218-235

JISC infoNet. Managing Strategic Activity http://www.jiscinfonet.ac.uk/infokits/managing-strategic-activity (accessed January 2014)

JISC infoNet. Benchmarking http://www.jiscinfonet.ac.uk/infokits/benchmarking (accessed December 2013)

Makela, J.P. and Rooney, G. (2012) Learning Outcomes Assessment Step-by-Step: Enhancing Evidence-Based Practice in Career Services. Broken Arrow, OK: National Career Development Association

Nijjar, A.K. (2009) Stop and Measure the Roses: How University Careers Services Measure their Effectiveness and Success. Manchester: HECSU

Schuh, J.H. and Upcraft, M.L. (2001). Assessment Practice in Student Affairs: An Application Manual (First Edition). San Francisco: Jossey-Bass

Timm, D.M., Barham, J.D., McKinney, K. and Knerr, A.R. (2013) Assessment in Practice: A Companion Guide to the ASK Standards. Washington, DC: American College Personnel Association