Impact Analysis: Can the Guidance Community Learn Anything about Impact Research from Other Disciplines?

This paper briefing introduces the concepts of impact analysis and evidence-based practice (EBP). Drawing on two empirical examples, it compares the use of randomised controlled trials to measure the impact of interventions in the medical and guidance professions, highlighting the benefits and limitations of the approach.

Impact Analysis

Impact analysis is an evaluative process designed to provide scientifically credible information to legitimise the existence of a service, or the use of an intervention, that is intended to make a difference or induce benefit[1]. In essence, impact analysis is a method of measuring outcomes in order to address the question ‘Are we making a difference?' Various outcomes can be measured, in both the medical and guidance fields; not all can be addressed in this paper, but they could include:

providing value for money;
achieving organisational goals;
achieving government agendas;
providing a meaningful service which is of value and use to recipients;
benefiting an individual client/patient on a personal level.

This paper focuses on the impact analysis of specific interventions through a comparison between a medical intervention (diagnosis, treatment, prognosis, therapy and medication) and a guidance intervention (computer-assisted guidance). It considers the use of experimental and quasi-experimental measurements and concentrates on the impact of such interventions at the level of the individual client/patient.

Evidence-based Practice

Evidence-based Practice (EBP) has its origins in the medical field and emerged in the early 1990s. It has subsequently been adopted by many other disciplines throughout the UK, including the guidance community. Sackett et al. (1997: 71) defined evidence-based medicine as

‘The conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients, based on skills which allow the doctor to evaluate both personal experience and external evidence in a systematic and objective manner’.[2]

EBP is based on the premise that practice should rest on knowledge derived from scientific endeavour. By adopting an EBP approach, practitioners should be able to base their decisions in relation to a client/patient on readily available empirical research, thus providing the best possible treatment/service to that individual[3].

EBP is a cyclical process, the first ‘loop’ of which involves the identification of existing research evidence, an assessment of its relevance, validity and reliability and the application of appropriate research evidence to practice in order to inform decision making. The second loop of the cycle involves empirical evaluation and reflection on current practice, including the dissemination of findings so that they can inform the work of other practitioners[4].

Advocates of EBP maintain that an EBP approach to professional decision-making results in best practice: it empowers professionals and increases their confidence in their ability to ‘do a good job’.

‘Medicine is effective because of the application of biomedical science to the understanding of disease’. [5]

However, opponents claim that EBP can limit a practitioner’s autonomy and that it is over-simplistic, providing evidence only for questions that can be measured, controlled and analysed using quantitative methods[6]. It has been argued that the approach does not take into account practitioners’ knowledge and expertise: clinical decisions are based not solely on empirical research but also on the practitioner’s judgement, which is grounded in experience.

Impact analysis can, therefore, fulfil a significant role in informed professional decision-making. However, it is imperative that the evidence base used to inform practice is relevant, useful and valid, and is based on sound methodology[7]. The remainder of this paper considers how far the lessons learned in the field of medicine in relation to EBP can be applied to the field of guidance.

Experimental/Quasi-experimental Methods

Experimental and quasi-experimental designs are one approach to assessing impact. Experimental methods and evidence based on randomised controlled trials (RCTs) enjoy ‘near hegemony’[8] in the health and medical field and are regarded as the ‘gold standard’[9].

‘The classical and still standard methodological paradigm for outcome evaluation is experimental design and its various quasi-experimental approximations. The superiority of experimental methods for investigating the causal effects of deliberate intervention is widely acknowledged’. [10]

‘They [RCTs] are the standard and accepted approach for evaluating interventions. This design provides the most scientifically rigorous methodology and avoids the bias which limit the usefulness of alternative non-randomised designs’ [11]

A randomised controlled trial involves the random selection and assignment of participants to a ‘treatment’ group and a ‘comparison’ or ‘control’ group. Typically, the treatment group receives a new intervention, while the control group receives an existing intervention or a placebo. The purpose is to allow the investigator to evaluate the impact of the new intervention relative to the existing intervention or to no intervention at all[12]. Neither the investigator nor the participant knows in advance which intervention the participant will receive.

Professionals and academics in the medical field have identified several limitations of randomised controlled trials, which overlap with the challenges faced by the guidance community when using these methods to assess and analyse impact. These limitations include:

the practicalities of implementing random assignment;
controlling external variables after assignment that could potentially influence the trial;[13]
researcher bias - variation in processes which occur after assignment, including:
- poor programme implementation;
- augmentation of the control group with non-programme services;
- poor retention of participants in programme and control conditions;
- receipt of incomplete or inconsistent programme services by participants; and
- attrition or incomplete follow-up measurement;[14]
ethical and legal restraints resulting from withholding services from otherwise eligible people;
empirically measuring the impact of questions and phenomena that cannot be controlled, measured and counted - a participant’s life, history and feelings cannot easily be translated into biomedical variables and statistics;[15]
statistical prediction of the likely effect of an intervention is usually based on the ‘average’ outcome aggregated across all participants in the trial - is it possible to generalise these findings to the wider population?[16]
small samples yield limited information, which cannot necessarily explain why certain effects were or were not found;
decisions and methods of intervention are based on much more than just the results of controlled experiments - professional knowledge consists of interpretive action and interaction, factors that involve communication, opinions and experience;[17]
experimental methods produce evidence of cause-and-effect relationships, but they take no account of the process or the context within which an intervention occurred and cannot explain why it occurred[18].
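The ‘average outcome’ limitation can be made concrete with a few invented numbers (these are purely illustrative, not data from any study cited here): the same mean gain can describe very different individual experiences, which is precisely why generalising from an aggregate figure is hazardous.

```python
# Invented gain scores (post-test minus pre-test) for ten participants;
# the numbers exist only to illustrate the 'average outcome' limitation.
gains = [8, 7, 9, 6, 8, -5, -4, -6, 1, 0]

average = sum(gains) / len(gains)          # a modest positive mean effect
improved = sum(1 for g in gains if g > 0)  # participants who gained
worsened = sum(1 for g in gains if g < 0)  # participants who lost ground

print(f"average gain: {average:.1f}")                  # average gain: 2.4
print(f"improved: {improved}, worsened: {worsened}")   # improved: 6, worsened: 3
```

A reported ‘average gain of 2.4’ here conceals the fact that three of the ten participants were worse off after the intervention.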

RCTs are used to measure impact in both the medical and guidance professions. Although the professions face some similar challenges, it could be argued that the use of these methods is, on balance, more applicable to medicine. The following examples show how RCTs have been used in the respective fields; the benefits and limitations of their use are then discussed.
A Guidance Randomised Controlled Trial:

Effects of DISCOVER on the Career Maturity of Middle School Students:[19]

This study evaluated the effects of DISCOVER, a computer-assisted career guidance system, on the career maturity of 38 students enrolled in a rural middle school in the United States. Students randomly assigned to the treatment group worked with DISCOVER for approximately 1 hour a day over a 2-week period, whereas students in the control group did not have access to the DISCOVER programme; they were instead taught a unit on oral and written business communication skills in the regular classroom and did not have access to the computer lab. Career maturity was measured by Screening Form A-2 of the Career Maturity Inventory’s Attitude Scale (CMI-AS; Crites, 1978), completed before and after the intervention. This scale includes 50 true-false items representing a variety of attitudes toward the career decision-making process. Students in the treatment group were asked not to discuss their experiences with DISCOVER with other students, to avoid an exchange of information about the programme. Results indicated significant gains in career maturity among students in the treatment group (p<.05).
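A significance claim of this kind rests on comparing treatment and control gain scores. As an illustration only - the figures below are invented, not the DISCOVER study’s data, and the original authors may well have used a different test - a simple permutation test shows one way such a comparison can be made:

```python
import random
from statistics import mean

# Invented career-maturity gain scores (post minus pre); these are NOT
# the data from the DISCOVER study, which reported only that p < .05.
treatment_gains = [6, 5, 7, 4, 6, 5, 8, 3, 5, 6]
control_gains = [1, 0, 2, -1, 1, 0, 2, 1, 0, 1]

def permutation_p(a, b, trials=10_000, seed=0):
    """One-sided permutation test: how often does randomly relabelling
    the pooled scores give a group difference at least as large as the
    observed one?"""
    rng = random.Random(seed)
    observed = mean(a) - mean(b)
    pooled = a + b
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        diff = mean(pooled[:len(a)]) - mean(pooled[len(a):])
        if diff >= observed:
            hits += 1
    return hits / trials

p = permutation_p(treatment_gains, control_gains)
print(f"p = {p:.4f}")  # well below .05 for these invented figures
```

The permutation approach is attractive for small guidance samples because it makes no distributional assumptions; but, as noted above, even a convincing p-value says nothing about why the effect occurred.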

A Medical Randomised Controlled Trial:

Treatment of active ankylosing spondylitis with infliximab: a randomised controlled multi-centre trial. [20]

The aim of this study was to assess the effectiveness of infliximab, an antibody to tumour necrosis factor, in the treatment of patients with ankylosing spondylitis (a chronic inflammatory rheumatic disease), a disease with very few treatment options. In this 12-week placebo-controlled multi-centre study, 35 patients with active ankylosing spondylitis were randomly assigned to intravenous infliximab and 35 to placebo. The primary outcome was a regression of disease activity of at least 50% at week 12; significantly more patients in the infliximab group achieved this than in the placebo group (9%). Function and quality of life also improved significantly on infliximab but not on placebo. Treatment with infliximab was generally well tolerated, but three patients had to stop treatment because of side effects. To assess response, validated clinical criteria from the ankylosing spondylitis working group were used, including disease activity, functional indices, metrology and quality of life.

Benefits and Limitations

The Participants

In clinical trials it is possible to ensure that the treatment and control groups are appropriately matched according to the physical condition from which the participants are suffering. In the majority of clinical trials, the participants are patients diagnosed with the same physical condition, at the same stage of development and with the same prognosis. The physical condition is the only characteristic of concern; individual personalities and characteristics are irrelevant. The criteria for inclusion ensure that generalisations can be made about the effectiveness of a drug for patients with a specified condition at a particular stage of development.

In a guidance setting, participants can be selected according to specific characteristics including age, gender, socio-economic status and employment status. However, it is difficult to determine and match individual traits and behaviours. Traits such as motivation, capability, capacity and level of ability cannot be controlled for, but will have implications for the effectiveness of an intervention and significantly affect the results. For example, in the study above it was possible to ensure that participants were in the same year groups and that the sample was representative in terms of ethnicity and gender. However, the study could not take into account the students’ levels of motivation and other personal characteristics, making generalisation more problematic.

The Intervention

In terms of medical interventions, when the intervention is the administration of a certain treatment or drug, no other intervention between pre-test and post-test (other than an act of God) will affect the patient’s condition. The patient in the treatment group will receive a new drug, whereas the patient in the control group will receive either an existing drug or a placebo; patient care will otherwise be the same.

In terms of guidance interventions, although a specific intervention such as a computer-assisted guidance programme may be undertaken by the participants in the treatment group and not by the control group, confounding variables are harder to control:

Contamination: there may be opportunities for the treatment group to pass on their knowledge to the control group. In the above example, students in the treatment group were asked not to disclose their experiences of the DISCOVER programme, but this is not a reliable control against contamination of results, especially when the participants are children.
Chance encounters: after the treatment, a participant may, on returning home, encounter a friend or stranger who is aware of an employment/business opportunity and passes on information, resulting in the participant finding employment in a way that cannot be attributed to the intervention.
Simultaneous interventions: for example, the advice of friends and family.


The Outcome

The outcome of a clinical trial is whether or not the new treatment/drug benefits the patient in the treatment group, in that the patient’s physical condition improves. There is something tangible to measure and compare with the control group, such as the regression of ankylosing spondylitis above. A physical effect is significantly easier to measure and record statistically.

In a guidance setting, there is no immediate, single, tangible outcome that can be measured effectively with statistics. A guidance intervention may have various outcomes, whether increased motivation, increased self-awareness, increased employability, increased knowledge or, in fact, securing employment. It may be possible to measure such outcomes qualitatively but not quantitatively.


The evidence suggests that although both professions face many of the same challenges when using scientific methods to analyse the impact of interventions, the nature of guidance-related research lends itself to less scientific approaches, which capture the qualitative as well as the quantitative difference an intervention can make to individual clients. Although lessons can be learned from the use of RCTs to measure impact in the medical profession, attempting to adopt similar experimental/quasi-experimental methods for guidance research is unlikely to yield valid and reliable results.

However, the methods adopted to measure impact at an organisational or national level may be more comparable. Services are often assessed on the basis of the following performance indicators:

Customer satisfaction
Access to services
Waiting times for appointments and follow-up appointments

In both professions, data can be collected using similar methods including:

Customer feedback
Management information

These quantitative measures are an indication of organisational achievement against prescribed targets, some but not all of which will be impact measures.

In my research into the medical profession, I have come across several documents which may be of use in a future report on such a topic; these may be found under the headings of the NHS Plan and the NHS Performance Assessment Framework.

Guidelines and Regulation of Research

Clinical trials in the UK are subject to research governance[21] and numerous guidelines. A number of changes have taken place in recent years, designed to ensure that clinical research is of the highest achievable scientific and ethical standard. Several of these changes relate specifically to research connected to drug development and have been introduced at the international level, including the International Conference on Harmonisation Good Clinical Practice Guideline (ICH GCP)[22]. Actions taken to improve the conduct of clinical research in the UK can be summarised as follows:

The introduction of the Research Governance Framework for Health and Social Care has set in place a comprehensive set of principles for the organisation, management and corporate governance of research within healthcare.
The NHS Research and Development Forum has been established as the body responsible for the dissemination of good practice in research management in the health service[23].
The Medical Research Council (MRC)[24] has been associated with randomised controlled trials for over 50 years. It produces guidelines such as the MRC guidelines for good clinical practice in clinical trials, and ensures that those funded to conduct research on its behalf involving human participants agree to adhere to guidelines that safeguard participants and ensure that the data gathered are of high quality. The MRC is also involved in joint projects with the Department of Health to address issues arising from the implementation of the EU Clinical Trials Directive (Directive 2001/20/EC). This directive aims to protect trial participants and to simplify and harmonise trials across Europe. The UK’s Medicines and Healthcare products Regulatory Agency (MHRA) consulted widely on the draft regulations and legislation to give effect to the Directive. The regulations are the Medicines for Human Use (Clinical Trials) Regulations 2003 (MLX 287).
A model Clinical Trial Agreement[25] has also been developed for use in connection with contract clinical trials sponsored by pharmaceutical companies and carried out by NHS Trusts in England. There is a requirement under the NHS Research Governance Framework for Health and Social Care (RGF) for pharmaceutical companies to enter into a contractual agreement with NHS Trusts when clinical trials involve NHS patients.

In a field where life is at stake, such as the medical profession, clear and comprehensive guidelines governing the conduct of clinical trials are imperative.

Research in the guidance sector is not governed by statutory policies and procedures in the same way as the medical profession. This is perhaps unsurprising, as research into guidance policy and practice is highly unlikely to put life at risk. However, as career education and guidance becomes increasingly diverse and takes account of wider personal and social issues, failure to conduct research in a safe and ethical manner could have serious implications for the participants, the interventions they receive and the researchers.

In 1993, the British Psychological Society (BPS) published its Ethical Principles for Conducting Research with Human Participants, which state that:

‘…investigations must consider the ethical implications and psychological consequences for the participants in their research…The investigation should be considered from the standpoint of all participants; foreseeable threats to their psychological well-being, health, values or dignity should be eliminated.’[26]

Other considerations include informed consent, privacy, harm, exploitation, confidentiality, coercion and consequences for the future[27].

In January 1999, ESOMAR produced guidelines[28] for interviewing children and young people that recommend that the welfare of participants should be the overriding consideration. The rights of the child must be safeguarded and researchers must be protected against possible allegations of misconduct. At the very least, researchers working with children and young people under the age of 18 and vulnerable adults should apply for Criminal Records Bureau Clearance.

Although it is not obligatory for social researchers to comply with these guidelines, adherence should be encouraged to protect both the researcher and the participants.


Abrams, H. (Feb, 2001) Outcome measures: In health care today, you can’t afford not to do them. Hearing Journal

Alderson, P. Roberts, I. (Feb 5, 2000) Should journals publish systematic reviews that find no evidence to guide practice? Examples from injury research. British Medical Journal. Vol.320, pp. 376 - 377.

Barber, J.A. Thompson, S.G. (Oct 31, 1998) Analysis and interpretation of cost data in randomised controlled trials: review of published studies. British Medical Journal. Vol. 317, pp. 1195 - 1200.

Barker, J. Gilbert, D. (Feb 19, 2000) Evidence produced in evidence-based medicine needs to be relevant. (Letter to the Editor). British Medical Journal. Vol. 320, pp. 515.

Barton, S. (Jul 29, 2000) Which clinical studies provide the best evidence? (Editorial). British Medical Journal. Vol.321, pp.255 - 256.

Barton, S. (Mar 3, 2001) Using clinical evidence: having the evidence in your hand is just a start – but a good one. (Editorial). British Medical Journal. Vol. 322, pp. 503 - 504.

Braun, J. et al. (2002) Treatment of Active Ankylosing Spondylitis With Infliximab: A Randomised Controlled Multi-centre Trial. THE LANCET. Vol. 359, pp. 1187-1193.

Chantler, C. (2002) The second greatest benefit to mankind? THE LANCET, Vol. 360, pp. 1870-1877.

Culpepper, L. Gilbert, T.T. (1999) Evidence and ethics. THE LANCET. Vol. 353, pp. 829-31.

DeMets, D.L. Pocock, S.J. Julian, D.G. (1999) The agonising negative trend in monitoring of clinical trials. THE LANCET. Vol 354, pp. 1983-88

Falshaw, M. Carter, Y.H. Gray, R.W. (Sept 2, 2000) Evidence should be accessible as well as relevant. (Letter to the Editor). British Medical Journal. Vol. 321, p.567.

Gliner, J.A. Morgan, G.A. Harmon, R.J. (Apr 2003) Pretest-posttest comparison group designs: analysis and interpretation. (Clinicians’ Guide to Research Methods and Statistics). Journal of the American Academy of Child and Adolescent Psychiatry. Vol. 42:4, p. 500.

Glanville, J. Haines, M. Auston, I. (Jul 18, 1998) Finding information on clinical effectiveness. British Medical Journal. Vol. 317, pp. 200 - 203.

Haynes, B. Haines, A. (Jul 25, 1998) Barriers and bridges to evidence based clinical practice. (Getting Research Findings into Practice, part 4). British Medical Journal. Vol. 317, pp. 273 - 276.

Irvine, D. (Apr 3, 1999) The performance of doctors: the new professionalism. THE LANCET. Vol.353, pp.1174-1177.

Lipsey, M.W. Cordray, D.S. (2000) Evaluation methods for social intervention. Annual Review of Psychology. Vol. 51, pp. 345-375.

Lock, K. (May 20, 2000) Health impact assessment. British Medical Journal. Vol. 320, pp. 1395 - 1398.

Luzzo, D.A., Pierce, G. (1996) Effects of DISCOVER on the Career Maturity of Middle School Students. Career Development Quarterly. Vol.45(2), pp.170-172.

Malterud, K. (Aug 4, 2001) The art and science of clinical knowledge: evidence beyond measures and numbers. THE LANCET. Vol. 358, pp. 397-400.

Mant, D. (Feb 27, 1999) Can randomised trials inform clinical decisions about individual patients? British Medical Journal. Vol. 353, pp. 743-746.

March, J.S. Curry, J.F. (Feb, 1998) Predicting the outcome of treatment. Journal of Abnormal Child Psychology. Vol. 26 (1), pp. 39-51

Mariotto, A. Lam, A.M. Bleck, T.P. (Jul 22, 2000) Alternatives to evidence based medicine. British Medical Journal. Vol. 321, p.239

McColl, A. Smith, H. White, P. Field, J. (Jan 31, 1998) General practitioners’ perceptions of the route to evidence based medicine: a questionnaire survey. British Medical Journal. Vol. 316, pp. 361 - 365.

Medical Research Council (MRC) (Apr 2000) A Framework for Development and Evaluation of RCTs For Complex Interventions to Improve Health: A discussion document.

Medical Research Council (MRC) (Nov 2002) Cluster randomised trials: Methodological and ethical considerations: MRC Clinical trials series.

Moher, D. et al (Aug 22, 1998) Does quality of reports of randomised trials affect estimates of intervention efficacy reported in meta-analyses? THE LANCET. Vol. 352, pp. 609-613.

Newburn, T. (2001) What do we mean by evaluation? Children & Society, Vol. 15, pp. 5-13.

Paton, C.R. (Jan 16, 1999) Evidence-Based Medicine. British Medical Journal. Vol. 318, pp. 201.

Pogue, J. Yusuf, S. (Jan 3, 1998) Overcoming the limitations of current meta-analysis of randomised controlled trials. THE LANCET. Vol. 351, pp. 47-52.

Rosser, W.W. (Feb 20, 1999) Application of evidence from randomised controlled trials to general practice. THE LANCET. Vol. 353, pp. 661-664.

Sheldon, T.A. Guyatt, G.H. Haines, A. (Jul 11, 1998) When to act on the evidence. (Getting Research Findings Into Practice, part 2). British Medical Journal. Vol. 317, pp. 139 - 142.

Smith, G.D. Ebrahim, S. Frankel, S. (Jan 27, 2001) How policy informs the evidence: “evidence based” thinking can lead to debased policy making. (Editorial). British Medical Journal. Vol. 322, pp. 184 - 185.

Sniderman, A.D. (Jul 24, 1999) Clinical trials, consensus conferences, and clinical practice. THE LANCET. Vol. 354, pp. 327-330.

Trinder, L. Reynolds, S. (2000) Evidence-Based Practice: A Critical Appraisal. Blackwell Science Ltd. Chapters 1-2.

Van Weel, C. Knottnerus, J.A. (1999) Evidence-based interventions and comprehensive treatment. THE LANCET. Vol. 353, pp. 916-18.

URL Links:

Medical Research Council electronic publications/ information.

National Library of Medical Electronic Publications.

Database of Controlled Trials.

Find articles search engine – articles in BMJ, HSJ.

British Medical Journal

Electronic publications of articles appearing in the LANCET journal

Health service journal

National Institute for Clinical Excellence

Department of Health website

Centre for Guidance Studies December 2003
[1] See Lipsey, M.W., Cordray, D.S. (2000) Evaluation Methods for Social Intervention, Annual Review of Psychology. Vol. 51, pp. 345-375.
[2] Trinder, L., Reynolds, S., (2000) Evidence-Based Practice: A Critical Appraisal, Blackwell Science Ltd. p.19.
[3] Ibid. pp. 18-19.
[4] Ibid. pp. 22-23.
[5] Chantler, C. (2002) The second greatest benefit to mankind? THE LANCET, Vol. 360, pp. 1870-1877.

[6] Malterud, K., (Aug 2001) The Art and Science of Clinical Knowledge: Evidence Beyond Measures and Numbers. Qualitative Research Series, LANCET, Vol 358 p.397
[7] See Barker, J., Gilbert, D., (Feb 19, 2000) Evidence Produced in Evidence Based Medicine Needs to be Relevant. (Letter to the Editor), British Medical Journal. Vol. 320, p. 515; Alderson, P., Roberts, I. (Feb 5, 2000) Should Journals Publish Systematic Reviews that Find no Evidence to Guide Practice?, British Medical Journal. Vol. 320, pp. 376 - 377.
[8] Newburn, T., (2001) What do We Mean By Evaluation? Children & Society, Vol 15 pp. 5-13
[9] Barton, S., (July, 29, 2000) Which Clinical Studies Provide the Best Evidence? (Editorial) British Medical Journal. Vol. 321, pp.255 - 256
[10] Lipsey, M.W., Cordray, D.S. op.cit.
[11] Barber, J.A., Thompson, S.G. (Oct 31, 1998) Analysis and Interpretation of Cost Data in Randomised Controlled Trials: Review of Published Studies. British Medical Journal. Vol. 317, pp. 1195 – 1200.
[12] Gliner, J.A., Morgan, G.A., Harmon, R.J. (April 2003) Pretest-posttest Comparison Group Designs: Analysis and Interpretation. (Clinicians’ Guide to Research Methods and Statistics). Journal of the American Academy of Child and Adolescent Psychiatry. Vol. 42:4, p. 500.
[13] The above three limitations are recognised by Lipsey, M.W. and Cordray, D.S. Op.cit.
[14] Ibid
[15] Malterud, Op.Cit.
[16] Mant, D., (Feb 27, 1999) Can Randomised Trials Inform Clinical Decisions About Individual Patients? Evidence and Primary Care, THE LANCET, Vol 353, pp. 743-746
[17] Irvine, D. (Apr 3, 1999) The performance of doctors: the new professionalism. THE LANCET. Vol.353, pp.1174-1177.

[19] Luzzo, D.A., Pierce, G. (1996) Effects of DISCOVER on the Career Maturity of Middle School Students. Career Development Quarterly. Vol.45(2), pp.170-172.

[20] Braun, J. et al., (2002) Treatment of Active Ankylosing Spondylitis With Infliximab: A Randomised Controlled Multi-centre Trial. THE LANCET. Vol. 359, pp. 1187-1193.
[21] See the NHS Research Governance Framework for Health and Social Care at
[22] See Guidance for R&D Managers in NHS Trusts and Clinical Research Departments in the Pharmaceutical Industry at
[24] The information below is taken from the following website:
[25] See
[26] British Psychological Society (1993) Ethical Principles for Conducting Research with Human Participants. The Psychologist Vol. 6 pp 33-35.
[27] Hammersley, M., & Atkinson, P., (1995), Ethnography. London: Routledge.
[28] ESOMAR The World Association of Research Professionals (1999) Guideline on Interviewing Children and Young People. Published at