
Quality and impact

How do we measure whether a good job is being done and the effect guidance has on individuals? Does quality assurance place too great an emphasis upon policies and procedures and too little on directly assessing and enhancing the impact of services on end-users? This section brings together studies, reports and discussions which explore these questions and relationships.

What is the relationship between quality assurance, performance management and impact analysis?

Discussion on quality assurance, performance management and impact analysis.

Participants in this discussion concentrated on three related questions. Their debate answered the questions posed as well as raising further issues for consideration. Issues discussed are:

Is there too much emphasis upon policies and procedures and too little on enhancing the services for end-users?

Does achieving IIP actually improve business effectiveness?

What relationship, if any, is there between quality assurance and quality assurance systems, and performance management and impact analysis?

Does much quality assurance place too great an emphasis upon policies and procedures and too little on directly assessing and enhancing the impact of services on end-users?

Participants in this section of the discussion argued both for and against the value of quality assurance systems. Some saw them as another layer of bureaucracy which encouraged complacency; others felt standards enhanced their practice and motivated them. A recurring theme was the question of what the measurable and desirable outcomes of guidance are.

Good systems may be necessary for good practice, but they are surely insufficient in themselves. I wonder how thoroughly systems are critiqued. For example, it may be that client feedback is indeed obtained, but are there checks in place to ensure that any such feedback exercises are properly designed, with sufficient rigour to elicit meaningful comments?

Are quality standards necessary but insufficient, or could they be more sinister in encouraging complacency? There may be genuine improvements that are of benefit to end users as a result. It might also be that it just creates another layer of bureaucracy, and that the process of being inspected diverts staff time and resources away from the very tasks that would be most beneficial to users. Ultimately, in measuring what is easy to measure rather than what is most relevant, a hierarchy of services is created that doesn't necessarily represent a useful and accessible service from the perspective of clients. Lots of organisations seek to meet minimum criteria without any reference to the context they are operating in. No one really considers whether the process does improve the service to clients, because the individuals implementing the systems don't generally have any client contact or understanding of the issues that present.

Some years ago I worked for an organisation that had an apparently comprehensive and robust quality assurance system, which included the all-important process of regularly collecting and reporting client feedback on the service delivered. I remember many detailed reports containing information about what clients were saying, supported by statistical charts and tables. However, there was far less emphasis upon detailed action plans to address concerns raised, and very little evidence that anything actually changed and improved as a result of all of that client feedback. It was almost as though the attitude was: "yes, we listen to what our clients say; we collect client feedback; so then, we've got a tick in that little box!" There is a tendency to become complacent because of the mere existence and appearance of a quality framework and a quality standard. The essential added ingredient that is absolutely necessary for ensuring a quality service is a team of well-trained and talented staff who are genuinely committed to delivering a quality service. Sadly, you don't get that from a quality framework.

Just wondering: would services be penalised for experimentation by quality standards? In offering something untried there is a risk of teething problems and perhaps some poor user feedback, particularly if trying to engage with new client groups. Would this type of initiative 'skew' any inspection, offering a further disincentive for creative approaches to guidance work?

Talking to those recently matrix-accredited, among the usual moans, reservations, relief etc. there have been comments about how rigorously they were examined on impact. The tick in the feedback box was not enough. It feels a bit like seeds are being sown that may come to fruition in the next or subsequent cycles.

The policies and procedures provide the system by which the impact on the client can be verified; they can help provide impact analysis material. This is not to deny that there can be impact without policy and procedure. However, from a personal point of view I can say that my practice was enhanced when it was underpinned by policy and procedure. It meant that the organisation within which I worked had to work to those policies too. In my experience of Career Mark, the East Midlands Consortium's quality assurance product for careers education and guidance, Career Mark motivates those using it to provide enhanced services to clients through getting the policies and procedures in place. It supports guidance workers within their organisation and enables the client to have a better service as a result. Who is the guardian of those standards? Clients have the right to know that they are receiving competent guidance from qualified and accredited people and organisations.

All the time I have been in careers work there has been a debate about how to balance what clients say they want and what we as professionals feel they need. Maybe this has been at the root of our avoidance of any real impact analysis - concern that they will still feel they didn't get what they wanted, and that what we thought they needed didn't help!

To assess the impact of practice, first we need to be clear exactly what we are assessing. For this, practitioners need to be explicit about the theoretical framework within which they are delivering guidance. Generally they are not - and organisations seem not to be interested. Only then (when we know about the framework) can we sensibly assess whether the guidance has 'worked' - because we will be clear about the desirable outcomes of that type of intervention. Otherwise, there is a danger that we flounder about in muddle and confusion - assuming we all know what to measure when we are trying to evaluate impact.

Quality standards have to reflect what is accepted as professional practice. They cannot just be customer satisfaction surveys. The LSC is inexorably beginning to shape the debate with its emphasis upon measurable client outcomes (numbers returning to learning, or paid work, or voluntary work etc.) and the linkage to the funding of client/practitioner interventions through the IAG Partnerships. Not much room here for practitioner reflection!… If the LSC reach some negative conclusions on the basis of their criteria, this could have implications for guidance-related services beyond IAGPs. So, I think we do need to be more vocal, and clearer, about what the measurable and desirable outcomes of guidance are.

Does achieving IIP actually improve business effectiveness?

Does achieving IIP actually improve business effectiveness; or could it simply mean that those businesses that had the time, resources and motivation to go for a nationally recognised quality mark were already effective organisations, and that the award itself has added little value? Participants in this section of the discussion argued both for and against the value of IIP. Some saw IIP as "just another hoop to jump through rather than an opportunity to critique organisational practice with a view to improvement"; others pointed to evidence of its benefits.

Anecdotally, there was one employer who, as part of seeking IiP accreditation, did a massive qualifications audit of their staff. They then used the existing qualifications of their employees as evidence of a learning culture, in an exercise that appeared so cynical that the whole endeavour was discredited in the eyes of many of its employees. For example, within the personnel department there was a CIPD-qualified staff member who was actually working in a scale 1 part-time clerical role during a career break whilst raising a family. Yet her qualification was cited as evidence of the professional calibre of the team, even though it was not utilised as part of her work. Arguably the high level of qualifications within the team reflected not a commitment to investing in staff development, but under-employment of skilled individuals at a time when there was a shortage of suitable opportunities in the labour market.

A great deal of the preparation for the initial assessment seemed to focus on coaching staff on what to say if they were interviewed. It was about playing a game and projecting an image. We were literally told that it was important to identify that we had had an induction, and that we knew we had access to training, even when these were blurred and contentious areas where paper policies were in place but everyone recognised they weren't followed in practice. Nevertheless, IiP was presented as important, and crucial to the ongoing success and reputation of the institution (and therefore, by implication, your jobs), so now was not the time to rock the boat by being honest with external visitors about what really happened.

If staff are cynical about the difference between the rhetoric of the framed certificates and the reality they experience, then IiP is perceived to be worthless window dressing. Yet at the same time, in seeking, for example, preferred supplier status, matrix accreditation or IiP is a necessary prerequisite because, whatever limitations it may have, it offers some external measure of organisational ethos. My experience suggests it's just another hoop to jump through rather than an opportunity to critique organisational practice with a view to improvement.

A report by Mark Cox and Rod Spires in June 2002 on behalf of the DfES, 'The wider role and benefits of Investors in People', found that half the companies in their survey working towards the standard anticipated increased quality of goods and services as a result of the award, and 25-30% saw it as a means to increased profitability and/or business growth. Many of those involved reported that the effects of the standard accumulated over time. A report from the British Quality Foundation, 'The impact of Business Excellence on financial performance', found that award winners significantly outperformed average firms: operating income for award winners increased by 91% over the post-implementation period. It concludes: "Business Excellence improves profitability, leads to higher growth and improves efficiency."

What relationship, if any, is there between quality assurance and quality assurance systems...

What relationship, if any, is there between quality assurance and quality assurance systems, and performance management and impact analysis? Participants in this section of the discussion highlighted the need for greater staff involvement in quality assurance processes and the need to embed standards into delivery. 'Quality assuring the whole process and accessing reliable and credible data on the difference we make is essential, but it must be owned by all staff, permeate all that we do and its raison d'etre identified by all as the pursuit of continuously improving learning outcomes for the client.'

There can be files of reports, but staff moaning that "nothing is done with it". Many employees still identify with an approach to quality assurance where activities are imposed but are not embedded in delivery. Reservations about quality assurance arise from the lack of staff involvement in the development, delivery and evaluation of the quality assurance process, rather than from a dismissal of the concept of quality assurance itself. Policies and procedures must be in place to ensure consistency of service, but on their own they will not ensure continuous improvement.

There is no shortage of quality standards, and companies have to make informed choices about what is useful rather than chase after certificates. The self-assessment process which underpins these standards requires the identification of good practice, room for improvement and an action plan. The key issue with any standard is to embed it in delivery so that good practice is maintained. External quality standards provide external validation and take a snapshot at any one time. If used as a genuine developmental exercise, external quality standards can reap rewards in excess of accreditation and a certificate on the wall.

Impact measurement attempts to identify quantifiable client learning outcomes that have resulted from the career planning process. It is concerned with value that has been added and distance that has been travelled. To establish impact, it is necessary to begin with base-line data so that any change can be identified. Added-value 'soft' outcomes such as increased confidence, empowerment and the development of life skills can have reverberations in many other aspects of the client's life, such as social life, impact on family and children, even health and finance.

The quality of the client's experience could depend on which part of the country or which office within a company they happened to visit, or even which member of staff happened to be on duty at the time of the visit. …In the last decade or two, entitlement and accountability have been on the agenda of governments in a much more significant way than previously. This quest for accountability is reflected in Careers and Connexions companies' Business Planning Guidance documentation, as well as in the quality assurance Common Inspection Frameworks produced by Ofsted and Estyn.

Following the Tomlinson 14-19 review's support for guidance, we are now dealing with a favourable policy context: but the detailed case about the form of guidance required still needs to be made. To do this, the guidance community needs to assemble different types of evidence of impact. One key question here is: what is our basis for understanding guidance encounters, as outlined in research studies? How can we demonstrate the basis of our knowledge claims to others? Can we build evidence-based knowledge to help answer complex questions about the nature and outcomes of guidance, and is our idea of what constitutes an appropriate evidence base different from policy makers' conceptualisation of evidence-based practice? There is a double loop here: not only is it a question of whether guidance has an impact, but also whether the research into this question has an impact.
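The point above about base-line data and 'distance travelled' can be made concrete with a small worked example. The following Python sketch is illustrative only: the outcome names, the 1-5 self-rating scale and the sample client records are assumptions, not data from the discussion.

# Illustrative sketch: measuring 'distance travelled' on soft outcomes
# against a baseline, as described above. Outcome names, the 1-5 rating
# scale and the sample records are assumptions for demonstration only.

baseline = {  # self-ratings collected before the guidance intervention
    "client_001": {"confidence": 2, "career_clarity": 1},
    "client_002": {"confidence": 4, "career_clarity": 3},
}

follow_up = {  # the same self-ratings collected at follow-up
    "client_001": {"confidence": 4, "career_clarity": 3},
    "client_002": {"confidence": 4, "career_clarity": 4},
}

def distance_travelled(before, after):
    """Return the per-client change on each soft outcome (follow-up minus baseline)."""
    changes = {}
    for client, start in before.items():
        end = after.get(client)
        if end is None:
            continue  # no follow-up data, so no change can be identified
        changes[client] = {outcome: end[outcome] - start[outcome] for outcome in start}
    return changes

for client, change in distance_travelled(baseline, follow_up).items():
    print(client, change)

Without the baseline ratings, the follow-up figures alone would show only where clients ended up, not how far they had travelled.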

Full Text of the Discussion

Quality assurance, performance management and impact analysis - Team Task Two, second time-bound online discussion

Performance Measurement, Quality Standards and Quality Assurance

Paper by Anne Dean. This paper poses the question 'Does quality assurance place too great an emphasis upon policies and procedures and too little on directly assessing and enhancing the impact of services on end-users?'


Discussion on the relationship between Quality and Impact

"Much of quality assessment is to do with how systems operate with an emphasis on what the organisation does, procedures and paper trials, complaints, appointment procedures and so on. There could be an inbuilt danger that quality assessment tilts too far towards looking at organisational systems and practice at the expense of enquiry into the benefits to service users." This discussion explores the benefits standards bring to clients.

NOT AVAILABLE

Models of evaluation

How do you know when you've done a good job? Are evaluation and impact analysis the same thing? This paper explores how theories of evaluation might help in the design of impact analysis measures for guidance.

Donald Kirkpatrick's 4-stage model of evaluation (Reaction - Learning - Behaviour - Results) is described. The author argues that many practitioners are more comfortable with the idea of evaluation than with impact analysis, and that Kirkpatrick's levels, particularly Level 4 - Results, relate to impact analysis.

There is an interesting example of analysis conducted after an Insight into the Media course which highlights the difficulty of finding suitable ways of measuring impact, particularly outcomes such as participants deciding a media career was not for them.

The problems caused by evaluation are described. These include the observation that data can be used defensively to bolster complacency rather than as a prompt for examining the need for improvement.

Links to subsections:
Original Evaluation

Impact Analysis - Knowing that a good job is being done.
My experience as a trainer and teacher involved the constant use of evaluation as a vital part of the design and implementation of any 'intervention'. I was interested to see what the differences, if any, were between this process and 'Impact Analysis', and whether the theories of evaluation would be helpful in designing impact analysis measures for the broader range of guidance interventions. Secondly, I felt that 'evaluation' is accepted and much practised by careers practitioners when running group work or in careers education generally, and does not seem to invoke the same concerns and anxieties as the term 'Impact Analysis' can do.

Evaluation in its broadest sense could be equated with IA in that it seeks to establish the effect or outcome of an event. The most used model for the evaluation of training - Donald Kirkpatrick's 4-stage model - identifies different levels of effect, as explained below:

Level 1 - Reaction

As the word implies, evaluation at this level measures how those who participate in the event, react to it. This level is often measured with attitude questionnaires (smile sheets) that are passed out after most training sessions. This level measures one thing: the learner's perception (reaction) to the event.

They might be asked how well they liked the trainer/teacher's presentation techniques, how completely the topics were covered, how valuable they perceived each module of the programme, or the relevance of the programme content to their specific needs. They might also be asked how they plan to use their new skills.

Learners are keenly aware of what they need to know to accomplish a task. If the training programme fails to satisfy their needs, a determination should be made as to whether it's the fault of the programme design or delivery.

This level is not indicative of the training's return on investment, as it does not measure what new skills the learners have acquired or whether what they have learned will transfer back to their working environments. This has caused some evaluators to downplay its value. However, the interest, attention and motivation of the participants are critical to the success of any training programme. People learn better when they react positively to the learning environment.

Level 2 - Learning

This can be defined as the extent to which participants change attitudes, improve knowledge, and increase skill as a result of attending the programme. It addresses the question: Did the participants learn anything? The learning evaluation requires post-testing to ascertain what skills were learned during the training. The post-testing is only valid when combined with pre-testing, so that you can differentiate between what they already knew prior to training and what they actually learnt during the training programme.

Measuring the learning that takes place in a training programme is important in order to validate the learning outcomes. Evaluating the learning that has taken place is typically focused on such questions as:

What knowledge was acquired?
What skills were developed or enhanced?
What attitudes were changed?
Learning measurements can be implemented throughout the programme, using a variety of evaluation techniques. Measurements at level 2 might indicate that a programme's instructional methods are effective or ineffective, but they will not show whether the newly acquired skills will be used back in the working environment.

Level 3 - Behaviour

Level 3 evaluations can be performed formally (testing) or informally (observation). They measure the extent to which participants apply what they have learned back in their working environment, i.e. the change in behaviour resulting from the event; sufficient time needs to be allowed for that change to take place before evaluating.

Level 4 - Results

This is defined as the final results that occurred because the participants attended the programme: the ability to apply learned skills to new and unfamiliar situations. It measures training effectiveness: "What impact has the training achieved?" This broad category is concerned with the impact of the programme on the wider community (results). It addresses the key question: is it working and yielding value for the organisation? These impacts can include such items as monetary benefits, efficiency, morale, teamwork etc. Here we expand our thinking beyond the impact on the learners who participated in the training programme and begin to ask what happens to the organisation as a result of the training efforts.

While it is often difficult to isolate the results of the training programme, it is usually possible to link training contributions to organisational improvements. Collecting, organising and analysing level 4 information can be difficult, time-consuming and more costly than the other 3 levels, but the results are often worthwhile when viewed in the full context of the value to the organisation.

As we move from level 1 to level 4, the evaluation process becomes more difficult and time-consuming, although it provides information of increasingly significant value. Perhaps the most frequently used measurement is level 1, because it is the easiest to measure; however, it provides the least valuable data. Measuring results that affect the organisation is more difficult and is conducted less frequently, yet yields the most valuable information… whether or not the organisation is receiving a return on its training investment. Each level should be used to provide a cross-section of data for measuring a training programme.

Using this model, every level can to an extent be considered a measurement of 'impact'. However, the Level 4 evaluation probably relates most closely to my interpretation of 'Impact Analysis'. This model was helpful to me in understanding what to measure and in understanding the difficulties of measurement. However, it also provided me with some ideas about how to go about measuring.

An example of my own experience of using this model with a particular guidance intervention - a 3-day 'Insight into the Media' course - was as follows:

There was an immediate end-of-course evaluation sheet, largely to measure Reaction. This included some self-assessment by the participants of their pre-event and post-event state in terms of awareness, learning etc., thus a small degree of evaluation at level 2 - Learning - was included.
A follow-up questionnaire sent 6 weeks after the event concentrated on what participants felt they had learned (i.e. level 2) from the event in terms of increased awareness, development of skills and acquisition of knowledge. In addition, there was some attempt to evaluate any changes in behaviour (level 3) that had taken place.
Level 3 - Behaviour - was also tracked by observation of the use being made by participants of the other services provided by the careers service in the weeks following the event: use of the information room, quick query slots, CV workshops etc.
After 4 months, three semi-structured focus groups were held to try to establish what impact the event had had on career thinking, on subsequent behaviour and on actions planned, i.e. levels 3 and 4.
Finally, the destinations of the participants were tracked in the Final Destination Survey, 6 months after graduation (1-2 years after the event), i.e. level 4.
This quantity of evaluation was unusual and was undertaken because the event was funded through the Enterprise Initiative and chosen as a particular indicator of 'Impact'. The final measure (FDS) did not show any significant impact (i.e. only one participant was in a 'media' career at that point), though all of the other levels undertaken were very positive. I feel this demonstrates both the difficulty of finding a suitable measure for 'Impact' and the virtual impossibility of isolating the impact of a particular intervention in the long-term outcome. In this instance, the final measure provided no indication of one of the major 'impacts' of the event - namely that many participants decided that a media career wasn't for them! Similarly, the FDS statistics indicated more about the labour market at that point than the preparedness of individuals.
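To show how the instruments above map onto Kirkpatrick's levels, here is a rough sketch of the plan recorded as data, in Python. The field names and the helper function are assumptions for illustration; the instruments, timings and levels are taken from the example described above.

# The 'Insight into the Media' evaluation plan recorded as data, so that
# coverage of each Kirkpatrick level can be checked. Structure and field
# names are illustrative assumptions; content comes from the example above.

evaluation_plan = [
    {"instrument": "end-of-course evaluation sheet", "timing": "immediately after", "levels": [1, 2]},
    {"instrument": "follow-up questionnaire", "timing": "6 weeks after", "levels": [2, 3]},
    {"instrument": "observation of service use", "timing": "weeks following the event", "levels": [3]},
    {"instrument": "semi-structured focus groups", "timing": "4 months after", "levels": [3, 4]},
    {"instrument": "Final Destination Survey", "timing": "6 months after graduation", "levels": [4]},
]

def instruments_for_level(plan, level):
    """List the instruments in the plan that address a given Kirkpatrick level."""
    return [item["instrument"] for item in plan if level in item["levels"]]

for level in (1, 2, 3, 4):
    covered = instruments_for_level(evaluation_plan, level)
    print(f"Level {level}: {', '.join(covered) if covered else 'not covered'}")

Laying the plan out this way makes gaps visible at the design stage: a plan with nothing listed against level 4 would signal that impact, as opposed to reaction or learning, is not being measured at all.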

However, the model has assisted me in the design of evaluation processes for all events and interventions moving them from purely Reaction to incorporate Learning. I routinely undertake more measurement to establish Impact, often through focus groups.

Problems with Evaluation

That it is undertaken routinely
That the instruments used are often generic and not designed for the specific event / intervention
That they are pitched at the level of Reaction and thus provide less useful data
That the data is often unused
That the data often does not provide feedback for the individual practitioner
That the data is used to bolster complacency or in a defensive manner - 'we/I are OK' rather than 'how can we/I improve?'

Models of Evaluation - Bibliography

Killeen, J. (1996), "The learning and economic outcomes of guidance", in Watts, A.G. et al., Rethinking Careers Education and Guidance, Routledge.

Killeen, J. (1996), "Evaluation", in Watts, A.G. et al., Rethinking Careers Education and Guidance, Routledge.

Killeen, J., Kidd, J.M., Hawthorn, R., Sampson, J. and White, M. (1994), A Review of Measures for the Learning Outcomes of Guidance, Cambridge: National Institute for Careers Education and Counselling.

Kirkpatrick, D.L. (1959), "Techniques for Evaluating Training Programs", Journal of the American Society for Training and Development, Vol. 13, Nos. 11-12.


Models of Evaluation - Links

Course evaluation methods
An informal review of evaluation methods for training events but with a good set of links at the end.

National EBP Network – Placement for Teachers - NO LONGER AVAILABLE
A very useful chart showing the 4-level model applied to the evaluation of Teacher placements

Context and Causation in the evaluation of training - NO LONGER AVAILABLE
Donovan, P and Hannigan, K (1999), Context and Causation in the Evaluation of Training: A Review and Research Outline, Irish Management Institute Working paper. This paper provides an overview of approaches to identifying the impact of training on firm performance. The literature from two disciplines - economics and HRD - is reviewed. This review leads to the conclusion that neither discipline has provided a comprehensive methodology that is readily applicable. While there are considerable differences between the two approaches, both adopt a results based approach that tries to measure and compare the resources used in training with the value of changes in output. This assumes that causal linkages are in place. The HRD literature has supplemented this with trainee reaction research as a proxy for impact. However, this does not capture a number of aspects relating to the context of the training provided. A new model in the literature has the potential to overcome these problems, by diagnosing organisational requirements and predicting where expenditures may be productive. This is an academic paper reviewing the literature concerning evaluation of training in an economic and HRD context. It extends the 4-level model of Kirkpatrick and highlights interesting work on the transfer of learning to the workplace i.e. impact, involving the development of a Learning Transfer Inventory System.

Evaluating Guidance - NO LONGER AVAILABLE
Evaluation of Adult IAG partnerships
Impact of Careers Guidance on adult employed people, John Killeen and Michael White - NO LONGER AVAILABLE

Levels of Evaluation

This description of the four levels of evaluation is based on an MS PowerPoint presentation.
Levels of Evaluation (Kirkpatrick)
Level 1 - Reaction

did they like it?
the measure of customer satisfaction
can be measured immediately after the event
Level 2 - Learning

did they learn anything?
i.e. the measure of knowledge acquired and skills improved by the event
does not extend to the application of the knowledge / skills
Level 3 - Application

did the person use it?
i.e. the measure of the change in behaviour as a result of the event
you need to leave time for the change in behaviour to take place before evaluating
Level 4 - Impact

did it make a difference?
i.e. the measure of the results of the change in behaviour
very difficult to evaluate in terms of the specific event as it is impossible to exclude all the other factors involved


Example Client Feedback Questions

The example questions have been designed to assess the IMPACT of various careers interventions.

Guidelines for Questionnaires

The following may be helpful:

You are free to design your own questionnaires
It is suggested that you use a 4-point scale for the response to the questions; this may use numbers (either ascending or descending) or symbols such as smiley faces. Whatever system you use, you should also include a 'Not Appropriate' category where possible so that these responses are not incorporated into your statistical analysis. The 4-point scale allows you to report in terms of 'broadly positive' (responses 1 and 2) and 'broadly negative' (responses 3 and 4) - see the scoring sketch at the end of these guidelines.
You can combine questions in whatever way seems appropriate for the services you offer.
You can use whatever media you wish to obtain the feedback - paper questionnaires, telephone surveys, e-mail surveys, interviews on or off the premises etc.
It is important that your feedback is obtained from a 'representative' sample of your users. This may require you to include monitoring information with the questionnaire. This may cover: subject, year of study, gender, ethnic origin, health status etc.
Think about the size of the sample – it should be large enough to be representative of the clients, of the services provided and of the various individuals providing the service.
Consider confidentiality. Do you want the provider of the service to be identifiable? If so, how are you going to preserve their privacy whilst enabling them to use the resultant feedback?
When analysing the resultant feedback it is important to consider how you are going to use the information to improve your service and individual performance.
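As a worked example of the 4-point scale and the 'Not Appropriate' exclusion described in the guidelines above, here is a minimal Python sketch. The sample responses are invented for illustration, and the scale is assumed to run from 1 (most positive) to 4 (most negative).

# Tallying feedback on a 4-point scale, excluding 'Not Appropriate' (NA)
# responses from the statistics, and reporting 'broadly positive'
# (responses 1 and 2) versus 'broadly negative' (responses 3 and 4).
# The sample responses below are invented for illustration.

responses = [1, 2, 2, "NA", 3, 1, 4, 2, "NA", 1]  # one question's returns

scored = [r for r in responses if r != "NA"]  # drop 'Not Appropriate'
broadly_positive = sum(1 for r in scored if r in (1, 2))
broadly_negative = sum(1 for r in scored if r in (3, 4))
total = len(scored)

print(f"Responses analysed: {total} of {len(responses)} returned")
print(f"Broadly positive: {broadly_positive} ({100 * broadly_positive / total:.0f}%)")
print(f"Broadly negative: {broadly_negative} ({100 * broadly_negative / total:.0f}%)")

The same tally can be broken down by the monitoring categories mentioned above (subject, year of study, gender etc.) to check that the sample is representative before reading much into the percentages.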
Example Questions
...a range of example questions which might be useful in questionnaires

ADVICE, DUTY OR DROP-IN SESSION

The aim of an advice session is to help you clarify the issues that are important to you, identify what you may need to do next and provide ideas about other sources of help. To what extent did the advice session help you to:
Clarify and understand the main issues facing you?
Identify what you need to do next?
Find out about other sources of help available here or elsewhere?
GUIDANCE INTERVIEW

Our aim is to help you clarify the issues that are important to you, provide guidance, advice, information and / or ideas about other sources of help. To what extent did the guidance Interview help you to:
Clarify and understand the main issues facing you?
Identify appropriate options?
Identify what you need to do next?
Expand your options (if appropriate)?
Satisfy your needs for guidance, advice or information?
Find out about other sources of help provided by us or elsewhere?
If you were given suggestions on how to prepare for your careers advisory interview, how helpful were these suggestions?
If your appointment was delayed or cancelled how well were rearrangements made?
GROUP WORK - Title of Session

Our aim is to ensure that the session meets your needs. How well did we meet this aim in terms of helping you to:
Understand the aims of the session and clarify that it was appropriate for you?
Understand how the session would be run?
Be better informed, skilled or prepared as a result of the session?
Contribute to the session?
SUPPORTING WRITTEN APPLICATIONS

Our aim is to help you improve your self-presentation skills in terms of CVs, covering letters and application forms. How well did we meet this aim in terms of helping you to:
Think about your abilities and what you had to offer employers?
Fit your applications to the needs of particular employers?
Feel positively supported and encouraged in presenting yourself on paper?
Identify what you need to do next?
TESTING SESSION

Our aim is to provide you with practice in undertaking Tests and to give you feedback on your test results. How well did we meet this aim by helping you to:
Understand the aims of the Testing session and clarify what you would get out of it?
Understand the materials that will be used?
Understand who would see your test results?
Be better informed, skilled and/or prepared?
If you took up the offer of feedback (written or verbal) on your test results, to what extent do you consider it helped you to:
Clarify any issues for you?
Identify what you may need to do next?
Identify other sources of help provided by us or elsewhere?
REFERRAL

There may be occasions where we think your needs are going to be best met by another member of staff, or by another service. It is our aim to provide you with the necessary information so that you can decide for yourself if this is your best course of action. How well did we meet this aim by helping you to:
Understand what the other person/service could offer?
Decide if this was your best course of action?