
Session 3: Evaluation and Assessment

Assessing the reliability of labour market forecasts

A primary motivation for providing education and training is to equip the workforce with the skills required to meet the needs of industry. Moreover, because training takes time to complete, the training provided today must be targeted at the needs of industry at some point in the future. Hence policy makers responsible for the allocation of training resources require access to a labour market forecast of some kind.

Formal labour market forecasts produced using an economic model have several features which ought to make them attractive to policy makers. For example, they embody modern economic theory and large amounts of relevant economic data, they are comprehensive and coherent, and they can be updated regularly at reasonable cost. Yet many training professionals are reluctant to avail themselves of formal forecasts, preferring instead to rely on more informal methods such as case studies and opinion surveys. Formal forecasts are held to be too unreliable for most policy purposes. It is argued in this paper that this reluctance is misplaced and, indeed, that it constitutes an unnecessary barrier to the efficient promulgation of training policy.

The discussion is predicated on a detailed analysis of the performance of a labour market forecasting system built around the MONASH applied general equilibrium model of the Australian economy. Using forecasts published over the last thirteen years, the paper

  • reviews their accuracy for industries, occupations and regions;
  • compares their accuracy at various levels of aggregation;
  • compares their accuracy for various time horizons from one to eight years;
  • compares their accuracy with forecasts derived from time series extrapolation (a comparison of the kind sketched just after this list);
  • identifies the forecasting errors introduced at each level of the top-down forecasting system (the macro forecasts, the national labour forecasts, and the regional labour forecasts); and
  • identifies the role of sampling errors in the Labour Force Survey (the source of the historical values against which the forecasts are assessed).
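
To make the benchmark comparison concrete, the sketch below scores a set of industry employment forecasts against actual outcomes using mean absolute percentage error (MAPE), alongside a naive extrapolation of each industry's historical growth rate. All names and figures here are illustrative assumptions for exposition only; they are not drawn from the MONASH system or from the Labour Force Survey.

```python
# Minimal sketch of a benchmark comparison: score model forecasts of
# industry employment against actuals, and against a naive time-series
# extrapolation. Data are invented for illustration.
import numpy as np

def mape(forecast: np.ndarray, actual: np.ndarray) -> float:
    """Mean absolute percentage error across industries."""
    return float(np.mean(np.abs((forecast - actual) / actual)) * 100)

def naive_extrapolation(history: np.ndarray, horizon: int) -> np.ndarray:
    """Extend each industry's employment series by its average
    historical (geometric) growth rate -- a simple benchmark."""
    growth = (history[:, -1] / history[:, 0]) ** (1 / (history.shape[1] - 1))
    return history[:, -1] * growth ** horizon

# Illustrative data: employment ('000s) for three industries over five
# years, a model forecast three years ahead, and the eventual outcome.
history = np.array([[100, 102, 104, 107, 110],
                    [ 50,  49,  48,  48,  47],
                    [200, 210, 222, 230, 241]], dtype=float)
model_forecast = np.array([118.0, 45.0, 270.0])
actual         = np.array([121.0, 44.0, 281.0])

benchmark = naive_extrapolation(history, horizon=3)
print(f"model MAPE:         {mape(model_forecast, actual):.1f}%")
print(f"extrapolation MAPE: {mape(benchmark, actual):.1f}%")
```

The same calculation extends naturally to the other comparisons listed above: summing industries before scoring gives accuracy at coarser levels of aggregation, and varying `horizon` traces accuracy over different forecast horizons.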

The paper by G.A. Meagher and Felicity Pang (Centre of Policy Studies, Monash University) also reviews various criticisms that have been made of the MONASH forecasts in published assessments by the private sector consulting firm Access Economics, by the National Institute of Labour Studies at Flinders University, and by the Organisation for Economic Co-operation and Development. It concludes with a discussion of the role currently accorded to the MONASH forecasts in the development of workforce policy in Australia.

Evaluating & Assessing Projections

“But how accurate are they?” This is the perennial cry from some users and all critics of skills forecasts. But is this the right question? This paper addresses the issue of how to assess and evaluate employment projections, highlighting the many problems and pitfalls in trying to answer the naïve question posed at the start of this paragraph. The exercise sounds straightforward. In practice there are many difficulties, both conceptual and practical, in reaching a definitive answer, and these need to be addressed if misleading conclusions are not to be drawn.

A key message is that accuracy in this type of work is, in a very real sense, a chimera or mirage. Few, if any, social science predictions can be expected to be precisely accurate. Nobody has a crystal ball that can show what the future will inevitably be like. Indeed, one of the purposes of such projections, especially from a policy maker’s perspective, is to change the future.

Of course, this message comes at a particularly inopportune time as far as the general credibility of forecasters is concerned. Critics will ask what the point is of worrying about such details when forecasters so conspicuously failed to predict the financial crash of 2008 and the subsequent synchronised world recession. The paper argues that this criticism misunderstands the key purpose of such projections, as well as the ability of any forecaster to predict such crises.

Of course it is important that the projections are well founded and robust, but the crucial test is whether they are useful and informative, not whether they are precisely correct. In this spirit, the paper assesses how well detailed projections of skill needs produced at both a UK and a pan-European level stand up to scrutiny. It develops a taxonomy to explain the different sources of potential error in skills projections, distinguishing macroeconomic, sectoral and skills components, and separating errors due to historical data revisions from pure forecasting errors.
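
One simple way such a taxonomy can be operationalised, sketched below under purely illustrative assumptions, is to write employment by skill as the product of a macroeconomic total, sectoral shares, and within-sector skill shares, and then attribute the total forecast error to each component by substituting actual values in stages. The paper's own decomposition is not reproduced here; all component names and figures in the sketch are hypothetical.

```python
# Stylised error taxonomy: employment by skill = (macro total) x
# (sector shares) x (per-sector skill shares). Total forecast error is
# attributed to each component by substituting actuals one step at a time.
import numpy as np

def skills_employment(total: float, sector_shares: np.ndarray,
                      skill_shares: np.ndarray) -> np.ndarray:
    """skill_shares[i, j] = share of skill j within sector i."""
    return (total * sector_shares) @ skill_shares

# Forecast components (f_) versus outturn components (a_), all invented.
f_total, a_total = 1000.0, 1040.0                     # macro employment
f_sector = np.array([0.30, 0.70]); a_sector = np.array([0.28, 0.72])
f_skill  = np.array([[0.6, 0.4],                      # skill mix by sector
                     [0.2, 0.8]])
a_skill  = np.array([[0.55, 0.45],
                     [0.25, 0.75]])

forecast = skills_employment(f_total, f_sector, f_skill)
actual   = skills_employment(a_total, a_sector, a_skill)

# Substitute actual components in stages to attribute the error.
step1 = skills_employment(a_total, f_sector, f_skill)  # fix the macro total
step2 = skills_employment(a_total, a_sector, f_skill)  # then sector shares
macro_error  = step1 - forecast
sector_error = step2 - step1
skills_error = actual - step2
print("total error:            ", actual - forecast)
print("macro / sector / skills:", macro_error, sector_error, skills_error)
```

A further term for historical data revisions could be obtained in the same spirit, by evaluating the "actual" components on the data vintage available when the forecast was made and comparing the result with the revised series.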


Speakers

  • Tony Meagher
  • Rob Wilson