The trouble with targets

© André Faber

Performance indicators may be useful, but despite their appeal, setting targets can be tricky, and the results misleading.

Targets were given a kick-start and increased political importance by the new Labour government in 1997, with its election pledges on jobs and hospital waiting lists, and became especially important in 1998 following the first comprehensive spending review and the original publication of Public Service Agreements (PSAs). Both the 2004 budget and spending review emphasised the government’s continued enthusiasm for them. (…)

Despite the general support for the government’s use of targets, many people have serious reservations about their operation in practice. There are allegations of cheating, perverse consequences and distortions in pursuit of targets, along with unfair pressure on professionals. League tables and ranking lists are often seen as untrustworthy and misleading. The increase in accountability and transparency that targets in theory bring, and which should be invaluable, has been marred by insufficient heed being paid to the risks of over-interpretation in the presence of large, and often inadequately reported, uncertainty. As the Royal Statistical Society said, “Good performance monitoring is productive for all concerned but done badly, it can be very costly, ineffective, harmful and destructive.” A subsequent report from the RSS offered practical solutions for resolving critical issues in target setting and in the design, analysis and reporting of performance indicators, against which current and future performance monitoring of the public services could be judged. (…)

Targets can be of different types and importance. The most important group are public service agreements made between the Treasury and government departments. But beyond these there are targets announced by the prime minister, other ministers or the heads of non-departmental bodies; targets included in white papers and other reports; and targets set by Labour in opposition, at party conferences or at other political events. The government said in March 1999 that it had set 350 policy targets and 175 efficiency targets. (…) The number of targets has since been cut.

The Public Administration Committee made a number of recommendations in its 2003 report, including ensuring greater local autonomy to construct more meaningful and relevant targets, making sure they are as few as possible and focus on key outcomes, widening the targets consultation process to involve professionals and service users, and reforming the way in which targets are set to move away from the simplistic hit-or-miss approach. The committee also called for common reporting standards on targets and an independent assessment by the National Audit Office of whether and how far targets have been met. These hopes, along with the committee’s desire to see a more mature political debate about the measurement culture based on a better understanding of targets as tools to improve performance, have yet to be fulfilled.

There are a number of failings.

A lack of clarity about what the government is trying to achieve and risks to equity: There is no guarantee that a reliance on national targets will promote greater equity. A national target can be met in more than one way and some of them promote greater equity than others. For example, a 10% improvement in services can be achieved if all providers improve equally. It can also be achieved if some units do disproportionately well while others fail. If top performers improve most, the gap in the available service quality will widen between citizens in different parts of the country.

Failure to provide a clear sense of direction and a clear message to staff: Targets can never be substitutes for a proper and clearly expressed strategy and set of priorities – they can be good servants but are poor masters. Targets should flow from the business plan, not the other way round. Local people need to feel the centrally imposed targets reflect sensible aspirations if they are not to be counterproductive. Professionals need to feel ownership of the targets – they have often expressed concern that targets fail to take account of their special expertise and judgment. (…)

Failure to focus on delivering results: Even if the government is achieving the majority of the PSA targets it has set itself, that does not mean that results are also being delivered. There are documented cases where the measurement ceases to be a means to an end and becomes the end in itself – more effort is directed into ensuring that the figures produced have hit the targets than into improving services. (…)

Another danger with the measurement culture is that excessive attention is given to what can easily be measured at the expense of what is difficult or impossible to measure quantitatively even though this may be fundamental to the service provided. The quality of patient care or the time devoted by a teacher to a difficult child’s needs is not easily measured. (…)

League tables and other simplistic measures: There is also a danger that any achievement short of 100% success is classified as failure. Simplistic approaches of this kind, with political and media charges about failure to fully meet the targets, can be profoundly demoralising to schools, teachers, police officers and hospital staff who have worked hard to achieve progress in the face of local difficulties. Crude league tables and star ratings can be particularly misleading and demotivating, as they tend to make everybody except the “league champions” look and feel like failures. (…)

The measurement culture adapts: There is no doubt that the management culture has been adapting fast since the 1990s and is continuing to adapt. The number of PSAs has been reduced since they were first introduced in 1998 and an increasing number of targets are now outcome or output related. (…) The pan-government FABRIC report, published in 2001, provides guidance to government departments on setting targets. The report stressed the importance of reliable data and recommended the use of National Statistics where appropriate. (…)

Editor’s note: Beyond this extract, Mr Briscoe goes on to recommend how politicians and statisticians could improve the practice of targeting, covering areas such as moving targets, leaking and spinning, and getting the data right. To read these recommendations and the rest of Mr Briscoe’s 12-page speech, please download at

References from original speech

Statistics Users’ Council (2003), "Measuring government performance", conference proceedings, November 2003, London.

UK Treasury (2001), "Choosing the right FABRIC: a framework for performance information", see

Audit Commission (2003), "Targets in the public sector", London.

Royal Statistical Society (2003), "Performance Indicators: Good, Bad and Ugly", see

UK Parliament (2003), "On target? Government By Measurement", July 2003, HC 62-1,

© OECD Observer No 246/247, December 2004-January 2005

NOTE: All signed articles in the OECD Observer express the opinions of the authors
and do not necessarily represent the official views of OECD member countries.
