7 June 2023
Rodney Bogaards
Economic Policy
In the May 2023 Budget the Australian
Government committed to provide $10 million over 4 years from 2023–24 (and $2.1
million per year ongoing) to ‘establish a central evaluation function within
Treasury to provide leadership and improve evaluation capability across Government,
including support to agencies and leading a small number of flagship
evaluations each year’ (Budget measures: budget paper no. 2: 2023–24, p. 213).
This Treasury evaluation unit, to be known as the Australian
Centre for Evaluation (ACE), originated in a Labor election commitment
released on 13 November 2018, ahead of the May 2019 federal election. At that
time Labor committed to creating an Evaluator General based within Treasury.
The development of this evaluation model was discussed in a speech, Building
a better feedback loop, by Dr Andrew Leigh (then Shadow Assistant
Treasurer).
Thodey Review
The need for better evaluation was raised in the 2019 Thodey
Review of the Australian Public Service (APS) (p. 221):
Research commissioned for the review found that the APS’s
‘approach to evaluation is piecemeal in both scope and quality, and that this
diminishes accountability and is a significant barrier to evidence-based
policy-making’. This is consistent with views from within the service. In a
private submission to the review, one APS leader said, ‘While there are some
areas in the APS where evaluation is done well, its actual execution is uneven
and, in some areas, non-existent.’
The Review (p. 222) proposed that a central enabling
function be established to drive a service-wide approach to evaluation and to
uphold minimum standards of evaluation:
The main responsibility for evaluations will continue to
reside with individual agencies. But the central function should provide
guidance and support for agencies on best-practice approaches. It should also
develop, for the Government’s consideration, a new strategic approach to
evaluation of past, present and proposed programs and policies, with advice on
how best to embed mandatory requirements for formal evaluation in Cabinet
process and budget rules. Such changes will strengthen the basis on which
government decisions are considered and made — and help with explanations when
activities cease or change, and when new strategies are pursued.
The Review commissioned research from the Australia and New
Zealand School of Government (ANZSOG) to inform its deliberations. The
ANZSOG research
paper (see Appendix B) examined the evaluation framework in the APS and
concluded that evaluation was hindered by departments:
- focusing
on immediate priorities at the behest of ministers
- focusing
on reputational risk, with efforts and resources dedicated to defending against
criticism, rather than learning from experience
- viewing
policy evaluation as a low priority.
Other impediments to effective evaluation in the APS
included:
- accountability
misalignment (accountability to government as opposed to wider accountability
to the community)
- the
media cycle and immediate community pressures driving ministers to focus on
short-term goals while ignoring long-term governance
- debate
over whether evaluation should be in a central department or remain
decentralised and undertaken by line departments
- a preference for promoting successful program evaluations, while those
showing failure were held back for fear of embarrassing the government.
Role of the Australian Centre for
Evaluation (ACE)
The new ACE is expected to collaborate with the Australian
Government’s existing evaluation bodies. However, the focus of the new unit is
expected to be on conducting its own policy and program evaluations.
According to the Age,
Dr Andrew Leigh (the Assistant Minister for Competition, Charities and
Treasury) said in late April 2023 that the rigorous appraisal of policies and
programs was fundamental to good government and should lead to better use of
public spending and less reliance on private contractors/consultants:
‘This unit will conduct high quality impact evaluations of
government programs, including randomised policy trials. This will allow
government to evaluate the impact of policies with the same rigour we use to
test new medical treatments,’ he said. ‘Quality evaluation will save
taxpayers’ money and help government design and adapt programs to better serve
the community. It’s good for the budget bottom line, and good for all
Australians.’
The move is also aimed at reducing the use of outside
contractors. The government spends up to $50 million a year on evaluation
reports from consultants.
Randomised policy trials (or, more
generally, randomised controlled trials) are an experimental form of impact
evaluation in which the group receiving the policy intervention or program
is chosen at random from the eligible population, and a control group is also
chosen at random from the same population. A key strength of this design is
that it addresses questions of causality, helping researchers assess the
extent to which observed changes and achievements are due to the government
policy intervention or program under evaluation, rather than to other factors
or circumstances.
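The logic of random assignment can be illustrated with a small simulation (illustrative only: the population size, outcome measure and effect size below are invented, not drawn from any actual evaluation):

```python
import random
import statistics

random.seed(42)

def observed_outcome(treated):
    # Hypothetical outcome, e.g. weeks on income support: a noisy baseline
    # of about 20, plus an assumed true program effect of -3 for the treated.
    baseline = 20 + random.gauss(0, 2)
    return baseline + (-3 if treated else 0)

# Randomly split an eligible population of 1,000 into treatment and control.
eligible = list(range(1000))
random.shuffle(eligible)
treatment, control = eligible[:500], eligible[500:]

treated_outcomes = [observed_outcome(True) for _ in treatment]
control_outcomes = [observed_outcome(False) for _ in control]

# Because assignment was random, the two groups are comparable on average,
# so the difference in mean outcomes estimates the program's causal effect.
estimated_effect = (statistics.mean(treated_outcomes)
                    - statistics.mean(control_outcomes))
print(round(estimated_effect, 2))  # close to the assumed true effect of -3
```

Because nothing other than the program differs systematically between the two groups, the estimate recovers the assumed effect up to sampling noise, which is the sense in which randomisation answers the causality question.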
CEDA evaluation research
A 2023 Committee for Economic Development of Australia
(CEDA) research paper entitled Disrupting
disadvantage 3: finding what works indicates that better evaluation may
be required in the Australian Government. CEDA analysed 20 federal programs
covering a broad range of areas, based on Auditor-General performance reports
completed in the past decade. Of the 20 federal programs
analysed, 19 had not been properly evaluated:
- 5 had no evaluation framework
- 14 were deemed to have an incomplete, inconsistent or poor evaluation
framework.
CEDA’s analysis found several consistent themes among the
evaluations undertaken, including:
- a
lack of clearly defined objectives and outcomes during policy development
- no
evaluation frameworks in place in the design phase
- incomplete
evaluation frameworks
- poor
or non-rigorous evaluation methodologies
- ineffective
evaluations that did not align with the objectives of the program
- data
limitations and poor data management.
CEDA also made 5 recommendations on the remit
of the new ACE, namely that the new unit should:
- champion
an evaluation culture throughout the public service
- provide
expert advice and review to departments
- undertake
randomised controlled trials
- review
data gaps
- maintain
a national repository of completed evaluations, accessible by the public.
Evaluation versus audit
It is important to differentiate evaluation from audit.
The Australian National Audit Office (ANAO) is tasked with conducting performance
audits. However, according to the 2006 ANAO paper Evaluation
and performance audit: close cousins or distant relatives? presented to
the Canberra Evaluation Forum, while evaluation and performance audit are
‘close cousins’, they differ in one key respect: an evaluation can question the
merits of government policy, while a performance audit remains silent on them:
Evaluation aids in the assessment of program effectiveness,
and may cover both policy and administrative aspects of a program. A
performance audit is an independent review of the efficiency or administrative
effectiveness of a program (or agency) but does not extend to assessing the
policy merit of a program.
This difference matters because it allows evaluators to question whether a
policy is worth doing at all, rather than only whether a given policy can be
implemented more effectively or efficiently.
Will the ACE be able to embed
itself in government for the longer term?
Building evaluation skills in the Australian Public Service has
been tried before in relation to regulatory impact analysis. This followed
concerns raised by the Taskforce
on Reducing Regulatory Burdens on Business that
there was a skills deficit within departments and agencies and that more
rigorous cost–benefit analysis (CBA) should be used in regulation-making.
In 2006 the Taskforce
recommended the Office of Regulation Review (which became the Office of Best
Practice Regulation and is now the Office of Impact Analysis) develop in-house CBA
skills in departments and agencies (see Recommendation 7.4, p. 150). The Australian
Government (p. 77) decided to enhance the regulatory oversight body’s role,
with a focus on training departments and agencies in undertaking CBA and
developing guidance material on CBA concepts (such as the appropriate discount
rate).
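Discount-rate guidance of the kind mentioned above matters because CBA compares costs and benefits occurring in different years by converting them to present values. A minimal sketch (the cash flows and both rates are hypothetical, not taken from any official guidance):

```python
def npv(cash_flows, rate):
    """Net present value of yearly cash flows, with year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical regulatory proposal: a $10m upfront cost, then $3m of
# net benefits per year for 5 years.
cash_flows = [-10.0] + [3.0] * 5

central = npv(cash_flows, 0.07)  # an assumed central real discount rate
low = npv(cash_flows, 0.03)      # sensitivity test at a lower assumed rate

# The chosen rate changes the size (and potentially the sign) of the
# result, which is why guidance on discount rates matters for CBA quality.
print(round(central, 2), round(low, 2))
```

In this invented example the proposal clears the NPV test at both rates, but a longer-lived or more back-loaded benefit stream would be far more sensitive to the rate chosen.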
The current Office
of Impact Analysis website says the ‘Australian Government is committed to
the use of CBA to assess regulatory proposals in order to encourage better decision
making’. However, there have been no CBA
working papers published
or CBA
conferences (pp. 8–9) held by the regulatory oversight body since 2007.
Small steps or large strides
towards better evaluation?
Whilst it appears clear from recent media statements that
the ACE will champion an evaluation culture within the APS, provide expert
advice to departments, undertake randomised trials and review data gaps, it is
unclear whether there will be a national repository under the new arrangements.
Establishing a repository would improve transparency and accountability and
allow governments to learn from their successes and failures over time. It was
also suggested in CEDA’s paper that the Government legislate an obligation to
evaluate existing major Commonwealth‑funded programs at least every five
years. This would have the positive effect of ‘locking in’ a rolling schedule
of program evaluations.
The Productivity Commission’s recently released 5-year productivity
inquiry: advancing prosperity also suggested that the new evaluation
unit ‘could be a starting point for improving CBA [cost‑benefit analysis]
practice (for example, by providing independent evaluation of CBA assumptions
and inputs)’ (p. 523). Even if improving the use of CBAs only leads to a
slight shift in government decision-making and a small reduction in cost
overruns in percentage terms, the Productivity Commission says this would
amount to a substantial improvement in resource allocation and efficiency gains
in dollar terms given the size of government projects and programs.
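The Commission's point about percentage versus dollar terms is simple arithmetic, which can be made concrete with entirely invented figures:

```python
# Entirely illustrative figures: suppose $20 billion a year of spending
# is subject to CBA-informed decisions, and better CBA practice trims
# average cost overruns by just 1 percentage point.
appraised_spend_m = 20_000   # $ million per year (hypothetical)
overrun_reduction = 0.01     # 1 percentage point (hypothetical)

annual_saving_m = appraised_spend_m * overrun_reduction
print(annual_saving_m)  # 200.0, i.e. about $200m a year
```

Even a very small proportional improvement scales to a large dollar figure when the base of appraised spending is this large.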
The establishment of the ACE seems a positive step towards
building evaluation capacity within the Australian Government. Time will tell
the extent to which it can lead to improved public policy decisions.
© Commonwealth of Australia