CHAPTER 2

NAPLAN's Objectives

Background

2.1        NAPLAN is an annual assessment of Australian students in years 3, 5, 7 and 9 that tests students in reading, writing, language conventions (spelling, grammar and punctuation) and numeracy.  The test has been conducted in May each year since 2008, and results are available four months later in September. Since 2010, results have been available publicly on the My School website at an individual school level.[1]

2.2        Literacy and numeracy testing was carried out at a state level for a number of years before NAPLAN was introduced in 2008. New South Wales had been testing since 1989, and other states and territories followed throughout the 1990s.  According to Professor Barry McGaw, Chair of the Board of the Australian Curriculum, Assessment and Reporting Authority (ACARA), attempts were made to assess the results of the various testing regimes to provide a national perspective.[2]  This approach was taken until the mid-2000s, when ministers from all State, Territory and Federal Governments agreed to shift to using the same test, thus establishing NAPLAN.

Melbourne Declaration on Educational Goals for Young Australians

2.3        In 2008 the Ministerial Council for Education, Early Childhood Development and Youth Affairs (MCEECDYA) agreed the Melbourne Declaration on Educational Goals for Young Australians.  The declaration commits all Australian governments to two high-level goals for education at all stages of a child's schooling and is supported by the MCEECDYA Four Year Plan, which outlines the key strategies that will be followed to meet the goals.

2.4        The two educational goals are:

Goal 1: Australian schooling promotes equity and excellence; and

Goal 2: all young Australians become successful learners, confident and creative individuals, and active and informed citizens.

The key strategies from the MCEECDYA Four Year Plan that support them are:

2.5        ACARA was given responsibility under the Four Year Plan and the Declaration to manage NAPLAN and to 'link assessment to the national curriculum where appropriate'.[5]

Why was NAPLAN introduced?

2.6        Like many of the previous testing regimes, NAPLAN was introduced to identify, at an early stage, students who were not meeting minimum standards in literacy and numeracy.  Professor Joy Cummings explained in her submission that the process for setting minimum standards began with the Hobart Declaration in 1989, where 'Ministers of Education agreed to a plan to map appropriate knowledge and skills for English literacy. These literacy goals included listening, speaking, reading and writing.'[6]  Professor Cummings went on to explain that these goals were expanded further throughout the 1990s:

National literacy goals and subgoals were also developed in the National Literacy (and Numeracy) Plan during the 1990s, including:

...comprehensive assessment of all students by teachers as early as possible in the first years of schooling...to ensure that...literacy needs of all students are adequately addressed and to intervene as early as possible to address the needs of those students identified as at risk of not making adequate progress towards the national...literacy goals....use [of] rigorous State-based assessment procedures to assess students against the Year 3 benchmark for...reading, writing and spelling for 1998 onward [emphasis added].[7]

2.7        Professor Cummings added that, to achieve these goals, national standardised testing would be introduced to ensure consistency across the educational spectrum, so that:

...every child commencing school from 1998 will achieve a minimum acceptable literacy and numeracy standard within four years (recognising that a very small percentage of students suffer from severe educational disabilities).[8]

2.8        Other submitters agreed that the primary concern of national testing and assessment was to assist in identifying the progress of students at key stages in their educational development.  Dr Kerry Hempenstall from the School of Health Sciences at RMIT University highlighted the importance of early assessment that allows for early intervention:

[A]ssessment can assist in the identification and management of students at-risk even before reading instruction commences. They can also help identify those making slow progress at any year level. This is especially important given the usually stable learning trajectory from the very early stages.[9]

2.9        Dr Hempenstall also suggested that the assessment of the intervention itself can inform its effectiveness:

If specific interventions are implemented, appropriate reading assessment can provide on-going information about the effectiveness of the chosen approach.[10]

NAPLAN's stated objectives

2.10      ACARA stated in its submission that the overall objective of NAPLAN is to provide education authorities, schools, parents and the local community with quality data.  This allows the various stakeholders to:

2.11      ACARA stressed that NAPLAN cannot directly bring about improvement in student outcomes, but can provide valuable data to schools and education authorities that may allow improvement to take place:

It should be emphasised that NAPLAN is a tool to inform school improvement, not an improver of educational outcomes. It is not the tests that will improve students’ literacy and numeracy skills, but the way students’ results (including school, system and national level results) are used by teachers, schools and systems to identify strengths and weaknesses, particularly in teaching practices and programs, that will improve student outcomes.[12]

2.12      Whether ACARA's description of the purpose of NAPLAN constitutes a clear statement of objectives was a question raised in a number of submissions. The Australian Association for the Teaching of English (AATE) suggested that while data from assessments 'has the potential to be useful...it needs to be used in ways that improve learning'.[13] The Queensland Catholic Education Commission was of a similar view and said that the ACARA website describes what NAPLAN does rather than what it is intended to achieve. As such, it is difficult to assess whether it has successfully achieved its aim or not.[14]

2.13      The School of Education at the University of Queensland submitted that the stated objectives of NAPLAN are not 'clearly communicated in any of the available documents' which is allowing NAPLAN to be used for 'a range of purposes beyond its stated objectives'.[15]

2.14      Other submitters focussed on the data collection potential of the NAPLAN tests, and how it could be utilised to improve student outcomes.  The School of Education at the University of South Australia (the School) endorsed some elements of the program:

NAPLAN has made a contribution to providing schools with data to analyse progress; school leaders with a sense of trends occurring in their school that can inform program and policy decisions; and governments with regional data that can inform how to best support and resources areas of strength and need (Dooner, 2001).[16]

2.15      However, the School was circumspect about whether there was evidence to support the conclusion that NAPLAN is benefitting all schools and students.  Some of the issues that may prevent potential benefits being accessed by all schools and students include:

2.16      Fintona Girls' School in Victoria commented that it is impossible for the tests to 'reflect the core elements of curriculum documents used in the different States and Territories', which is one of the objectives set out in the NAPLAN literature.[18] This, it argued, is due to a lack of flexibility in the tests' delivery across the country:

...the recent implementation of the National Curriculum has shown that there is a necessity for flexibility in these curricula in order to address the differing educational priorities of States and Territories.[19]

Committee View

2.17      The objectives of NAPLAN at a macro level are clear. However, evaluations of their effectiveness at that level are relatively meaningless, and highly dependent on the perception of a particular stakeholder.

2.18      The remainder of the report examines the various objectives as set by ACARA in more detail, but the committee's broad view is that the objectives need to be broken down to a meaningful level where each element can be separately measured and evaluated for its effectiveness.  They then need to be communicated in a more accessible form, so that schools, students, parents and the wider community all understand what the tests are intended to achieve, with regular evaluation to determine whether the tests are as effective as possible.

NAPLAN as a diagnostic test

2.19      Most submitters had views on whether the aspiration for NAPLAN to be a useful diagnostic test was met.  These views were formed by analysing the tests themselves and the way the data from the tests are used.

2.20      Fiona Mueller, a literacy educator, raised concerns about whether a test with a multiple-choice element could be considered diagnostic:

Apart from the absence of any recognisable structure in these tests, the multiple-choice design makes them virtually invalid as diagnostic or teaching tools. One of the most serious failings of the papers is that there is no opportunity for the students to show how they have arrived at a solution.[20]

2.21      The Australian Literacy Educators' Association (ALEA) submitted that while NAPLAN is intended to be a diagnostic test, it cannot, by virtue of the design of the tests themselves, provide the same specific diagnostic outcomes as formative assessments that:

...provide students with specific feedback about the qualities of their work with advice on how it can be improved to build the resilience of students and support a classroom culture of successful learning.[21] 

ALEA added that, given the delay of five months before the results of the assessment arrive back in schools, 'it cannot be argued that they are assessing for learning'.[22]

2.22      The delay in returning test results highlighted by ALEA was a theme for a number of submitters.  The Australian Association of Mathematics Teachers cited the delay in returning results to students as evidence that NAPLAN is not an effective diagnostic tool for assessing students and addressing their specific learning needs:

Any objectives relating to diagnosis at the student level are compromised by the time it takes for schools and teachers to receive student results.[23]

2.23      Margaret Wu and David Hornsby, literacy educators, also disputed the diagnostic effectiveness of the tests, pointing to the time it takes for the results to be known as one of the factors:

Even if the NAPLAN tests were diagnostic, the 5-month delay in providing the results would make them useless for informing teaching.[24]

2.24      As discussed in the committee's interim report, the Whitlam Institute, in conjunction with the University of Melbourne, is conducting a project titled The Experience of Education: The impacts of high stakes testing on school students and their families – An Educator's Perspective. As part of the project, a survey of educators was conducted to gather views on NAPLAN.  One of the questions asked in the survey was whether the tests were a diagnostic tool for teachers.  The survey found that 58 percent of teachers believed that NAPLAN was not a diagnostic tool, while two thirds of Principals believed it was.[25]

2.25      The survey report posited that this variation in perception of NAPLAN as a diagnostic tool could be explained by considering how the data is used by a teacher who is primarily concerned with individual students, as opposed to the principal who is looking at the overall performance of a school:

It may be that at the school level, aggregate NAPLAN data can point to areas of the curriculum where average student achievement is low (with implications for Principals as they work to determine professional learning directions for their school) and are thus seen as useful by school leadership. In contrast, at the level of the individual student, the delay between testing and results makes the data less useful for teachers working to ensure individual students are developing in each of the areas covered.[26]

2.26      The Tasmanian Department of Education considered NAPLAN to be an effective assessment tool for both teachers and parents.  In its submission it stated that NAPLAN:

...enables parents/guardians to monitor progress made since the child’s previous NAPLAN assessment. Through various publications, the DoE encourages parents/guardians to discuss children’s results and report with teachers.

In summary, NAPLAN data is both a key measure for teachers and parents/guardians as to whether or not young Tasmanians are meeting important educational outcomes and is used diagnostically to support improvement.[27]

2.27      Christian Schools Australia Limited had reservations about how the data produced by the tests were used, but was positive in general about the potential of the data as a diagnostic tool:

The use of a nationally consistent diagnostic instrument is widely accepted. It provides the opportunity to tap into a rich and deep source of comparative data and more meaningful information for teachers. NAPLAN used in this way we believe to be highly effective and highly beneficial. This function should remain the primary purpose of NAPLAN with accountability requirements clearly and explicitly playing a secondary role.[28]

2.28      The committee noted evidence suggesting teachers and student teachers do not receive sufficient training or support to enable them to properly use or analyse data obtained from NAPLAN testing.  ACARA noted the recommendation in the 2013 Senate report Teaching and Learning – maximising our investment in Australian schools, which advised that teachers needed more support in learning how to use evaluative data. ACARA submitted that states and territories already have sophisticated data analysis tools available for teachers to access; however, it is clear that more work could be done to support teachers in becoming skilled at interpreting and using NAPLAN data.[29]

2.29      The Australian Education Union also commented that there is capacity for improvement in the training and skills of teachers in the application of NAPLAN data:

Can we do more in terms of professional development of teachers on the use of data, the interpretation of data and the application of information in informing our teaching and learning? I think we can always grow in that regard.[30]

Committee View

2.30      The committee does not believe that the current administration of NAPLAN leads to it being as effective a diagnostic test as it could be.  A number of elements inform this conclusion, such as the methodology of the test, which includes the use of multiple choice, and the exclusion of teacher assessment of the student.  However, the principal consideration is the length of time the results take to be disseminated to students and teachers.  The school year moves at a rapid pace, and a turnaround of many months does not allow for meaningful intervention to ensure that students across the spectrum of development are given the appropriate support they require, either to meet minimum standards or to challenge them to reach their full potential.

2.31      The committee accepts that the introduction of NAPLAN Online should allow for much improved turnaround in the results, and is of the strong opinion that this should be a high priority in designing the online systems for the tests.

Recommendation 1

2.32      The committee recommends that the quick turnaround of test results should receive the highest priority in the design of NAPLAN Online, with achievable and measurable targets built into the system.
