Flagpost is a blog on current issues of interest to members of the Australian Parliament


PISA versus NAPLAN—a difference of questions?

The 2018 Programme for International Student Assessment (PISA) results showed a significant decline in the performance of Australian 15-year-olds since PISA testing began in 2000. However, National Assessment Program—Literacy and Numeracy (NAPLAN) results for Year 9 students—who are a similar age—have shown a slight improvement in equivalent subjects since that testing commenced in 2008. So why the difference? It might be because of how the questions are asked.

Both tests claim to assess similar skills. PISA aims to assess ‘the extent to which 15-year-old students near the end of their compulsory education have acquired the knowledge and skills that are essential for full participation in modern societies’, while NAPLAN ‘tests the types of skills that are essential for every child to progress through school and life.’

Therefore, it is surprising that they produce such different results. For example, between 2009 and 2018, the average PISA score in mathematics declined from 514 to 491, while the NAPLAN Year 9 result for numeracy increased from 582.2 in 2008 to 595.7 in 2018; both changes are statistically significant. The changes in the results for reading are slightly smaller, but still show a statistically significant decline in PISA and a small (not statistically significant) rise in NAPLAN.

But the PISA assessment ‘does not just ascertain whether students can reproduce knowledge; it also examines how well students can extrapolate from what they have learned and can apply that knowledge in unfamiliar settings’. This is reflected in the way PISA questions are asked.

So while the two assessments may test the same skills, the questions are structured and phrased very differently. PISA maths items usually present a scenario and then ask a series of questions, requiring the student to work out which formula or type of calculation applies. NAPLAN asks a single question for each item it is testing.

For example, the PISA 2012 maths question ‘Faulty Players’ presents the following table of products from a mythical company:

Player type     Average number of players made per day    Average percentage of faulty players per day
Video players   2000                                      3%
Audio players   6000                                      5%

Students then answer yes or no to the following questions:

  • One third of the players produced daily are video players.
  • In each batch of 100 video players made, exactly 5 will be faulty.
  • If an audio player is chosen at random from the daily production for testing, the probability that it will need to be repaired is 0.03.

There are then several further sections, with increasingly complex questions.
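The three yes/no judgements above reduce to simple arithmetic on the table's figures. A minimal sketch of that working (variable names are illustrative, not part of the PISA item):

```python
# Working through the three yes/no statements in the PISA 'Faulty Players'
# item, using only the figures from the table above.

video_made, video_fault_pct = 2000, 3   # players made per day, % faulty
audio_made, audio_fault_pct = 6000, 5

total_made = video_made + audio_made    # 8000 players made per day

# Statement 1: 'One third of the players produced daily are video players.'
video_share = video_made / total_made   # 0.25, i.e. one quarter -> No

# Statement 2: 'In each batch of 100 video players made, exactly 5 will be faulty.'
# A 3% *average* fault rate means about 3 per 100, not 'exactly 5' -> No
avg_faulty_per_100_video = 100 * video_fault_pct / 100

# Statement 3: the probability a randomly chosen audio player needs repair.
# The audio fault rate is 5%, i.e. 0.05, not 0.03 -> No
p_audio_faulty = audio_fault_pct / 100

print(video_share, avg_faulty_per_100_video, p_audio_faulty)
```

The point of the item is that none of the calculations is hard; the work lies in translating each statement into the right calculation in the first place.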

In contrast, the 2016 NAPLAN Year 9 numeracy test assesses concepts similar to two of these questions, using discrete multiple-choice questions:

  • A jam recipe uses 2 cups of sugar for every 3 cups of fruit. Select the correct combination of sugar and fruit for this recipe (Question 13).
  • James spins the arrow on the wheel to win a prize. The arrow has an equal chance of landing on each section. [The diagram presents a wheel with eight sections, three of which contain a laptop symbol]. What is the probability that the arrow will land on the laptop symbol? (Question 10).

The concept of statistical uncertainty is not tested in NAPLAN.

Similarly, in the 2016 Year 9 reading test, NAPLAN gives students a ‘magazine’ of one-page articles, then asks a series of multiple-choice comprehension questions on each article. The PISA reading questions are more complex: in the 2018 Rapa Nui example, students are asked to contrast information from multiple sources and explain which competing theory they support, citing evidence provided in the sources. Responses to such questions need to be human-coded, which increases the cost of marking the tests.

So which test is a better assessment of the types of skills students need for life? Professor Geoff Masters, CEO of the Australian Council for Educational Research (ACER), which conducts PISA in Australia, argues:

[The decline in PISA results] matters because PISA assesses skills that will be increasingly important in the future. Unlike many tests and examinations, PISA does not assess students’ abilities to recall facts or basic literacy and numeracy skills. Instead, it assesses the ability to transfer and apply learning to new situations and unseen problems. This requires an understanding of fundamental concepts and principles, as well as the ability to think.

The challenge for the education system is to maintain the improvement in basic literacy and numeracy skills, as assessed by NAPLAN, while also developing the thinking skills needed for the more complex problem solving tested in PISA—both are essential for the modern world.