Ask the expert: Interpreting standardised scores with NFER’s Liz Twist

In this ‘Ask the expert’, Liz Twist, Head of Assessment Research and Product Development at NFER and former teacher, answers some FAQs on interpreting standardised scores.

If a pupil has the same standardised score on consecutive termly tests, does that mean they have made no progress?

It depends on the tests you are using and how you are using them. If you use NFER Tests in the term for which they were designed (e.g. autumn tests in the autumn term, summer tests in the summer term), then the same standardised score on consecutive tests would indicate that progress has been made.

With NFER Tests, each test has been standardised in the term in which it is intended to be used, so autumn tests have been standardised in the autumn term, spring tests in the spring term and summer tests in the summer term. A standardised score of 100 represents the average score of all the pupils participating in that standardisation. Therefore a pupil achieving a standardised score of 100 is performing in line with the national average for that term. If the same pupil achieves a standardised score of 100 on a consecutive test, this shows they are performing consistently at an average level i.e. they are retaining a similar position relative to the national average. A pupil who consistently gains a similar standardised score is making average or expected progress in line with the progress seen nationally.

If a pupil’s standardised score goes up significantly, it means they are making more than average progress. Conversely, if a pupil’s standardised score falls significantly, they are making less than average progress and may need to be monitored more carefully.

How do I know if the progress made is better or worse than expected?

On NFER Tests, we provide confidence bands around each standardised score. These are specific to each test and give an indication of the range of scores in which the pupil’s ‘true’ score lies. When two standardised scores are being compared, for example between the summer year 3 and summer year 4 tests, the confidence bands should be examined. If the standardised scores are different but the confidence bands overlap, then there is no significant difference between the scores. In this case, it can be stated that the pupil has made typical progress, given their starting point. For example, a pupil gets a standardised score of 98 on the year 3 summer maths test. The confidence band is minus 5 to plus 5, so there is a 90% likelihood that their ‘true’ score lies between 93 and 103. On the summer year 4 test, the same pupil achieves a standardised score of 102. This has a confidence band of minus 5 to plus 4, so there is a 90% likelihood that the ‘true’ score lies between 97 and 106. These confidence bands overlap, so the pupil has made progress that is not significantly different from that expected, given his or her starting point.
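The overlap check described above can be sketched in a few lines of Python. This is an illustrative sketch, not NFER software: the helper names are invented, and the band values are those quoted in the example (minus 5 to plus 5 around 98, minus 5 to plus 4 around 102).

```python
def true_score_range(standardised_score, band_low, band_high):
    """Return the (low, high) range in which the pupil's 'true' score
    is likely (90%) to lie, given the test-specific confidence band."""
    return (standardised_score + band_low, standardised_score + band_high)

def bands_overlap(range_a, range_b):
    """Two scores differ significantly only if their bands do NOT overlap."""
    return range_a[0] <= range_b[1] and range_b[0] <= range_a[1]

# Worked example from the text: year 3 summer score of 98 (band -5/+5)
# compared with year 4 summer score of 102 (band -5/+4).
year3 = true_score_range(98, -5, +5)   # (93, 103)
year4 = true_score_range(102, -5, +4)  # (97, 106)
print(bands_overlap(year3, year4))     # True -> no significant difference
```

Because the two ranges overlap, the difference between 98 and 102 is not significant, and the pupil's progress is in line with expectations.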

I have been told that pupils with standardised scores of 98 and 101 are still demonstrating average attainment. Is that true?

Yes, it is in respect of NFER Tests.

This is another instance when you should look at the confidence bands which have been published alongside the score tables. For example, if the published confidence band for a standardised score of 98 on a maths test is minus 4 to plus 5, then a pupil has a ‘true’ score in the range of 94 to 103. For a standardised score of 101, the confidence band on this specific test is minus 5 to plus 5, giving a band of 96 to 106. This means that these two pupils (with standardised scores of 98 and 101) have no significant difference between their scores and their performance is broadly average.

What increase in standardised score would indicate the pupil has progressed significantly more than expected, given their starting point?

If confidence bands are applied to the standardised scores from consecutive tests and the ranges in which the pupil’s ‘true’ score lies overlap, the pupil is making progress in line with others from the same starting point. If the range in which the ‘true’ score lies in the second test is much higher than that of the first test, then the pupil has made better than expected progress.

For example, Mel and Ahmed both gain a standardised score of 108 on the year 4 mathematics spring test. The confidence bands indicate their ‘true’ scores lie between 103 and 112. Mel later gains a standardised score of 111 on the summer test while Ahmed scores 119. Mel’s ‘true’ score on the summer test is between 106 and 115. This overlaps with the confidence band in the spring test. She is therefore making progress in line with others from the same starting point. Ahmed’s ‘true’ score on the summer test lies between 114 and 123. Ahmed’s ‘true’ summer score is therefore significantly higher than his spring ‘true’ score, and Ahmed has made significantly more progress than others from the same starting point in the spring.
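The same overlap check applies to Mel and Ahmed. A short Python sketch, using only the ranges quoted in the example above (the `overlaps` helper is illustrative, not part of any NFER tool):

```python
# Spring test: both pupils scored 108; 'true' score range 103 to 112.
spring = (103, 112)

# Summer test ranges, as given in the text:
mel_summer = (106, 115)    # Mel's score of 111
ahmed_summer = (114, 123)  # Ahmed's score of 119

def overlaps(a, b):
    """Bands overlap when neither range sits wholly above the other."""
    return a[0] <= b[1] and b[0] <= a[1]

print(overlaps(spring, mel_summer))    # True  -> progress in line with expectations
print(overlaps(spring, ahmed_summer))  # False -> significantly more than expected progress
```

Mel's summer band overlaps the spring band, so her progress is typical; Ahmed's summer band sits entirely above the spring band, so his progress is significantly better than expected.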

Written by Liz Twist, Head of Assessment Research and Product Development at NFER

With over 20 years’ experience in assessment development and research, Liz leads the teams developing NFER’s popular assessment products and research. She has also previously worked as deputy head of a combined school and taught both primary and secondary school pupils.

Do you have a question on assessment that you’d like to put to one of our assessment team? Send it through to us at assessmenthub@nfer.ac.uk.
