Developing spelling tests: what’s so hard about that?

By Laura Lynn

Wednesday 5 September 2018

In 2014, NFER released its year five spelling suite, consisting of three spelling tests and a teacher guide, followed by complementary suites for years three and four in 2015. The development of each suite took at least 12 months.

Developing a suite of spelling tests? You might be thinking: what’s so hard about that? You just write a list of interesting words that pupils should be able to spell, right? Or maybe pluck spelling words from the English national curriculum (Appendix 1, for those keen to scan the unwieldy list of possibilities) and reproduce them in snazzy packaging?

You may be surprised to hear that it’s much more complicated than that. Contrary to popular belief, building a useful and appropriate spelling test that is not only accurate and reliable, but also reflects the nuances of the English national curriculum, means understanding the underlying statutory and non-statutory requirements (based on spelling patterns). It also involves using evidence from a robust trial to work out which of these words and patterns can be combined to assess pupils’ ability and monitor their progress throughout an academic year.

Does it still sound easy? Think of it this way: what spelling patterns are children expected to know and learn in years three, four and five? Which of these spelling patterns are they likely to encounter regularly in their daily lives? Which combinations of words most accurately cover all the requirements of the national curriculum? Which words are harder or easier for children to spell and how exactly do they misspell them? Does it even matter how they are misspelt?

We would argue that it does. Based on trials conducted with over 1,300 pupils per year group, we found that pupils tend to misspell words in similar ways. For example, when children in a particular year group misspelt the word ‘library’, a third of those trialled made the same type of error. The evidence also revealed two sub-groups of common errors linked to pupil ability: one error type tended to be made by pupils of average ability, while another was made by those who performed below average on the test as a whole.
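To make this concrete, here is a minimal sketch in Python, with entirely invented responses, of how trial answers might be tallied to surface common error types and to check whether an error clusters within one ability band. The words, bands and counts are all hypothetical; this illustrates the general technique rather than NFER’s actual analysis.

    from collections import Counter, defaultdict

    # Each record pairs a pupil's attempt at 'library' with a hypothetical
    # ability band derived from their overall test score.
    trial_responses = [
        ("libary",  "average"),        # omitted 'r'
        ("libary",  "average"),
        ("libary",  "below average"),
        ("liberry", "below average"),  # phonetic substitution
        ("liberry", "below average"),
        ("library", "average"),        # correct
    ]

    # Tally each distinct misspelling, overall and per ability band.
    errors_overall = Counter()
    errors_by_band = defaultdict(Counter)
    for attempt, band in trial_responses:
        if attempt != "library":
            errors_overall[attempt] += 1
            errors_by_band[band][attempt] += 1

    # Report how dominant each error type is among all errors made.
    total_errors = sum(errors_overall.values())
    for attempt, count in errors_overall.most_common():
        print(f"{attempt!r}: {count} pupils ({count / total_errors:.0%} of errors)")

    # If one misspelling dominates within a single band, that error type
    # is associated with that ability group.
    for band, counter in errors_by_band.items():
        print(band, dict(counter))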

Why do such details matter? Don’t we just want to know how many words a child can spell correctly – four out of ten or eight out of ten? (Aren’t we happy knowing that they can spell more words correctly in July than they could back in September?)

While these isolated pieces of data tell us something about a child’s performance, it is also useful to look at which words children are misspelling and how they are misspelling them, and to know that children who performed similarly on the test overall might be more likely to misspell a word in the same way.

How is this useful to teachers?

The NFER teacher guides give teachers a user-friendly resource for bringing evidence-based research into their teaching, using the product already at their fingertips. Each guide includes helpful information drawn from the trials (see the sketch after this list for how such figures might be derived), such as:

  • the percentage of pupils who spelt the word correctly
  • the percentage of pupils who made the same type of error when misspelling the word
  • the types of errors pupils tended to make when writing words with similar patterns
  • lists of words which contain similar spelling patterns.
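
As a rough illustration of where figures like these could come from, the sketch below uses invented tallies and hypothetical spelling-pattern labels; the real trial data and NFER’s pattern categories will of course differ.

    from collections import defaultdict

    # word -> (correct spellings, total attempts) from an invented trial
    trial_tally = {
        "library":   (620, 1300),
        "February":  (540, 1300),
        "necessary": (410, 1300),
    }

    # Hypothetical grouping of words by a shared spelling pattern.
    patterns = {
        "library":   "unstressed vowel",
        "February":  "unstressed vowel",
        "necessary": "double consonant",
    }

    # Percentage of pupils spelling each word correctly.
    for word, (correct, total) in trial_tally.items():
        print(f"{word}: {correct / total:.0%} spelt correctly")

    # Group words sharing a pattern, so an error on one word can flag
    # likely errors on its neighbours.
    by_pattern = defaultdict(list)
    for word, pattern in patterns.items():
        by_pattern[pattern].append(word)
    for pattern, words in by_pattern.items():
        print(f"{pattern}: {', '.join(words)}")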

Armed with this knowledge of common errors and of groups of words with similar spelling patterns, teachers can predict which spelling errors are likely to appear in pupils’ writing and plan targeted, differentiated lessons to address the weaknesses identified.

So writing a list of words may be easy, but writing an informative, evidence-based spelling test takes a lot of assessment expertise and experience.