
Maths SATS 2024


By Mark Dawes, December 2024

 

Here are some observations about the 2024 KS2 SATS, using data from just under 300 pupils, drawn from different primary schools.

 

There are three maths papers: an arithmetic paper (out of 40) and two reasoning papers (each out of 35).  The total mark is out of 110 and this is then converted to a scaled score from 80 to 120.

This year it wasn’t possible to get a scaled score of 80 in maths (it was in the English tests).  Pupils who scored a total of 0, 1 or 2 marks received a scaled score of ‘N’, while those with a total of 3 marks received a scaled score of 81.
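
To make the structure concrete, here is a minimal sketch of the conversion as code (hypothetical; only the raw totals mentioned above are filled in, with the rest coming from the published conversion table):

# Sketch only: total raw mark (out of 110) -> 2024 maths scaled score.
# Only the values mentioned in this post are included; the remaining
# entries would come from the published conversion table.
SCALED_SCORE = {0: "N", 1: "N", 2: "N", 3: 81, 110: 120}

def maths_scaled_score(arithmetic, reasoning_1, reasoning_2):
    total = arithmetic + reasoning_1 + reasoning_2   # 40 + 35 + 35 = 110 marks available
    return SCALED_SCORE.get(total, "see the full conversion table")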

 

Out of this group, a handful of pupils were deemed to be working below the level targeted by the tests and therefore did not sit them.  One pupil gained a total of 2 marks across the three papers, and another scored 1 mark on each paper.  At the other end of the distribution, half a dozen pupils achieved a scaled score of 120.

 

Correlations between the SATS papers

[In this section I will quote an ‘r’ value for the strength of the correlation between different papers and subjects.  Those who are unfamiliar with this merely need to know that r goes from -1 to +1, where -1 shows a perfect negative correlation, zero indicates no correlation, and 1 is a perfect positive correlation.]
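
[For completeness: I am assuming the standard Pearson correlation coefficient here, which is what statistical software reports by default.  For paired scores (x_i, y_i) it is

r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2}\,\sqrt{\sum_i (y_i - \bar{y})^2}}

where \bar{x} and \bar{y} are the mean scores across all pupils.]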

 

The strongest correlation, and it is very strong (r = 0.93), is between the raw marks in the two reasoning papers.  This is perhaps unsurprising and might lead us to wonder whether it is necessary for there to be two such papers.  The material covered in the papers would still be tested were there to be only one of them, and this score could then be doubled to ensure it is weighted in the same way.  This would save a vast amount of time and money (in setting the papers, marking them, administering the tests, etc).  The correlation between the arithmetic paper and the reasoning papers is also very strong (r = 0.88).
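
For anyone wanting to repeat this kind of analysis with their own cohort, a minimal sketch (hypothetical file and column names; pandas assumed) is:

import pandas as pd

# One row per pupil, raw marks for each paper (hypothetical column names).
marks = pd.read_csv("sats_2024_raw_marks.csv")
print(marks[["arithmetic", "reasoning_1", "reasoning_2"]].corr())  # Pearson r by default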

 

The Year 6s also take tests in Grammar and Reading.  These are also converted to scaled scores. One might expect that there is a positive correlation between the scaled scores in those two tests, and there is (r = 0.67).  Interestingly, the correlation between Maths and Grammar is stronger (r = 0.79).

 

Pupils who gain a scaled score of 100 or more are officially deemed to have “Achieved expected standard”, with those below that having “Not achieved expected standard”.  The majority of pupils achieved the expected standard in all three subjects (Maths, Grammar and Reading), with the next most common outcome being that they achieved the expected standard in none of them.  Roughly the same number of pupils achieved the expected standard in either one or two of the subjects, with one exception: it was very rare for a pupil to achieve this only in Grammar.
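
Counting these combinations is straightforward once the scaled scores are in a table.  A sketch of how this can be done (again with hypothetical file and column names) is:

import pandas as pd

scores = pd.read_csv("sats_2024_scaled_scores.csv")           # hypothetical file, one row per pupil
at_standard = scores[["maths", "grammar", "reading"]] >= 100  # expected standard = scaled score of 100+
print(at_standard.value_counts())                             # pupils with each True/False combination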

 

When the teacher assessments of whether pupils are working towards/at/at greater depth in science and in writing are included alongside the tests, there is little change, with the most common outcomes being that pupils achieve the expected standard in all five aspects (some at ‘greater depth’ in writing), or in none of them.  Some combinations did not appear at all: perhaps surprisingly, no-one achieved the expected standard in grammar, reading and writing but not in maths or science.

 

Individual questions

The two most difficult questions on the arithmetic paper were the final two.  On average, Q35 received less than half the marks of any other question apart from Q36 (and even then, Q35 was much the harder of the two).


 

These two questions were also the ones that were most likely to be omitted, though this might be because some pupils ran out of time (they had 30 minutes to answer the 36 questions).

 

The third most-omitted question was this one:


The final question on paper 2 was the lowest scoring question across all of the papers, with fewer than one pupil in five getting it correct.


Does the diagram look as if Jar A has double the number of beads?

 

The most-omitted question on papers 2 and 3 was this one (from paper 2):



This question comes from paper 3:

 


I encourage secondary teachers to consider the steps that are involved in answering the question and then to decide how many marks this question would be worth on a foundation tier GCSE paper.

 

This was a one-mark question! 

 

The mark scheme expects that the Y6 children will carry out the two calculations (non-calculator) and will then rewrite at least one of them to give fractions with the same denominator.  These fractions then need to be compared.
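
To illustrate that final comparison step with made-up fractions (these are not the ones in the actual question):

\frac{3}{4} = \frac{21}{28} \quad \text{and} \quad \frac{5}{7} = \frac{20}{28}, \quad \text{so} \quad \frac{3}{4} > \frac{5}{7}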

 

Before looking at the mark scheme I wondered whether there was a neat ‘trick’ involved in this which would obviate the need to carry out the calculations.  I couldn’t find one!  The best I could do was this:

The last of these is clearly bigger than

That seems more difficult than just carrying out the calculations!

 

Formatting expectations

I like this question (from paper 3), but the formatting seems really off-putting to me:

Perhaps the pupils are used to this, but the second fraction appears to be very different from the first one.  I would prefer it to look like this:

Even better (though this might introduce more issues) would be this:


One of the things that looks unusual to me on the SATS papers is the use of commas as thousand separators. Here is an example (from paper 1):

When pupils write their answers, commas are optional, but if they are used then they must be commas and not points or apostrophes.

 

I was intrigued to see in the general guidance section of the mark scheme that there is considerable leeway as to how money is written:

I don’t think that £3.20p would be permitted at GCSE.

It seems a little arbitrary that the clearly incorrect £3:20, £3-20 and £3;20 are all permitted, whereas £3,20 (which is actually used elsewhere in Europe, in countries that use a comma where we would have a decimal point) is not accepted.
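
As a toy summary of that leeway (my reading of the general guidance, not an official checker):

import re

# Accepted: £3.20, £3.20p, £3:20, £3-20, £3;20.  Not accepted: £3,20.
ACCEPTED_MONEY = re.compile(r"^£\d+[.:;-]\d{2}p?$")

for answer in ["£3.20", "£3.20p", "£3:20", "£3-20", "£3;20", "£3,20"]:
    print(answer, "accepted" if ACCEPTED_MONEY.match(answer) else "not accepted")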

 

Part-marks

On paper 1 (the arithmetic paper) the two questions involving long multiplication and the two involving short or long division were worth two marks.  Two marks are awarded for the correct answer, whatever method is used.  If pupils have used the formal method (as shown in Appendix 1 of the national curriculum) and have made a single arithmetic error, then they can be awarded 1 mark.  If they omit the place-holder zero when carrying out long multiplication then they are (rightly in my view) given no marks.  It seems a little harsh to me that if they use the grid method of multiplication and make an arithmetic slip then they also gain zero marks for the question.
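
For anyone unsure of the terminology, here is a made-up example (not a question from the paper) of the formal layout for 34 × 26:

\begin{array}{r}
   34 \\
\times \; 26 \\ \hline
  204 \\
  680 \\ \hline
  884
\end{array}

The 204 row is 34 × 6 and the 680 row is 34 × 20; the final 0 of 680 is the place-holder zero, and leaving it out (writing 68, and hence a total of 272) is what earns no marks.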

 

I was therefore interested to see how often pupils gained a single mark on those four questions.

 

The table below shows the number of pupils who gained each number of marks on those questions:

Question   Did not attempt   0 marks   1 mark   2 marks
Q20                      4        34       17       217
Q25                     10        41       42       179
Q30                     19        67       34       152
Q36                     46       103       16       107

 

Those who did not attempt the question obviously scored zero marks, but they are recorded separately from those who tried to do the question but could not do it.

 

The numbers of pupils gaining 1 mark on each of the questions suggest that very many pupils are using the standard methods.  I do wonder, however, how many of those who score zero marks (or who omit the question) might get the correct answer were they to use a “non-standard” method.

 

Final comment

I hope that secondary maths teachers have found it interesting to see some of the things that are included in the SATS tests, and that primary teachers have found it useful to see what secondary teachers find surprising!

 

A second blog post relating to primary tests will appear in about a week's time. This will look at some issues surrounding the Year 4 Multiplication Tables Check test.
