To avoid raising my blood pressure every time someone posts something about grading that isn’t right, I thought I’d collate how it is actually going to work in one post and leave it pinned to the top of the blog so that I can keep referring people to it. I wish we could stop playing the “guess the grade boundary” game, but I can’t see it happening any time soon. I do, however, suspect that we are playing it in vain, because the setting of the boundary “anchor points” isn’t that simple and we don’t have all the information – in fact, no one really will until the exams are sat and the exam boards do their number crunching alongside all the other information they will have. That said … I know for a fact that I will be playing it myself! In fact, today I wasted half an hour of my life looking at last year’s results to ascertain what percentage “may” have been a “5” in the new world! It was scary, and I know there will be “school variability” come August, but if we get anywhere close I know my gaffer will be pleased!
Predicting results is a dangerous game and I’ve seen lots of it on Twitter – but unfortunately, as soon as I see one thing that doesn’t fit with the methodology that Ofqual have set out in their various documents, I just switch off. It could well be that the rest of the methodology is correct, but you’ve lost me (Sorry! … ummm … I … err … nope! No excuse from me … I’m just rubbish!). So, to start with the basics, I’m going to refer to the response from Ofqual to my letter to them back in November (the 18th, actually). STOP – read the post, which I’ve updated with their response in full, and then come back to this one!! Here it is -> LETTER TO OFQUAL.
Looking at the bigger picture, there is a commitment that broadly the same proportion that would have got a:
- G and above will be awarded a 1 and above – the 2016 provisional data shows that 97% achieved a G and above.
- C and above will be awarded a 4 and above – the 2016 provisional data shows that 70.5% achieved a C and above.
- A and above will be awarded a 7 and above – the 2016 provisional data shows that 19.7% achieved an A and above.
PLEASE NOTE: the above are not a guarantee (it’s a bit more complicated than that!) but could be seen as “ball park” figures for the national picture if it were based just on 2016. I am quoting the age-16 stats here, as that is what Ofqual have confirmed: “it has always been our intention to set standards on the 16 year olds cohort, using predictions based on the results of 16 year olds who took the previous qualifications” … remember, I had a concern that they would be using the “all students” data. They have also blogged about their intentions recently, in their post of the 1st December (after my letter … I’m not suggesting it was prompted by my writing to them!!) -> here
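Just to make the arithmetic concrete, here’s a tiny sketch of what those 2016 provisional proportions would mean in student numbers. The cohort size below is a made-up round number purely for illustration, not an official figure:

```python
# The 2016 provisional age-16 proportions at each "anchor point" grade,
# as quoted above (grade -> proportion of the cohort at that grade and above).
anchor_proportions = {
    1: 0.97,   # G and above in 2016
    4: 0.705,  # C and above in 2016
    7: 0.197,  # A and above in 2016
}

cohort_size = 550_000  # HYPOTHETICAL cohort size, for illustration only

for grade, proportion in sorted(anchor_proportions.items()):
    students = round(cohort_size * proportion)
    print(f"grade {grade} and above: ~{proportion:.1%} of cohort "
          f"= roughly {students:,} students")
```

On those (illustrative) numbers, the gap between the grade 4 and grade 7 anchor points covers about half the cohort, which is why the 5 and 6 boundaries matter so much to schools.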
So, the boundaries for what I am calling the “anchor points” of grades 1, 4 and 7 will be set – it is important that you understand how these anchor points are set. Ofqual have said that “their approach to setting standards will place greater emphasis on statistical predictions based on the prior attainment of the cohort. Those predictions will be based on 16-year-old students who can be matched to their prior attainment at key stage 2”. Exam boards have always used statistics to help them set grade boundaries, and this document from the 2015 exam session makes really good reading, as do the data tables found on this web page, which show how close each of the boards were to the KS2 predictions at the key points (i.e. at a C and above etc. for the matched students). The predictions used by exam boards to guide their decisions include only “matched students” – the unmatched students may be those who did not sit KS2 or who are in a different year group (students in year 10 taking GCSEs early, adult learners etc.). These predictions should take into account the fact that the current cohort achieved better key stage 2 results than last year’s cohort did when they sat their SATs – in fact, their maths results were 4 percentage points better!!
The Ofqual response sets out in great detail the benefits, and how the “matched student” predictions are tailored to the profile of each exam board’s entries so as to take into account the differing entry patterns – which allays my fears about comparability across exam boards.
I also raised the matter of how comparability across tiers is being ensured, and as you can see from the extract below, Ofqual are currently discussing with the exam boards the precise detail of how this will work. It is clear, however, that there will be a focus on the crossover questions (yay!! and a pat on the back for us, with all the work we’ve been doing on these questions/topics!) … anyone who says they know how this process is going to work – THEY DON’T. It is still being worked on!
So let’s assume that the grade boundaries for a 1, 4 and 7 are agreed and set based on the KS2 predictions, with a bit of jiggery pokery across the tiers; the other boundary points will then be set as laid out in the document below from Ofqual:
In terms of the Foundation tier this will look like the below (please forgive my little “anchors”!) :
In terms of the Higher tier it will look like the below – the 4 and 7 will be set first, and then the grade 9 calculated using the “tailored approach”, which was the subject of the second grading consultation. It will be calculated using this: percentage of those achieving a grade 7 and above who will be awarded a 9 = 7% + 0.5 × (percentage of the cohort who achieve a 7 and above). After this, the 8, 5, 6 and 3 can all be set. Note that the ability to “move” the 3 boundary is the subject of the most recent grading consultation that Ofqual have published – unsurprisingly, I have written about that one too -> here
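To show how the tailored-approach formula above actually plays out, here’s a quick sketch. The grade-9 function is a direct translation of the formula as quoted; the intermediate-boundary function reflects my understanding that the in-between boundaries are set arithmetically, equally spaced between the anchor boundaries – check the Ofqual document itself before relying on that, and note the example marks are invented:

```python
def grade9_share_of_grade7(pct_grade7_and_above: float) -> float:
    """Percentage OF THOSE ACHIEVING GRADE 7 AND ABOVE who get a 9,
    per the tailored approach: 7% + 0.5 * (% of cohort at 7 and above)."""
    return 7.0 + 0.5 * pct_grade7_and_above

def intermediate_boundaries(lower: int, upper: int, steps: int) -> list:
    """Marks equally spaced between two anchor boundaries (my reading
    of the arithmetic approach for the 5/6, 2/3 and 8 boundaries)."""
    gap = (upper - lower) / (steps + 1)
    return [round(lower + gap * i) for i in range(1, steps + 1)]

# Using the 2016 provisional figure quoted earlier (19.7% at A and above):
share = grade9_share_of_grade7(19.7)       # 7 + 0.5 * 19.7 = 16.85
whole_cohort = share / 100 * 19.7          # roughly 3.3% of all students
print(f"{share:.2f}% of grade-7+ students would get a 9 "
      f"(~{whole_cohort:.2f}% of the whole cohort)")

# Invented boundary marks: if the 4 boundary were 50 and the 7 boundary 80,
# equal spacing would put the 5 and 6 boundaries at 60 and 70.
print(intermediate_boundaries(50, 80, 2))
```

So on the 2016-style figure, a 9 would go to roughly one in six of the students achieving a 7 and above – a genuinely scarce top grade compared with the old A*.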
Not sure that has helped, but I like to ask the questions and understand the “how” of the mechanics of the process, rather than just make sh1t up as we go along – which is what seems to be happening out there in the real world … remember … we are all in this together and it will get easier!!