Something must have pulled someone's chain, because this week this appeared on the Ofqual blog.

I have included the text as it appears today (some of these government bodies update their pages after comments are made on a public forum … so just in case!) and also added my own comments to it just for my own amusement. Enjoy!

“Every school and teacher wants to give their pupils the best chance of success when sitting their GCSEs or A levels. One choice they have is which exam board’s specification to adopt in each subject each year. As such, debates frequently develop on social media or at teaching events about current preferences, and in particular which specifications are considered ‘harder’ or ‘easier’ than others.”

DUH! If there wasn’t so much at stake in terms of accountability this wouldn’t happen. So many things are wrong with our system, and the level of accountability is just one of them!

A few weeks ago I tweeted some figures showing the proportions of grades awarded by a couple of the boards, which triggered lots of discussion (good and bad). But the fact is, this information is in the public domain … we’re not fecking idiots, and the sad thing is all the boards probably do the same analysis anyway to look at their “share” of the market. I get that this discussion shouldn’t lead to “easiest” v “hardest” board comparisons, and I do try to issue “health warnings” with this kind of stuff, but based on the response from certain quarters I do wonder if maybe it is tweets like this that get people all twitchy.

“Naturally, not all exam board specifications are the same. Our regulations allow for differences between specifications as long as the appropriate amount of stipulated curriculum is covered and the assessment is valid and is sufficiently challenging. These variations allow teachers to choose the specification that they would most enjoy teaching and which they believe would best suit their pupils’ needs.

Now, I’m not giving anything away by saying it is impossible for examiners to set exam papers at precisely the same level of difficulty each year, or in comparison with other boards in the same year. But these small fluctuations are accommodated by varying grade boundaries. So, in one year one board’s specification might be considered ‘hard’ and another ‘easy’, but flexing grade boundaries irons these differences away. And the reverse might be true the next year, and again the grade boundaries will adjust. These differences and the subsequent adjustments in grade boundaries are something we monitor each year in order to maintain standards.”

I keep banging on about this (and the paragraph below) whenever the grade boundaries are published after a sitting. Exam boards don’t set grade boundaries in isolation – at secondary, to some extent we can only ever achieve what the cohort achieved at key stage 2, so we are really only competing against each other and can NEVER make that much of a difference. The focus for national improvement must come through primary education in the first instance! Is it just me who thinks this is a no-brainer??

“Teachers and pupils alike are increasingly knowledgeable about the process of setting grade boundaries, often triggered by a discussion of a particularly challenging question, such as ‘Hannah’s sweets’ in 2015. In summary, we use statistical predictions based on the prior attainment of each boards’ entry (KS2 at GCSE and mean GCSE at A level) as a starting point. If the prior attainment of the pupils entering two exam boards’ exams were identical then we would expect that the results of the two exam boards would be identical.

But that’s not to say the exam boards (and Ofqual) don’t carefully consider other sources of evidence. The senior examiners responsible for setting grade boundaries at each board also scrutinise the quality of pupils’ work, the difficulty of the exam compared to previous years, and any feedback they have received on how the exam has performed.

Most awards are close to that predicted statistically, but exam boards can submit evidence to us to explain why it is right that their results do not match what was predicted. We very carefully analyse any evidence submitted and are often persuaded. When we are not persuaded we challenge exam boards so that each award has a bank of evidence to support it.

We also monitor the content of exam papers and challenge exam boards if, for example, we detect unacceptable levels of challenge or undue predictability of questions.  And we are also piloting new sources of evidence, such as the National Reference Test to support the maintenance of standards over time. These various processes mean we are confident that any variation in exam boards’ approaches to assessment are taken into account during the setting of grade boundaries.”

“Various” implies more than two “processes”, but I can only see two mentioned here. Pedantic, I know. And one of those two processes is still only at the trial stage – or are we assuming it is a “done deal”?

“For these reasons we believe that the search for the ‘easiest’ exam board is misguided. There may also be a tendency for teachers to think that certain styles of question are likely to be easier or harder than pupils actually find them when they sit the exam. For example, research shows that teachers tend to underestimate the difficulty of multiple choice questions, and overestimate the difficulty of more open ended questions. Teachers tend to overestimate the difficulty of wordy questions compared to how pupils actually perform on them. The inclusion of basic calculations in questions can also throw teachers’ judgements of difficulty. We’ll be publishing work on this shortly.”

So the “multiple-choice” comment is obviously aimed at AQA … and who is the “wordy” comment aimed at – Edexcel? OCR? WJEC? I can’t wait to see this work.

“In addition, we know that when teachers change exam board it is no small decision and they often change lots of other things at the same time. Therefore an improvement in results might also be misattributed to the adoption of an ‘easier’ specification, rather than these other changes. And how often is a consequent drop in results just not talked about?”

True. So very true – changing exam boards is no easy thing to consider. It probably won’t be the only thing you change, so consider carefully whether it is the change that will make the difference you’re looking for. Don’t be bloody-minded and change just because “it’s time for a change”, but equally, if you are looking to change, make sure you’re comfortable with the decision.

“So the next time you see or hear a debate about which board’s specification is ‘harder’ or ‘easier’ than the next, you can be sure that we’re thinking about it too. This leaves schools to decide their specification based on an individual teacher’s interests and their pupils’ needs, which must be a better basis for choosing an exam board.”

Individual teachers are not usually in a position to make this decision; it will be made either by the head of department or as a collective agreement within a team. Personally, the most important factor we’ll be thinking about when making the decision will be which board’s papers our students are more comfortable with – by that I mean which papers they can access “more of” in comparison to another board’s. HOWEVER, I believe that the more students are exposed to a certain style of question, the better they get at answering those questions, so it could be a “self-fulfilling prophecy” depending on which questions they see the most. I’m hoping that over time they build up a body of knowledge – a sort of frame of reference – so they can spot similarities with something they’ve seen before and get better at applying their knowledge.

By the way, lots of you will have noticed that I haven’t stated on the blog which board we’ll be using, and I don’t intend to until we’ve ACTUALLY made the decision – and even then, “if I told you I might have to ….”