Bumping this thread as I see some chatter on this topic has cropped up in the "Lovely, Happy, Joyful" (etc) thread that WG bestowed on us a while ago. Maybe here is a better place for the A-level topic, to avoid the Happy Thread ending up Three Cocks-bound.
TBH I'd just been watching this A-levelgate thing from afar, largely because we have no "skin in the game" - our offspring, nieces and nephews are well past school age, and neither Mrs VD nor I have ever been teachers. When the kerfuffle kicked off I thought: well, no one was going to be happy with any system where teacher-assessed grades in most cases get marked down. There's always squealing about grades every year anyway. I felt sure the Ofqual model would make sense once you got behind the media chatter (let's face it, few journalists are good with data, and this is classic silly season stuff).
But then, this morning, with a rush of blood to the keyboard, I actually went and found and downloaded (yep, I know) the Ofqual report on how they had wrangled the teacher assessments and historic data to get to this year's A-level results. I didn't read all 310 pages of the report, but enough to get the gist of their objectives and the methods they used.
It turns out that for the majority of cases (a 'case' here being an individual student/subject combination), Ofqual applied its very sophisticated statistical model to both the specific teacher grading and the within-class student rankings teachers had provided, and of course (given that teachers systematically and substantially over-grade, by varying amounts) this led to a downgrading of predicted grades in nearly all cases to which the model was applied.
BUT (as has been said on the telly etc) it turns out that for sub-samples of fewer than 15 students in a subject - both this year and in previous reference years - they just went, oh gosh, our model can't be statistically valid for such small sub-samples (well, quite true). SO, tell you what, we'll just use the teacher predictions as they are for those cases.
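In code terms, that carve-out amounts to something like the sketch below. The threshold of 15 is from the report; the function, names, and grade handling are my own invention for illustration, not Ofqual's actual implementation:

```python
# Sketch of the small-cohort carve-out as I read it in the report.
# The threshold of 15 is real; everything else here is illustrative.

SMALL_COHORT_THRESHOLD = 15

def awarded_grade(teacher_grade: str, moderated_grade: str, cohort_size: int) -> str:
    """Grade a student actually receives under the rule described above."""
    if cohort_size < SMALL_COHORT_THRESHOLD:
        # Model judged statistically unreliable for small cohorts, so the
        # (systematically optimistic) teacher assessment is used unchanged.
        return teacher_grade
    # Otherwise statistical moderation applies - usually a downgrade.
    return moderated_grade
```

So two students with identical teacher predictions can end up with different grades purely because of how many classmates took the subject.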
It seems not to have occurred to anyone, or at least isn't discussed anywhere in the 300+ pages of the report, that small subject cohorts (n<15) are highly likely to be found in smaller schools - in many cases independent ones - with pupils from already-advantaged backgrounds. And it's not a small group: nearly 20% of the cases in the data universe fell into that 'too small' category and reverted to teacher gradings, which systematically advantaged those students, usually by at least one grade.
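To see why that matters in aggregate, here's a toy simulation. All the numbers are invented for illustration; the only assumption carried over from the report's own findings is that teacher predictions run roughly one grade more generous than moderation would allow:

```python
# Toy model: grades indexed so that a LOWER index is a BETTER grade.
GRADES = ["A*", "A", "B", "C", "D", "E", "U"]

def mean_grades_gained(cohort_size: int, moderated_indices: list[int]) -> float:
    """Average grades gained versus the moderated outcome, under the carve-out.

    Illustrative assumption: every teacher prediction sits one grade above
    what statistical moderation would have awarded.
    """
    gained = 0
    for moderated in moderated_indices:
        teacher = max(0, moderated - 1)          # one grade more generous
        awarded = teacher if cohort_size < 15 else moderated
        gained += moderated - awarded            # grades gained vs moderation
    return gained / len(moderated_indices)

# The same set of moderated outcomes, in a class of 10 vs a class of 30:
scores = [2, 3, 3, 4, 2, 5, 3, 4, 2, 3]          # indices into GRADES
small_school_gain = mean_grades_gained(10, scores)   # keeps inflated grades
large_school_gain = mean_grades_gained(30, scores)   # moderated down
```

Under these toy assumptions the small-cohort students gain a full grade on average while the large-cohort students gain nothing - which is exactly the pattern the 20% figure above describes.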
So, Ofqual, you've just, quite deliberately, given a massive advantage in the university admissions game to the one-fifth of students who are most likely to be in the most advantaged quintile of society. A quite bizarre inability to see the socio-educational wood for the statistical trees.
D-minus in quantitative methods Ofqual and Dept of Ed. See me.