Grade calculation: the precise method of marking

Technical report gives insights into the Department of Education's Leaving Cert calculated grades process

A statistical model was created to estimate how a student with a particular profile of Junior Cert results was likely to perform in the Leaving Cert. Photograph: Dave Meehan

The exact process used to arrive at calculated grades has been shrouded in mystery – until now.

A 200-page report from a unit established by the Department of Education provides insights into how grades were compiled and adjusted to arrive at the set of results released on Monday morning.

The report – or, to give it its full title, the Report from the National Standardisation Group to the Independent Steering Committee and the Programme Board – shows how marks were adjusted on a subject-by-subject basis.

Teachers’ estimates

It started in schools: teachers were asked to estimate a percentage mark for each of their students and to rank them in order of their likely Leaving Cert attainment.

Teachers’ marks and rankings were then collated by the school and adjusted where necessary by the school principal and leadership team before a school-issued mark for each student was sent to the Department of Education.
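
By way of illustration only, the collation step can be sketched in code. The field names and the principal's override mechanism below are hypothetical stand-ins for the process the report describes, not the department's actual system.

```python
from dataclasses import dataclass

@dataclass
class Estimate:
    student_id: str
    subject: str
    teacher_mark: float   # teacher's estimated percentage mark
    rank_in_class: int    # teacher's ranking of likely Leaving Cert attainment

def collate_school_estimates(estimates, principal_adjustments=None):
    """Collate teacher estimates into school-issued marks.

    `principal_adjustments` is a hypothetical mapping of student_id to an
    adjusted mark, standing in for the review by the principal and
    leadership team described in the report.
    """
    principal_adjustments = principal_adjustments or {}
    school_issued = {}
    for est in estimates:
        mark = principal_adjustments.get(est.student_id, est.teacher_mark)
        school_issued[est.student_id] = {
            "subject": est.subject,
            "mark": mark,
            "rank": est.rank_in_class,
        }
    return school_issued
```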

Under the model, teacher estimates were regarded as the best source of data in the absence of a uniform range of pre-existing school-based data.

There was, however, a recognition that teacher judgments tend to over-estimate student performance. A standardisation process was designed to address differences between teachers and schools in making estimates.

This process recognised that while teachers and schools give high-quality information about individuals, statistics are good at giving information about groups. The idea was to combine the strengths of both.

Statistical standardisation

The predictions from schools were combined with Department of Education information relating to Junior Cycle or Junior Cert results.

First, a statistical model was created to estimate how a student with a particular profile of Junior Cert results was likely to perform in the Leaving Cert.
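
The report's exact model is not reproduced here, but the idea can be pictured as a simple regression fitted on historical cohorts. Everything in the sketch below – the invented data, the single Junior Cert "points" feature and the linear form – is an assumption for illustration only.

```python
import numpy as np

# Invented historical data: Junior Cert profile scores and the Leaving Cert
# marks those students eventually achieved. A real model would use richer
# subject-by-subject profiles.
rng = np.random.default_rng(0)
junior_points = rng.uniform(40, 100, 500)
leaving_marks = 0.9 * junior_points + rng.normal(0, 8, 500)

# Fit a straight line mapping Junior Cert attainment to expected Leaving Cert marks.
slope, intercept = np.polyfit(junior_points, leaving_marks, 1)

def expected_leaving_mark(jc_points: float) -> float:
    """Expected Leaving Cert mark for a given (hypothetical) Junior Cert score."""
    return slope * jc_points + intercept
```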

Second, the Junior Cert attainment of the class of 2020 was used, in aggregate form, at school level.

Research shows that while Junior Cert results are good predictors of Leaving Cert results, they are not strong enough by themselves to estimate an individual’s performance.

Officials decided no student’s calculated grades would be determined by how they performed in the Junior Cert. The marks and rankings from schools, and the aggregate Junior Cert results, were combined to produce calculated grades.
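
One way to picture the combination – purely as a sketch, with hypothetical inputs and mapping – is that the school's rank order of students is preserved while their marks are mapped onto a distribution of marks expected for that cohort, derived in aggregate from its Junior Cert attainment.

```python
import numpy as np

def calculated_marks(school_marks, expected_distribution):
    """Illustrative only: keep the school's rank order but draw the marks
    from a cohort-level expected distribution (one expected mark per student).
    """
    # Rank students by their school-issued mark, highest first.
    ranked = sorted(school_marks, key=school_marks.get, reverse=True)
    expected = np.sort(np.asarray(expected_distribution, dtype=float))[::-1]
    return {student: float(expected[i]) for i, student in enumerate(ranked)}

# Hypothetical example with three students.
school_marks = {"s1": 92, "s2": 78, "s3": 85}
expected = [88, 81, 70]   # invented cohort-level expectation
print(calculated_marks(school_marks, expected))
# {'s1': 88.0, 's3': 81.0, 's2': 70.0}
```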

While there had been plans to include schools’ historical performance in the standardisation process, these were dropped, partly due to concerns that they could penalise disadvantaged students. This measure had proved particularly controversial in the UK.

Fine adjustments

When teachers’ estimated marks were compared with the results that would normally be expected, there was clear evidence of overestimation at all points in the achievement spectrum; this was most pronounced at higher level.

In fact, even if the grading standards across all schools were in perfect alignment with each other, just over 60 per cent of higher-level grades in school estimates would need to be reduced by one grade if historic standards were adhered to.

In the end, the standardisation process in most cases split the difference between teachers’ estimated grades and historic standards. Just 17 per cent of grades were lowered, resulting in significant grade inflation.
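
"Splitting the difference" can be pictured as a weighted blend of the two sources. The equal weighting in the sketch below is an assumption for illustration, not the standardisation group's published methodology.

```python
def blended_mark(teacher_estimate: float, historic_standard: float,
                 weight: float = 0.5) -> float:
    """Illustrative blend of a teacher's estimated mark with the mark implied
    by historic grading standards. The 50/50 weighting is an assumption."""
    return weight * teacher_estimate + (1 - weight) * historic_standard

# A teacher estimate of 88 against a historic-standard mark of 80
# lands at 84 under an equal split.
print(blended_mark(88, 80))  # 84.0
```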