The Introduction to this series of blogs, HERE, sets out the background and goals.
There are many different senses in which people discuss ‘standards’. Sometimes they mean an overall judgement on the performance of the system as judged by an international test like PISA. Sometimes they mean judgements based on performance in official exams such as KS2 SATs (at 11) or GCSEs. Sometimes they mean the number of schools above or below a DfE ‘floor target’. Sometimes they mean the number of schools and/or pupils in Ofsted-defined categories. Sometimes people talk about ‘the quality of teachers’. Sometimes they mean ‘the standards required of pupils when they take certain exams’. Today, the media is asking ‘have Academies raised standards?’ because of the Select Committee Report (which, after a brief flick through, seems to have ignored most of the most interesting academic studies done on a randomised/pseudo-randomised basis).
This blog in the series is concerned mainly with two questions: what has happened to the standards required of pupils taking GCSEs and A Levels as a result of changes since the mid-1980s, and how do universities and learned societies judge the preparation of pupils for further studies? In short: have the exams got easier, and do universities and learned societies think pupils are well-prepared?
I will give a very short potted history of the introduction of GCSEs and the National Curriculum before examining the evidence of their effects. If you are not interested in the history, please skip to Section B on Evidence. If you just want my Conclusions, scroll to the short section at the end.
I stress that my goal is not to argue for a return to the pre-1988 system of O Levels and A Levels. While it had some advantages over the existing system, it also had profound problems. I think that an unknown fraction of the cohort could experience far larger improvements in learning than we see now if they were introduced to different materials in different ways, rather than either contemporary exams or their predecessors, but I will come to this argument, and why I have this belief, in a later blog.
I have used the word ‘Department’ to represent the DES of the 1980s, the DfE of post-2010, and its different manifestations in between.
This is just a rough first stab at collecting things I’ve shoved in boxes, emails etc over the past few years. Please leave corrections and additions in Comments.
A. A very potted history
Joseph introduces GCSEs – ‘a right old mess’
The debate over the whole of education policy, and particularly the curriculum and exams, changed a lot after Callaghan’s Ruskin speech in 1976 and the Department’s Yellow Book. Before then, the main argument was simply about providing school places and the furore over selection. After 1976 the emphasis shifted to ‘standards’ and there was growing momentum behind a National Curriculum (NC) of some sort and reforms to the exam system.
Between 1979-85, the Department chivvied LAs on the curriculum but had little power and nothing significant changed. Joseph was too much of a free marketeer to support a NC so its proponents could not make progress.
Joseph was persuaded to replace O Levels with GCSEs. He thought that the outcome would be higher standards for all, but he later complained that he had been hoodwinked by the bureaucratic process involving the Secondary Examinations Council (SEC):
‘I should have fought against flabbiness in general more than I did… I thought I did, but how do you reach into such a producer-oriented world? … “Stretching” was my favourite word; I judged that if you leant on that much else would follow. That’s what my officials encouraged me to imagine I was achieving… I said I’d only agree to unify the two examinations provided we established differentiation [which he defined as ‘you’re stretching the academic and you’re stretching the non-academic in appropriate ways’], and now I find that unconsciously I have allowed teacher assessment, to a greater extent than I assumed. My fault … my fault… it’s the job of ministers to see deeply… and therefore it’s flabby… You don’t find me defending either myself or the Conservative Party, but I reckon that we’ve all together made a right old mess of it. And it’s hurt most those who are most vulnerable.’ (Interview with Ball.)
I have not come across any other ministers or officials from this period so open about their errors.
The O Level survived under a different name as an international exam provided by Cambridge Assessment. It is still used abroad, including in Singapore, which regularly comes in the top three in all international tests. Cambridge Assessment also offers an ‘international GCSE’ that is, they say, tougher than the ‘old’ GCSE (i.e. the one in use until the reformed GCSEs arrive in 2015) but not as tough as the O Level. This international GCSE was used in some private schools pre-2010, along with ‘international GCSEs’ from other exam boards. From 2010, state schools could use iGCSEs. In 2014, the DfE announced that it would stop this. I blogged on this decision HERE.
Entangled interests – Baker and the National Curriculum
In 1986, Thatcher replaced Joseph with Baker hoping, she admitted, that he would make up ‘in presentational flair whatever he lacked in attention to detail’. He did not. Nigel Lawson wrote of Baker that ‘not even his greatest friends would describe him as a profound thinker or a man with mastery of detail’. Baker’s own PPS said that at the morning meeting ‘the main issue was media handling’. Jenny Bacon, the official responsible for The National Curriculum 5-16 (1987), said that Baker liked memos ‘in “ball points” … some snappy things with headings. It wasn’t glorious continuous prose…[Ulrich, a powerful DES official] was appalled but Baker said “That’s just the kind of brief I want”.’
Between 1976 and 1986, concern had grown in Whitehall about the large number of awful schools and widespread bad teaching. Various intellectual arguments, ideology, political interests (personal and party), and bureaucratic interests aligned to create a National Curriculum. Thatcherites thought it would undermine what they thought of as the ‘loony left’, then much in the news. Baker thought it would bring him glory. The Department and HMI rightly thought it would increase their power. After foolishly announcing CTCs at Party Conference, thus poisoning their brand with politics from the start, Baker announced he would create a NC and a testing system at 7, 11, and 14.
The different centres of power disagreed on what form the NC should take. HMI lobbied against traditional subjects and wanted a NC based on ‘areas of learning and experience’. Thatcher wanted a very limited core curriculum based on English, maths, and science. The Department wanted a NC that stretched across the whole curriculum. Baker agreed with the Department and dismissed Thatcher’s limited option as ‘Gradgrind’.
In order to con Thatcher into agreeing his scheme, Baker worked with officials to invent a fake distinction between ‘core’ and ‘foundation’ subjects. As Baker’s Permanent Secretary Hancock said, ‘We devised the notion of the core and the foundation subjects but if you examine the Act you will see that there is no difference between the two. This was a totally cynical and deliberate manoeuvre on Kenneth Baker’s part.’
The 1988 Act established two quangos to be what Baker called ‘the twin guardians of the curriculum’ – The National Curriculum Council (NCC), focused on the NC, and The Schools Examinations and Assessment Council (SEAC), focused on tests. Once the Act was passed, Baker’s junior minister Rumbold said that ‘Ken went out to lunch.’ Like many ministers, he did not understand the importance of the policy detail and the intricate issues of implementation. He allowed officials to control appointments to the two vital committees and various curriculum working groups. Even Baker’s own spad later said that Baker was conned into appointing ‘the very ones responsible for the failures we have been trying to put right’. Baker later admitted forlornly that ‘I thought you could produce a curriculum without bloodshed. Then people marched over mathematics. Great armies were assembled’, and that he ‘never envisaged it would be as complex as it turned out to be’. Bacon, the official responsible for the NC, said that Baker ‘wasn’t interested in the nitty gritty’. Nicholas Tate (who was at the NCC and later headed the QCA) said that Baker was ‘affable but remote. He didn’t trouble his mind with attainment targets. He was resting on his laurels.’ Hancock, his Permanent Secretary, said that ‘after 1987 he became increasingly arrogant and impatient’. In 1989, Baker was moved to Party Chairman, leaving behind chaos for his successor.
According to his colleagues, Baker was obsessed with the media, he did not try to understand (and did not have the training to understand) the policy issues in detail, and he confused the showmanship necessary to get a bill passed with serious management – he described himself as ‘a doer’ but the ‘doing’ in his mind consisted of legislation and spin. He did not even understand that there were strong disputes among teachers, subject bodies, and educationalists about the content of the NC – never mind what to do about these disputes. (Having watched the UTC programme from the DfE, the same traits were much in evidence thirty years later.)
Baker’s legacy 1989 – 1997: Shambles
Baker’s memoirs do not mention the report of the Task Group on Assessment and Testing (TGAT), chaired by Professor Paul Black, which Baker commissioned in 1987 to report on how the NC could be assessed. The plan was very complicated, with ten levels of attainment having to be defined for each subject. Thatcher hated it and criticised Baker for accepting it. Meanwhile the Higginson Report had recommended replacing A Levels with some sort of IB-type system. Bacon said that ‘the political trade-off was Higginson got ditched … and we got TGAT. In retrospect it may have been the wrong trade off.’
MacGregor could not get a grip on the complexity. He did not even hire a specialist policy adviser because, he said, ‘I didn’t feel I needed one.’ He blamed Baker for the chaos: Baker, he said, ‘hadn’t spent enough time thinking about who was appointed to the bodies. He left it to officials and didn’t think through what he wanted the bodies to do. For the first year I was unable to replace anybody.’ The chairman of the NCC described how they used ‘magic words to appease the right’ and get through what they wanted. The officials who controlled SEAC stopped the simplification that Thatcher wanted by playing the ‘legal advice’ card, claiming that the 1988 Act required testing of all attainment targets. (I had to deal with the same argument 25 years later.) MacGregor was trapped. He had an unworkable system and was under contradictory pressure from Thatcher to simplify everything and from Baker to maintain what he had promised.
Clarke bluffed and bullied his way through 18 months without solving the problems. His Permanent Secretary described the trick of getting Clarke to do what officials wanted: ‘The trick was to never box him into a corner… Show him where there was a door but never look at that door, and never let on you noticed when he walked through.’ Like MacGregor, Clarke blamed Baker for the shambles: ‘[Baker] had set up all these bloody specialist committees to guide the curriculum, he’d set up quango staff who as far as I could see had come out of the Inner London Education Authority the lot of them.’ Clarke solved none of the main problems with the tests, antagonised everybody, and replaced HMI with Ofsted.
After his surprise win, Major told the Tory Conference in 1992, ‘Yes it will mean another colossal row with the education establishment. I look forward to that.’ Patten soon imploded, the unions went for the jugular over the introduction of SATs, and by the end of 1993 Number Ten had backtracked on their bellicose spin and was in full retreat with a review by Dearing (published 1994). Suddenly, the legal advice that had supposedly prevented any simplification was rethought and officials told Dearing that the legal advice did allow simplification after all: ‘our advice is that the primary legislation allows a significant measure of flexibility’. (In my experience, one of the constants of Whitehall is that legal advice tends to shift according to what powerful officials want.) Dearing produced a classic Whitehall fudge that got everybody out of the immediate crisis but did not even try to deal with the fundamental problems, thus pushing the problems into the future.
The historian Robert Skidelsky, helping SEAC, told Patten ‘these tests will not run’ and he should change course but Patten shouted ‘That is defeatist talk.’ Skidelsky decided to work out a radically simpler model than the TGAT system with a small group in SEAC: ‘We pushed the model through committee and through the Council and sent it off to John Patten. We never received a reply. Six months after I resigned Emily Blatch approached me and said she had been looking for my paper on Assessment but no one seems to know where it is.’
Patten was finished. Gillian Shephard was put in to be friendly to the unions and calm the chaos. Soon she and Major had also fallen out, and the cycle of briefing and counter-briefing against Number Ten returned with permanent policy chaos. One of her senior officials, Clive Saville, concluded that ‘There was a great intellectual superficiality about Gillian Shephard and she was as intellectually dishonest as Shirley Williams. She was someone who wanted to be liked but wasn’t up to the job.’
A few thoughts on the process
The Government had introduced a new NC and test system and replaced O Levels with GCSEs. (They also introduced new vocational qualifications (NVQs) described by Professor Alan Smithers as a ‘disaster of epic proportions … utterly lightweight’.) The process was a disastrous bungle from start to finish.
Thatcher deserves considerable blame. She allowed Baker to go ahead with fundamental reforms without any agreed aims or a detailed roadmap. She knew, as did Lawson, that Baker could not cope with details yet appointed him on the basis of ‘presentational flair’ (media obsession is often confused with ‘presentational flair’).
The best book I have read by someone who has worked in Number Ten and seen why the Whitehall architecture is dysfunctional is John Hoskyns’ Just In Time. Extremely unusually for someone in a senior position in No10, Hoskyns both had an intellectual understanding of complex systems and was a successful manager. Inevitably, he was appalled at how the most important decisions were made and left Number Ten after failing to persuade Thatcher to tear up the civil service system. Since then, everybody in Number Ten has been struggling with the same issues. (If she had taken his advice history might have been extremely different – e.g. no ERM debacle.) His conclusion on Thatcher was:
‘The conclusion that I am coming to is that the way in which [Thatcher] herself operates, the way her fire is at present consumed, the lack of a methodical mode of working and the similar lack of orderly discussion and communication on key issues, means that our chance of implementing a carefully worked out strategy – both policy and communications – is very low indeed… Difficult problems are only solved – if they can be solved at all – by people who desperately want to solve them… I am convinced that the people and the organisation are wrong.’ (Emphasis added.)
Arguably the person who knowingly appoints someone like Baker is more to blame for the failings of Baker than Baker is himself. Major and the string of ministers that followed Baker were doomed. They were not unusually bad – they were representative examples of those at the apex of the political process. They did not know how to go about deciding aims, means, and operations. They were obsessed with media management and therefore continually botched the policy and implementation. They could not control their officials. They could not agree a plan and blamed each other. If they were the sort of people who could have got out of the mess, then they were the sort of people who would not have got into the mess in the first place.
Officials over-complicated everything and, like ministers, did not engage seriously with the core issue – what should pupils of different abilities be doing, and how can we establish a process for collecting reliable information? The process was dominated by the same attitude on all sides – how to impose a mentality already fixed.
It was also clearly affected by another element that has contemporary relevance – the constant churn of people. Just between summer 1989 and the end of 1992, there was: a new Permanent Secretary in May 1989, a new SoS in July 1989 (MacGregor), another new SoS in November 1990 (Clarke), a new PM and No10 team (Major), new heads for the NCC and SEAC in July 1991, then another new SoS in spring 1992 (Patten) and another new Permanent Secretary. Everybody blamed problems on predecessors and nobody could establish a consistent path.
Even its own Permanent Secretaries later attacked the DES. James Hamilton (1976-1983) was put into DES in June 1976 from the Cabinet Office to help with the Ruskin agenda and found a place where ‘when something was proposed someone would inevitably say, “Oh we tried that back in whenever and it didn’t work”…’. Geoffrey Holland (1992-3) admitted that, ‘It [DES] simply had no idea of how to get anything off the ground. It was lacking in any understanding or experience of actually making things happen.’
A central irony of the story shows how dysfunctional the system was. Thatcher never wanted a big NC and a complicated testing system but she got one. As some of her ideological opponents in the bureaucracy tried to simplify things when it was clear Baker’s original structure was a disaster, ministers were often fighting with them to preserve a complex system that could not work and which Thatcher had never wanted. This sums up the basic problem – a very disruptive process was embarked upon without the main players agreeing what the goal was.
Although the think tanks were much more influential in this period than they are now, Ferdinand Mount, head of Thatcher’s Policy Unit, made a telling point about their limitations: ‘Enthusiasts for reform at the IEA and the CPS were prodigal with committees and pamphlets but were much less helpful when it came to providing practical options for action. This made it difficult for the Policy Unit’s ideas to overcome the objections put forward by senior officials’. Thirty years later this remains true. Think tanks put out reports but they rarely provide a detailed roadmap that could help people navigate such reforms through the bureaucracy, and few people in think tanks really understand how Whitehall works. This greatly limits their real influence. It is connected to a wider point. Few of those who comment prominently on education (or other) policy understand how Whitehall works, so there is a huge gap between discussions of ideal policy and what is actually possible within a given timeframe in the existing system. Commentators also assume that all sorts of things that happen do so because ministers wished them, which confuses public debate further.
I won’t go into the post-1997 story. There are various books that tell this whole story in detail. The National Curriculum remained but was altered; the test system remained but gradually narrowed from the original vision; there were some attempts at another major transformation (such as Tomlinson’s attempt to end A Levels, thwarted by Blair) but none took off; money poured into the school system and its accompanying bureaucracy at an unprecedented rate but, other than a large growth in the number and salaries of everybody, it remained unclear what if any progress was being made.
This bureaucracy spent a great deal of taxpayers’ money promoting concepts such as ‘learning styles’ and ‘multiple intelligences’ that have no proper scientific basis but which nevertheless were successfully blended with old ideas from Vygotsky and Piaget to dominate a great deal of teacher training. A lot of people in the education world got paid an awful lot of money (Hargreaves, Waters et al) but what happened to standards?
(The quotes above are taken mainly from Daniel Callaghan’s Conservative Party Education Policies 1976-1997.)
B. The cascading effects of GCSEs and the National Curriculum
Below I consider 1) the data on grade inflation in GCSEs and A Levels, 2) various studies from learned societies and others that throw light on the issue, 3) knock-on effects in universities.
1. Data on grade inflation in GCSEs and A Levels
We do not have an official benchmark against which to compare GCSE results. The picture is therefore necessarily hazy. As Coe has written, ‘we are limited by the fact that in England there has been no systematic, rigorous collection of high-quality data on attainment that could answer the question about systemic changes in standards.’ This is one of the reasons why in 2013 we, supported by Coe and others, pushed through (against considerable opposition including academics at the Institute of Education) a new ‘national reference test’ in English and maths at age 16, which I will return to in a later blog.
However, we can compare the improvement in GCSE results with a) results from international tests and b) consistent domestic tests uncontrolled by Whitehall.
The first two graphs below show the results of this comparison.
Chart 1: Comparison of English performance in international surveys versus GCSE scores 1995-2012 (Coe)
Chart 2: GCSE grades achieved by candidates with same maths & vocab scores each year 1996-2012 (Coe)
Professor Coe writes of Chart 1:
‘When GCSE was introduced in 1987 [I think he must mean 1988 as that was the first year of GCSEs or else he means ‘the year before GCSEs were first taken’], 26.4% of the cohort achieved five grade Cs or better. By 2012 the proportion had risen to 81.1%. This increase is equivalent to a standardised effect size of 1.63, or 163 points on the PISA scale… If we limit the period to 1995 – 2011 [as in Chart 1 above] the rise (from 44% to 80% 5A*-C) is equivalent to 99 points on the PISA scale [as superimposed on Chart 1]… [T]he two sets of data [international and GCSEs] tell stories that are not remotely compatible. Even half the improvement that is entailed in the rise in GCSE performance would have lifted England from being an average performing OECD country to being comfortably the best in the world. To have doubled that rise in 16 years is just not believable…
‘The question, therefore, is not whether there has been grade inflation, but how much…’ [Emphasis added.] (Professor Robert Coe, ‘Improving education: a triumph of hope over experience‘, 18 June 2013, p. vi.)
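Coe’s PISA-scale figures can be reproduced with a back-of-envelope probit calculation: treat attainment as normally distributed, ask how far the distribution must shift (in standard deviations) to move the pass rate from the earlier to the later proportion, and multiply by the PISA standard deviation of roughly 100 points. This is an illustrative sketch of that arithmetic, not Coe’s actual method or code:

```python
from statistics import NormalDist

def pisa_equivalent(p_before: float, p_after: float, pisa_sd: float = 100) -> float:
    """Convert a rise in the proportion clearing a fixed threshold into an
    equivalent shift on the PISA scale, assuming normally distributed
    attainment and a PISA standard deviation of ~100 points."""
    nd = NormalDist()
    # Effect size = shift (in SDs) of the attainment distribution needed
    # to move the pass rate from p_before to p_after.
    effect_size = nd.inv_cdf(p_after) - nd.inv_cdf(p_before)
    return effect_size * pisa_sd

# 1995-2011: 5 A*-C rose from 44% to 80%.
print(round(pisa_equivalent(0.44, 0.80)))  # prints 99
```

The 44% → 80% rise comes out at roughly 99 PISA points, matching the figure Coe superimposes on Chart 1.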
Chart 2 plots the improving GCSE grades achieved by pupils scoring the same each year in a test of maths and vocabulary: pupils scoring the same on YELLIS get higher and higher GCSE grades as time passes. Coe concludes that although ‘it is not straightforward to interpret the rise in grades … as grade inflation’, the YELLIS data ‘does suggest that whatever improved grades may indicate, they do not correspond with improved performance in a fixed test of maths and vocabulary’ (Coe, ibid).
This YELLIS comparison suggests that in 2012 pupils received a grade higher in maths, history, and French GCSE, and almost a grade higher in English, than students of the same ability in 1996.
It is important to note that neither of Coe’s charts includes the effects of a) the initial switch from O Level to GCSE or b) the changes to GCSEs between 1988 and 1995.
The next two charts show this earlier part of the story (both come from Education: Historical statistics, House of Commons, November 2012). NB. they have different end dates.
Chart 3: Proportion getting 5 O Levels / GCSEs at grade C or higher 1953/4 – 2008/9
Chart 4: Proportion getting 1+ or 3+ passes at A Level 1953/4 – 1998/9
Chart 3 shows that the period 1988-95 saw an even sharper increase in GCSE scores than post-1995 so a GCSE/YELLIS style comparison that included the years 1988-1995 would make the picture even more dramatic.
Chart 4 shows a dramatic increase in A Level passes after the introduction of GCSEs. One interpretation of this graph, supported by the 1997-2010 Government and teaching unions, is that this increase reflected large real improvements in school standards.
There is GCSE data that those who believe this argument could cite. In 1988, 8% of GCSE entries were awarded an A; in 2011, 23% were awarded an A or A*. The DfE published data in 2013 showing that the number of pupils with ten or more A* grades trebled between 2002 and 2012. This implies a very large increase in the numbers excelling at GCSE, which is consistent with a picture of a positive knock-on effect on A Level results.
However, we have already seen that the claims for GCSEs are ‘not believable’ in Coe’s words. It also seems prima facie very unlikely that a sudden large improvement in A Level results from 1990 could be the result of immediate improvements in learning driven by GCSEs. There is also evidence for A Levels similar to the GCSE/YELLIS comparison.
Chart 5: A level grades of candidates having the same TDA score (1988-2006)
Chart 5 plots A Level grades in different subjects against the international TDA test. As with GCSEs, this shows that pupils scoring the same in a non-government test got increasingly higher grades at A Level. The change in maths is particularly dramatic: from an ‘Unclassified’ mark in 1988 to a B/C in 2006.
What we know about GCSEs, combined with this information, makes it very hard to believe that the sudden dramatic increase in A Level performance since 1990 reflects real improvements, and it suggests another interpretation: these dramatic increases in A Level results reflected (mostly or entirely) A Levels being made significantly easier, probably to compensate for GCSEs being much easier.
However, the data above can only tell part of the story. Logically, it is hard or impossible to distinguish between possible causes just from these sorts of comparisons. For example, perhaps someone might claim that A Level questions remained as challenging as before but grade boundaries moved – i.e. the exam papers were the same but the marking was easier. I think this is prima facie unlikely but the point is that logically the data above cannot distinguish between various possible dynamics.
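The indistinguishability point can be made concrete with a toy simulation (the numbers are hypothetical, not real exam data): a paper made 8 marks easier with boundaries held fixed, and an unchanged paper with the boundary lowered by 8 marks, produce exactly the same observed grade distribution, so outcome data alone cannot separate the two mechanisms:

```python
import random

random.seed(42)
# Latent 'true attainment' of a cohort -- illustrative numbers only.
marks = [random.gauss(50, 10) for _ in range(10_000)]

GRADE_C_BOUNDARY = 60

# Mechanism A: the paper gets easier, so everyone scores ~8 marks more,
# while the grade boundary stays put.
share_a = sum(m + 8 >= GRADE_C_BOUNDARY for m in marks) / len(marks)

# Mechanism B: the paper is unchanged but the boundary drops by 8 marks.
share_b = sum(m >= GRADE_C_BOUNDARY - 8 for m in marks) / len(marks)

# The two mechanisms yield identical pass rates, so observed grade
# distributions cannot tell them apart.
print(share_a == share_b)  # True
```

The equality is exact because `m + 8 >= 60` and `m >= 52` are the same condition; only evidence about the papers and mark schemes themselves, of the kind collected below, can distinguish the mechanisms.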
Below is a collection of studies, reports, and comments from experts that I have accumulated over the past few years that throws light on which interpretation is more reasonable. Please add others in Comments.
(NB. David Spiegelhalter, a Professor of Statistics at Cambridge, has written about problems with PISA’s use of statistics. These arguments are technical. To a non-specialist like me, he seems to make important points that PISA must answer to retain credibility and the fact that it has not (as of the last time I spoke to DS in summer 2014) is a blot on its copybook. However, I do not think they materially affect the discussion above. Other international tests conducted on different bases all tell roughly the same story. I will ask DS if he thinks his arguments do undermine the story above and post his reply if any.)
2. Studies 2007 – now
NB1. Most of these studies are comparing changes over the past decade or so, not the period since the introduction of the NC and GCSEs in the 1980s.
NB2. I will reserve detailed discussion of the AS/A2/decoupling argument for a later blog as it fits better in the ‘post-2010 reforms’ section.
Learned societies. The Royal Society’s 2011 study of Science GCSEs found: ‘the question types used provided insufficient opportunity for more able candidates … to demonstrate the extent of their scientific knowledge, understanding and skills. The question types restricted the range of responses that candidates could provide. There was little or no scope for them to demonstrate various aspects of the Assessment Objectives and grade descriptions… [T]he use of mathematics in science was examined in a very limited way.’ SCORE also published (2012) evidence on science GCSEs which reported ‘a wide variation in the amount of mathematics assessed across awarding organisations and confirmed that the use of mathematics within the context of science was examined in a very limited way. SCORE organisations felt that this was unacceptable.’
The 2012 SCORE report and Nuffield Report showed serious problems with the mathematical content of A Levels. SCORE was very critical:
‘For biology, chemistry and physics, it was felt there were underpinning areas of mathematics missing from the requirements and that their exclusion meant students were not adequately prepared for progression in that subject. For example, for physics many of the respondents highlighted the absence of calculus, differentiation and integration, in chemistry the absence of calculus and in biology, converting between different units… For biology, chemistry and physics, the analysis showed that the mathematical requirements that were assessed concentrated on a small number of areas (e.g. numerical manipulation) while many other areas were assessed in a limited way, or not at all… Survey respondents were asked to identify content areas from the mathematical requirements that should feature highly in assessments. In most cases, the biology, chemistry and physics respondents identified mathematical content areas that were hardly or not at all assessed by the awarding organisations.
‘[T]he inclusion of more in-depth problem solving would allow students to apply their knowledge and understanding in unstructured problems and would increase their fluency in mathematics within a science context.’
‘The current mathematical assessments in science A-levels do not accurately reflect the mathematical requirements of the sciences. The findings show that a large number of mathematical requirements listed in the biology, chemistry and physics specifications are assessed in a limited way or not at all within these papers. The mathematical requirements that are assessed are covered repeatedly and often at a lower level of difficulty than required for progression into higher education and employment. It has also highlighted a disparity between awarding organisations in their assessment of the use of mathematics within biology, chemistry and physics A-level. This is unacceptable and the examination system, regardless of the number of awarding organisations, must ensure the assessments provide an authentic representation of the subject and equip all students with the necessary skills to progress in the sciences.
‘This is likely to have an impact on the way that the subjects are taught and therefore on students’ ability to progress effectively to STEM higher education and employment.’ SCORE, 2012. Emphasis added.
The 2011 Institute of Physics report showed strong criticism from university academics of the state of physics and engineering undergraduates’ mathematical knowledge. Four-fifths of academics said that university courses had been changed to deal with a lack of mathematical fluency, and 92% said that this lack of fluency was a major obstacle.
‘The responses focused around mathematical content having to be diluted, or introduced more slowly, which subsequently impacts on both the depth of understanding of students, and the amount of material/topics that can be covered throughout the course…
‘Academics perceived a lack of crossover between mathematics and physics at A-level, which was felt to not only leave students unprepared for the amount of mathematics in physics, but also led to them not applying their mathematical knowledge to their learning of physics and engineering.’ IOP, 2011.
The 2011 Centre for Bioscience report criticised Biology and Chemistry A Levels and the preparation of pupils for bioscience degrees: ‘very many lack even the basics… [M]any students do not begin to attempt quantitative problems and this applies equally to those with A level maths as it does to those with C at GCSE. A lack of mathematics content in A level Biology means that students do not expect to encounter maths at undergraduate level. There needs to be a more significant mathematical component in A level biology and chemistry.’ The Royal Society of Chemistry report, The five decade challenge (2008), said there had been ‘catastrophic slippage in school science standards’ and that Government claims about improving GCSE scores were ‘an illusion’. (The Department said of the RSC report, ‘Standards in science have improved year on year thanks to 10 years of sustained investment and improvement in teaching and the education system – this is something we should celebrate, not criticise. Times have changed.’)
Ofqual, 2012. Ofqual’s Standards Review in 2012 found grade inflation in both GCSE and A-levels between 2001-03 and 2008-10: ‘Many of these reviews raise concerns about the maintenance of standards… In the GCSEs we reviewed (biology, chemistry and mathematics) we found that changes to the structure of the assessments, rather than changes to the content, reduced the demand of some qualifications.’
On A-levels, ‘In general we found that changes to the way the content was assessed had an impact on demand, in many cases reducing it. In two of the reviews (biology and chemistry) the specifications were the same for both years. We found that the demand in 2008 was lower than in 2003, usually because the structure of the assessments had changed. Often there were more short answer, structured questions’ (Ofqual, Standards Reviews – A Summary, 1 May 2012, found here).
The Chief Executive of Ofqual, Glenys Stacey, has said: ‘If you look at the history, we have seen persistent grade inflation for these key qualifications for at least a decade… The grade inflation we have seen is virtually impossible to justify and it has done more than anything, in my view, to undermine confidence in the value of those qualifications’ (Sunday Telegraph, 28 April 2012).
The OECD’s International Survey of Adult Skills (October 2013). This assessed the numeracy, literacy and computing skills of 16-24-year-olds. The tests were taken over 2011/2012. England was 22nd out of 24 for literacy, 21st out of 24 for numeracy, and 16th out of 20 for ‘problem solving in a technology-rich environment’.
PISA 2012. The normal school PISA tests taken in 2012 (reported 2013) showed no significant change between 2009 and 2012. England was 21st for science, 23rd for reading, and 26th for mathematics. A 2011 OECD report concluded: ‘Official test scores and grades in England show systematically and significantly better performance than international and independent tests… [Official results] show significant increases in quality over time, while the measures based on cognitive tests not used for grading show declines or minimal improvements’ (OECD Economic Surveys: United Kingdom, 16 March 2011, pp. 88-89). This interesting chart shows that in the PISA maths test the children of English professionals perform the same as children of Singapore cleaners (Do parents’ occupations have an impact on student performance?, PISA 2014).
Chart 6: Pupil maths scores by parent occupation, UK (left) and Singapore (right) (PISA 2012)
TIMSS/PIRLS. The TIMSS/PIRLS tests (taken summer 2011, reported December 2012) told a similar story to PISA. England’s score in reading at age 10 increased by a statistically significant amount since 2006. England’s score in science at age 10 decreased by a statistically significant amount since 2007. England’s scores in science at age 14 and in mathematics at ages 10 and 14 showed no statistically significant changes since 2007. (According to experts, the PISA maths test relies more on language comprehension than TIMSS does, which is supposedly why Finland scores higher in the former than the latter.)
National Numeracy (February 2012). Research showed that in 2011 only a fifth of the adult population had mathematical skills equivalent to a ‘C’ in GCSE, down a few percent from the last survey in 2003. About half of 16-65 year olds have at best the mathematical skills of an 11 year-old. A fifth of adults will struggle with understanding price labels on food and half ‘may not be able to check the pay and deductions on a wage slip.’
King’s College, 2009. A major study by academics from King’s College London and Durham University found that basic skills in maths have declined since the 1970s. In 2008, less than a fifth of 14 year-olds could write 11/10 as a decimal. In the early 1980s, only 22 per cent of pupils obtained a GCE O-level grade C or above in maths. In 2008, over 55 per cent gained a GCSE grade C or above in the subject (King’s College London/University of Durham, ‘Secondary students’ understanding of mathematics 30 years on‘, 5 September 2009, found here).
Chart 7: Performance on ICCAMS / CSMS Maths tests showing declines over time
Shayer et al (2007) found that performance in a test of basic scientific concepts fell significantly between 1976 and 2003. ‘[A]lthough both boys and girls have shown great drops in performance, the relative drop is greater for boys… It makes it difficult to believe in the validity of the year on year improvements reported nationally on Key Stage 3 NCTs in science and mathematics: if children are entering secondary from primary school less and less equipped with the necessary mental conditions for processing science and mathematics concepts it seems unlikely that the next 2.5 years KS3 teaching will have improved so much as more than to compensate for what students of today lack in comparison with 1976.’
Chart 8: Performance on tests of scientific concepts, 1976 – 2003 (Shayer)
Tymms (2007) reviewed assessment evidence in mathematics from children at the end of primary school between 1978 and 2004 and in reading between 1948 and 2004. The conclusion was that standards in both subjects ‘have remained fairly constant’.
Warner (2013) on physics. Professor Mark Warner (Cambridge University) produced a fascinating report (2013) on problems with GCSE and A Level Physics, comparing the papers to old O Levels, A Levels, ‘S’ Level papers, Oxbridge entrance exams, international exams and so on. After reading it, one is left in no doubt: the standards demanded in GCSEs and A Levels have fallen very significantly.
‘[In modern papers] small steps are spelt out so that not more than one thing needs to be addressed before the candidate is set firmly on the right path again. Nearly all effort is spent injecting numbers into formulae that at most require GCSE-level rearrangements… All diagrams are provided… 1986 O-level … [is] certainly more difficult than the AS sample… 1988 A-level … [is] harder than most Cambridge entrance questions currently… 1983 Common Entrance [is] remarkably demanding for this age group, approaching the challenge of current AS… There is a staggering difference in the demands put on candidates… Exams [from the 1980s] much lower down the school system are in effect more difficult than exams given now in the penultimate years [i.e. AS].’
For example, the mechanics problems in GCSE Physics are substantially shallower than those in 1980s O Level, which examined concepts now in A Level. The removal of calculus from A Level physics badly undermined it. Calculus is tested in A Level Maths’ Mechanics I paper and Mechanics II and III test deeper material than Physics A Level. This is one of the reasons why Cambridge Physics department stopped requiring Physics A Level for entry and made clear that Further Maths A Level is acceptable instead (many say it is better preparation for university than physics A Level is).
Warner also makes the point that making Physics GCSE and A Level much easier did not even increase the number taking physics degrees, which has declined sharply since the mid-1980s. He concludes: ‘one could again aim for a school system to get a sizable fraction of pupils to manage exams of these [older] standards. Children are not intrinsically unable to attack such problems.’ (NB. The version of this report on the web is not the full version – I would urge those interested to email Professor Warner.)
Gowers (2012) on maths. Tim Gowers, Cambridge professor and Fields Medallist, described some problems with Maths A Level and concluded:
‘The general point here is of course that A-levels have got easier [emphasis added] and schools have a natural tendency to teach to the test. If just one of those were true, it would be far less of a problem. I would have nothing against an easy A-level if people who were clever enough were given a much deeper understanding than the exam strictly required (though as I’ve argued above, for many people teaching to the test is misguided even on its own terms, since they will do a lot better on the exam if they have not been confined to what’s on the test), and I would not be too against teaching to the test if the test was hard enough…
‘[S]ome exams, such as GCSE maths, are very very easy for some people, such as anybody who ends up reading mathematics at Cambridge (but not just those people by any means). I therefore think that the way to teach people in top sets at schools is not to work towards those exams but just to teach them maths at the pace they can manage.’
Durham University analysis gives data to quantify this conclusion. Pupils who would have received a U (unclassified) in Maths A-Level in 1988 received a B/C in 2006 – see Chart 5 above showing this (CEM Centre, Durham University, Changes in standards at GCSE and A-Level: Evidence from ALIS and YELLIS, April 2007). Further Maths A Level is supposedly the toughest A Level, and probably it is, but a) it is not the same as its 1980s ancestor and b) it now introduces pupils to material, such as matrices, that used to be taught in good prep schools.
I spent a lot of time between 2007 and 2014 talking to maths dons, including heads of department, across England. The reason I quote Gowers is that I never heard anybody dispute his conclusion, but he was almost the only one who would say it publicly. I heard essentially the same litany about A Level maths from everybody I spoke to: there were differences of emphasis, but nobody disputed these basic propositions. 1) The questions became much more structured, so pupils are led up a scaffolding with less requirement for independent problem-solving. 2) The emphasis moved to memorising some basic techniques, the choice of which is clearly signalled in the question. 3) The modular system a) encouraged a ‘memorise, regurgitate, forget’ mentality and b) undermined learning about how different topics connect across maths, both of which are bad preparation for further studies. (There are also some advantages to a modular system that I will return to.) 4) Many undergraduates, including even those in the top 5% at such prestigious universities as Imperial, therefore now struggle in their first year because A Level has not prepared them for the sort of problems they are given in undergraduate study. (The maths department at Imperial became so sick of A Level’s failings that it recently sought and got approval to buy Oxford’s entrance exam for use in its admissions system.)
I will not go into arguments about vocational qualifications here but note the conclusion of Alison Wolf whose 2011 report on this was not disputed by any of the three main parties:
‘The staple offer for between a quarter and a third of the post- 16 cohort is a diet of low-level vocational qualifications, most of which have little to no labour market value.’
3. Knock-on effects in universities
Serious lack of maths skills
There are many serious problems with maths skills. Part of the reason is that many universities do not even demand A Level maths. The result? As of about 2010-12, about 20% of Engineering undergraduates, about 40% of Chemistry and Economics undergraduates, and about 60-70% of Biology and Computer Science undergraduates did not have A Level Maths. Less than 10% of undergraduate bioscience degree courses demand A Level Maths; therefore ‘problems with basic numeracy are evident and this reflects the fact that many students have grades less than A at GCSE Maths. These students are unlikely to be able to carry out many of the basic mathematical approaches, for example unable to manipulate scientific notation with negative powers so commonly used in biology’ (2011 Biosciences report). (I think that history undergraduates should be able to manipulate scientific notation with negative powers – this is one of the many things that should be standard for reasonably able people.)
The Royal Society estimated (Mathematical Needs, 2012) that about 300,000 students per year need a post-GCSE Maths course but only ~100,000 take one. (This may change thanks to Core Maths starting in 2015, see later blog.) A House of Lords report on Higher Education in STEM subjects concluded: ‘We are concerned that … the level at which the subject [maths] is taught does not meet the requirements needed to study STEM subjects at undergraduate level… [W]e urge HEIs to introduce more demanding maths requirement for admissions into STEM courses as the lack, or low level, of maths requirements at entry acts as a disincentive for pupils to study maths and high level maths at A level’ (House of Lords Select Committee on Science and Technology, Higher Education in STEM subjects, 2012).
Further, though this subject is beyond the scope of this blog, it is also important that the maths PhD pipeline ‘which was already badly malfunctioning has been seriously damaged by EPSRC decisions’, including withdrawal of funding from non-statistics subjects which drew the ire of UK Fields Medallists, cf. Submission by the Council for the Mathematical Sciences to the House of Lords, 2011. The weaknesses in biology also feed into the bioscience pipeline: only six percent of bioscience academics think their graduates are well prepared for a masters in the fast-growing field of Computational Biology (p.8 of report).
Closing of language departments, decline of language skills
I have not found official stats for this but according to research done for the Guardian (with FOIs):
‘The number of universities offering degrees in the worst affected subject, German, has halved over the past 15 years. There are 40% fewer institutions where it is possible to study French on its own or with another language, while Italian is down 23% and Spanish is down 22%.’
As Katrin Kohl, professor of German at Jesus College (Oxford) has said, ‘The UK has in recent years been systematically squandering its already poor linguistic resources.’ Dawn Marley, senior lecturer in French at the University of Surrey, summarised problems across languages:
‘We regularly see high-achieving A-level students who have only a minimal knowledge of the country or countries where the language of study is spoken, or who have limited understanding of how the language works. Students often have little knowledge of key elements in a country’s history – such as the French Revolution, or the fact that France is a republic. They also continue to struggle with grammatical accuracy, and use English structures when writing in the language they are studying… The proposals for the revival of A-level are directly in line with what most, if not all, academics in language departments would see as essential.’ (Emphasis added.)
The same picture applies to classical languages. Already by 1994 the Oxford Classics department was removing texts such as Thucydides as compulsory elements in ‘Greats’ because they were deemed ‘too hard’. These changes continued and have made Classics a very different subject from what it was before 1990. At Oxford, whole new courses (Mods B, then Mods C) were introduced that do not require any prior study of the ancient languages themselves. The first year of Greats now involves remedial language courses.
I quote at length from a paper by John Davie, a Lecturer in Classics at Trinity College, Oxford, as his comments summarise the views of other senior classicists in Oxbridge and elsewhere who have been reluctant to speak out (In Pursuit of Excellence, Davie, 2013). Inevitably, the problems described are damaging the pipeline for masters, PhDs, and future scholarship.
‘Classics as an academic subject has lost much of its intellectual force in recent years. This is true not only of schools but also, inevitably, of universities, which are increasingly required to adapt to the lowering of standards…
‘In modernist courses…, there is (deliberately) no systematic learning of grammar or syntax, and emphasis is laid on fast reading of a dramatic continuous story in made-up Latin which gives scope for looking at aspects of ancient life. The principle of osmosis underlying this approach, whereby children will learn linguistic forms by constant exposure to them, aroused scepticism among many teachers and has been thoroughly discredited by experts in linguistics. Grammar and syntax learned in this piecemeal fashion give pupils no sense of structure and, crucially, deny them practice in logical analysis, a fundamental skill provided by Classics…
‘[W]e have, in GCSE, an exam that insults the intelligence… Recent changes to this exam have by general consent among teachers made the papers even easier.
‘In the AS exam currently taken at the end of the first year of A-level … students study two small passages of literature, which represent barely a third of an original text. They are asked questions so straightforward as to verge on the banal and the emphasis is on following a prescribed technique of answering, as at GCSE. Imagination and independent thought are simply squeezed out of this process as teachers practise exam-answering technique in accordance with the narrow criteria imposed on examiners.
‘The level of difficulty [in AS] is not substantially higher than that of GCSE, and yet this is the exam whose grades and marks are consulted by the universities when they are trying to determine the ability of candidates… Having learned the translation of these bite-sized chunks of literature with little awareness of their context or the wider picture (as at GCSE, it is increasingly the case that pupils are incapable of working out the Latin/Greek text for themselves, and so lean heavily on a supplied translation), they approach the university interview with little or no ability to think “outside the box”. Dons at Oxford and Cambridge regularly encounter a lack of independent thought and a tendency to fall back on generalisations that betray insufficient background reading or even basic curiosity about the subject. This need not be the case and is clearly the product of setting the bar too low for these young people at school…
‘At A2 … students read less than a third of a literary text they would formerly have read in its entirety.
‘There is the added problem that young teachers entering the profession are themselves products of the modernist approach and so not wholly in command of the classical languages themselves. As a result they welcome the fact that they are not required by the present system to give their pupils a thorough grounding in the language, embracing the less rigorous approach of modern course-books with some relief.
‘In the majority of British universities Classics in its traditional form has either disappeared altogether or has been replaced by a course which presents the literature, history and philosophy mainly (or entirely) in translation, i.e. less a degree course in Classics than in Classical Civilisation.
‘This situation has been forced upon university departments of Classics by the impoverished language skills of young people coming up from schools… It is not only the classical languages but English itself which has suffered in this way in the last few decades. Every university teacher of the classical languages knows that he cannot assume familiarity with the grammar and syntax of English itself, and that he will have to teach from scratch such concepts as an indirect object, punctuation or how a participle differs from a gerund…
‘Even at Oxford cuts have been made to the number of texts students are required to read and, in those texts that remain, not as many lines are prescribed for reading in the original Latin or Greek.
‘In the last ten years of teaching for Mods [at Oxford] I have been struck by how the first-year students who come my way at the start of the summer term appear to know less about the classical languages each year, an experience I know to be shared by dons at other colleges…
‘GCSE should be replaced by a modern version of the O-level that stretches pupils… This would make the present AS exam completely unsuitable, and either a more challenging set of papers should be devised, if the universities wish to continue with pre A-level interviewing, or there should be a return to an unexamined year of wide reading before the specialisation of the last year.
‘Although the present exam, A2, has more to recommend it than AS, it also would no longer be fit for purpose and would need strengthening. As part of both final years there should be regular practice in the writing of essays, a skill that has been largely lost in recent years because of the exam system and is (rightly) much missed by dons.’
This combination of problems explains why we funded a project with Professor Pelling, Regius Professor of Greek at Oxford, to provide teacher training and language enrichment courses for schools.
I will not go into other humanities subjects. I read Ancient & Modern History and have thoughts about it but I do not know of any good evidence similar to the reports quoted above from the likes of the Royal Society. I have spoken to many university teachers. Some, such as Professor Richard Evans (Cambridge), told me they think the standard of those who arrive as undergraduates is roughly the same as twenty years ago. Others at Oxbridge and elsewhere told me they think that essay writing skills have deteriorated because of changes to A Level (disputed by Evans and others) and that language skills among historians have deteriorated (undisputed by anyone I spoke to).
For example, the Cambridge Professor of Mediterranean History, David Abulafia, has contradicted Evans and, like classicists, pointed out the spread of remedial classes at Cambridge:
‘It’s a pity, then, that the director of admissions at Cambridge has proclaimed that the old system [pre-Gove reforms] is good and that AS-levels – a disaster in so many ways – are a good thing because somehow they promote access. I don’t know for whom he is speaking, but not for me as a professor in the same university…
‘[Gove] was quite right about the abolition of the time-wasting, badly devised and all too often incompetently marked AS Levels; these dreary exams have increasingly been used as the key to admissions to Cambridge, to the detriment of intellectually lively, quirky, candidates full of fizz and sparkle who actually have something to say for themselves…
‘Bogus educational theories have done so much to damage education in this country… The effects are visible even in a great university such as Cambridge, with a steady decline in standards of literacy, and with, in consequence, the provision in one college after another of ‘skills teaching’, so that students who no longer arrive knowing how to structure an essay or even read a book can receive appropriate ‘training’… Even students from top ranked schools seem to find it very difficult … to write essays coherently… In the sort of exams I am thinking of, essay writing comes much more to the fore and examiners would be making more subjective judgements about scripts. In an ideal world there would be double marking of scripts.’ Emphasis added.
Judging essay skills is a more nebulous task than judging the quality of mechanics questions. Also, there is less agreement among historians about the sort of things they want to see in school exams compared to mathematicians and physicists who largely (in my experience, I stress, which is limited) agree about the sorts of problems they want undergraduates to be able to solve and the skills they want them to have.
I will quote a Professor of English at Exeter University, Colin MacCabe, whose view of the decline of essay skills is representative of many comments I have heard, but I cannot say confidently that this view represents a consensus, despite his claim:
‘Nobody who teaches A-level or has anything to do with teaching first-year university students has any doubt that A Levels have been dumbed down… The writing of the essay has been the key intellectual form in undergraduate education for more than a century; excelling at A-level meant excelling in this form. All that went by the board when … David Blunkett, brought in AS-levels… A-levels … became two years of continuous assessment with students often taking their first module within three months of entering the sixth form. This huge increase in testing went together with a drastic change in assessment. Candidates were not now marked in relation to an overall view of their ability to mount and develop arguments, but in relation to their ability to demonstrate achievement against tightly defined assessment objectives… A-levels, once a test of general intellectual ability in relation to a particular subject, are now a tightly supervised procession through a series of targets. Assessment doesn’t come at the end of the course – it is the course… In English, students read many fewer books… Students now arrive at university without the knowledge or skills considered automatic in our day… One of the results of the changes at A-level is that the undergraduate degree is itself a much more targeted affair. Students lack of a general education mean that special subjects, dissertations etc are added to general courses which are themselves much more limited in their approach… One result of this is a grade inflation much more dramatic even than A-levels… [T]here is little place within a modern English university for students to develop the kind of intellectual independence and judgment, which has historically been the aim of the undergraduate degree.’ Observer, 22 August, 2004. (Emphasis added.)
If anybody knows of studies on history and other humanities please link in Comments below.
As political arguments increasingly focused on ‘participation’ and ‘access’, Oxford and Cambridge largely abandoned their own entrance exams in the 1990s. There were some oddities. Cambridge dropped its maths test, was so worried by the results that it immediately asked for, and was given, special dispensation to reintroduce it, and has used one since (now known as the STEP paper, also used by a few other universities). Other Cambridge departments that wanted to do the same were refused permission, and some of them (including the physics department) now use interviews to test material they would like to test in a written exam. Oxford changed its mind and gradually reintroduced admissions tests in some subjects. (For example, it does not use STEP in maths but uses its own test, which has more ‘applied’ maths.) Cambridge now uses AS Levels. Oxford does not (but does not like to explain why).
A Levels are largely useless for distinguishing between candidates in the top 2% of ability (i.e. two standard deviations above average). Oxbridge entry now involves a complex and incoherent set of procedures. Some departments use interviews to test skills that are i) wholly or largely untested by A Levels and ii) not explicitly set out anywhere. For example, if you go to an interview for physics at Cambridge, you will be asked questions like ‘how many photons hit your eye per second from Alpha Centauri?’ – questions that you cannot cram for, but from which much information can be gained by tutors watching how students grapple with the problem.
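(As an aside, the Alpha Centauri question is a classic Fermi estimate, and the sort of reasoning tutors are looking for can be sketched in a few lines. Every figure below – luminosity, distance, pupil size, mean photon wavelength – is a round-number assumption for illustration, and the fact that only part of the star’s output is visible light is ignored; the point is the chain of reasoning, not the precise answer.)

```python
# Rough Fermi estimate: photons per second from Alpha Centauri entering a
# dark-adapted eye. All inputs are round-number assumptions for illustration.
import math

L_SUN = 3.8e26           # solar luminosity, W
LIGHT_YEAR = 9.46e15     # metres in one light year
H, C = 6.63e-34, 3.0e8   # Planck constant (J s), speed of light (m/s)

luminosity = 1.5 * L_SUN     # Alpha Centauri A+B combined, roughly
distance = 4.4 * LIGHT_YEAR  # distance to Alpha Centauri
pupil_radius = 3.5e-3        # dark-adapted pupil ~7 mm across, in metres
wavelength = 550e-9          # mean visible-photon wavelength, m

flux = luminosity / (4 * math.pi * distance**2)  # W per m^2 at Earth
photon_energy = H * C / wavelength               # J per photon
pupil_area = math.pi * pupil_radius**2           # m^2

photons_per_second = flux / photon_energy * pupil_area
print(f"{photons_per_second:.1e}")  # order of magnitude: millions per second
```

The answer (a few million photons per second) matters less than the structure: inverse-square dilution of the star’s power, conversion from energy to photon count, and the collecting area of the eye.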
The fact that the real skills they want to test are asked about in interviews rather than in public exams is, in my opinion, not only bad for ‘standards’ but is also unfair. Rich schools with long connections to Oxbridge colleges have teachers who understand these interviews and know how to prepare pupils for them. They still teach the material tested in old exams and other materials such as Russian textbooks created decades ago. A comprehensive in east Durham that has never sent anybody to Oxbridge is very unlikely to have the same sort of expertise and is much more likely to operate on the very mistaken assumption that getting a pupil to three As is sufficient preparation for Oxbridge selection. Testing skills in open exams that everybody can see would be fairer.
I will return to this issue in a later blog but it is important to consider the oddities of this situation. Decades ago, open public standardised tests were seen as a way to overcome prejudice. For example, Ivy League universities like Harvard infamously biased their admissions system against Jews because a fair open process based on intellectual abilities, and ignoring things like lacrosse skills, would have put more Jews into Harvard than Harvard wanted. Similar bias is widespread now in order to keep the number of East Asians low. It is no coincidence that Caltech’s admissions policy is unusually based on academic ability and it has a far higher proportion of East Asians than the likes of Harvard.
Similar problems apply to Oxbridge. A consequence of making exams easier and removing Oxbridge admissions tests was to make the process more opaque and therefore biased against poorer families. The fascinating journey made by the intellectual Left on the issue of standardised tests is described in Steven Pinker’s recent influential essay on university admissions. I agree with him that a big part of the reason for the ‘madness’ is that the intelligentsia ‘has lost the ability to think straight about objective tests’. Half a century ago, the Left fought for standardised tests to overcome prejudice, now many on the Left oppose tests and argue for criteria that give the well-connected middle classes unfair advantages.
This combination of problems is one of the reasons why the Cambridge pure maths department and physics department worked with me to develop projects to redo 16-18 curricula, teacher training, and testing systems. Cambridge is even experimenting with a ‘correspondence Free School’ idea proposed by the mathematician Alexander Borovik (who attended one of the famous Russian maths schools). Powerful forces tried to stop these projects happening because they are, obviously, implicit condemnations of the existing system – condemnations that many would prefer had never seen the light of day. Similar projects in other departments at other universities were kiboshed for the same reason, as were other proposals for specialist maths schools as per the King’s project (which also would never have happened but for the determination of Alison Wolf and a handful of heroic officials in the DfE). I will return to this too.
Here are some tentative conclusions.
- The political and bureaucratic process for the introduction of the GCSE and National Curriculum was a shambles. Those involved did not go through basic processes to agree aims. Implementation was awful. All elements of the system failed children. There are important lessons for those who want to reform the current system.
- Given the weight of evidence above, it is hard to avoid the conclusion that GCSEs were made easier than O Levels and became easier still over time. This means that, from age 14, at least the top fifth are aimed at lower standards than they would have been previously (not that O Levels were at all optimal). Many of them spend two years with low grade material, repeating boring drills so that the school can maximise its league table position, instead of delving deeper into subjects. Inflation seems to have stopped in the last two years, perhaps temporarily, thanks to an Ofqual system known as ‘comparable outcomes’ which is barely understood by anybody in the school system or DfE.
- A Levels, at least in maths, sciences, and languages, were quickly made easier after 1988 and not just by enough to keep pass marks stable but by enough to lead to large increases. Even A Level students are aimed at mundane tasks like ‘design a poster’ that are suitable for small children – not near-adults. (As I type this I am looking at an Edexcel textbook for Further Maths A Level which for some reason, Edexcel has chosen to decorate with the picture of a child in a ‘Robin’ masked outfit.)
- The old ‘S’ level papers, designed to stretch the best A Level students, were abandoned which contributed to a decline of standards aimed for among the top 5%.
- University degrees in some subjects therefore also had to become easier (e.g. classics) or longer (natural sciences) in order to avoid increases in failure rates. This happened in some subjects even in elite universities. Remedial courses spread, even in elite universities, to teach/improve skills that were previously expected on arrival (including Classics at Oxford and History at Cambridge). Not all of the problems are because of failures in schools or easier exams. Some are because universities themselves for political reasons will not make certain requirements of applicants. Even if the exam system were fixed, this would remain a big problem. On the other hand, while publicly speaking out for AS Levels, admissions officers also, very quietly, have been gradually introducing new, non-Government/Ofqual regulated, tests for admissions purposes. On this, it is more useful to watch what universities do than what they say.
- These problems have cascaded right through the system and now affect the pipeline into senior university research positions in maths, sciences, and languages. For example, the lack of maths skills among biologists is hampering the development of synthetic biology and computational biology. It is very common now to have (private) discussions with scientists deploring the decline in English research universities. Just in the past few weeks I have had emails from an English physicist now at Harvard and a prominent English neuroscientist giving me details of these developments and how we are falling further behind American universities. As they say, however, nobody wants to speak out.
- It is much easier to see what has happened at the top end of the ability curve, where effects show up in universities, than it is for median pupils. The media also focuses on issues at the top end of the ability curve, A Levels, and the Russell Group.
- Because politicians took control of the system and used results to justify their own policies, and because they control funding, debate over standards became thoroughly dishonest, starting with the Conservative government in the 1980s and continuing to now when academics are pressured not to speak out by administrators for fear of politicians’ responses. When governments are in control of the metrics according to which they are judged, there is likely to be dishonesty. If people – including unions, teachers, and officials – claim they deserve more money on the basis of metrics that are controlled by a small group of people operating an opaque process and controlling the regulator themselves, there is likely to be dishonesty.
An important caveat. It is possible that, simultaneously, a) conclusions 1-8 above are true and b) the school system has improved in various ways. What do I mean?
This is a coherent (not necessarily right) conclusion from the story told above…
GCSEs are significantly easier than O Levels. Nevertheless, the switch to GCSEs also involved many comprehensives and secondary moderns dropping the old idea that maybe only a fifth of the cohort are ‘academic’ – the idea from Plato’s Republic of gold, silver, and bronze children, that influenced the 1944 Act. Instead, more schools began to focus more pupils on academic subjects. Even though the standards demanded were easier than in the pre-1988 exams, this new focus (combined with other things) at least led between 1988 and now to a) a reduction in the number of truly awful schools and b) more useful knowledge and skills at least for the bottom fifth of the cohort (in ability terms), and perhaps for more. Perhaps the education of median ability pupils stayed roughly the same (declining a bit in maths) hence the consistent picture in international tests, the King’s results comparing maths in 1978/2008, Shayer’s results and so on (above).

Meanwhile the standards demanded by post-1988 A Levels clearly fell (at least in some vital subjects), as the changes in universities testify, and S Level papers vanished, so the top fifth of the cohort (and particularly the +2 standard deviation population, i.e. the top 2%) leave school in some subjects considerably worse educated than in the 1980s. (Given most scientific and technological breakthroughs come from among this top 2% this has a big knock-on effect.) Private schools felt incentivised to perform better than state schools on easier GCSEs and A Levels rather than pursue separate qualifications with all the accompanying problems.

There remains no good scientific data on what children at different points on the ability curve are capable of achieving given excellent teaching so the discussion of ‘standards’ remains circular. Easier GCSEs and A Levels are consistent with some improvements for the bottom fifth, roughly stability for the median, significant decline for the top fifth, and fewer awful schools.
This is coherent. It fits the evidence sketched above.
But is it right?
In the next blog in this series I will consider issues of ‘ability’ and the circularity of the current debate on ‘standards’.
If people accept the conclusions about GCSEs and A Levels (at least in maths, sciences, and languages, I stress again) how should this evidence be weighed against the very strong desire of many in the education system (and Parliament and Whitehall) to maintain a situation in which the vast majority of the cohort are aimed at GCSEs (or international equivalents that are not hugely different) and, for those deemed ‘academic’, A Levels?
Do the gains from this approach outweigh the losses for an unknown fraction of the ‘more able’?
Is there a way to improve gains for all points on the ability distribution?
I have been told that there is no grade inflation in music exams. Is this true? If YES, is this partly because they are not regulated by the state? Are there other factors? Has A Level Music got easier? If not why not?
What sort of approaches should be experimented with instead of the standard approaches seen in O Levels, GCSEs, and A Levels?
What can be learned from non-Government regulated tests such as Force Concepts Tests (physics), university admissions tests, STEP, IQ tests and so on?
What are the best sources on ‘S’ Level papers and what happened with Oxbridge entrance exams?
What other evidence is there? Where are analyses similar to Warner’s on physics for other subjects?
What evidence is there for university grade inflation which many tell me is now worse than GCSEs and A Levels?
Thanks for bringing this all together.
I would add other things to the picture, such as the devaluation of the meaning of “apprenticeship” – the obvious standard-bearer of another “qualification” – and all that goes with it. The devaluation of the HND qualification. The devaluation of degrees and postgrads themselves.
The closing of secondary moderns and grammar schools and the implementation of comprehensives, and the various variations on what comprehensive meant on the ground where individual schools would run multiple schools within a school while paying lip service to their imposed ideological ways of working etc. And how such variations had dramatic differences in pupil results.
I also look at all the bias in the system. From the obviously fraudulent claims of dyslexia (supported by private doctors happy to sign off the diagnosis) amongst some demographics, because of the extra time in exams such a diagnosis unlocks, to the very different ways schools appeal exam results, giving some of them an obvious bias in the results (because mostly the result of appealing grades is higher grades being awarded). And so on.
Taking it back to the days during and just after the war, when poor working-class kids’ glass ceiling was imposed in much more obvious ways than it was later – and how it has gone full circle in many ways, so that life chances are now limited much more by things other than ability and hard work.
“Standards” for me is also anecdotal. I see the modern output from “good” schools and feel it is a long way behind, in so many ways, the average school output of my day (yes, I know I sound like an old fogey). But in comparison, my father was always impressed at how much better the output from average schools was when I was there than what was possible in the environment he went to school in. Some of this could be turned into quantitative measures of what is happening with a little thought.
Plagiarism from the internet is also widespread now. At all layers of education.
Immigration also plays a big part, which the political bubble always ignores. From kids from Spanish-speaking countries being allowed to do Spanish A level here and use it to gain entry onto uni courses, through to the challenges of classrooms full of kids who do not speak English, or even the same language as each other, at home; the sheer pressure on the system in some places from the numbers; kids arriving from abroad mid-way through an education. And so on.
And “standards” for me also means parents being happy with the output (of the school system). My father was dissatisfied with his own education, was broadly happy with mine, and I am and will be very unhappy with what my kids are subjected to. Full circle.
You describe many of the issues well. What do you think the solutions are? Unconstrained by your knowledge of the barriers the system will put in the way? Myself I think we need a real empowerment of parents, giving them much more say and buying power, I don’t see the professional public sector, political, educational, or admin, as anywhere near as powerful a force for good change as individual parents empowered to move their kids to any school they want, and bargains between heads and parents replacing the current ways of doing things.
I look at one of my neighbours, who has pulled their child out of school in despair at the obvious blatant rubbish “standards” and is now being threatened with the law, having no real choice other than to move house or let the kid go back to the same school. When it’s got to that stage there really needs to be other options and more buying power in the parents’ hands.
Is there a way to improve gains for all points on the ability distribution? Yes absolutely, but I’ve rambled so much I would need to come back to it. Sorry : )
Thanks and good luck!
A lot of what you say about “grade inflation” is probably right but there are a couple of missing bits. You say the content standard and assessment standard of GQs has changed and highlight evidence that might support that conclusion; you also allude to the impact on the performance standard of changing the structure of the qualifications (especially modularity) and that is also probably right – indeed it is a large part of the justification often given for modular qualifications.
The question about the assessment standard is “why” – and I don’t think it was just (or even primarily) to make exams “easier” for inclusion reasons. A huge part of it was driven by the need for predictability and homogeneity in order to ensure reliable awarding, necessary to satisfy the increasingly high stakes school accountability system. You rightly note that this increased predictability has a negative backwash effect on teaching and learning and also probably contributes to “grade inflation” – but the driver was a need for reliability at the expense of validity due to high stakes government accountability. I wrote more about this here: http://cmre.org.uk/qualifications.
The other issue is the grading standard, which you dismiss as being an unlikely cause when in fact it is pivotal. At the core of this are the flaws of criterion referencing which extensive research tells us is not reliable, in and of itself, and especially not in a high stakes system. A useful academic summary here: https://cerp.aqa.org.uk/research-library/setting-and-maintaining-gcse-and-gce-grading-standards-case-contextualised-cohort-referencing. In short, the evidence proves that human judgement is not a reliable basis in isolation for making awarding decisions, and this effect is probably amplified when the high stakes nature of qualifications provide an implicit incentive for awarders to give the “benefit of the doubt”. The creeping year-on-year effect of this was a major (if not *the* major) contributor to the very consistent improvement in qualification outcomes we’ve seen over the last 25 years.
The “comparable outcomes” approach currently employed is of course designed to address this by significantly reducing the role of human judgement in the awarding process. There are all sorts of problems with this but at least by defining your outcome in advance you stop “grade inflation”. That said, even if we weren’t concerned about rewarding putative improvements in performance, the system still isn’t perfect due to (a) its reliance on imperfect KS2 data to inform the statistical predictions and (b) the fact that the tolerances applied to the statistics still provide some room for “benefit of the doubt” decisions. Looking forward to your blog on the reference tests where we can discuss this further!
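To make the mechanism concrete, here is a toy sketch in Python of the comparable-outcomes idea as the comment above describes it. The three prior-attainment bands, all the numbers, and the function names are invented purely for illustration; the real Ofqual process uses much finer-grained KS2 matching.

```python
# Toy sketch of "comparable outcomes" (illustrative numbers only).
# Step 1: a reference year links KS2 prior-attainment bands to GCSE outcomes.
reference_outcomes = {          # fraction of each band achieving grade C+
    "low": 0.20, "middle": 0.55, "high": 0.90,
}

def predicted_c_plus(cohort_mix):
    """Predict this year's C+ rate from the cohort's KS2 mix,
    using the reference-year relationship."""
    return sum(share * reference_outcomes[band]
               for band, share in cohort_mix.items())

# Step 2: this year's cohort mix (shares of each KS2 band) yields a prediction.
this_year = {"low": 0.25, "middle": 0.50, "high": 0.25}
prediction = predicted_c_plus(this_year)
print(f"predicted C+ rate: {prediction:.1%}")  # → predicted C+ rate: 55.0%

# Step 3: grade boundaries are set so the awarded C+ rate lands within a
# small tolerance of the prediction, whatever the raw marks look like.
tolerance = 0.01
def within_tolerance(awarded_rate):
    return abs(awarded_rate - prediction) <= tolerance
```

The key property the sketch shows: outcomes are fixed in advance by the statistics (with a small tolerance band), which is why inflation stops, and also why the quality of the KS2 data matters so much.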
1. Your point re effects of trying to improve reliability/predictability etc and feedback with accountability is an important one that I meant to mention and forgot!
2. I did not mean to dismiss the grading issue – I just meant I did not think it could explain everything, but my phrasing was poor, apologies. I will read your link.
3. Ofqual has kindly emailed offering to brief me on where they are with comparable outcomes. The KS2 data reliability is clearly, as you say, an important issue.
4. I’ve been told from inside DfE the reference test is going ahead. However, I’ve been told from outside DfE that it may be going ahead in a bad way! I dunno at the mo…
Thanks and best wishes
The reference test is happening – Ofqual has a tendering process underway at the moment. There are some real concerns both about the design of the test itself and how it will be used in awarding. I’ve blogged on them here: http://policyblog.aqa.org.uk/2014/08/05/reference-tests-a-silver-bullet-for-gcse-grading/.
As an aside, as a former undergrad Imperial mathematician, I can confirm that the whole of the first semester was (when I was there at least) a recap of Further Maths A-level. To be fair, they had to do this because not everyone had taken Further Maths, and the modular Maths A-level meant that you couldn’t really be sure which content everyone had covered. The new Maths and Further Maths A-levels from 2017 will improve this situation somewhat (Maths will now have zero optionality).
Yes your experience is normal according to Richard Craster, head of maths there now, which is why they decided to buy in the Oxford exam. That action may also incentivise those who want to apply to Imperial to do Further Maths A Level too.
BUT there is an ‘international student competition’ issue here too. I don’t think they publish the numbers but there is a large fraction now doing maths at Imperial from East Asia.
It is possible that Imperial’s decision to increase standards by using the OXF test will both a) improve the quality of their first-year maths AND b) mean fewer English students doing serious maths (i.e. it’s possible that raising the hurdle will DIS-incentivise some, particularly in state schools that do not have access to Oxbridge preparation expertise. I hope this is not the case but it is possible. Also, even if there are negative immediate effects one must hope the long term will be better!)
One of the problems with discussions of ‘standards’ is that no universities will discuss these issues honestly in public – only in private…
One thing to consider with the introduction of the National Curriculum was that, at the same time, schools started entering students for more and more subjects. Given that there is a fixed number of teaching hours per week this necessitates a reduced number of teaching hours per subject at GCSE compared to the ‘O’ level days.
A reduction in teaching time for a subject reduces the amount of knowledge/content that a student can be taught and it is inevitable (given that learning is hierarchical) that it is the harder content that gets ‘bumped’ to the next qualification up (i.e. things move from GCSE to A level and A level to degree).
Perhaps part of the issue is that not all universities have adapted to address this in the way that A levels have adapted to address the shallowing at GCSE.
There is, of course, a much wider (and important) question, about whether stretch and challenge for our secondary school students is best met through a wider range of subjects (to a shallower depth) or a smaller number of subjects but deeper content.
It’s an interesting article but I had questions on two points:
1. I’m sure you’re right that modern foreign language skills have declined in the last decade, but I wonder whether this is because of problems in the school system or because of the continuing rise of English as a world language. Here’s a HEFCE-commissioned report on HE language provision (http://www.hefce.ac.uk/pubs/year/2009/200941/#d.en.63844). There’s research on language provision in the rest of the education system from CILT, the national centre for languages, which closed in 2010 following the withdrawal of DfE grants: http://webarchive.nationalarchives.gov.uk/20101227105751/http://cilt.org.uk/home/research_and_statistics/language_trends.aspx
2. The research on the decline in maths skills among university entrants is pretty persuasive and makes use of evidence that departments have had to make changes to syllabuses in response, but is it automatically the case that this is a bad thing? I don’t have any great expertise on this, but I was an Oxford student in the 1980s and I remember some of my contemporaries studying science being pretty dissatisfied at some of the abstract and abstruse things they needed to study. I’m sure that academics steeped in their subject know what is needed for those who move on to PhDs, and I also know that the really ace scientists invent amazing things, but changes in standards and syllabuses aren’t automatically evidence they’ve got worse. I guess it would help if there was an HE PISA.
Something else to add to Dale Bassett’s point. He says that there was pressure from school and colleges on awarding bodies to make GCSEs and A-levels more predictable because of their use in high-stakes accountability. We now have a situation where 1 million HE students take up student loan debts averaging £13,000 a year. The A level results play a significant role in determining where they spend this. Just at the point where university competition for students is heating up (because of the removal of student number controls), the reforms to GCSEs, AS and A levels will make student entry qualifications less predictable. This could result in an uncomfortable experience in the next few years and may create a new pressure to make things more predictable again.
“I have been told that there is no grade inflation in music exams. Is this true? If YES, is this partly because they are not regulated by the state? Are there other factors? Has A Level Music got easier? If not why not?”
If you are talking about the ABRSM instrumental exams then yes, there has been no grade inflation. Some reasons for this are as follows:
1. Examiners. It is incredibly hard to become an examiner (unlike GCSE/A Level) and they have many more applications than positions. It is very prestigious to have on your CV, rather than comments from colleagues such as “I had no idea you were so hard up” when you say you are marking exam scripts. See here for details http://gb.abrsm.org/en/news/article/examiner-training-the-full-story/175/
2. Teachers. There are no league tables to chase for instrumental teachers and many of the best believe in the high standards and challenge of the exams (much like my A level French and German teachers believed when they chose the harder O&C board), with tasks such as sight reading and scales & arpeggios, which most children hate but which allow the best to shine and are fantastic for technique. If ever such perverse incentives arrived in music then teachers would chase easier exam boards.
3. Parents. They are ready to believe that a child’s result in an exam is a result solely of how hard they practised and how well they played on the day. There are no standards or observations of instrumental teachers and there are some rotten ones out there but there is no culture of blaming the teacher for a failure, even where, as examiners tell me, it’s obvious that poor instruction is the reason they are unable to play.
4. The ABRSM itself. It sets exams in what it can reliably and objectively mark. While GCSEs and A levels include many worthy but subjective activities such as composing, the ABRSM never has. The pieces set are as hard as ever – a current grade 7 piano piece is the same as one I played for mine 20-odd years ago.
Other exam boards are (anecdotally) easier. It is, of course, entirely possible that poorer performances are getting higher marks as children have less time for practice and I have suggested this myself. David Didau says he doesn’t trust my memory of how well I performed in exams.
A levels are certainly easier, particularly since 2008. I could go into lots of detail another time if you like.
“I therefore think that the way to teach people in top sets at schools is not to work towards those exams but just to teach them maths at the pace they can manage” – isn’t this the ideal for every set for every subject? The question is whether it is possible in the way that we currently do school, i.e. 30 kids in a class and one teacher.
A few thoughts:
I used my talk at researchED in part to argue that the most urgent task facing educational psychology is to establish a research programme that aims to find out what pupils can achieve within the constraints of different ages, IQs, and teachers. It’s crazy that this has not been done properly to date, or even attempted as far as I know. Recently I came across some primary blogs analysing the recent rise in standards at KS2. Both the bloggers’ analyses and the standards themselves are based on little more than guesswork – are the standards now too ambitious for 80% of children? Or achievable for 80%? Or achievable but only with much better teaching across the system? Who knows? Who has the faintest idea?
It should not be beyond the wit of man to sort this out. It will require a programme that is both large-scale and under close control; pupils and teachers cannot be allowed to become unrepresentative of the wider population. This is not logistically easy but nor is it impossible.
re standards: I did Latin & Greek for both GCSE & A-level as well as Classics at undergraduate (only switched to Ed Psych for my MSc). From tutoring Latin & Greek, primarily for Common Entrance and GCSE, it’s obvious to me that there’s been a collapse of basic standards even in expensive prep schools, and not just for Latin. Upon starting with a new pupil my first questions are always “what is a noun? What is a verb? What is an adverb? What is an adjective?” These are things that should have been mastered in English lessons, but a surprisingly high % struggle. Latin itself is very badly taught, mostly because the major textbooks (the Oxford and Cambridge Latin Courses) are atrocious and lack any grammatical rigour. The only good course I have found that is not of a pre-WWII vintage is Nick Oulton’s So You Really Want to Learn Latin. As a result, material once set for O-level unseen translation is now studied as a prepared text at GCSE.
Tick-box marking exacerbates the problem. I sat the A-level English papers knowing that I was going to get a clean sweep of top grades because I knew that I was writing perfectly to the Assessment Objectives. It seems ridiculous that it is possible to get 100% (as you can, or could, in A-level English) in an inherently subjective discipline: I doubt that even a reincarnated Mozart would get 100% in his Grade 8 Associated Board music exam, but NB I could be wrong about this. In other subjects it is possible to write semiliterate gibberish and still gain high marks if you have managed to make the relevant points, no matter how incoherently. Real life is not so forgiving.
On a somewhat related note, unless I have got something horribly wrong, pupil premiums are allocated on the basis of FSM eligibility, which itself is based on parental income? But the IQ-SES correlation is only .3, and the GCSE-SES correlation is no greater. Why not just give everyone an IQ test at 11 and give secondary schools pupil premium money based on the number of below-average-ability students they get – the correlation between IQ at 11 and GCSE score 5 years later is a hefty .8! (see Deary et al 2007; http://emilkirkegaard.dk/en/wp-content/uploads/Intelligence-and-educational-achievement.pdf)
(NB this would not work so well for primary because of genetic innovation throughout early development, so a standardised IQ test administered at age 5/6 will probably not predict Key Stage 2 scores quite so well, though this would still likely be an improvement.)
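For what it’s worth, the practical force of those two correlations is easiest to see as variance explained (r²). A minimal sketch in Python, using only the figures quoted in the comment above:

```python
# Share of variance in GCSE scores "explained" by each predictor,
# using r^2 (the coefficient of determination for a simple linear
# relationship). Figures are those quoted in the comment above.
correlations = {
    "SES (parental income proxy)": 0.3,
    "IQ at 11 (Deary et al 2007)": 0.8,
}

for name, r in correlations.items():
    print(f"{name}: r = {r}, variance explained = {r**2:.0%}")
# SES explains ~9% of the variance; IQ at 11 explains ~64%.
```

That sevenfold gap in predictive power is the whole of the commenter’s argument in two numbers.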
Obviously I agree re the research programme. Nobody wants to touch it. I will explore this more shortly.
The Premium idea is reasonable but politically untouchable for the same reason.
If one could get the research programme off the ground, then it would be easier to do the Premium change. But it would be almost impossible to do the latter first.
Many thanks for collecting these thoughts together. If anything, I think you understate (which is probably the right way to err).
There are many additional factors, but the degeneration of assessment in mathematics (the only subject I half-understand) seems to stem from the interactions of four things:
(i) the loss (over the period 1980-present) of a critical mass of mathematically literate teachers, examiners, textbook authors, etc.;
(ii) the inappropriate use of exam results (e.g. for league tables, Ofsted data, to check on progression, etc.);
(iii) a plausible-sounding desire to be “fair” to students and to schools (which made “failure” unthinkable, and so led things to become miserably predictable and scaffolded);
(iv) competition between exam boards for market share.
All of these then interact to reduce “education” to “exam-preparation” (which is further exacerbated by the DfE’s and Ofqual’s failure to eliminate exam board endorsed texts and materials). Somehow (though it is hard to see how) we have to get back to a professional view, and supporting bureaucratic structures, that make it clear what it is that needs to be taught and learned without thinking about exam-preparation until very late in the day. Healthy learning is very different from, and always way ahead of, what pupils have internalised sufficiently robustly for it to be subjected to reliable examination. When learning mathematics one has to struggle with things in general form that could not possibly be assessed at that level. One has to be free to make all sorts of mistakes and establish connections. This continual process of internal re-organisation is slow and messy; and at any stage one is only ready to robustly tackle things on a much lower level.
Where the focus is on exam-preparation, the cheap inference is that one might as well ignore the harder, general form, and spare students the mistakes and the connections – since they will not be tested (yet). However, it is precisely this more general, more demanding material, and the discovery of connections, that determines subsequent progression. So the attempt to achieve the best result with the least effort in the short term also guarantees large-scale failure to progress in the longer term.
This “failure to progress in the longer term” is completely general. It explains why we have such a long tail of underachievement at the bottom end: our attempts to maximise their short-term results often make it even more difficult for them to progress – so the spread increases. The spread does not increase because the top is racing away – for the effect at the top end is even more marked. In FIMS (1964?) England had a very strong top end. TIMSS and PISA show that our top end has almost disappeared: the percentage of students performing at the top level – which is not all that high – in each of these assessments is between one third and one half of the international average (and that average is misleading in that one would not wish to be compared too closely with many of the systems being averaged).
Is this study worth adding to the above on changing standards over time?
V interesting, thanks.
An interesting summary of some of the issues relating to GCSEs, A-levels etc. I feel that the ABRSM music exams represent a near-perfect form of assessment and, as you mention, there has been no grade inflation, for all the reasons another commenter has noted – the fact that the syllabus is set by experts, maintaining consistency is easy, and there is very little subjectivity in the examination. Whilst music is inherently artistic and expressive, the level of exam offered by ABRSM (in the grades 1-8 format; there are, of course, the diploma qualifications) is sufficiently low to avoid any marking issues arising from a difference in musical ideals between two examiners. I use their exams as an example on my own blog post about the handling of raw marks from assessments.

Perhaps more interesting is the fact that the music A-level has remained challenging, and grade inflation has not been severe. The reasons for this are not clear, and as a result of the UMS system it is virtually impossible to ascertain why the exam boards have made the decisions they have about grade boundaries. The A-level itself is, from anecdotal evidence, difficult and challenging, and I know several music teachers who believe in it as a tool for measuring musical ability – I know no teachers who have full faith in the exams for any other subject. The mechanism by which grade inflation has occurred has been both a reduction in the difficulty of the exams and, perhaps more crucially, the manipulation of data via the UMS system. These issues would not present such an acute problem if the results from exams were not misused, and were not scrutinised to an extent that their accuracy does not merit.

My experience of studying the modern science GCSEs makes it obvious to me that there are serious issues with them.
It was much lamented by the scientific community that the practical coursework was to be removed from the new GCSEs, though in reality it was an utter waste of time – teachers cheated, bottom set pupils got nearly full marks, and it effectively became a free 25%. The syllabi themselves contain information that is either fully incorrect, or oversimplified to the point of worthlessness. For example, the life cycle of stars in GCSE Physics is wrong, and the electrolysis of aluminium in GCSE chemistry is wrong – at least, according to Jim Clark (Cambridge chemistry graduate, teacher, writer of Edexcel IGCSE chemistry).
As someone who has just finished A-levels at a good state school and who has an offer from Oxford University, I found this blog post very interesting.
I have a few comments.
Firstly, one must recognise that schools can only teach so much. They have limited time, limited resources and only so many teachers. This situation is not being helped by the budget cuts schools are experiencing. Although I think some of Gove’s reforms were good, what many of the subject changes have led to is the requirement that schools teach more in their courses. A case in point: one of my A-levels was politics, and for it I had to be taught 4 of the 6 units offered by AQA. Under the new A-level, students taking AQA politics will have to learn effectively what I covered in the 4 units I did plus the content of the other two units. While this means that new politics students will cover more, teachers will simply be unable to teach them in as much detail as I was taught. Quantity, not quality, has been prioritised, and likewise for other subjects. Do you think this system actually improves the education students are receiving? I accept that universities want their students to start with a good understanding of almost all aspects of the subject, but it is harder than it seems to implement that to a satisfactory standard due to time constraints.
On a slightly different subject, I found your comments about classics quite intriguing. When I was applying for university, I was struggling to choose between studying history or classics. In the end, I went for history. However, when I looked at classics course pages on universities’ websites, I noticed that at most universities (including Oxford) you do not need to have studied Latin or Greek to take at least one type of classics course (at Oxford, Classics II). Although this leads to a greater number of classics students without good or any understanding of Latin – which I can understand frustrates some academics – surely this development is to be welcomed in that it makes classics courses more accessible to students from state schools, where ancient languages are unlikely to be taught? If universities required prior study of Greek or Latin for a student to take the course, it would make classics the preserve of those from private schools.
I recognise that increasing the ability of state schools to teach Latin/Greek is, in the long term, preferable to lowering requirements and standards. Yet in the short term it seems reasonable for universities to lower requirements to encourage more state-schooled students to attend, until initiatives to promote Latin/Greek study in schools take effect. Unfortunately, ongoing budget cuts are threatening the ability of those state schools that currently teach Latin/Greek to keep offering them.
I would appreciate any response to my comment, though I realise I am commenting over two years after you posted this blog. Thanks.