Effective action #4a: ‘Expertise’ from fighting and physics to economics, politics and government

‘We learn most when we have the most to lose.’ Michael Nielsen, author of the brilliant book Reinventing Discovery.

‘There isn’t one novel thought in all of how Berkshire [Hathaway] is run. It’s all about … exploiting unrecognized simplicities… Warren [Buffett] and I aren’t prodigies. We can’t play chess blindfolded or be concert pianists. But the results are prodigious, because we have a temperamental advantage that more than compensates for a lack of IQ points.’ Charlie Munger, Warren Buffett’s partner.

I’m going to do a series of blogs on the differences between fields dominated by real expertise (like fighting and physics) and fields dominated by bogus expertise (like macroeconomic forecasting, politics/punditry, active fund management).

Fundamental to real expertise is 1) whether the informational structure of the environment is sufficiently regular that it is possible to make good predictions, and 2) whether the environment allows high quality feedback and therefore error-correction. Physics and fighting: yes. Predicting recessions, forex trading and politics: not so much. I’ll look at studies comparing expert performance in different fields and at the superior performance of relatively very simple models over human experts in many fields.

This is useful background to consider a question I spend a lot of time thinking about: how to integrate a) ancient insights and modern case studies about high performance with b) new technology and tools in order to improve the quality of individual, team, and institutional decision-making in politics and government.

I think that fixing the deepest problems of politics and government requires a more general and abstract approach to principles of effective action than is usually considered in political discussion. Such an approach could see solutions to specific problems almost magically appear, just as happens in a very small number of organisations — e.g. Mueller’s Apollo program (man on the moon), PARC (interactive computing) and Berkshire Hathaway (the most successful investors in history), all of which have delivered what seems almost magical performance because they embody a few simple, powerful, but largely unrecognised principles. There is no ‘solution’ to the fundamental human problem of decision-making amid extreme complexity and uncertainty, but we know a) there are ways to do things much better and b) governments mostly ignore them. There is therefore extremely valuable low-hanging fruit if, and it is a big if, we can partially overcome the huge meta-problem that governments tend to resist the institutional changes needed to become learning systems.

This blog presents some basic background ideas and examples…

*

Extreme sports: fast feedback = real expertise 

In the 1980s and early 1990s, there was an interesting case study in how useful new knowledge can jump from a tiny isolated group to the general population, with big effects on performance across a whole community. Expertise in Brazilian jiu-jitsu was taken from Brazil to southern California by the Gracie family. There were many sceptics but they vanished rapidly because the Gracies were empiricists. They issued ‘the Gracie challenge’.

All sorts of tough guys, trained in all sorts of ways, were invited to come to their garage/academy in Los Angeles to fight one of the Gracies or their trainees. Very quickly it became obvious that the Gracie training system was revolutionary and that they were real experts, because they always won. There was very fast and clear feedback on predictions. Gracie jiu-jitsu quickly jumped from an LA garage to TV. At the televised UFC 1 event in 1993, Royce Gracie defeated everyone and a multi-billion dollar business was born.

People could see how training in this new skill could transform performance. Unarmed combat changed across the world. Disciplines other than jiu-jitsu have had to make a choice: either isolate themselves and refuse to compete with jiu-jitsu, or learn from it. If interested, watch the first twenty minutes of this documentary (via professor Steve Hsu, physicist, amateur jiu-jitsu practitioner, and predictive genomics expert).

Video: Jiu Jitsu comes to Southern California

Royce Gracie, UFC 1 1993 


 

Flow, deep in the zone

Another field where there is clear expertise is extreme skiing and snowboarding. One of the leading pioneers, Jeremy Jones, describes how he rides ‘spines’ hurtling down the side of mountains:

‘The snow is so deep you need to use your arms and chest to swim, and your legs to ride. They also collapse underfoot, so you’re riding mini-avalanches and dodging slough slides. Spines have blind rollovers, so you can’t see below. Or to the side. Every time the midline is crossed, it’s a leap into the abyss. Plus, there’s no way to stop and every move is amplified by complicated forces. A tiny hop can easily become a twenty-foot ollie. It’s the absolute edge of chaos. But the easiest way to live in the moment is to put yourself in a situation where there’s no other choice. Spines demand that, they hurl you deep into the zone.’ Emphasis added.

Video: Snowboarder Jeremy Jones

What Jones calls ‘the zone’ is also known as ‘flow’ — a particular mental state, triggered by environmental cues, that brings greatly enhanced performance. It is studied in extreme sports and by the military and intelligence services: for example, DARPA is researching whether stimulating the brain can trigger ‘flow’ in snipers.

Flow — or control on ‘the edge of chaos’ where ‘every move is amplified by complicated forces’ — comes from training in which people learn from very rapid feedback between predictions and reality. In ‘flow’, brains very rapidly and accurately process environmental signals and generate hypothetical scenarios/predictions and possible solutions based on experience and training. Jones’s performance is inseparable from developing this fingertip feeling. Similarly, an expert fireman feels the glow of heat on his face in a slightly odd way and runs out of the building just before it collapses without consciously knowing why he did it: his intuition has been trained to learn from feedback and make predictions. Experts operating in ‘flow’ do not follow what is sometimes called the ‘rational model’ of decision-making in which they sequentially interrogate different options — they pattern-match solutions extremely quickly based on experience and intuition.

The video below shows extreme expertise in a state of ‘flow’, with feedback on predictions within milliseconds. This legendary ride is so famous not because of the size of the wave but because of its odd, and dangerous, nature. If you watch carefully you will see what a true expert in ‘flow’ can do: after committing to the wave, Hamilton suddenly realises that unless he reaches back with the opposite hand to normal and drags it against the wall of water behind him, he will get sucked up the wave and might die. (This wave had killed someone a few weeks earlier.) Years of practice and feedback had honed an intuition that, faced with a very dangerous and fast-moving problem, pattern-matched an innovative solution almost instantly (a few seconds at most).

Video: surfer Laird Hamilton in one of the greatest ever rides

 

The faster the feedback cycle, the more likely you are to develop a qualitative improvement in speed that destroys an opponent’s decision-making cycle. If you can reorient yourself faster to the ever-changing environment than your opponent, then you operate inside their ‘OODA loop’ (Observe-Orient-Decide-Act) and the opponent’s performance can quickly degrade and collapse.

This lesson is vital in politics. You can read it in Sun Tzu and see it with Alexander the Great. Everybody can read such lessons and most people will nod along. But the lesson is very hard to apply, and most organisations do not apply it, because most political/government organisations are programmed by their incentives to prioritise seniority, process and prestige over high performance, and this slows and degrades decisions. Further, political organisations tend to make too slowly those decisions that should be fast and too quickly those decisions that should be slow — they are simultaneously both too sluggish and too impetuous, which closes off favourable branching histories of the future.

Video: Boxer Floyd Mayweather, best fighter of his generation and one of the quickest and best defensive fighters ever

The most extreme example in extreme sports is probably ‘free soloing’ — climbing mountains without ropes where one mistake means instant death. If you want to see an example of genuine expertise and the value of fast feedback then watch Alex Honnold.

Video: Alex Honnold ‘free solos’ El Sendero Luminoso (terrifying)

Music is similar to sport. There is very fast feedback, learning, and a clear hierarchy of expertise.

Video: Glenn Gould playing the Goldberg Variations (slow version)

Our culture treats expertise/high performance in fields like sport and music very differently to maths/science education and politics/government. As Alan Kay observes, music and sport expertise is embedded in the broader culture. Millions of children spend large amounts of time practising hard skills. Attacks on these fields as ‘elitist’ do not get the same damaging purchase as in other fields, and the public do not object to elite selection for sports teams or orchestras.

‘Two ideas about this are that a) these [sport/music] are activities in which the basic act can be seen clearly from the first, and b) are already part of the larger culture. There are levels that can be seen to be inclusive starting with modest skills. I think a very large problem for the learning of both science and math is just how invisible are their processes, especially in schools.’ Kay 

When it comes to maths and science education, the powers-that-be (in America and Britain) try very hard, and mostly successfully, to ignore the question: where are the critical thresholds for the valuable skills that develop true expertise? This is even more of a problem with the concept of ‘thinking rationally’, for which some basic logic, probability, and understanding of scientific reasoning is a foundation. Discussion of politics and government almost totally ignores the concept of training people to update their opinions in response to new evidence — i.e. to adapt to feedback. The ‘rationalist community’ — people like Scott Alexander, who wrote this fantastic essay (Moloch) about why so much goes wrong, or the recent essays by Eliezer Yudkowsky — are ignored at the apex of power. I will return to the subject of how to create new education and training programmes for elite decision-makers. It is a good time for UK universities to innovate in this field, as places like Stanford are already doing. Instead of training people like Cameron and Adonis to bluff with PPE, we need courses that combine rational thinking with practical training in managing complex projects. We need people who practise hard at making predictions in ways we know work well (cf. Tetlock) and then update in response to errors.

*

A more general/abstract approach to reforming government

If we want to get much higher performance in government, then we need to think rigorously about: the selection of people and teams, their education and training, their tools, and the institutions (incentives and so on) that surround and shape them.

Almost all analysis of politics and government considers relatively surface phenomena. For example, the media briefly blasts headlines about Carillion’s collapse or our comical aircraft carriers but there is almost no consideration of the deep reasons for such failures and therefore nothing tends to happen — the media caravan moves on and the officials and ministers keep failing in the same ways. This is why, for example, the predicted abject failure of the traditional Westminster machinery to cope with Brexit negotiations has not led to self-examination and learning but, instead, mostly to a visible determination across both sides of the Brexit divide in SW1 to double down on long-held delusions.

Progress requires attacking the ‘system of systems’ problem at the right ‘level’. Attacking the problems directly — let’s improve policy X and Y, let’s swap ‘incompetent’ A for ‘competent’ B — cannot touch the core problems, particularly the hardest meta-problem that government systems bitterly fight improvement. Solving the explicit surface problems of politics and government is best approached by a more general focus on applying abstract principles of effective action. We need to surround relatively specific problems with a more general approach. Attack at the right level will see specific solutions automatically ‘pop out’ of the system. One of the most powerful simplicities in all conflict (almost always unrecognised) is: ‘winning without fighting is the highest form of war’. If we approach the problem of government performance at the right level of generality then we have a chance to solve specific problems ‘without fighting’ — or, rather, without fighting nearly so much and the fighting will be more fruitful.

This is not a theoretical argument. If you look carefully at ancient texts and modern case studies, you see that applying a small number of very simple, powerful, but largely unrecognised principles (that are very hard for organisations to operationalise) can produce extremely surprising results.

We have no alternative to trying. Without fundamental changes to government, we will lose our hourly game of Russian roulette with technological progress.

‘The combination of physics and politics could render the surface of the earth uninhabitable… [T]he ever accelerating progress of technology and changes in the mode of human life … gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.’ John von Neumann

As Steve Hsu says: Pessimism of the Intellect, Optimism of the Will.


PS. There is an interesting connection between the nature of counterfactual reasoning in the fast-moving world of extreme sports and the theoretical paper I posted yesterday on state-of-the-art AI. The human ability to interrogate stored representations of the environment with counterfactual questions is fundamental to the nature of intelligence and to developing expertise in physical and mental skills. It is, for now, absent in machines.

On the referendum #20: the campaign, physics and data science – Vote Leave’s ‘Voter Intention Collection System’ (VICS) now available for all

‘If you don’t get this elementary, but mildly unnatural, mathematics of elementary probability into your repertoire, then you go through a long life like a one-legged man in an ass-kicking contest. You’re giving a huge advantage to everybody else. One of the advantages of a fellow like Buffett … is that he automatically thinks in terms of decision trees and the elementary math of permutations and combinations… It’s not that hard to learn. What is hard is to get so you use it routinely almost every day of your life. The Fermat/Pascal system is dramatically consonant with the way that the world works. And it’s fundamental truth. So you simply have to have the technique…

‘One of the things that influenced me greatly was studying physics… If I were running the world, people who are qualified to do physics would not be allowed to elect out of taking it. I think that even people who aren’t [expecting to] go near physics and engineering learn a thinking system in physics that is not learned so well anywhere else… The tradition of always looking for the answer in the most fundamental way available – that is a great tradition.’ Charlie Munger, Warren Buffett’s partner.
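To make the ‘decision tree’ habit Munger describes concrete, here is a minimal sketch. It is entirely my own illustration with invented numbers, not anything from Munger: weight each branch of a decision by its probability and compare expected values.

```python
# Minimal sketch of the 'decision tree' habit: weight each branch by its
# probability and compare expected values. All numbers are invented for illustration.

def expected_value(branches):
    """branches: list of (probability, payoff) pairs whose probabilities sum to 1."""
    return sum(p * payoff for p, payoff in branches)

safe = [(1.0, 100)]                    # a certain 100
risky = [(0.6, 250), (0.4, -50)]       # 60% chance of 250, 40% chance of losing 50

print(expected_value(safe))    # 100.0
print(expected_value(risky))   # 130.0 -> higher expected value, but more variance
```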

During the ten-week official campaign the implied probability, from Betfair odds, of IN winning ranged between 60% and 83% (rarely below 66%) and the probability of OUT winning ranged between 17% and 40% (rarely above 33%). One of the reasons why so few in London saw the result coming was that campaigns’ use of data is hard to track even if you know what to look for, and few in politics or the media yet know what to look for. Almost all of Vote Leave’s digital communication and data science was invisible even if you read every single news story or column produced during the campaign or any of the books so far published (written pre-Shipman’s book).
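For readers unfamiliar with betting markets, converting exchange odds into an implied probability is elementary arithmetic. The sketch below shows the standard calculation; the specific odds used are illustrative assumptions, not the actual Betfair prices.

```python
# Standard conversion from decimal odds to implied probability (ignoring the
# exchange's margin). The odds below are illustrative, not actual Betfair prices.

def implied_probability(decimal_odds):
    return 1.0 / decimal_odds

print(implied_probability(1.25))  # 0.8 -> decimal odds of 1.25 imply an 80% chance
print(implied_probability(5.0))   # 0.2 -> decimal odds of 5.0 imply a 20% chance
```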

Today we have made a software product available for download – Vote Leave’s ‘Voter Intention Collection System’ (VICS) – click HERE. It was named after Victoria Woodcock, Operations Director, known as Vics, who was the most indispensable person in the campaign. If she’d gone under a bus, Remain would have won. In many fields the difference between average and best is, say, 30%, but some people are 50 times more effective than others. She is one of them. She had ‘meetings in her head’, as people said of Steve Wozniak. If she had been Cameron’s chief of staff instead of Llewellyn, and Paul Stephenson had been director of communications instead of Oliver, and Cameron had listened to them, then other things being equal Cameron would still be on the No10 sofa with a glass of red and a James Bond flick. They were the operational/management and communications foundation of the campaign. Over and over again, those two – along with others, often very junior – saved us from the consequences of my mistakes and ignorance.

Among the many brilliant things Vics did was manage the creation of VICS. When we started the campaign I had many meetings on the subject of canvassing software. Amazingly there was essentially no web-based canvassing software system for the UK that allowed live use and live monitoring. There have been many attempts by political parties and others to build such systems. All failed, expensively and often disastrously.

Unfortunately, early on (summer 2015) Richard Murphy was hired to manage the ground campaign. He wanted to use an old rubbish system that assumed the internet did not exist. This was one of the factors behind his departure and he decided to throw in his lot with Farage et al. He then inflicted this rubbish system on Grassroots Out which is one of the reasons why it was an organisational/management disaster and let down its volunteers. After Vote Leave won the official designation, many GO activists defected, against official instructions from Farage, and plugged into VICS. Once Murphy was replaced by Stephen Parkinson (now in No10) and Nick Varley, the ground campaign took off.

We created new software. This was a gamble, but the whole campaign was a huge gamble and we had to take many calculated risks. One of our central ideas was that the campaign had to do things in the field of data that had never been done before. This included a) integrating data from social media, online advertising, websites, apps, canvassing, direct mail, polls, online fundraising, activist feedback, and some new things we tried such as a new way to do polling (about which I will write another time), and b) having experts in physics and machine learning do proper data science in the way only they can – i.e. far beyond the normal skills applied in political campaigns. We were the first campaign in the UK to put almost all our money into digital communication and then have it partly controlled by people whose normal work was subjects like quantum information (combined with political input from Paul Stephenson and Henry de Zoete, and digital specialists AIQ). We could only do this properly if we had proper canvassing software. We built it partly in-house and partly using an external engineer whom we sat in our office for months.

Many bigshot traditional advertising characters told us we were making a huge error. They were wrong. It is one of the reasons we won. We outperformed the IN campaign on data despite the fact that they started with vast amounts of data while we started with almost zero, they had support from political parties while we did not, they had early access to the electoral roll while we did not, and they had the Crosby/Messina data and models from the 2015 election while we had to build everything from scratch without even the money to buy standard commercial databases (we found ways to scrape equivalents off the web, saving hundreds of thousands of pounds).

If you want to make big improvements in communication, my advice is – hire physicists, not communications people from normal companies and never believe what advertising companies tell you about ‘data’ unless you can independently verify it. Physics, mathematics, and computer science are domains in which there are real experts, unlike macro-economic forecasting which satisfies neither of the necessary conditions – 1) enough structure in the information to enable good predictions, 2) conditions for good fast feedback and learning. Physicists and mathematicians regularly invade other fields but other fields do not invade theirs so we can see which fields are hardest for very talented people. It is no surprise that they can successfully invade politics and devise things that rout those who wrongly think they know what they are doing. Vote Leave paid very close attention to real experts. (The theoretical physicist Steve Hsu has a great blog HERE which often has stuff on this theme, e.g. HERE.)

More important than technology is the mindset – the hard discipline of obeying Richard Feynman’s advice: ‘The most important thing is not to fool yourself and you are the easiest person to fool.’ The data scientists were a hard floor on ‘fooling yourself’ and I empowered them to challenge everybody, including me. They saved me from many bad decisions even though they had zero experience in politics, and they forced me to change how I made important decisions, like what got what money. We either operated scientifically or knew we were not doing so, which is itself very useful knowledge. (One of the things they did was review the entire literature to see what reliable studies have been done on ‘what works’ in politics and what numbers are reliable.) Charlie Munger is one half of the most successful investment partnership in world history. He advises people – hire physicists. It works, and the real prize is not the technology but a culture of making decisions in a rational way and systematically avoiding normal ways of fooling yourself as much as possible. This is very far from normal politics.

(One of the many ways in which Whitehall and Downing Street should be revolutionised is to integrate physicist-dominated data science in decision-making. There are really vast improvements possible in Government that could save hundreds of billions and avoid many disasters. Leaving the EU also requires the destruction of the normal Whitehall/Downing Street system and the development of new methods. A dysfunctional broken system is hardly likely to achieve the most complex UK government project since beating Nazi Germany, and this realisation is spreading – a subject I will return to.)

In 2015 the data scientists said to me: ‘If the polls average 50-50 at the end you will win because of differential turnout and even if the average is slightly behind you could easily win because all the pollsters live in London and hang out with people who will vote IN and can’t imagine you winning so they might easily tweak their polls in a way they think is making them more accurate but is actually fooling themselves and everybody else.’ This is what happened. Almost all the pollsters tweaked their polls and, according to Curtice, all the tweaks made them less accurate. Good physicists are trained to look for such errors. (I do not mean to imply that on 23 June I was sure we would win. I was not. Nor was I as pessimistic as most on our side. I will write about this later.)

VICS allows data to be input centrally (the electoral roll, which in the UK is a nightmare to gather from all the LAs) and then managed at a local level, whether that be at street level, constituency or wider areas. Security levels can be set centrally to ensure that no-one can access the whole database. During the campaign we used VICS to upload data models which predicted where we thought Leave voters were likely to be so that we could focus our canvassing efforts, which was important given limited time and resources on the ground. The model produced star ratings so that local teams could target the streets more likely to contain Leave voters.
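As a purely hypothetical sketch of the star-rating idea described above (the real VICS logic and thresholds are not public, so the function and cut-offs here are invented for illustration), a model’s predicted probability for a street could be mapped to a simple rating like this:

```python
# Hypothetical illustration of mapping a model's predicted probability that a
# street contains Leave voters to a 1-5 star rating for canvassing teams.
# The thresholds are invented; the real VICS logic is not public.

def star_rating(predicted_leave_probability):
    thresholds = [0.2, 0.4, 0.6, 0.8]   # assumed boundaries between 1 and 5 stars
    stars = 1
    for t in thresholds:
        if predicted_leave_probability >= t:
            stars += 1
    return stars

streets = {"Hypothetical Street A": 0.72, "Hypothetical Street B": 0.31}
for street, p in sorted(streets.items(), key=lambda kv: -kv[1]):
    print(street, star_rating(p), "stars")   # A: 4 stars, B: 2 stars
```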

Data flowed in on the ground and was then analysed by the data science team and integrated with all the other data streaming in. Data models helped us target the ground campaign resources and, in turn, data from the ground campaign helped test and refine the models in a learning cycle – i.e. VICS was not only useful to the ground campaign but also helped improve the models used for other things. (This was the point of our £50 million prize for predicting the results of the European football championships, which gathered data from people who usually ignore politics – I’m still frustrated we couldn’t persuade someone to insure a £350 million prize, which is what I wanted to do.) In the official 10-week campaign we served about one billion targeted digital adverts, mostly via Facebook and strongly weighted to the period around postal voting and the last 10 days of the campaign. We ran many different versions of ads, tested them, dropped the less effective and reinforced the most effective in a constant iterative process. We combined this feedback with polls (conventional and unconventional) and focus groups to get an overall sense of what was getting through. The models honed by VICS were also used to produce dozens of different versions of the referendum address (46 million leaflets) and we tweaked the language and look according to the most reliable experiments done in the world (hence our very plain unbranded ‘The Facts’ leaflet, which the other side tested, found very effective, and tried to copy). I will blog more about this.
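The iterative cycle described above (run many ad variants, measure response, drop the weak ones and shift money to the strong ones) can be illustrated with a toy sketch. This is not the campaign’s actual system; the function, variant names and response rates are all invented.

```python
# Toy sketch of an iterative ad-testing cycle: rank variants by measured response,
# drop the weaker half, and reallocate budget to the survivors in proportion to
# performance. Entirely illustrative; not the campaign's actual code.

def reallocate_budget(variants, total_budget, keep_fraction=0.5):
    """variants: dict mapping ad name -> measured response rate."""
    ranked = sorted(variants.items(), key=lambda kv: kv[1], reverse=True)
    survivors = ranked[: max(1, int(len(ranked) * keep_fraction))]
    total_response = sum(rate for _, rate in survivors)
    return {name: total_budget * rate / total_response for name, rate in survivors}

ads = {"ad_A": 0.012, "ad_B": 0.004, "ad_C": 0.009, "ad_D": 0.002}
print(reallocate_budget(ads, total_budget=10000))  # budget shifts to ad_A and ad_C
```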

These canvassing sessions represented 80-90% of our ground effort in the last few months, hence some of the reports by political scientists derived from the Events pages on the campaign websites, which did not include canvassing sessions, are completely misleading about what actually happened (this includes M Goodwin, who is badly confused and confusing, and kept telling the media duff information after he was told it was duff). There was also a big disinformation campaign by Farage’s gang, including Bone and Pursglove, who told the media ‘Vote Leave has no interest in the ground campaign’. This was the opposite of the truth. By the last 10 weeks we had over 12,000 people doing things every week (we had many more volunteers than this but the 12,000 were regularly active). When Farage came to see me for the last time (as always fixated only on his role in the debates and not the actual campaign, which he was sure was lost) he said that he had 7,000 activists who actually did anything. He was stunned when I said that we had over 12,000. (Obviously there was a lot of overlap between these two figures.) I think Farage et al believed their own spin on this subject and were deluded, not lying. These volunteers delivered about 70 million leaflets out of a total of ~125 million that were delivered one way or another.

While there were some fantastic MPs who made huge efforts on the ground – e.g. Anne Marie Trevelyan – it was interesting how many MPs, nominally very committed to Leave, did nothing useful in their areas and had no interest in ground campaigning and data. Many were far more interested in trying to get on TV and yapping to hacks than in gathering useful data, including prominent MPs on our Board and Campaign Committee, some of whom contributed ZERO useful data in the entire campaign. Some spent much of the campaign having boozy lunches with Farage, gossiping about what would happen after we lost. Because so many of them proved untrustworthy and leaked everything, I kept the data science team far from prying eyes – when in the office, if asked what they did they replied ‘oh I’m just a junior web guy’. It would have been better if we could have shared more but this was impossible given some of the characters.

VICS was the first of its kind in the UK and provided new opportunities. It is, of course, far from ideal. It was developed very quickly, we had to cut many corners, and it could be improved on. But it worked. Many on the ground, victims of previous such attempts, assumed it would blow up under the pressure of GOTV (get out the vote). It did not. It worked smoothly right through peak demand. This was also because we solved the hardware problem by giving it to Rackspace, which did a great job – they have a system that scales automatically with demand, so you don’t have to worry about big surges overwhelming the system.

There were many things we could have done much better. Our biggest obstacle was not the IN campaign and its vast resources but the appalling infighting on our own side, driven by all the normal human motivations described in Thucydides – fear, interest, the pursuit of glory and so on. Without this obstacle we would have done far more on digital/data. Having seen what is offered by London’s best communications companies, I am confident that vast improvements in performance are possible if you hire the right people. A basic problem for people in politics is that approximately none have the hard skills necessary to distinguish great people from charlatans. It was therefore great good fortune that I was friends with our team before the campaign started.

During the campaign many thousands of people donated to Vote Leave. They paid for VICS. Given that we spent a lot of money developing it, that there is nothing equivalent available on the market, and that Vote Leave is no more (barring a very improbable event), we thought that we would make VICS available for anybody to use and improve, though strictly on the basis that nobody can claim any intellectual property rights over it. It is being made available in the spirit of the open source movement and use of it should be openly acknowledged. Thanks again to the thousands of people who made millions of sacrifices – because of you we won everywhere except London, Scotland and Northern Ireland against the whole Government machine supported by almost every organisation with power and money.

I will write more about the campaign once the first wave of books is published.

PS. Do not believe the rubbish peddled by Farage and the Leave.EU team about social media. For example: a) they boasted publicly that they paid hundreds of thousands of pounds for over half a million Facebook ‘Likes’, without realising that b) Facebook’s algorithms no longer optimised news feeds for Likes (the news feed is optimised around paid advertising). Leave.EU wasted hundreds of thousands, just as many big companies spent millions building armies of Likes that were rendered largely irrelevant by Facebook’s algorithmic changes. This is just one of their blunders. Vote Leave put our money into targeted paid adverts, not into buying Likes to spin stories to gullible hacks, MPs, and donors. Media organisations should have someone on the political staff who is a specialist in data, or a route to talk to their organisation’s own data science teams, to help spot snake oil merchants.

PPS. If you are young, smart, and interested in politics, think very hard before studying politics / ‘political science’ / PPE at university. You will be far better off if you study maths or physics. It will be easy to move into politics later if you want to and you will have more general skills with much wider application and greater market value. PPE does not give such useful skills – indeed, it actually causes huge problems as it encourages people like Cameron and Ed Balls to ‘fool themselves’ and spread bad ideas with lots of confidence and bluffing. You can always read history books later but you won’t always be able to learn maths. If you have these general skills, then you will be much more effective than the PPE-ers you will compete against. In a few years, this will be more obvious as data science will be much more visible. A new interdisciplinary degree is urgently needed to replace PPE for those who want to go into politics. It should include the basics of modelling and involve practical exposure to people who are brilliant at managing large complex organisations.

PPPS. One of the projects that the Gove team did in the DfE was funding the development of a ‘Maths for Presidents’ course, in the same spirit as the great Berkeley course ‘Physics for Presidents’, based on ideas of Fields Medallist Tim Gowers. The statistics of polling would be a good subject for this course. This course could have a big cultural effect over 20 years if it is supported wisely.

Standards In English Schools Part I: The introduction of the National Curriculum and GCSEs

The Introduction to this series of blogs, HERE, sets out the background and goals.

There are many different senses in which people discuss ‘standards’. Sometimes they mean an overall judgement on the performance of the system as judged by an international test like PISA. Sometimes they mean judgements based on performance in official exams such as KS2 SATs (at 11) or GCSEs. Sometimes they mean the number of schools above or below a DfE ‘floor target’. Sometimes they mean the number of schools and/or pupils in Ofsted-defined categories. Sometimes people talk about ‘the quality of teachers’. Sometimes they mean ‘the standards required of pupils when they take certain exams’. Today, the media is asking ‘have Academies raised standards?’ because of the Select Committee Report (which, after a brief flick through, seems to have ignored most of the most interesting academic studies done on a randomised/pseudo-randomised basis).

This blog in the series is concerned mainly with two questions: what has happened to the standards required of pupils taking GCSEs and A Levels as a result of changes since the mid-1980s, and how do universities and learned societies judge the preparation of pupils for further studies? Have the exams got easier? Do universities and learned societies think pupils are well-prepared for further studies?

I will give a very short potted history of the introduction of GCSEs and the National Curriculum before examining the evidence of their effects. If you are not interested in the history, please skip to Section B on Evidence. If you just want to see my Conclusions, scroll to the short section at the end.

I stress that my goal is not to argue for a return to the pre-1988 system of O Levels and A Levels. While it had some advantages over the existing system, it also had profound problems. I think that an unknown fraction of the cohort could experience far larger improvements in learning than we see now if they were introduced to different materials in different ways, rather than either contemporary exams or their predecessors, but I will come to this argument, and why I have this belief, in a later blog.

I have used the word ‘Department’ to represent the DES of the 1980s, the DfE of post-2010, and its different manifestations in between.

This is just a rough first stab at collecting things I’ve shoved in boxes, emails etc over the past few years. Please leave corrections and additions in Comments.

A. A very potted history

Joseph introduces GCSEs – ‘a right old mess’

The debate over the whole of education policy, and particularly the curriculum and exams, changed a lot after Callaghan’s Ruskin speech in 1976 and the Department’s Yellow Book. Before then, the main argument was simply about providing school places and the furore over selection. After 1976 the emphasis shifted to ‘standards’ and there was growing momentum behind a National Curriculum (NC) of some sort and reforms to the exam system.

Between 1979 and 1985, the Department chivvied LAs on the curriculum but had little power and nothing significant changed. Joseph was too much of a free marketeer to support a NC, so its proponents could not make progress.

Joseph was persuaded to replace O Levels with GCSEs. He thought that the outcome would be higher standards for all, but he came to feel he had been hoodwinked by the bureaucratic process involving The Schools Examination Committee (SEC). He later complained:

‘I should have fought against flabbiness in general more than I did… I thought I did, but how do you reach into such a producer-oriented world? … “Stretching” was my favourite word; I judged that if you leant on that much else would follow. That’s what my officials encouraged me to imagine I was achieving… I said I’d only agree to unify the two examinations provided we established differentiation [which he defined as ‘you’re stretching the academic and you’re stretching the non-academic in appropriate ways’], and now I find that unconsciously I have allowed teacher assessment, to a greater extent than I assumed. My fault … my fault… it’s the job of ministers to see deeply… and therefore it’s flabby… You don’t find me defending either myself or the Conservative Party, but I reckon that we’ve all together made a right old mess of it. And it’s hurt most those who are most vulnerable.’ (Interview with Ball.)

I have not come across any other ministers or officials from this period so open about their errors.

The O Level survived under a different name as an international exam provided by Cambridge Assessment. It is still used abroad, including in Singapore, which regularly comes in the top three in all international tests. Cambridge Assessment also offers an ‘international GCSE’ that is, they say, tougher than the ‘old’ GCSE (i.e. the one in use now, before it changes in 2015) but not as tough as the O Level. This international GCSE was used in some private schools pre-2010, along with ‘international GCSEs’ from other exam boards. From 2010, state schools could use iGCSEs. In 2014, the DfE announced that it would stop this again. I blogged on this decision HERE.

Entangled interests – Baker and the National Curriculum

In 1986, Thatcher replaced Joseph with Baker hoping, she admitted, that he would make up ‘in presentational flair whatever he lacked in attention to detail’. He did not. Nigel Lawson wrote of Baker that ‘not even his greatest friends would describe him as a profound thinker or a man with mastery of detail’. Baker’s own PPS said that at the morning meeting ‘the main issue was media handling’. Jenny Bacon, the official responsible for The National Curriculum 5-16 (1987), said that Baker liked memos ‘in “ball points” … some snappy things with headings. It wasn’t glorious continuous prose… [Ulrich, a powerful DES official] was appalled but Baker said “That’s just the kind of brief I want”.’

Between 1976 and 1986, concern had grown in Whitehall about the large number of awful schools and widespread bad teaching. Various intellectual arguments, ideology, political interests (personal and party), and bureaucratic interests aligned to create a National Curriculum. Thatcherites thought it would undermine what they thought of as the ‘loony left’, then much in the news. Baker thought it would bring him glory. The Department and HMI rightly thought it would increase their power. After foolishly announcing CTCs at Party Conference, thus poisoning their brand with politics from the start, Baker announced he would create a NC and a testing system at 7, 11, and 14.

The different centres of power disagreed on what form the NC would take. HMI lobbied against subjects and wanted a NC based on ‘areas of expertise’, not traditional subjects. Thatcher wanted a very limited core curriculum based on English, maths, and science. The Department wanted a NC that stretched across the whole curriculum. Baker agreed with the Department and dismissed Thatcher’s limited option as ‘Gradgrind’.

In order to con Thatcher into agreeing his scheme, Baker worked with officials to invent a fake distinction between ‘core’ and ‘foundation’ subjects. As Baker’s Permanent Secretary Hancock said, ‘We devised the notion of the core and the foundation subjects but if you examine the Act you will see that there is no difference between the two. This was a totally cynical and deliberate manoeuvre on Kenneth Baker’s part.’

The 1988 Act established two quangos to be what Baker called ‘the twin guardians of the curriculum’ – The National Curriculum Council (NCC), focused on the NC, and The Schools Examinations and Assessment Council (SEAC), focused on tests. Once the Act was passed, Baker’s junior minister Rumbold said that ‘Ken went out to lunch.’ Like many ministers, he did not understand the importance of the policy detail and the intricate issues of implementation. He allowed officials to control appointments to the two vital committees and various curriculum working groups. Even Baker’s own spad later said that Baker was conned into appointing ‘the very ones responsible for the failures we have been trying to put right’. Baker forlornly later admitted that ‘I thought you could produce a curriculum without bloodshed. Then people marched over mathematics. Great armies were assembled’, and he ‘never envisaged it would be as complex as it turned out to be’. Bacon, the official responsible for the NC, said that Baker ‘wasn’t interested in the nitty gritty’. Nicholas Tate (who was at the NCC and later headed the QCA) said that Baker was ‘affable but remote. He didn’t trouble his mind with attainment targets. He was resting on his laurels.’ Hancock, his Permanent Secretary, said that ‘after 1987 he became increasingly arrogant and impatient’. In 1989, Baker was moved to Party Chairman leaving behind chaos for his successor.

According to his colleagues, Baker was obsessed with the media, he did not try to understand (and did not have the training to understand) the policy issues in detail, and he confused the showmanship necessary to get a bill passed with serious management – he described himself as ‘a doer’ but the ‘doing’ in his mind consisted of legislation and spin. He did not even understand that there were strong disputes among teachers, subject bodies, and educationalists about the content of the NC – never mind what to do about these disputes. (Having watched the UTC programme from the DfE, I can say the same traits were much in evidence thirty years later.)

Baker’s legacy 1989 – 1997: Shambles

Baker’s memoirs do not mention the report of The Task Group on Assessment (TGAT), chaired by Professor Paul Black, commissioned by Baker in 1987 to report on how the NC could be assessed. The plan was very complicated with ten levels of attainment having to be defined for each subject. Thatcher hated it and criticised Baker for accepting it. Meanwhile the Higginson Report had recommended replacing A Levels with some sort of IB type system. Bacon said that ‘the political trade-off was Higginson got ditched … and we got TGAT. In retrospect it may have been the wrong trade off.’

MacGregor could not get a grip of the complexity. He did not even hire a specialist policy adviser because, he said, ‘I didn’t feel I needed one.’ He blamed Baker for the chaos, saying Baker ‘hadn’t spent enough time thinking about who was appointed to the bodies. He left it to officials and didn’t think through what he wanted the bodies to do. For the first year I was unable to replace anybody.’ The chairman of the NCC described how they used ‘magic words to appease the right’ and get through what they wanted. The officials who controlled SEAC stopped the simplification that Thatcher wanted by playing the ‘legal advice’ card, claiming that the 1988 Act required testing of all attainment targets. (I had to deal with the same argument 25 years later.) MacGregor was trapped. He had an unworkable system and was under contradictory pressure from Thatcher to simplify everything and from Baker to maintain what he had promised.

Clarke bluffed and bullied his way through 18 months without solving the problems. His Permanent Secretary described the trick of getting Clarke to do what officials wanted: ‘The trick was to never box him into a corner… Show him where there was a door but never look at that door, and never let on you noticed when he walked through.’ Like MacGregor, Clarke blamed Baker for the shambles: ‘[Baker] had set up all these bloody specialist committees to guide the curriculum, he’d set up quango staff who as far as I could see had come out of the Inner London Education Authority the lot of them.’ Clarke solved none of the main problems with the tests, antagonised everybody, and replaced HMI with Ofsted.

After his surprise win, Major told the Tory Conference in 1992, ‘Yes it will mean another colossal row with the education establishment. I look forward to that.’ Patten soon imploded, the unions went for the jugular over the introduction of SATs, and by the end of 1993 Number Ten had backtracked on its bellicose spin and was in full retreat with a review by Dearing (published 1994). Suddenly, the legal advice that had supposedly prevented any simplification was rethought and officials told Dearing that the legal advice did allow simplification after all: ‘our advice is that the primary legislation allows a significant measure of flexibility’. (In my experience, one of the constants of Whitehall is that legal advice tends to shift according to what powerful officials want.) Dearing produced a classic Whitehall fudge that got everybody out of the immediate crisis but did not even try to deal with the fundamental problems, thus pushing them into the future.

The historian Robert Skidelsky, helping SEAC, told Patten that ‘these tests will not run’ and that he should change course, but Patten shouted ‘That is defeatist talk.’ Skidelsky decided to work out a radically simpler model than the TGAT system with a small group in SEAC: ‘We pushed the model through committee and through the Council and sent it off to John Patten. We never received a reply. Six months after I resigned Emily Blatch approached me and said she had been looking for my paper on Assessment but no one seems to know where it is.’

Patten was finished. Gillian Shephard was put in to be friendly to the unions and calm the chaos. Soon she and Major had also fallen out and the cycle of briefing and counter-briefing against Number Ten returned, with permanent policy chaos. One of her senior officials, Clive Saville, concluded that ‘There was a great intellectual superficiality about Gillian Shephard and she was as intellectually dishonest as Shirley Williams. She was someone who wanted to be liked but wasn’t up to the job.’

A few thoughts on the process

The Government had introduced a new NC and test system and replaced O Levels with GCSEs. (They also introduced new vocational qualifications (NVQs) described by Professor Alan Smithers as a ‘disaster of epic proportions … utterly lightweight’.) The process was a disastrous bungle from start to finish.

Thatcher deserves considerable blame. She allowed Baker to go ahead with fundamental reforms without any agreed aims or a detailed roadmap. She knew, as did Lawson, that Baker could not cope with detail, yet she appointed him on the basis of ‘presentational flair’ (media obsession is often confused with ‘presentational flair’).

The best book I have read by someone who has worked in Number Ten and seen why the Whitehall architecture is dysfunctional is John Hoskyns’ Just In Time. Extremely unusually for someone in a senior position in No10, Hoskyns both had an intellectual understanding of complex systems and was a successful manager. Inevitably, he was appalled at how the most important decisions were made and left Number Ten after failing to persuade Thatcher to tear up the civil service system. Since then, everybody in Number Ten has been struggling with the same issues. (If she had taken his advice history might have been extremely different – e.g. no ERM debacle.) His conclusion on Thatcher was:

‘The conclusion that I am coming to is that the way in which [Thatcher] herself operates, the way her fire is at present consumed, the lack of a methodical mode of working and the similar lack of orderly discussion and communication on key issues, means that our chance of implementing a carefully worked out strategy – both policy and communications – is very low indeed… Difficult problems are only solved – if they can be solved at all – by people who desperately want to solve them… I am convinced that the people and the organisation are wrong.’ (Emphasis added.)

Arguably the person who knowingly appoints someone like Baker is more to blame for the failings of Baker than Baker is himself. Major and the string of ministers that followed Baker were doomed. They were not unusually bad – they were representative examples of those at the apex of the political process. They did not know how to go about deciding aims, means, and operations. They were obsessed with media management and therefore continually botched the policy and implementation. They could not control their officials. They could not agree a plan and blamed each other. If they were the sort of people who could have got out of the mess, then they were the sort of people who would not have got into the mess in the first place.

Officials over-complicated everything and, like ministers, did not engage seriously with the core issue: what should pupils of different abilities be doing, and how can a process be established for collecting reliable information? The process was dominated by the same attitude on all sides – how to impose a mentality that was already fixed.

It was also clearly affected by another element that has contemporary relevance – the constant churn of people. Just between 1989 and the end of 1992, there were: a new Permanent Secretary in May 1989, a new SoS in July 1989 (MacGregor), another new SoS in November 1990 (Clarke), a new PM and No10 team (Major), new heads for the NCC and SEAC in July 1991, then another new SoS in spring 1992 (Patten) and another new Permanent Secretary. Everybody blamed problems on predecessors and nobody could establish a consistent path.

Even its own Permanent Secretaries later attacked the DES. James Hamilton (1976-1983) was put into DES in June 1976 from the Cabinet Office to help with the Ruskin agenda and found a place where ‘when something was proposed someone would inevitably say, “Oh we tried that back in whenever and it didn’t work”…’. Geoffrey Holland (1992-3) admitted that, ‘It [DES] simply had no idea of how to get anything off the ground. It was lacking in any understanding or experience of actually making things happen.’

A central irony of the story shows how dysfunctional the system was. Thatcher never wanted a big NC and a complicated testing system but she got one. As some of her ideological opponents in the bureaucracy tried to simplify things when it was clear Baker’s original structure was a disaster, ministers were often fighting with them to preserve a complex system that could not work and which Thatcher had never wanted. This sums up the basic problem – a very disruptive process was embarked upon without the main players agreeing what the goal was.

Although the think tanks were much more influential in this period than they are now, Ferdinand Mount, head of Thatcher’s Policy Unit, made a telling point about their limitations: ‘Enthusiasts for reform at the IEA and the CPS were prodigal with committees and pamphlets but were much less helpful when it came to providing practical options for action. This made it difficult for the Policy Unit’s ideas to overcome the objections put forward by senior officials’. Thirty years later this remains true. Think tanks put out reports but they rarely provide a detailed roadmap that could help people navigate such reforms through the bureaucracy and few people in think tanks really understand how Whitehall works. This greatly limits their real influence. This is connected to a wider point. Few of those who comment prominently on education (or other) policy understand how Whitehall works, hence there is a huge gap between discussions of ideal policy and what is actually possible within a certain timeframe in the existing system, and commentators think that all sorts of things that happen do so because of ministers’ wishes, confusing public debate further.

I won’t go into the post-1997 story. There are various books that tell this whole story in detail. The National Curriculum remained but was altered; the test system remained but gradually narrowed from the original vision; there were some attempts at another major transformation (such as Tomlinson’s attempt to end A Levels, thwarted by Blair) but none took off; money poured into the school system and its accompanying bureaucracy at an unprecedented rate but, other than a large growth in the number and salaries of everybody, it remained unclear what if any progress was being made.

This bureaucracy spent a great deal of taxpayers’ money promoting concepts such as ‘learning styles’ and ‘multiple intelligences’ that have no proper scientific basis but which nevertheless were successfully blended with old ideas from Vygotsky and Piaget to dominate a great deal of teacher training. A lot of people in the education world got paid an awful lot of money (Hargreaves, Waters et al) but what happened to standards?

(The quotes above are taken mainly from Daniel Callaghan’s Conservative Party Education Policies 1976-1997.)

B. The cascading effects of GCSEs and the National Curriculum

Below I consider 1) the data on grade inflation in GCSEs and A Levels, 2) various studies from learned societies and others that throw light on the issue, 3) knock-on effects in universities.

1. Data on grade inflation in GCSEs and A Levels

We do not have an official benchmark against which to compare GCSE results. The picture is therefore necessarily hazy. As Coe has written, ‘we are limited by the fact that in England there has been no systematic, rigorous collection of high-quality data on attainment that could answer the question about systemic changes in standards.’ This is one of the reasons why in 2013 we, supported by Coe and others, pushed through (against considerable opposition including academics at the Institute of Education) a new ‘national reference test’ in English and maths at age 16, which I will return to in a later blog.

However, we can compare the improvement in GCSE results with a) results from international tests and b) consistent domestic tests uncontrolled by Whitehall.

The first two graphs below show the results of this comparison.

Chart 1: Comparison of English performance in international surveys versus GCSE scores 1995-2012 (Coe)


Chart 2: GCSE grades achieved by candidates with same maths & vocab scores each year 1996-2012 (Coe)


Professor Coe writes of Chart 1:

‘When GCSE was introduced in 1987 [I think he must mean 1988 as that was the first year of GCSEs or else he means ‘the year before GCSEs were first taken’], 26.4% of the cohort achieved five grade Cs or better. By 2012 the proportion had risen to 81.1%. This increase is equivalent to a standardised effect size of 1.63, or 163 points on the PISA scale… If we limit the period to 1995 – 2011 [as in Chart 1 above] the rise (from 44% to 80% 5A*-C) is equivalent to 99 points on the PISA scale [as superimposed on Chart 1]… [T]he two sets of data [international and GCSEs] tell stories that are not remotely compatible. Even half the improvement that is entailed in the rise in GCSE performance would have lifted England from being an average performing OECD country to being comfortably the best in the world. To have doubled that rise in 16 years is just not believable

‘The question, therefore, is not whether there has been grade inflation, but how much…’ [Emphasis added.] (Professor Robert Coe, ‘Improving education: a triumph of hope over experience’, 18 June 2013, p. vi.)
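Coe’s arithmetic can be checked quickly. PISA scores are scaled so that one standard deviation corresponds to roughly 100 points, so a standardised effect size converts to PISA points by multiplying by 100. A minimal sketch, assuming that standard scaling:

```python
# Rough check of Coe's conversion, assuming the standard PISA scaling in which
# one standard deviation is about 100 points.

PISA_SD = 100

def effect_size_to_pisa_points(effect_size):
    return effect_size * PISA_SD

print(effect_size_to_pisa_points(1.63))  # ~163 points (the 1988-2012 rise Coe cites)
print(effect_size_to_pisa_points(0.99))  # ~99 points (the 1995-2011 rise)
```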

Chart 2 plots the improving GCSE grades achieved by pupils scoring the same each year in a test of maths and vocabulary: pupils scoring the same on YELLIS get higher and higher GCSE grades as time passes. Coe concludes that although ‘it is not straightforward to interpret the rise in grades … as grade inflation’, the YELLIS data ‘does suggest that whatever improved grades may indicate, they do not correspond with improved performance in a fixed test of maths and vocabulary’ (Coe, ibid).

This YELLIS comparison suggests that in 2012 pupils received a grade higher in maths, history, and French GCSE, and almost a grade higher in English, than students of the same ability in 1996.

It is important to note that Coe’s charts and measurements do not include the effects of either a) the initial switch from O Level to GCSE or b) changes to GCSEs between 1988 and 1995.

The next two charts show this earlier part of the story (both come from Education: Historical statistics, House of Commons, November 2012). NB. They have different end dates.

Chart 3: Proportion getting 5 O Levels / GCSEs at grade C or higher 1953/4 – 2008/9 

[chart image]

Chart 4: Proportion getting 1+ or 3+ passes at A Level 1953/4 – 1998/9

[chart image]

Chart 3 shows that the period 1988-95 saw an even sharper increase in GCSE scores than post-1995, so a GCSE/YELLIS-style comparison that included the years 1988-1995 would make the picture even more dramatic.

Chart 4 shows a dramatic increase in A Level passes after the introduction of GCSEs. One interpretation of this graph, supported by the 1997-2010 Government and teaching unions, is that this increase reflected large real improvements in school standards.

There is GCSE data that those who believe this argument could cite. In 1988, 8% of GCSE entries were awarded an ‘A’. In 2011, 23% were awarded an ‘A’ or ‘A*’. The DfE published data in 2013 showing that the number of pupils with ten or more A* grades trebled between 2002 and 2012. This implies a very large increase in the numbers excelling at GCSE, which is consistent with a picture of a positive knock-on effect on A Level results.

However, we have already seen that the claims for GCSEs are ‘not believable’ in Coe’s words. It also seems prima facie very unlikely that a sudden large improvement in A Level results from 1990 could be the result of immediate improvements in learning driven by GCSEs. There is also evidence for A Levels similar to the GCSE/YELLIS comparison.

Chart 5: A level grades of candidates having the same TDA score (1988-2006)

[chart image]

Chart 5 plots A Level grades in different subjects against the international TDA test. As with GCSEs, this shows that pupils scoring the same in a non-government test got increasingly higher grades in A Levels. The change in maths is particularly dramatic: from an ‘Unclassified’ grade in 1988 to a B/C in 2006.

What we know about GCSEs, combined with this information, makes it very hard to believe that the sudden dramatic increase in A Level performance since 1990 reflects real improvements, and suggests another interpretation: these dramatic increases in A Level results reflected (mostly or entirely) A Levels being made significantly easier, probably to compensate for GCSEs being much easier.

However, the data above can only tell part of the story. Logically, it is hard or impossible to distinguish between possible causes just from these sorts of comparisons. For example, someone might claim that A Level questions remained as challenging as before but that grade boundaries moved – i.e. the exam papers were the same but the marking was more generous. I think this is prima facie unlikely but the point is that, logically, the data above cannot distinguish between the various possible dynamics.
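To make the identification problem concrete, here is a toy illustration (all numbers invented): with pupil ability held fixed, as in the YELLIS-style comparisons, ‘easier questions’ and ‘more generous grade boundaries’ produce exactly the same published pass rates, so published results alone cannot tell the mechanisms apart.

```python
# Toy illustration of the identification problem: with the cohort's ability
# held fixed, 'easier questions' and 'lower grade boundaries' yield identical
# published pass rates even though the mechanisms differ.
import random

random.seed(0)
ability = [random.gauss(50, 10) for _ in range(100_000)]  # fixed cohort ability

def pass_rate(marks, boundary):
    return sum(m >= boundary for m in marks) / len(marks)

baseline         = pass_rate(ability, 50)                    # original papers, original boundary
easier_questions = pass_rate([a + 5 for a in ability], 50)   # every pupil scores 5 marks more
easier_marking   = pass_rate(ability, 45)                    # same papers, boundary lowered 5 marks

print(baseline, easier_questions, easier_marking)
# easier_questions == easier_marking: identical headline results, different causes.
```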

Below is a collection of studies, reports, and comments from experts, accumulated over the past few years, that throw light on which interpretation is more reasonable. Please add others in Comments.

(NB. David Spiegelhalter, a Professor of Statistics at Cambridge, has written about problems with PISA’s use of statistics. These arguments are technical. To a non-specialist like me, he seems to make important points that PISA must answer to retain credibility, and the fact that it has not (as of the last time I spoke to DS in summer 2014) is a blot on its copybook. However, I do not think they materially affect the discussion above. Other international tests conducted on different bases all tell roughly the same story. I will ask DS if he thinks his arguments undermine the story above and post his reply, if any.)

2. Studies 2007 – now 

NB1. Most of these studies are comparing changes over the past decade or so, not the period since the introduction of the NC and GCSEs in the 1980s.

NB2. I will reserve detailed discussion of the AS/A2/decoupling argument for a later blog as it fits better in the ‘post-2010 reforms’ section.

Learned societies. The Royal Society’s 2011 study of Science GCSEs: ‘the question types used provided insufficient opportunity for more able candidates … to demonstrate the extent of their scientific knowledge, understanding and skills. The question types restricted the range of responses that candidates could provide. There was little or no scope for them to demonstrate various aspects of the Assessment Objectives and grade descriptions… [T]he use of mathematics in science was examined in a very limited way.’ SCORE also published (2012) evidence on science GCSEs which reported ‘a wide variation in the amount of mathematics assessed across awarding organisations and confirmed that the use of mathematics within the context of science was examined in a very limited way. SCORE organisations felt that this was unacceptable.’

The 2012 SCORE report and Nuffield Report showed serious problems with the mathematical content of A Levels. SCORE was very critical:

‘For biology, chemistry and physics, it was felt there were underpinning areas of mathematics missing from the requirements and that their exclusion meant students were not adequately prepared for progression in that subject. For example, for physics many of the respondents highlighted the absence of calculus, differentiation and integration, in chemistry the absence of calculus and in biology, converting between different units… For biology, chemistry and physics, the analysis showed that the mathematical requirements that were assessed concentrated on a small number of areas (e.g. numerical manipulation) while many other areas were assessed in a limited way, or not at all… Survey respondents were asked to identify content areas from the mathematical requirements that should feature highly in assessments. In most cases, the biology, chemistry and physics respondents identified mathematical content areas that were hardly or not at all assessed by the awarding organisations.

‘[T]he inclusion of more in-depth problem solving would allow students to apply their knowledge and understanding in unstructured problems and would increase their fluency in mathematics within a science context.’

‘The current mathematical assessments in science A-levels do not accurately reflect the mathematical requirements of the sciences. The findings show that a large number of mathematical requirements listed in the biology, chemistry and physics specifications are assessed in a limited way or not at all within these papers. The mathematical requirements that are assessed are covered repeatedly and often at a lower level of difficulty than required for progression into higher education and employment. It has also highlighted a disparity between awarding organisations in their assessment of the use of mathematics within biology, chemistry and physics A-level. This is unacceptable and the examination system, regardless of the number of awarding organisations, must ensure the assessments provide an authentic representation of the subject and equip all students with the necessary skills to progress in the sciences.

‘This is likely to have an impact on the way that the subjects are taught and therefore on students’ ability to progress effectively to STEM higher education and employment.’ SCORE, 2012. Emphasis added.

The 2011 Institute of Physics report showed strong criticism from university academics of the state of physics and engineering undergraduates’ mathematical knowledge. Four-fifths of academics said that university courses had changed to deal with a lack of mathematical fluency and 92% said that a lack of mathematical fluency was a major obstacle.

‘The responses focused around mathematical content having to be diluted, or introduced more slowly, which subsequently impacts on both the depth of understanding of students, and the amount of material/topics that can be covered throughout the course…

‘Academics perceived a lack of crossover between mathematics and physics at A-level, which was felt to not only leave students unprepared for the amount of mathematics in physics, but also led to them not applying their mathematical knowledge to their learning of physics and engineering.’ IOP, 2011.

A 2011 Centre for Bioscience report criticised Biology and Chemistry A Levels and the preparation of pupils for bioscience degrees: ‘very many lack even the basics… [M]any students do not begin to attempt quantitative problems and this applies equally to those with A level maths as it does to those with C at GCSE. A lack of mathematics content in A level Biology means that students do not expect to encounter maths at undergraduate level. There needs to be a more significant mathematical component in A level biology and chemistry.’ The Royal Society of Chemistry report, The five decade challenge (2008), said there had been ‘catastrophic slippage in school science standards’ and that Government claims about improving GCSE scores were ‘an illusion’. (The Department said of the RSC report, ‘Standards in science have improved year on year thanks to 10 years of sustained investment and improvement in teaching and the education system – this is something we should celebrate, not criticise. Times have changed.’)

Ofqual, 2012. Ofqual’s Standards Review in 2012 found grade inflation in both GCSE and A-levels between 2001-03 and 2008-10: ‘Many of these reviews raise concerns about the maintenance of standards… In the GCSEs we reviewed (biology, chemistry and mathematics) we found that changes to the structure of the assessments, rather than changes to the content, reduced the demand of some qualifications.’

On A-levels, ‘In general we found that changes to the way the content was assessed had an impact on demand, in many cases reducing it. In two of the reviews (biology and chemistry) the specifications were the same for both years. We found that the demand in 2008 was lower than in 2003, usually because the structure of the assessments had changed. Often there were more short answer, structured questions’ (Ofqual, Standards Reviews – A Summary, 1 May 2012, found here).

The Chief Executive of Ofqual, Glenys Stacey, has said: ‘If you look at the history, we have seen persistent grade inflation for these key qualifications for at least a decade… The grade inflation we have seen is virtually impossible to justify and it has done more than anything, in my view, to undermine confidence in the value of those qualifications’ (Sunday Telegraph, 28 April 2012).

The OECD’s International Survey of Adult Skills (October 2013). This assessed the numeracy, literacy and computing skills of 16-24-year-olds. The tests were taken in 2011/2012. England was 22nd out of 24 for literacy, 21st out of 24 for numeracy, and 16th out of 20 for ‘problem solving in a technology-rich environment’.

PISA 2012. The normal school PISA tests taken in 2012 (reported in 2013) showed no significant change between 2009 and 2012. England was 21st for science, 23rd for reading, and 26th for mathematics. A 2011 OECD report concluded: ‘Official test scores and grades in England show systematically and significantly better performance than international and independent tests… [Official results] show significant increases in quality over time, while the measures based on cognitive tests not used for grading show declines or minimal improvements’ (OECD Economic Surveys: United Kingdom, 16 March 2011, p. 88-89). This interesting chart shows that in the PISA maths test the children of English professionals perform the same as the children of Singapore cleaners (Do parents’ occupations have an impact on student performance?, PISA 2014).

Chart 6: Pupil maths scores by parent occupation, UK (left) and Singapore (right) (PISA 2012)

[chart image]

TIMSS/PIRLS. The TIMSS/PIRLS tests (taken in summer 2011, reported in December 2012) told a similar story to PISA. England’s score in reading at age 10 increased by a statistically significant amount since 2006. England’s score in science at age 10 decreased by a statistically significant amount since 2007. England’s scores in science at age 14 and in mathematics at ages 10 and 14 showed no statistically significant changes since 2007. (According to experts, the PISA maths test relies more on language comprehension than TIMSS does, which is supposedly why Finland scores higher in the former than the latter.)

National Numeracy (February 2012). Research showed that in 2011 only a fifth of the adult population had mathematical skills equivalent to a ‘C’ at GCSE, down a few percentage points from the previous survey in 2003. About half of 16-65 year-olds have at best the mathematical skills of an 11 year-old. A fifth of adults will struggle to understand price labels on food and half ‘may not be able to check the pay and deductions on a wage slip.’

King’s College, 2009. A major study by academics from King’s College London and Durham University found that basic skills in maths have declined since the 1970s. In 2008, less than a fifth of 14 year-olds could write 11/10 as a decimal. In the early 1980s, only 22 per cent of pupils obtained a GCE O-level grade C or above in maths. In 2008, over 55 per cent gained a GCSE grade C or above in the subject (King’s College London/University of Durham, ‘Secondary students’ understanding of mathematics 30 years on‘, 5 September 2009, found here).

Chart 7: Performance on ICCAMS / CSMS Maths tests showing declines over time

[chart image]

Shayer et al (2007) found that performance in a test of basic scientific concepts fell significantly between 1976 and 2003. ‘[A]lthough both boys and girls have shown great drops in performance, the relative drop is greater for boys… It makes it difficult to believe in the validity of the year on year improvements reported nationally on Key Stage 3 NCTs in science and mathematics: if children are entering secondary from primary school less and less equipped with the necessary mental conditions for processing science and mathematics concepts it seems unlikely that the next 2.5 years KS3 teaching will have improved so much as more than to compensate for what students of today lack in comparison with 1976.’

Chart 8: Performance on tests of scientific concepts, 1976 – 2003 (Shayer)

[chart image]

Tymms (2007) reviewed assessment evidence in mathematics from children at the end of primary school between 1978 and 2004 and in reading between 1948 and 2004. The conclusion was that standards in both subjects ‘have remained fairly constant’.

Warner (2013) on physics. Professor Mark Warner (Cambridge University) produced a fascinating report (2013) on problems with GCSE and A Level Physics, comparing the papers to old O Levels, A Levels, ‘S’ Level papers, Oxbridge entrance exams, international exams and so on. After reading it there is no room for doubt: the standards demanded in GCSEs and A Levels have fallen very significantly.

‘[In modern papers] small steps are spelt out so that not more than one thing needs to be addressed before the candidate is set firmly on the right path again. Nearly all effort is spent injecting numbers into formulae that at most require GCSE-level rearrangements… All diagrams are provided… 1986 O-level … [is] certainly more difficult than the AS sample… 1988 A-level … [is] harder than most Cambridge entrance questions currently… 1983 Common Entrance [is] remarkably demanding for this age group, approaching the challenge of current AS… There is a staggering difference in the demands put on candidates… Exams [from the 1980s] much lower down the school system are in effect more difficult than exams given now in the penultimate years [i.e. AS].’

For example, the mechanics problems in GCSE Physics are substantially shallower than those in 1980s O Level, which examined concepts now left to A Level. The removal of calculus from A Level Physics badly undermined it: calculus is tested in the Mechanics 1 paper of A Level Maths, and Mechanics 2 and 3 test deeper material than Physics A Level does. This is one of the reasons why the Cambridge Physics department stopped requiring Physics A Level for entry and made clear that Further Maths A Level is acceptable instead (many say it is better preparation for university than Physics A Level is).

Warner also makes the point that making Physics GCSE and A Level much easier did not even increase the number taking physics degrees, which has declined sharply since the mid-1980s. He concludes: ‘one could again aim for a school system to get a sizable fraction of pupils to manage exams of these [older] standards. Children are not intrinsically unable to attack such problems.’ (NB. The version of this report on the web is not the full version – I would urge those interested to email Professor Warner.)

Gowers (2012) on maths. Tim Gowers, Cambridge professor and Fields Medallist, described some problems with Maths A Level and concluded:

‘The general point here is of course that A-levels have got easier [emphasis added] and schools have a natural tendency to teach to the test. If just one of those were true, it would be far less of a problem. I would have nothing against an easy A-level if people who were clever enough were given a much deeper understanding than the exam strictly required (though as I’ve argued above, for many people teaching to the test is misguided even on its own terms, since they will do a lot better on the exam if they have not been confined to what’s on the test), and I would not be too against teaching to the test if the test was hard enough…

‘[S]ome exams, such as GCSE maths, are very very easy for some people, such as anybody who ends up reading mathematics at Cambridge (but not just those people by any means). I therefore think that the way to teach people in top sets at schools is not to work towards those exams but just to teach them maths at the pace they can manage.’

A Durham University analysis gives data to quantify this conclusion. Pupils who would have received a U (unclassified) in Maths A Level in 1988 received a B/C in 2006 – see Chart 5 above (CEM Centre, Durham University, Changes in standards at GCSE and A-Level: Evidence from ALIS and YELLIS, April 2007). Further Maths A Level is supposedly the toughest A Level, and probably it is, but a) it is not the same as its 1980s ancestor and b) it now introduces pupils to material, such as matrices, that used to be taught in good prep schools.

I spent a lot of time between 2007 and 2014 talking to maths dons, including heads of department, across England. The reason I quote Gowers is that I never heard anybody dispute his conclusion, but he was almost the only one who would say it publicly. I heard essentially the same litany about A Level maths from everybody I spoke to: although there were differences of emphasis, nobody disputed these basic propositions. 1) The questions became much more structured, so pupils are led up a scaffolding with less requirement for independent problem-solving. 2) The emphasis moved to memorising some basic techniques, the choice of which is clearly signalled in the question. 3) The modular system a) encouraged a ‘memorise, regurgitate, forget’ mentality and b) undermined learning about how different topics connect across maths, both of which are bad preparation for further study. (There are also some advantages to a modular system that I will return to.) 4) Many undergraduates, including even those in the top 5% at such prestigious universities as Imperial, therefore now struggle in their first year because A Level has not prepared them for the sort of problems they are given as undergraduates. (The maths department at Imperial became so sick of A Level’s failings that it recently sought and got approval to buy Oxford’s entrance exam for use in its admissions system.)

I will not go into arguments about vocational qualifications here but note the conclusion of Alison Wolf, whose 2011 report on this was not disputed by any of the three main parties:

‘The staple offer for between a quarter and a third of the post-16 cohort is a diet of low-level vocational qualifications, most of which have little to no labour market value.’

3. Knock-on effects in universities

Serious lack of maths skills

There are many serious problems with maths skills. Part of the reason is that many universities do not even demand A Level Maths. The result? As of about 2010-12, about 20% of Engineering undergraduates, about 40% of Chemistry and Economics undergraduates, and about 60-70% of Biology and Computer Science undergraduates did not have A Level Maths. Less than 10% of undergraduate bioscience degree courses demand A Level Maths, therefore ‘problems with basic numeracy are evident and this reflects the fact that many students have grades less than A at GCSE Maths. These students are unlikely to be able to carry out many of the basic mathematical approaches, for example unable to manipulate scientific notation with negative powers so commonly used in biology’ (2011 Biosciences report). (I think that history undergraduates should be able to manipulate scientific notation with negative powers – this is one of the many things that should be standard for reasonably able people.)
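To be concrete about the kind of manipulation the Biosciences report means, here is a routine example of the sort biologists meet constantly (my own illustrative numbers):

```python
# A routine calculation requiring scientific notation with negative powers:
# the molar concentration of 2.5 x 10^-9 mol of protein dissolved in
# 5.0 x 10^-4 litres of buffer.
amount_mol = 2.5e-9   # 2.5 x 10^-9 mol
volume_l   = 5.0e-4   # 5.0 x 10^-4 L

# Dividing: 2.5 / 5.0 = 0.5 and 10^-9 / 10^-4 = 10^-5, so 0.5 x 10^-5 = 5.0 x 10^-6
concentration = amount_mol / volume_l
print(f"{concentration:.1e} mol/L")   # 5.0e-06 mol/L, i.e. 5 micromolar
```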

The Royal Society estimated (Mathematical Needs, 2012) that about 300,000 per year need a post-GCSE Maths course but only ~100,000 do one. (This may change thanks to Core Maths starting in 2015, see later blog.) A House of Lords report (2012) on Higher Education in STEM subjects concluded: ‘We are concerned that … the level at which the subject [maths] is taught does not meet the requirements needed to study STEM subjects at undergraduate level… [W]e urge HEIs to introduce more demanding maths requirement for admissions into STEM courses as the lack, or low level, of maths requirements at entry acts as a disincentive for pupils to study maths and high level maths at A level.’ (House of Lords Select Committee on Science and Technology, Higher Education in STEM subjects, 2012.)

Further, though this subject is beyond the scope of this blog, it is also important that the maths PhD pipeline ‘which was already badly malfunctioning has been seriously damaged by EPSRC decisions’, including withdrawal of funding from non-statistics subjects which drew the ire of UK Fields Medallists, cf. Submission by the Council for the Mathematical Sciences to the House of Lords, 2011. The weaknesses in biology also feed into the bioscience pipeline: only six percent of bioscience academics think their graduates are well prepared for a masters in the fast-growing field of Computational Biology (p.8 of report).

Closing of language departments, decline of language skills

I have not found official stats for this but according to research done for the Guardian (with FOIs):

‘The number of universities offering degrees in the worst affected subject, German, has halved over the past 15 years. There are 40% fewer institutions where it is possible to study French on its own or with another language, while Italian is down 23% and Spanish is down 22%.’

As Katrin Kohl, professor of German at Jesus College (Oxford) has said, ‘The UK has in recent years been systematically squandering its already poor linguistic resources.’ Dawn Marley, senior lecturer in French at the University of Surrey, summarised problems across languages:

‘We regularly see high-achieving A-level students who have only a minimal knowledge of the country or countries where the language of study is spoken, or who have limited understanding of how the language works. Students often have little knowledge of key elements in a country’s history – such as the French Revolution, or the fact that France is a republic. They also continue to struggle with grammatical accuracy, and use English structures when writing in the language they are studying… The proposals for the revival of A-level are directly in line with what most, if not all, academics in language departments would see as essential.’ (Emphasis added.)

The same picture applies to classical languages. Already by 1994 the Oxford Classics department was removing texts such as Thucydides as compulsory elements in ‘Greats’ because they were deemed ‘too hard’. These changes continued and have made Classics a very different subject from the one it was before 1990. At Oxford, whole new courses (Mods B and then Mods C) were introduced that do not require any prior study of the ancient languages themselves. The first year of Greats now involves remedial language courses.

I quote at length from a paper by John Davie, a Lecturer in Classics at Trinity College, Oxford, as his comments summarise the views of other senior classicists in Oxbridge and elsewhere who have been reluctant to speak out (In Pursuit of Excellence, Davie, 2013). Inevitably, the problems described are damaging the pipeline for masters, PhDs, and future scholarship.

‘Classics as an academic subject has lost much of its intellectual force in recent years. This is true not only of schools but also, inevitably, of universities, which are increasingly required to adapt to the lowering of standards…

‘In modernist courses…, there is (deliberately) no systematic learning of grammar or syntax, and emphasis is laid on fast reading of a dramatic continuous story in made-up Latin which gives scope for looking at aspects of ancient life. The principle of osmosis underlying this approach, whereby children will learn linguistic forms by constant exposure to them, aroused scepticism among many teachers and has been thoroughly discredited by experts in linguistics. Grammar and syntax learned in this piecemeal fashion give pupils no sense of structure and, crucially, deny them practice in logical analysis, a fundamental skill provided by Classics…

‘[W]e have, in GCSE, an exam that insults the intelligence… Recent changes to this exam have by general consent among teachers made the papers even easier.

‘In the AS exam currently taken at the end of the first year of A-level … students study two small passages of literature, which represent barely a third of an original text. They are asked questions so straightforward as to verge on the banal and the emphasis is on following a prescribed technique of answering, as at GCSE. Imagination and independent thought are simply squeezed out of this process as teachers practise exam-answering technique in accordance with the narrow criteria imposed on examiners.

‘The level of difficulty [in AS] is not substantially higher than that of GCSE, and yet this is the exam whose grades and marks are consulted by the universities when they are trying to determine the ability of candidates… Having learned the translation of these bite-sized chunks of literature with little awareness of their context or the wider picture (as at GCSE, it is increasingly the case that pupils are incapable of working out the Latin/Greek text for themselves, and so lean heavily on a supplied translation), they approach the university interview with little or no ability to think “outside the box”. Dons at Oxford and Cambridge regularly encounter a lack of independent thought and a tendency to fall back on generalisations that betray insufficient background reading or even basic curiosity about the subject. This need not be the case and is clearly the product of setting the bar too low for these young people at school…

‘At A2 … students read less than a third of a literary text they would formerly have read in its entirety.

‘There is the added problem that young teachers entering the profession are themselves products of the modernist approach and so not wholly in command of the classical languages themselves. As a result they welcome the fact that they are not required by the present system to give their pupils a thorough grounding in the language, embracing the less rigorous approach of modern course-books with some relief.

‘In the majority of British universities Classics in its traditional form has either disappeared altogether or has been replaced by a course which presents the literature, history and philosophy mainly (or entirely) in translation, i.e. less a degree course in Classics than in Classical Civilisation.

‘This situation has been forced upon university departments of Classics by the impoverished language skills of young people coming up from schools… It is not only the classical languages but English itself which has suffered in this way in the last few decades. Every university teacher of the classical languages knows that he cannot assume familiarity with the grammar and syntax of English itself, and that he will have to teach from scratch such concepts as an indirect object, punctuation or how a participle differs from a gerund…

‘Even at Oxford cuts have been made to the number of texts students are required to read and, in those texts that remain, not as many lines are prescribed for reading in the original Latin or Greek.

‘In the last ten years of teaching for Mods [at Oxford] I have been struck by how the first-year students who come my way at the start of the summer term appear to know less about the classical languages each year, an experience I know to be shared by dons at other colleges…

‘GCSE should be replaced by a modern version of the O-level that stretches pupils… This would make the present AS exam completely unsuitable, and either a more challenging set of papers should be devised, if the universities wish to continue with pre A-level interviewing, or there should be a return to an unexamined year of wide reading before the specialisation of the last year.

‘Although the present exam, A2, has more to recommend it than AS, it also would no longer be fit for purpose and would need strengthening. As part of both final years there should be regular practice in the writing of essays, a skill that has been largely lost in recent years because of the exam system and is (rightly) much missed by dons.’

This combination of problems explains why we funded a project with Professor Pelling, Regius Professor of Greek at Oxford, to provide teacher training and language enrichment courses for schools.

I will not go into other humanities subjects. I read Ancient & Modern History and have thoughts about it but I do not know of any good evidence similar to the reports quoted above by the likes of the Royal Society. I have spoken to many university teachers. Some, such as Professor Richard Evans (Cambridge), told me they think the standard of those who arrive as undergraduates is roughly the same as twenty years ago. Others at Oxbridge and elsewhere told me they think that essay-writing skills have deteriorated because of changes to A Level (disputed by Evans and others) and that language skills among historians have deteriorated (undisputed by anyone I spoke to).

For example, the Cambridge Professor of Mediterranean History, David Abulafia, has contradicted Evans and, like classicists, pointed out the spread of remedial classes at Cambridge:

‘It’s a pity, then, that the director of admissions at Cambridge has proclaimed that the old system [pre-Gove reforms] is good and that AS-levels – a disaster in so many ways – are a good thing because somehow they promote access. I don’t know for whom he is speaking, but not for me as a professor in the same university…

‘[Gove] was quite right about the abolition of the time-wasting, badly devised and all too often incompetently marked AS Levels; these dreary exams have increasingly been used as the key to admissions to Cambridge, to the detriment of intellectually lively, quirky, candidates full of fizz and sparkle who actually have something to say for themselves…

‘Bogus educational theories have done so much to damage education in this country… The effects are visible even in a great university such as Cambridge, with a steady decline in standards of literacy, and with, in consequence, the provision in one college after another of ‘skills teaching’, so that students who no longer arrive knowing how to structure an essay or even read a book can receive appropriate ‘training’… Even students from top ranked schools seem to find it very difficult … to write essays coherently… In the sort of exams I am thinking of, essay writing comes much more to the fore and examiners would be making more subjective judgements about scripts. In an ideal world there would be double marking of scripts.’ Emphasis added.

Judging essay skills is a more nebulous task than judging the quality of mechanics questions. Also, there is less agreement among historians about the sort of things they want to see in school exams compared to mathematicians and physicists who largely (in my experience, I stress, which is limited) agree about the sorts of problems they want undergraduates to be able to solve and the skills they want them to have.

I will quote a Professor of English at Exeter University, Colin MacCabe, whose view of the decline of essay skills is representative of many comments I have heard, but I cannot say confidently that this view represents a consensus, despite his claim:

‘Nobody who teaches A-level or has anything to do with teaching first-year university students has any doubt that A Levels have been dumbed down… The writing of the essay has been the key intellectual form in undergraduate education for more than a century; excelling at A-level meant excelling in this form. All that went by the board when … David Blunkett, brought in AS-levels… A-levels … became two years of continuous assessment with students often taking their first module within three months of entering the sixth form. This huge increase in testing went together with a drastic change in assessment. Candidates were not now marked in relation to an overall view of their ability to mount and develop arguments, but in relation to their ability to demonstrate achievement against tightly defined assessment objectives… A-levels, once a test of general intellectual ability in relation to a particular subject, are now a tightly supervised procession through a series of targets. Assessment doesn’t come at the end of the course – it is the course… In English, students read many fewer books… Students now arrive at university without the knowledge or skills considered automatic in our day… One of the results of the changes at A-level is that the undergraduate degree is itself a much more targeted affair. Students lack of a general education mean that special subjects, dissertations etc are added to general courses which are themselves much more limited in their approach… One result of this is a grade inflation much more dramatic even than A-levels… [T]here is little place within a modern English university for students to develop the kind of intellectual independence and judgment, which has historically been the aim of the undergraduate degree.’ Observer, 22 August, 2004. (Emphasis added.)

If anybody knows of studies on history and other humanities please link in Comments below.

Oxbridge entrance

As political arguments increasingly focused on ‘participation’ and ‘access’, Oxford and Cambridge largely abandoned their own entrance exams in the 1990s. There were some oddities. Cambridge dropped its maths test, was so worried by the results that it immediately asked for and was given special dispensation to reintroduce one, and has used one ever since (now known as the STEP paper, also used by a few other universities). Other Cambridge departments that wanted to do the same were refused permission and some of them (including the physics department) now use interviews to test material they would like to test in a written exam. Oxford changed its mind and gradually reintroduced admissions tests in some subjects. (E.g. it does not use STEP in maths but uses its own test, which has more ‘applied’ maths.) Cambridge now uses AS Levels. Oxford does not (but does not like to explain why).

A Levels are largely useless for distinguishing between candidates in the top 2% of ability (i.e. two standard deviations above average). Oxbridge entry now involves a complex and incoherent set of procedures. Some departments use interviews to test skills that are i) largely or entirely untested by A Levels and ii) not explicitly set out anywhere. For example, if you go to an interview for physics at Cambridge, they will ask you questions like ‘how many photons hit your eye per second from Alpha Centauri?’ – i.e. questions that you cannot cram for but from which much information can be gained by tutors watching how students grapple with the problem.
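For the curious, here is a rough back-of-envelope sketch of the Alpha Centauri question (my own approximate numbers; interviewers care about the reasoning, not the precise answer): spread the star’s power over a sphere at our distance, collect it with a dark-adapted pupil, and divide by the energy of a visible photon.

```python
# Back-of-envelope estimate: photons per second entering the eye from Alpha Centauri.
import math

L_SUN      = 3.8e26              # solar luminosity, W
luminosity = 1.5 * L_SUN         # Alpha Centauri A, roughly 1.5 solar luminosities
distance   = 4.4 * 9.5e15        # ~4.4 light years, in metres

flux           = luminosity / (4 * math.pi * distance**2)  # W per m^2 arriving at Earth
pupil_area     = math.pi * (3.5e-3) ** 2                   # ~7 mm dark-adapted pupil
power_into_eye = flux * pupil_area                         # W entering the eye

photon_energy      = 6.6e-34 * 3.0e8 / 550e-9              # E = hc / lambda, ~550 nm light
photons_per_second = power_into_eye / photon_energy

print(f"~{photons_per_second:.0e} photons per second")     # of order a few million
```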

The fact that the real skills they want to test are asked about in interviews rather than in public exams is, in my opinion, not only bad for ‘standards’ but is also unfair. Rich schools with long connections to Oxbridge colleges have teachers who understand these interviews and know how to prepare pupils for them. They still teach the material tested in old exams and other materials such as Russian textbooks created decades ago. A comprehensive in east Durham that has never sent anybody to Oxbridge is very unlikely to have the same sort of expertise and is much more likely to operate on the very mistaken assumption that getting a pupil to three As is sufficient preparation for Oxbridge selection. Testing skills in open exams that everybody can see would be fairer.

I will return to this issue in a later blog but it is important to consider the oddities of this situation. Decades ago, open public standardised tests were seen as a way to overcome prejudice. For example, Ivy League universities like Harvard infamously biased their admissions system against Jews because a fair open process based on intellectual abilities, and ignoring things like lacrosse skills, would have put more Jews into Harvard than Harvard wanted. Similar bias is widespread now in order to keep the number of East Asians low. It is no coincidence that Caltech’s admissions policy is unusual in being based on academic ability and that it has a far higher proportion of East Asians than the likes of Harvard.

Similar problems apply to Oxbridge. A consequence of making exams easier and removing Oxbridge admissions tests was to make the process more opaque and therefore biased against poorer families. The fascinating journey made by the intellectual Left on the issue of standardised tests is described in Steven Pinker’s recent influential essay on university admissions. I agree with him that a big part of the reason for the ‘madness’ is that the intelligentsia ‘has lost the ability to think straight about objective tests’. Half a century ago, the Left fought for standardised tests as a way to overcome prejudice; now many on the Left oppose tests and argue for criteria that give the well-connected middle classes unfair advantages.

This combination of problems is one of the reasons why the Cambridge pure maths department and physics department worked with me to develop projects to redo 16-18 curricula, teacher training, and testing systems. Cambridge is even experimenting with a ‘correspondence Free School’ idea proposed by the mathematician Alexander Borovik (who attended one of the famous Russian maths schools). Powerful forces tried to stop these projects happening because they are, obviously, implicit condemnations of the existing system – condemnations that many would prefer had never seen the light of day. Similar projects in other departments at other universities were kiboshed for the same reason, as were other proposals for specialist maths schools as per the King’s project (which also would never have happened but for the determination of Alison Wolf and a handful of heroic officials in the DfE). I will return to this too.

C. Conclusions

Here are some tentative conclusions.

  1. The political and bureaucratic process for the introduction of the GCSE and National Curriculum was a shambles. Those involved did not go through basic processes to agree aims. Implementation was awful. All elements of the system failed children. There are important lessons for those who want to reform the current system.
  2. Given the weight of evidence above, it is hard to avoid the conclusion that GCSEs were made easier than O Levels and became easier still over time. This means that at least the top fifth are, from age 14, aimed at lower standards than they would have been previously (not that O Levels were at all optimal). Many of them spend two years on low-grade material and repetitive drills, so that the school can maximise its league table position, instead of delving deeper into subjects. Inflation seems to have stopped in the last two years, perhaps temporarily, but only through the use of an Ofqual system known as ‘comparable outcomes’ which is barely understood by anybody in the school system or DfE (a minimal sketch of the idea follows after this list).
  3. A Levels, at least in maths, sciences, and languages, were quickly made easier after 1988, not just by enough to keep pass rates stable but by enough to produce large increases. Even A Level students are set mundane tasks like ‘design a poster’ that are suitable for small children, not near-adults. (As I type this I am looking at an Edexcel textbook for Further Maths A Level which, for some reason, Edexcel has chosen to decorate with a picture of a child in a ‘Robin’ masked outfit.)
  4. The old ‘S’ Level papers, designed to stretch the best A Level students, were abandoned, which contributed to a decline in the standards aimed for among the top 5%.
  5. University degrees in some subjects therefore also had to become easier (e.g. classics) or longer (natural sciences) in order to avoid increases in failure rates. This happened in some subjects even in elite universities. Remedial courses spread, even in elite universities, to teach or improve skills that were previously expected on arrival (including Classics at Oxford and History at Cambridge). Not all of the problems are because of failures in schools or easier exams: some are because universities themselves, for political reasons, will not make certain demands of applicants. Even if the exam system were fixed, this would remain a big problem. On the other hand, while publicly speaking out for AS Levels, admissions officers have also, very quietly, been gradually introducing new tests for admissions purposes that are not regulated by the Government or Ofqual. On this, it is more useful to watch what universities do than what they say.
  6. These problems have cascaded right through the system and now affect the pipeline into senior university research positions in maths, sciences, and languages. For example, the lack of maths skills among biologists is hampering the development of synthetic biology and computational biology. It is very common now to have (private) discussions with scientists deploring the decline in English research universities. Just in the past few weeks I have had emails from an English physicist now at Harvard and a prominent English neuroscientist giving me details of these developments and how we are falling further behind American universities. As they say, however, nobody wants to speak out.
  7. It is much easier to see what has happened at the top end of the ability curve, where effects show up in universities, than it is for median pupils. The media also focuses on issues at the top end of the ability curve, A Levels, and the Russell Group.
  8. Because politicians took control of the system and used results to justify their own policies, and because they control funding, debate over standards became thoroughly dishonest, starting with the Conservative government in the 1980s and continuing to now, when academics are pressured by administrators not to speak out for fear of politicians’ responses. When governments control the metrics by which they are judged, there is likely to be dishonesty. If people – including unions, teachers, and officials – claim they deserve more money on the basis of metrics that are controlled by a small group of people operating an opaque process and themselves controlling the regulator, there is likely to be dishonesty.
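As promised above, here is a minimal sketch of the basic idea behind ‘comparable outcomes’ (my own simplification for illustration, not Ofqual’s actual method): the grade distribution awarded this year is predicted from the relationship between prior attainment and grades in a reference year, and grade boundaries are then set so that the awarded distribution matches that prediction, so headline results can only rise if the cohort’s measured prior attainment rises.

```python
# Simplified sketch of the 'comparable outcomes' idea (illustrative numbers only).

# Reference year: share of pupils in each prior-attainment band (e.g. KS2 scores)
# who achieved at least a C.
reference_c_rate = {"low": 0.20, "middle": 0.60, "high": 0.95}

# This year's cohort: proportion of pupils in each prior-attainment band.
cohort_mix = {"low": 0.25, "middle": 0.50, "high": 0.25}

# Predicted proportion achieving at least a C this year:
# 0.25*0.20 + 0.50*0.60 + 0.25*0.95 = 0.5875
predicted_c_rate = sum(cohort_mix[band] * reference_c_rate[band] for band in cohort_mix)
print(f"predicted C-or-above rate: {predicted_c_rate:.1%}")

# The C boundary is then placed at the raw mark which awards a C to roughly
# that proportion of this year's candidates (one candidate per mark here).
raw_marks = sorted(range(100), reverse=True)       # stand-in for real mark data
cut_index = int(predicted_c_rate * len(raw_marks))
c_boundary = raw_marks[cut_index - 1]
print("C boundary set at raw mark:", c_boundary)
```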

An important caveat. It is possible that simultaneously a) conclusions 1-8 are true and b) the school system has improved in various ways. What do I mean?

This is a coherent (not necessarily right) conclusion from the story told above…

GCSEs are significantly easier than O Levels. Nevertheless, the switch to GCSEs also involved many comprehensives and secondary moderns dropping the old idea that maybe only a fifth of the cohort is ‘academic’ – the idea, from Plato’s Republic, of gold, silver, and bronze children that influenced the 1944 Act. Instead, more schools began to focus more pupils on academic subjects. Even though the standards demanded were easier than in the pre-1988 exams, this new focus (combined with other things) led, between 1988 and now, to at least a) a reduction in the number of truly awful schools and b) more useful knowledge and skills for the bottom fifth of the cohort (in ability terms), and perhaps for more.

Perhaps the education of median-ability pupils stayed roughly the same (declining a bit in maths), hence the consistent picture in international tests, the King’s results comparing maths in 1978 and 2008, Shayer’s results and so on (above). Meanwhile the standards demanded by post-1988 A Levels clearly fell (at least in some vital subjects), as the changes in universities testify, and S Level papers vanished, so the top fifth of the cohort (and particularly the +2 standard deviation population, i.e. the top 2%) leave school in some subjects considerably worse educated than in the 1980s. (Given that most scientific and technological breakthroughs come from this top 2%, this has a big knock-on effect.) Private schools felt incentivised to outperform state schools on easier GCSEs and A Levels rather than pursue separate qualifications with all the accompanying problems.

There remains no good scientific data on what children at different points on the ability curve are capable of achieving given excellent teaching, so the discussion of ‘standards’ remains circular. Easier GCSEs and A Levels are consistent with some improvements for the bottom fifth, rough stability for the median, significant decline for the top fifth, and fewer awful schools.

This is coherent. It fits the evidence sketched above.

But is it right?

In the next blog in this series I will consider issues of ‘ability’ and the circularity of the current debate on ‘standards’.

Questions?

If people accept the conclusions about GCSEs and A Levels (at least in maths, sciences, and languages, I stress again) how should this evidence be weighed against the very strong desire of many in the education system (and Parliament and Whitehall) to maintain a situation in which the vast majority of the cohort are aimed at GCSEs (or international equivalents that are not hugely different) and, for those deemed ‘academic’, A Levels?

Do the gains from this approach outweigh the losses for an unknown fraction of the ‘more able’?

Is there a way to improve gains for all points on the ability distribution?

I have been told that there is no grade inflation in music exams. Is this true? If YES, is this partly because they are not regulated by the state? Are there other factors? Has A Level Music got easier? If not why not?

What sort of approaches should be experimented with instead of the standard approaches seen in O Levels, GCSEs, and A Levels?

What can be learned from non-Government regulated tests such as Force Concepts Tests (physics), university admissions tests, STEP, IQ tests and so on?

What are the best sources on ‘S’ Level papers and what happened with Oxbridge entrance exams?

What other evidence is there? Where are analyses similar to Warner’s on physics for other subjects?

What evidence is there for university grade inflation which many tell me is now worse than GCSEs and A Levels?

 

Times op-ed: What Is To Be Done? An answer to Dean Acheson’s famous quip

On Tuesday 2 December, the Times ran an op-ed by me, which you can see HERE. It was cut slightly for space. Below is the original version, which makes a few other points.

I will use this as a start of a new series on what can be done to improve the system including policy, institutions, and management.

NB1. The article is not about the election or party politics. My suggested answer to Acheson is, I think, powerful partly because it is something that could be agreed upon, in various dimensions, across the political spectrum. I left the DfE in January partly because I wanted to have nothing to do with the election and this piece should not be seen as advocating ‘something Tories should say for the election’. I do not think any of the three leaders are interested in or could usefully pursue this goal – I am suggesting something for the future when they are all gone, and they could quite easily all be gone by summer 2016.

NB2. My view is not – ‘public bad, private good’. As I explained in The Hollow Men II, a much more accurate and interesting distinction is between a) large elements of state bureaucracies, dreadful NGOs like the CBI, and many large companies (that have many of the same HR and incentive problems as bureaucracies), where very similar types rise to power because the incentives encourage political skills rather than problem-solving skills, and b) start-ups, where entrepreneurs and technically trained problem-solvers can create organisations that operate extremely differently, move extremely fast, create huge value, and so on.

(For a great insight into the start-up world I recommend two books. 1. Peter Thiel’s new book ‘Zero To One’. 2. An older book telling the story of a mid-90s start-up that was embroiled in the Netscape/Microsoft battle and ended up selling itself to the much better organised Bill Gates – ‘High Stakes, No Prisoners’ by Charles Ferguson. This blog, Creators and Rulers, by physicist Steve Hsu also summarises some crucial issues excellently.)

Some parts of government can work like start-ups but the rest of the system tries to smother them. For example, DARPA (originally ARPA) was set up as part of the US panic about Sputnik. It operates on very different principles from the rest of the Pentagon’s R&D system. Because it is organised differently, it has repeatedly produced revolutionary breakthroughs (e.g. the internet) despite a relatively tiny budget. But also note – DARPA has been around for decades and its operating principles are clear but nobody else has managed to create an equivalent (openly at least). Also note that despite its track record, D.C. vultures constantly circle trying to make it conform to the normal rules or otherwise clip its wings. (Another interesting case study would be the alternative paths taken by a) the US government developing computers with one genius mathematician, von Neumann, post-1945 (a lot of ‘start-up’ culture) and b) the UK government’s awful decisions in the same field with another genius mathematician, Turing, post-1945.)

When I talk about new and different institutions below, this is one of the things I mean. I will write a separate blog just on DARPA but I think there are two clear action points:

1. We should create a civilian version of DARPA aimed at high-risk/high-impact breakthroughs in areas like energy science and other fundamental areas such as quantum information and computing that clearly have world-changing potential. For it to work, it would have to operate outside all existing Whitehall HR rules, EU procurement rules and so on – otherwise it would be as dysfunctional as the rest of the system (defence procurement is in a much worse state than the DfE, hence, for example, billions spent on aircraft carriers that in classified war-games cannot be deployed to warzones). We could easily afford this if we could prioritise – UK politicians spend far more than DARPA’s budget on gimmicks every year – and it would provide huge value with cascading effects through universities and businesses.

2. The lessons of why and how it works – such as incentivising goals, not micromanaging methods – have general application and are useful when we think about Whitehall reform more broadly.

Finally, government institutions also operate to exclude from power scientists, mathematicians, and people from the start-up world – the Creators, in Hsu’s term. We need to think very hard about how to use their very rare and valuable skills as a counterweight to the inevitable psychological type that politics will always tend to promote.

Please leave comments, corrections etc below.

DC


 

What Is to Be Done?

There is growing and justified contempt for Westminster. Number Ten has become a tragi-comic press office with the prime minister acting as Über Pundit. Cameron, Miliband, and Clegg see only the news’s flickering shadows on their cave wall – they cannot see the real world behind them. As they watch floundering MPs, officials know they will stay in charge regardless of an election that won’t significantly change Britain’s trajectory.

Our institutions failed pre-1914, pre-1939, and with Europe. They are now failing to deal with a combination of debts, bad public services, security threats, and profound transitions in geopolitics, economics, and technology. They fail in crises because they are programmed to fail. The public knows we need to reorient national policy and reform these institutions. How?

First, we need a new goal. In 1962, Dean Acheson quipped that Britain had failed to find a post-imperial role. The romantic pursuit of ‘the special relationship’ and the deluded pursuit of a leading EU role have failed. This role should focus on making Britain the best country for education and science. Pericles described Athens as ‘the school of Greece’: we could be the school of the world because this role depends on thought and organisation, not size.

This would give us a central role in tackling humanity’s biggest problems and shaping the new institutions, displacing the EU and UN, that will emerge as the world makes painful transitions in coming decades. It would provide a focus for financial priorities and Whitehall’s urgent organisational surgery. It’s a goal that could mobilise very large efforts across political divisions as the pursuit of knowledge is an extremely powerful motive.

Second, we must train aspirant leaders very differently so they have basic quantitative skills and experience of managing complex projects. We should stop selecting leaders from a subset of Oxbridge egomaniacs with a humanities degree and a spell as spin doctor.

In 2012, Fields Medallist Tim Gowers sketched a ‘maths for presidents’ course to teach 16-18 year-olds crucial maths skills, including probability and statistics, that can help solve real problems. It starts next year. [NB. The DfE funded MEI to turn this blog into a real course.] A version should be developed for MPs and officials. (A similar ‘Physics for Presidents‘ course has been a smash hit at Berkeley.) Similarly, pioneering work by Philip Tetlock on ‘The Good Judgement Project‘ has shown that training can reduce common cognitive errors and can sharply improve the quality of political predictions, hitherto characterised by great self-confidence and constant failure.

New interdisciplinary degrees such as ‘World history and maths for presidents’ would improve on PPE but theory isn’t enough. If we want leaders to make good decisions amid huge complexity, and learn how to build great teams, then we should send them to learn from people who’ve proved they can do it. Instead of long summer holidays, embed aspirant leaders with Larry Page or James Dyson so they can experience successful leadership.

Third, because better training can only do so much, we must open political institutions to people and ideas from outside SW1.

A few people prove able repeatedly to solve hard problems in theoretical and practical fields, creating important new ideas and huge value. Whitehall and Westminster operate to exclude them from influence. Instead, they tend to promote hacks and apparatchiks and incentivise psychopathic narcissism and bureaucratic infighting skills – not the pursuit of the public interest.

How to open up the system? First, a Prime Minister should be able to appoint Secretaries of State from outside Parliament. [How? A quick and dirty solution would be: a) shove them in the Lords, b) give Lords ministers ‘rights of audience’ in the Commons, c) strengthen the Select Committee system.]

Second, the 150 year experiment with a permanent civil service should end and Whitehall must open to outsiders. The role of Permanent Secretary should go and ministers should appoint departmental chief executives so they are really responsible for policy and implementation. Expertise should be brought in as needed with no restrictions from the destructive civil service ‘human resources’ system that programmes government to fail. Mass collaborations are revolutionising science [cf. Michael Nielsen’s brilliant book]; they could revolutionise policy. Real openness would bring urgent focus to Whitehall’s disastrous lack of skills in basic functions such as budgeting, contracts, procurement, legal advice, and project management.

Third, Whitehall’s functions should be amputated. The Department for Education improved as Gove shrank it. Other departments would benefit from extreme focus, simplification, and firing thousands of overpaid people. If the bureaucracy ceases to be ‘permanent’, it can adapt quickly. Instead of obsessing on process, distorting targets, and micromanaging methods, it could shift to incentivising goals and decentralising methods.

Fourth, existing legal relationships with the EU and ECHR must change. They are incompatible with democratic and effective government.

Fifth, Number Ten must be reoriented from ‘government by punditry’ to a focus on the operational planning and project management needed to convert priorities to reality over months and years.

Technological changes such as genetic engineering and machine intelligence are bringing revolution. It would be better to undertake it than undergo it.