On the referendum #30: Genetics, genomics, predictions & ‘the Gretzky game’ — a chance for Britain to help the world

Britain could contribute huge value to the world by leveraging existing assets, including scientific talent and how the NHS is structured, to push the frontiers of a rapidly evolving scientific field — genomic prediction — that is revolutionising healthcare in ways that give Britain some natural advantages over Europe and America. We should plan for free universal ‘SNP’ genetic sequencing as part of a shift to genuinely preventive medicine — a shift that will lessen suffering, save money, help British advanced technology companies in genomics and data science/AI, make Britain more attractive for scientists and global investment, and extend human knowledge in a crucial field to the benefit of the whole world.

‘SNP’ sequencing means, crudely, looking at the million or so most informative markers or genetic variants without sequencing every base pair in the genome. SNP sequencing costs ~$50 per person (less at scale), while whole genome sequencing costs ~$1,000 per person (less at scale). The former captures most of the predictive power now possible at 1/20th of the cost of the latter.

*

Background: what seemed ‘sci fi’ ~2010-13 is now reality

In my 2013 essay on education and politics, I summarised the view of expert scientists on genetics (HERE, pages 49-51, 72-74, 194-203). Although this was only a small part of the essay, most of the media coverage focused on it, particularly the controversies about IQ.

Regardless of political affiliation, most of the policy/media world, as a subset of ‘the educated classes’ in general, tended to hold a broadly ‘blank slate’ view of the world mostly uninformed by decades of scientific progress. Technical terms like ‘heritability’, which refers to the proportion of variance in a population attributable to genetic differences (not to any individual), caused a lot of confusion.

When my essay hit the media, fortunately for me the world’s leading expert, Robert Plomin, told hacks that I had summarised the state of the science accurately. (I never tried to ‘give my views on the science’ as I don’t have ‘views’ — all people like me can try to do with science is summarise the state of knowledge in good faith.) Quite a lot of hacks then spent some time talking to Plomin and some even wrote about how they came to realise that their assumptions about the science had been wrong (e.g. Gaby Hinsliff).

Many findings are counterintuitive, to say the least. Almost everybody naturally thinks that ‘the shared environment’ in the form of parental influence ‘obviously’ has a big impact on things like cognitive development. The science says this intuition is false. The shared environment is much less important than we assume and has very little measurable effect on cognitive development: e.g. an adopted child who does an IQ test in middle age will show on average almost no correlation with the parents who brought them up (genes become more influential as you age). People in the political world assumed a story of causation in which, crudely, wealthy people buy better education and this translates into better exam and IQ scores. The science says this story is false. Environmental effects on things like cognitive ability and educational achievement are almost all from what is known as the ‘non-shared environment’, which has proved very hard to pin down (environmental effects that differ between children, like random exposure to chemicals in utero).

Further, ‘The case for substantial genetic influence on g [g = general intelligence ≈ IQ] is stronger than for any other human characteristic’ (Plomin), and g/IQ has far more predictive power for future education than class does. All this has been known for years, sometimes decades, by expert scientists but is so contrary to what well-educated people want to believe that it was hardly known at all in ‘educated’ circles that make and report on policy.

Another big problem is that widespread ignorance about genetics extends to social scientists/economists, who are much more influential in politics/government than physical scientists. A useful heuristic is to throw ~100% of what you read from social scientists about ‘social mobility’ in the bin. Report after report repeats the same clichés, repeats factual errors about genetics, and is turned into talking points for MPs as justification for pet projects. ‘Kids who can read well come from homes with lots of books so let’s give families with kids struggling to read more books’ is the sort of argument you read in such reports without any mention of the truth: children and parents share genes that make them good at and enjoy reading, so causation is operating completely differently to the assumptions. It is hard to overstate the extent of this problem. (There are things we can do about ‘social mobility’; my point is that the Insider debate is awful.)

A related issue is that really understanding the science requires serious understanding of statistics and, now, AI/machine learning (ML). Many social scientists do not have this training. This problem will get worse as data science/AI invades the field. 

A good example is ‘early years’ and James Heckman. The political world is obsessed with ‘early years’ schemes such as Sure Start (UK) and Head Start (US). Politicians latch onto any ‘studies’ that seem to justify them and few have any idea about the shocking state of the studies usually quoted to justify spending decisions. Heckman has published many papers on early years and they are understandably widely quoted by politicians and the media. Heckman is a ‘Nobel Prize’ winner in economics. One of the world’s leading applied statisticians, Professor Andrew Gelman, has explained how Heckman has repeatedly made statistical errors in his papers but does not correct them: cf. ‘How does a Nobel-prize-winning economist become a victim of bog-standard selection bias?’ This really shows the scale of the problem: if a Nobel-winning economist makes ‘bog standard’ statistical errors that confuse him about studies on pre-school, what chance do the rest of us in the political/media world have?

Consider further that genomics now sometimes applies very advanced mathematical ideas such as ‘compressed sensing’. Inevitably few social scientists can judge such papers but they are overwhelmingly responsible for interpreting such things for ministers and senior officials. This is compounded by the dominance of social scientists in Whitehall units responsible for data and evidence. Many of these units are unable to provide proper scientific advice to ministers (I have had personal experience of this in the Department for Education). Two excellent articles by Duncan Watts recently explained fundamental problems with social science and what could be done (e.g. a much greater focus on successful prediction) but as far as I can tell they have had no impact on economists and sociologists, who do not want to face their lack of credibility and whose incentives in many ways push them towards continued failure (Nature paper HERE, Science paper HERE — NB. the Department for Education did not even subscribe to the world’s leading science journals until I insisted in 2011).

1) The fact that the evidence for early years is not what ministers and officials think it is does not mean funding should stop, but I won’t go into this now. 2) This problem is incontrovertible evidence, I think, of the value of an alpha data science unit in Downing Street, able to plug into the best researchers around the world and ensure that policy decisions are taken on the basis of rational thinking and good science or, just as important, that everybody is aware that they have to make decisions in the absence of this. This unit would pay for itself in weeks by identifying flawed reasoning and stopping bad projects, gimmicks etc. Of course, this idea has no chance with those now at the top of Government, and the Cabinet Office would crush such a unit as it would threaten the traditional hierarchy.

One of the arguments I made in my essay was that we should try to discover useful and reliable benchmarks for what children of different abilities are really capable of learning and build on things like the landmark Study of Mathematically Precocious Youth. This obvious idea is anathema to the education policy world, where there is almost no interest in things like SMPY and almost everybody supports the terrible idea that ‘all children must do the same exams’ (guaranteeing misery for some and boredom/time wasting for others). NB. Most rigorous large-scale educational RCTs are uninformative. Education research, like psychology, produces a lot of what Feynman called ‘cargo cult science’.

Since 2013, genomics has moved fast and understanding in the UK media has changed probably faster in five years than over the previous 35 years. As with the complexities of Brexit, journalists have caught up with reality much better than MPs. It’s still true that almost everything written by MPs about ‘social mobility’ is junk but you could see from the reviews of Plomin’s recent book, Blueprint, that many journalists have a much better sense of the science than they did in 2013. Rare good news, though much more progress is needed…

*

What’s happening now?


In 2013 it was already the case that the numbers on heritability derived from twin and adoption studies were being confirmed by direct inspection of DNA — therefore many of the arguments about twin/adoption studies were redundant — but this fact was hardly known.

I pointed out that the field would change fast. Both Plomin and another expert, Steve Hsu, made many predictions around 2010-13 some of which I referred to in my 2013 essay. Hsu is a physics professor who is also one of the world’s leading researchers on genomics. 

Hsu predicted that very large samples of DNA would allow scientists over the next few years to start identifying the actual genes responsible for complex traits, such as diseases and intelligence, and make meaningful predictions about the fate of individuals. Hsu gave estimates of the sample sizes that would be needed. His 2011 talk contains some of these predictions and also provides a physicist’s explanation of ‘what is IQ measuring’. As he said at Google in 2011, the technology is ‘right on the cusp of being able to answer fundamental questions’ and ‘if in ten years we all meet again in this room there’s a very good chance that some of the key questions we’ll know the answers to’. His 2014 paper explains the science in detail. If you spend a little time looking at this, you will know more than 99% of high status economists gabbling on TV about ‘social mobility’ saying things like ‘doing well on IQ tests just proves you can do IQ tests’.

In 2013, the world of Westminster thought this all sounded like science fiction and many MPs said I sounded like ‘a mad scientist’. Hsu’s predictions have come true and just five years later this is no longer ‘science fiction’. (Also NB. Hsu’s blog was one of the very few places where you would have seen discussion of CDOs long BEFORE the 2008 financial crash happened. I have followed his blog since ~2004 and this post from 2005, two years before the crash started, was the first time I read about things like ‘synthetic CDOs’: ‘we have yet another ill-understood casino running, with trillions of dollars in play’. The quant-physics network had much better insight into the dynamics behind the 2008 Crash than high status mainstream economists like Larry Summers who were responsible for regulation.)

His group and others have applied machine learning to very large genetic samples and built predictors of complex traits. Complex traits like general intelligence and most diseases are ‘polygenic’ — they depend on many genes each of which contributes a little (unlike diseases caused by a single gene). 

‘There are now ~20 disease conditions for which we can identify, e.g, the top 1% outliers with 5-10x normal risk for the disease. The papers reporting these results have almost all appeared within the last year or so.’


For example, the height predictor ‘captures nearly all of the predicted SNP heritability for this trait — actual heights of most individuals in validation tests are within a few cm of predicted heights.’ Height is similar to IQ — polygenic and similar heritability estimates.


These predictors have been validated with out-of-sample tests. They will get better and better as more and more data is gathered about more and more traits. 
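In outline such a predictor is simple: a weighted sum of an individual’s genotypes, with per-SNP weights fitted on one large sample and validated out-of-sample on another. Here is a toy sketch on simulated data (all numbers are hypothetical; the real predictors use sparse methods such as LASSO over hundreds of thousands of markers and samples):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy scale: real studies use ~1e5-1e6 people and markers.
n_train, n_test, m = 5000, 1000, 500
freqs = rng.uniform(0.05, 0.5, m)                       # minor allele frequencies
G = rng.binomial(2, freqs, size=(n_train + n_test, m)).astype(float)

# Simulate a polygenic trait: a few dozen SNPs each contribute a little.
beta = np.zeros(m)
causal = rng.choice(m, 50, replace=False)
beta[causal] = rng.normal(0, 1, 50)
y = G @ beta + rng.normal(0, np.linalg.norm(beta), len(G))  # genetic + noise

# Fit per-SNP effect sizes on the training sample (ridge regression as a
# stand-in for the sparse/compressed-sensing methods in the actual papers).
Gtr, ytr = G[:n_train], y[:n_train]
beta_hat = np.linalg.solve(Gtr.T @ Gtr + 10.0 * np.eye(m), Gtr.T @ ytr)

# Out-of-sample validation: correlate predicted with actual trait values.
score = G[n_train:] @ beta_hat
r = np.corrcoef(score, y[n_train:])[0, 1]
print(f"out-of-sample correlation: {r:.2f}")
```

The out-of-sample correlation is the standard check that a predictor has captured real signal rather than overfitting, which is the sense in which these predictors have been ‘validated’.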

This enables us to take DNA from unborn embryos, do SNP genetic sequencing costing ~$50, and make useful predictions about the odds of the embryo being an outlier for diseases like atrial fibrillation, diabetes, breast cancer, or prostate cancer. NB. It is important that we do not need to sequence the whole genome to do this (see below). We will also be able to make predictions about outliers in cognitive abilities (the high and low ends). (My impression is that predicting Alzheimer’s is still hampered by a lack of data but this will improve as the data improves.)
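The ‘top 1% outlier’ results quoted above can be illustrated with a standard liability-threshold simulation (the parameters below are illustrative assumptions, not figures from any real predictor): even a score explaining a modest share of the variance in disease liability concentrates risk heavily in the tail.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000        # simulated population
r2 = 0.15            # assumed share of liability variance the score explains
prevalence = 0.05    # assumed population disease prevalence

# Liability-threshold model: disease occurs when latent liability
# (polygenic score + everything else) exceeds a threshold.
score = rng.normal(0.0, np.sqrt(r2), n)
liability = score + rng.normal(0.0, np.sqrt(1.0 - r2), n)
disease = liability > np.quantile(liability, 1.0 - prevalence)

# Risk among the top 1% of polygenic scores vs the population average.
top1 = score > np.quantile(score, 0.99)
relative_risk = disease[top1].mean() / disease.mean()
print(f"top 1% of scores carry ~{relative_risk:.1f}x population risk")
```

With these assumptions the top 1% carry roughly five times the average risk, which is the shape of the ‘5-10x’ results quoted above; better predictors (higher r²) concentrate risk further.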

There are many big implications. This will obviously revolutionise IVF. ~1 million IVF embryos per year are screened worldwide using less sophisticated tests. Instead of picking embryos at random, parents will start avoiding outliers for disease risks and cognitive problems. Rich people will fly to jurisdictions offering the best services.

Forensics is being revolutionised. First, DNA samples can be used to give useful physical descriptions of suspects because you can identify ethnic group, height, hair colour etc. Second, ‘cold cases’ are now routinely being solved: if a DNA sample exists, the police can search public DNA databases for cousins of the perpetrator, then use the cousins to identify suspects. Every month or so now in America a cold case murder is solved and many serial killers are being found using this approach — just this morning I saw what looks to be another example announced, the murder of an 11-year-old in 1973. (Some companies are resisting this development but they will, I am very confident, be smashed in court and have their reputations trashed unless they change policy fast. The public will have no sympathy for those who stand in the way.)

Hsu recently attended a conference in the UK where he presented some of these ideas to UK policy makers. He wrote this blog about the great advantages the NHS has in developing this science. 

‘The UK could become the world leader in genomic research by combining population-level genotyping with NHS health records… The US private health insurance system produces the wrong incentives for this kind of innovation: payers are reluctant to fund prevention or early treatment because it is unclear who will capture the ROI [return on investment]… The NHS has the right incentives, the necessary scale, and access to a deep pool of scientific talent. The UK can lead the world into a new era of precision genomic medicine.

‘NHS has already announced an out-of-pocket genotyping service which allows individuals to pay for their own genotyping and to contribute their health + DNA data to scientific research. In recent years NHS has built an impressive infrastructure for whole genome sequencing (cost ~$1k per individual) that is used to treat cancer and diagnose rare genetic diseases. The NHS subsidiary Genomics England recently announced they had reached the milestone of 100k whole genomes…

‘At the meeting, I emphasized the following:

1. NHS should offer both inexpensive (~$50) genotyping (sufficient for risk prediction of common diseases) along with the more expensive $1k whole genome sequencing. This will alleviate some of the negative reaction concerning a “two-tier” NHS, as many more people can afford the former.

2. An in-depth analysis of cost-benefit for population wide inexpensive genotyping would likely show a large net cost savings: the risk predictors are good enough already to guide early interventions that save lives and money. Recognition of this net benefit would allow NHS to replace the $50 out-of-pocket cost with free standard of care.’ (Emphasis added)

NB. In terms of the short-term practicalities it is important that whole genome sequencing costs ~$1,000 (and falling) but is not necessary: a version at 1/20th of the cost, looking just at the most informative genetic variants, captures most of the predictive benefits. Some, such as companies like Illumina trying to sell expensive machines for whole genome sequencing, have incentives to obscure this, which can distort policy — let’s hope officials are watching carefully. These costs will, obviously, keep falling.

This connects to an interesting question… Why was the likely trend in genomics clear ~2010 to Plomin, Hsu and others but invisible to most? Obviously this involves lots of elements of expertise and feel for the field but also they identified FAVOURABLE EXPONENTIALS. Here is the fall in the cost of sequencing a genome compared to Moore’s Law, another famous exponential. The drop over ~18 years has been a factor of ~100,000. Hsu and Plomin could extrapolate that over a decade and figure out what would be possible when combined with other trends they could see. Researchers are already exploring what will be possible as this trend continues.

[Chart: the fall in the cost of sequencing a genome compared with Moore’s Law]
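The arithmetic behind spotting a favourable exponential is straightforward (using the illustrative figures above: a ~100,000x fall over ~18 years, against Moore’s law costs halving roughly every two years):

```python
import math

years, total_fall = 18, 100_000   # ~100,000x fall in sequencing cost (from the chart)

# Implied halving time for sequencing cost.
halvings = math.log2(total_fall)              # ~16.6 halvings
halving_time = years / halvings
print(f"sequencing cost halved every ~{halving_time:.1f} years")

# Moore's law over the same period, for comparison.
moore_fall = 2 ** (years / 2)
print(f"Moore's law over {years} years: ~{moore_fall:.0f}x")

# Naive extrapolation: one more decade at the same rate.
print(f"another decade at this rate: a further ~{2 ** (10 / halving_time):.0f}x fall")
```

So sequencing costs have been halving roughly every year, about twice the pace of Moore’s law — the kind of gap that let Plomin and Hsu extrapolate a decade ahead with some confidence.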

Identifying favourable exponentials is extremely powerful. Back in the early 1970s, the greatest team of computer science researchers ever assembled (PARC) looked out into the future and tried to imagine what could be possible if they brought that future back to the present and built it. They were trying to ‘compute in the future’. They created personal computing. (Chart by Alan Kay, one of the key researchers — he called it ‘the Gretzky game’ because of Gretzky’s famous line ‘I skate to where the puck is going to be, not where it has been.’ The computer is the Alto, the first personal computer that stunned Steve Jobs when he saw a demo. The sketch on the right is of children using a tablet device that Kay drew decades before the iPad was launched.)

[Alan Kay’s chart of ‘the Gretzky game’: the Alto, and his sketch of children using a tablet]

Hopefully the NHS and Department of Health will play ‘the Gretzky game’, take expert advice from the likes of Plomin and Hsu, and seize this opportunity to make the UK a world leader in one of the most important frontiers in science.

  • We can imagine everybody in the UK being given valuable information about their health for free, truly preventive medicine where we target resources at those most at risk, and early (even in utero) identification of risks.
  • This would help bootstrap British science into a stronger position with greater resources to study things like CRISPR and the next phase of this revolution — editing genes to fix problems, where clinical trials are already showing success.
  • It would also give a boost to British AI/data science companies — the laws, rules on data etc should be carefully shaped to ensure that British companies (not Silicon Valley or China) capture most of the financial value (though everybody will gain from the basic science).
  • These gains would have positive feedback effects on each other, just as investment in basic AI/ML research will have positive feedback effects in many industries.
  • I have argued many times for the creation of a civilian UK ‘ARPA’ — a centre for high-risk-high-payoff research that has been consistently blocked in Whitehall (see HERE for an account of how ARPA-PARC created the internet and personal computing). This fits naturally with Britain seeking to lead in genomics/AI. Thinking about this is part of a desperately needed overall investigation into the productivity of the British economy and the ecosystem of universities, basic science, venture capital, startups, regulation (data, intellectual property etc) and so on.

There will also be many controversies and problems. The ability to edit genomes — and even edit the germline with ‘gene drives’ so all descendants have the same copy of the gene — is a Promethean power implying extreme responsibilities. On a mundane level, embracing new technology is clearly hard for the NHS with its data infrastructure. Almost everyone I speak to who uses the NHS has had problems similar to mine — nightmares with GPs, hospitals, consultants et al being able to share data and records, things going missing, etc. The NHS will be crippled if it can’t fix this, which is another reason to integrate data science as a core ‘utility’ for the NHS.

On a political note…

Few scientists and even fewer in the tech world are aware of the EU’s legal framework for regulating technology and the implications of the recent Charter of Fundamental Rights (the EU’s Charter, NOT the ECHR) which gives the Commission/ECJ the power to regulate any advanced technology, accelerate the EU’s irrelevance, and incentivise investors to invest outside the EU. In many areas, the EU regulates to help the worst sort of giant corporate looters defending their position against entrepreneurs. Post-Brexit Britain will be outside this jurisdiction and able to make faster and better decisions about regulating technology like genomics, AI and robotics. Prediction: just as Insiders now talk of how we ‘dodged a bullet’ in staying out of the euro, within ~10 years Insiders will talk about being outside the Charter/ECJ and the EU’s regulation of data/AI in similar terms (assuming Brexit happens and UK politicians even try to do something other than copy the EU’s rules).

China is pushing very hard on genomics/AI and regards such fields as crucial strategic ground for its struggle for supremacy with America. America has political and regulatory barriers holding it back on genomics that are much weaker here. Britain cannot stop the development of such science. Britain can choose to be a backwater, to ignore such things and listen to MPs telling fairy stories while the Chinese plough ahead, or it can try to lead. But there is no hiding from the truth and ‘for progress there is no cure’ (von Neumann). We will never be the most important manufacturing nation again but we could lead in crucial sub-fields of advanced technology. As ARPA-PARC showed, tiny investments can create entire new industries and trillions of dollars of value.

Sadly most politicians of Left and Right have little interest in science funding, despite its tremendous implications for future growth, or in the broader question of productivity and the ecosystem of science, entrepreneurs, universities, funding, regulation etc, and we desperately need institutions that incentivise politicians and senior officials to ‘play the Gretzky game’. The next few months will be dominated by Brexit and, hopefully, the replacement of the May/Hammond government. Those thinking about the post-May landscape and trying to figure out how to navigate uncharted and turbulent waters should focus on one of the great lessons of politics that is weirdly hard for many MPs to internalise: the public rewards sustained focus on their priorities!

One of the lessons of the 2016 referendum (that many Conservative MPs remain desperate not to face) is the political significance of the NHS. The concept described above is one of those concepts in politics that maximises positive futures for the force that adopts it because it draws on multiple sources of strength. It combines, inter alia, all the political benefits of focus on the NHS; it helps domestic technology companies and incentivises global investment; it shows the world that Britain is (contra the May/Hammond outlook) open to science and high-skilled immigrants; it is based on intrinsic advantages that Europe and America will find hard to overcome over a decade; it supplies (NB. MPs/spads) a never-ending string of heart-wrenching good news stories; and, very rarely in SW1, those pushing it would be seen as leading something of global importance. It will, therefore, obviously be rejected by a section of Conservative MPs who much prefer to live in a parallel world, who hate anything to do with science and who are ignorant about how new industries and wealth are really created. But for anybody trying to orient themselves to reality, connect themselves to sources of power, and thinking ‘how on earth could we clamber out of this horror show’, it is an obvious home run…

NB. It ought to go without saying that turning this idea into a political/government success requires focus on A) the NHS, health, science, NOT getting sidetracked into B) arguments about things like IQ and social mobility. Over time, the educated classes will continue to be dragged to more realistic views on (B) but this will be a complex process entangled with many hysterical episodes. (A) requires ruthless focus…

Please leave comments and corrections below. I have not shown this blog in draft to Plomin or Hsu, who obviously are not responsible for my errors.

Further reading

Plomin’s excellent new book, Blueprint. I would encourage journalists who want to understand this subject to speak to Plomin who works in London and is able to explain complex technical subjects to very confused arts graduates like me.

On the genetic architecture of intelligence and other quantitative traits, Hsu 2014.

Cf. this thread by researcher Paul Pharaoh on breast cancer.

Hsu blogs on genomics.

Some recent developments with AI/ML, links to papers.

On how ARPA-PARC created the modern computer industry and lessons for high-risk-high-payoff science research.

My 2013 essay.

Effective action #4b: ‘Expertise’, prediction and noise, from the NHS killing people to Brexit

In part A I looked at extreme sports as some background to the question of true expertise and the crucial nature of fast high quality feedback.

This blog looks at studies comparing expertise in many fields over decades, including work by Tetlock and Kahneman, and problems like — why people don’t learn to use even simple tools to stop children dying unnecessarily. There is a summary of some basic lessons at the end.

The reason for writing about this is that we will only improve the performance of government (at individual, team and institutional levels) if we reflect on:

  • what expertise really is and why some very successful fields cultivate it effectively while others, like government, do not;
  • how to select much higher quality people (it’s insane that people as ignorant and limited as me can have the influence we do in the way we do — us limited duffers can help in limited ways, but why do we deliberately exclude ~100% of the most intelligent, talented, relentless, high performing people from fields with genuine expertise, and why do we not have people like Fields Medallist Tim Gowers or Michael Nielsen as Chief Scientist sitting ex officio in Cabinet?);
  • how to train people effectively to develop true expertise in skills relevant to government: it needs different intellectual content (PPE/economics are NOT good introductory degrees) and practice in practical skills (project management, making predictions and in general ‘thinking rationally’) with lots of fast, accurate feedback;
  • how to give them effective tools: e.g the Cabinet Room is worse in this respect than it was in July 1914 — at least then the clock and fireplace worked, and Lord Salisbury in the 1890s would walk round the Cabinet table gathering papers to burn in the grate — while today No10 is decades behind the state-of-the-art in old technologies like TV, doesn’t understand simple tools like checklists, and is nowhere with advanced technologies;
  • and how to ‘program’ institutions differently so that 1) people are more incentivised to optimise things we want them to optimise, like error-correction and predictive accuracy, and less incentivised to optimise bureaucratic process, prestige, and signalling as our institutions now do to a dangerous extent; 2) institutions are much better at building high performance teams rather than continuing with normal rules that make this practically illegal; and 3) we have ‘immune systems’ to minimise the inevitable failures of even the best people and teams.

In SW1 now, those at the apex of power practically never think in a serious way about the reasons for the endemic dysfunctional decision-making that constitutes most of their daily experience or how to change it. What looks like omnishambles to the public and high performers in technology or business is seen by Insiders, always implicitly and often explicitly, as ‘normal performance’. ‘Crises’ such as the collapse of Carillion or our farcical multi-decade multi-billion ‘aircraft carrier’ project occasionally provoke a few days of headlines but it’s very rare anything important changes in the underlying structures and there is no real reflection on system failure.

This fact is why, for example, a startup created in a few months could win a referendum that should have been unwinnable. It was the systemic and consistent dysfunction of Establishment decision-making systems over a long period, with very poor mechanisms for good accurate feedback from reality, that created the space for a guerrilla operation to exploit.

This makes it particularly ironic that even after Westminster and Whitehall have allowed their internal consensus about UK national strategy to be shattered by the referendum, there is essentially no serious reflection on this system failure. It is much more psychologically appealing for Insiders to blame ‘lies’ (Blair and Osborne really say this without blushing), devilish use of technology to twist minds and so on. Perhaps the most profound aspect of broken systems is that they cannot reflect on the reasons why they’re broken — never mind take effective action. Instead of serious thought, we have high status Insiders like Campbell reduced to bathos, whining on social media about Brexit ‘impacting mental health’. This lack of reflection is why Remain-dominated Insiders lurched from failure over the referendum to failure over the negotiations. OODA loops across SW1 are broken and this is very hard to fix — if you can’t orient to reality, how do you even see your problem well? (NB. It should go without saying that there is a faction of pro-Brexit MPs, ‘campaigners’ and ‘pro-Brexit economists’ who are at least as disconnected from reality as the May/Hammond bunker, often more so.)


In the commercial world, big companies mostly die within a few decades because they cannot maintain an internal system to keep them aligned to reality, while startups pop up. These two factors create learning at a system level — there is lots of micro failure but macro productivity/learning in which useful information is compressed and abstracted. In the political world, big established failing systems control the rules, suck in more and more resources rather than go bust, make it almost impossible for startups to contribute and so on. Even failures on the scale of the 2008 Crash or the 2016 referendum do not necessarily make broken systems face reality, at least quickly. Watching Parliament’s obsession with trivia in the face of the Cabinet’s and Whitehall’s contemptible failure to protect the interests of millions in the farcical Brexit negotiations is like watching the secretary of the Singapore Golf Club objecting to guns being placed on the links as the Japanese troops advanced.

Neither of the main parties has internalised the reality of these two crises. The Tories won’t face reality on things like corporate looting and the NHS; Labour won’t face reality on things like immigration and the limits of bureaucratic centralism. Neither can cope with the complexity of Brexit and both just look as I would in the ring with a professional fighter — baffled, terrified and desperate for a way to escape. There are so many simple ways to improve performance — and their own popularity! — but the system is stuck in such a closed loop it wilfully avoids seeing even the most obvious things and suppresses Insiders who want to do things differently…

But… there is a network of almost entirely younger people inside or close to the system thinking ‘we could do so much better than this’. Few senior Insiders are interested in these questions but that’s OK — few of them listened before the referendum either. It’s not the people now in power and running the parties and Whitehall who will determine whether we make Brexit a platform to contribute usefully to humanity’s biggest challenges, but those who take over.

Doing better requires reflecting on what we know about real expertise…

*

How to distinguish between fields dominated by real expertise and those dominated by confident ‘experts’ who make bad predictions?

We know a lot about the distinction between fields in which there is real expertise and fields dominated by bogus expertise. Daniel Kahneman, who has published some of the most important research about expertise and prediction, summarises the two fundamental tests to ask about a field: 1) is there enough informational structure in the environment to allow good predictions, and 2) is there timely and effective feedback that enables error-correction and learning.

‘To know whether you can trust a particular intuitive judgment, there are two questions you should ask: Is the environment in which the judgment is made sufficiently regular to enable predictions from the available evidence? The answer is yes for diagnosticians, no for stock pickers. Do the professionals have an adequate opportunity to learn the cues and the regularities? The answer here depends on the professionals’ experience and on the quality and speed with which they discover their mistakes. Anesthesiologists have a better chance to develop intuitions than radiologists do. Many of the professionals we encounter easily pass both tests, and their off-the-cuff judgments deserve to be taken seriously. In general, however, you should not take assertive and confident people at their own evaluation unless you have independent reason to believe that they know what they are talking about.’ (Emphasis added.)

In fields where these two elements are present there is genuine expertise and people build new knowledge on the reliable foundations of previous knowledge. Some fields make a transition from stories (e.g Icarus) and authority (e.g ‘witch doctor’) to quantitative models (e.g modern aircraft) and evidence/experiment (e.g some parts of modern medicine/surgery). As scientists have said since Newton, they stand on the shoulders of giants.

How do we assess predictions / judgement about the future?

‘Good judgment is often gauged against two gold standards – coherence and correspondence. Judgments are coherent if they demonstrate consistency with the axioms of probability theory or propositional logic. Judgments are correspondent if they agree with ground truth. When gold standards are unavailable, silver standards such as consistency and discrimination can be used to evaluate judgment quality. Individuals are consistent if they assign similar judgments to comparable stimuli, and they discriminate if they assign different judgments to dissimilar stimuli.

‘Coherence violations range from base rate neglect and confirmation bias to overconfidence and framing effects (Gilovich, Griffith & Kahneman, 2002; Kahneman, Slovic & Tversky, 1982). Experts are not immune. Statisticians (Christensen-Szalanski & Bushyhead, 1981), doctors (Eddy, 1982), and nurses (Bennett, 1980) neglect base rates. Physicians and intelligence professionals are susceptible to framing effects and financial investors are prone to overconfidence.

‘Research on correspondence tells a similar story. Numerous studies show that human predictions are frequently inaccurate and worse than simple linear models in many domains (e.g. Meehl, 1954; Dawes, Faust & Meehl, 1989). Once again, expertise doesn’t necessarily help. Inaccurate predictions have been found in parole officers, court judges, investment managers in the US and Taiwan, and politicians. However, expert predictions are better when the forecasting environment provides regular, clear feedback and there are repeated opportunities to learn (Kahneman & Klein, 2009; Shanteau, 1992). Examples include meteorologists, professional bridge players, and bookmakers at the racetrack, all of whom are well-calibrated in their own domains.‘ (Tetlock, How generalizable is good judgment?, 2017.)

In another 2017 piece Tetlock explored the studies further. In the 1920s, researchers built simple models based on expert assessments of 500 ears of corn and the prices they would fetch in the market. They found that ‘to everyone’s surprise, the models that mimicked the judges’ strategies nearly always performed better than the judges themselves’ (Tetlock, cf. ‘What Is in the Corn Judge’s Mind?’, Journal of the American Society of Agronomy, 1923). Banks found the same when they introduced models for credit decisions.

‘In other fields, from predicting the performance of newly hired salespeople to the bankruptcy risks of companies to the life expectancies of terminally ill cancer patients, the experience has been essentially the same. Even though experts usually possess deep knowledge, they often do not make good predictions…

‘When humans make predictions, wisdom gets mixed with “random noise.”… Bootstrapping, which incorporates expert judgment into a decision-making model, eliminates such inconsistencies while preserving the expert’s insights. But this does not occur when human judgment is employed on its own…

‘In fields ranging from medicine to finance, scores of studies have shown that replacing experts with models of experts produces superior judgments. In most cases, the bootstrapping model performed better than experts on their own. Nonetheless, bootstrapping models tend to be rather rudimentary in that human experts are usually needed to identify the factors that matter most in making predictions. Humans are also instrumental in assigning scores to the predictor variables (such as judging the strength of recommendation letters for college applications or the overall health of patients in medical cases). What’s more, humans are good at spotting when the model is getting out of date and needs updating…

‘Human experts typically provide signal, noise, and bias in unknown proportions, which makes it difficult to disentangle these three components in field settings. Whether humans or computers have the upper hand depends on many factors, including whether the tasks being undertaken are familiar or unique. When tasks are familiar and much data is available, computers will likely beat humans by being data-driven and highly consistent from one case to the next. But when tasks are unique (where creativity may matter more) and when data overload is not a problem for humans, humans will likely have an advantage…

‘One might think that humans have an advantage over models in understanding dynamically complex domains, with feedback loops, delays, and instability. But psychologists have examined how people learn about complex relationships in simulated dynamic environments (for example, a computer game modeling an airline’s strategic decisions or those of an electronics company managing a new product). Even after receiving extensive feedback after each round of play, the human subjects improved only slowly over time and failed to beat simple computer models. This raises questions about how much human expertise is desirable when building models for complex dynamic environments. The best way to find out is to compare how well humans and models do in specific domains and perhaps develop hybrid models that integrate different approaches.’ (Tetlock)
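The ‘judgmental bootstrapping’ Tetlock describes (fitting a simple model to the expert’s own judgments rather than to outcomes, then using the model in place of the expert) can be sketched in a few lines of Python. This is an illustrative sketch with invented data, not any researcher’s actual code:

```python
import numpy as np

def bootstrap_expert(cues, judgments):
    """Fit a linear model to the expert's OWN judgments (not to outcomes):
    'judgmental bootstrapping'. Returns an intercept plus one weight per cue."""
    X = np.column_stack([np.ones(len(cues)), cues])  # add intercept column
    weights, *_ = np.linalg.lstsq(X, judgments, rcond=None)
    return weights

def predict(weights, cues):
    """Apply the bootstrapped model to new (or the same) cases."""
    X = np.column_stack([np.ones(len(cues)), cues])
    return X @ weights

# Invented data: an expert rates six loan applicants (0-10) from two
# cues, assets and liabilities. The ratings mix a consistent policy
# with random noise; the fitted model keeps the policy, drops the noise.
cues = np.array([[5.0, 2.0], [3.0, 4.0], [8.0, 1.0],
                 [2.0, 5.0], [6.0, 3.0], [4.0, 2.0]])
ratings = np.array([7.1, 3.8, 9.2, 2.4, 6.9, 5.6])

w = bootstrap_expert(cues, ratings)
model_ratings = predict(w, cues)  # the 'bootstrapped expert' in action
```

Because the model applies the same weights every time, it never has an off day — which is the whole point of the studies Tetlock cites.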

Kahneman also recently published new work relevant to this.

‘Research has confirmed that in many tasks, experts’ decisions are highly variable: valuing stocks, appraising real estate, sentencing criminals, evaluating job performance, auditing financial statements, and more. The unavoidable conclusion is that professionals often make decisions that deviate significantly from those of their peers, from their own prior decisions, and from rules that they themselves claim to follow.’

In general, organisations spend almost no effort figuring out how noisy the predictions made by senior staff are and how much this costs. Kahneman has done some ‘noise audits’ and shown companies that managers make MUCH more variable predictions than anyone realises.
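A minimal version of such a ‘noise audit’ is easy to sketch: collect several professionals’ judgments on the same cases and measure how far pairs of judgments diverge. The figures below are invented for illustration:

```python
from itertools import combinations
from statistics import mean

def noise_audit(judgments_by_case):
    """Average relative difference between pairs of professionals
    judging the same case: |a - b| / mean(a, b), averaged over all
    pairs and all cases -- a rough version of a noise-audit metric."""
    diffs = []
    for judgments in judgments_by_case:
        for a, b in combinations(judgments, 2):
            diffs.append(abs(a - b) / ((a + b) / 2))
    return mean(diffs)

# Invented audit: five underwriters independently price the same three
# risks. Executives typically expect disagreement of ~10%; Kahneman's
# audits found far more.
cases = [
    [9800, 13500, 11000, 16000, 12200],
    [410, 560, 700, 480, 520],
    [76000, 64000, 91000, 70000, 83000],
]
noise = noise_audit(cases)
```

The instructive step is simply running the comparison at all: the cost of this variability is invisible until somebody measures it.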

‘What prevents companies from recognizing that the judgments of their employees are noisy? The answer lies in two familiar phenomena: Experienced professionals tend to have high confidence in the accuracy of their own judgments, and they also have high regard for their colleagues’ intelligence. This combination inevitably leads to an overestimation of agreement. When asked about what their colleagues would say, professionals expect others’ judgments to be much closer to their own than they actually are. Most of the time, of course, experienced professionals are completely unconcerned with what others might think and simply assume that theirs is the best answer. One reason the problem of noise is invisible is that people do not go through life imagining plausible alternatives to every judgment they make.

‘High skill develops in chess and driving through years of practice in a predictable environment, in which actions are followed by feedback that is both immediate and clear. Unfortunately, few professionals operate in such a world. In most jobs people learn to make judgments by hearing managers and colleagues explain and criticize—a much less reliable source of knowledge than learning from one’s mistakes. Long experience on a job always increases people’s confidence in their judgments, but in the absence of rapid feedback, confidence is no guarantee of either accuracy or consensus.’

Reviewing the point that Tetlock makes about simple models beating experts in many fields, Kahneman summarises the evidence:

‘People have competed against algorithms in several hundred contests of accuracy over the past 60 years, in tasks ranging from predicting the life expectancy of cancer patients to predicting the success of graduate students. Algorithms were more accurate than human professionals in about half the studies, and approximately tied with the humans in the others. The ties should also count as victories for the algorithms, which are more cost-effective…

‘The common assumption is that algorithms require statistical analysis of large amounts of data. For example, most people we talk to believe that data on thousands of loan applications and their outcomes is needed to develop an equation that predicts commercial loan defaults. Very few know that adequate algorithms can be developed without any outcome data at all — and with input information on only a small number of cases. We call predictive formulas that are built without outcome data “reasoned rules,” because they draw on commonsense reasoning.

‘The construction of a reasoned rule starts with the selection of a few (perhaps six to eight) variables that are incontrovertibly related to the outcome being predicted. If the outcome is loan default, for example, assets and liabilities will surely be included in the list. The next step is to assign these variables equal weight in the prediction formula, setting their sign in the obvious direction (positive for assets, negative for liabilities). The rule can then be constructed by a few simple calculations.

‘The surprising result of much research is that in many contexts reasoned rules are about as accurate as statistical models built with outcome data. Standard statistical models combine a set of predictive variables, which are assigned weights based on their relationship to the predicted outcomes and to one another. In many situations, however, these weights are both statistically unstable and practically unimportant. A simple rule that assigns equal weights to the selected variables is likely to be just as valid. Algorithms that weight variables equally and don’t rely on outcome data have proved successful in personnel selection, election forecasting, predictions about football games, and other applications.

‘The bottom line here is that if you plan to use an algorithm to reduce noise, you need not wait for outcome data. You can reap most of the benefits by using common sense to select variables and the simplest possible rule to combine them…

‘Uncomfortable as people may be with the idea, studies have shown that while humans can provide useful input to formulas, algorithms do better in the role of final decision maker. If the avoidance of errors is the only criterion, managers should be strongly advised to overrule the algorithm only in exceptional circumstances.’
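Kahneman’s ‘reasoned rule’ can be sketched directly from his description: pick a few variables, standardise them across the cases, give each equal weight with a common-sense sign, and sum. No outcome data is used at any point. The applicant data below is invented for illustration:

```python
from statistics import mean, stdev

def reasoned_rule(cases, signs):
    """Kahneman's 'reasoned rule': standardise each predictor across
    the cases, give every predictor equal weight with a common-sense
    sign, and sum. Outcome data is never consulted."""
    n = len(signs)
    cols = [[case[i] for case in cases] for i in range(n)]
    mus = [mean(c) for c in cols]
    sds = [stdev(c) for c in cols]
    return [sum(s * (case[i] - mus[i]) / sds[i] for i, s in enumerate(signs))
            for case in cases]

# Invented loan applicants: (assets, liabilities), with the signs set
# in the 'obvious direction': positive for assets, negative for
# liabilities.
applicants = [(120, 30), (40, 80), (95, 20), (15, 60)]
scores = reasoned_rule(applicants, signs=[+1, -1])
best = max(range(len(scores)), key=scores.__getitem__)  # strongest applicant
```

The standardisation step is what makes equal weights sensible: each variable contributes on the same scale, so none silently dominates.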

Jim Simons is a mathematician and the founder of the world’s most successful ‘quant fund’, Renaissance Technologies. While market prices appear close to random and are therefore extremely hard to predict, they are not quite random, and the right models/technology can exploit these small and fleeting opportunities. One of the lessons he learned early was: don’t turn off the model and go with your gut. At Renaissance, they trust models over instincts. The Bridgewater hedge fund led by Ray Dalio is similar. After nearly being destroyed early in his career, Dalio turned towards explicit model building as the basis for decisions, combined with radical attempts to create an internal system that incentivises error-correction. It works.

*

People fail to learn from even the great examples of success and the simplest lessons

One of the most interesting meta-lessons of studying high performance, though, is that simply demonstrating extreme success does NOT lead to much learning. For example:

  • ARPA and PARC created the internet and the PC. The PARC research team was an extraordinary collection of about two dozen people, managed in a very unusual way that created super-productive processes extremely different to normal bureaucracies. XEROX, which owned PARC, had the entire future of the computer industry in its hands, paid for by its own budgets, and it let Bill Gates and Steve Jobs walk off with everything, then shut down the research team that did it. And then, as Silicon Valley grew on the back of these efforts, almost nobody, including most of the billionaires who got rich from the dynamics created by ARPA-PARC, studied the nature of the organisation and processes and copied it. Even today, those trying to do edge-of-the-art research in a similar way to PARC, right at the heart of the Valley ecosystem, are struggling for long-term patient funding. As Alan Kay, one of the PARC team, said: ‘The most interesting thing has been the contrast between appreciation/exploitation of the inventions/contributions [of PARC] versus the almost complete lack of curiosity and interest in the processes that produced them.’ ARPA survived the 1970s but it was significantly changed and is no longer the freewheeling place that it was in the 1960s when it funded the internet. In many ways DARPA’s approach now is explicitly different to the old ARPA (the addition of the ‘D’ was a sign of internal bureaucratic changes).


  • ‘Systems management’ was invented in the 1950s and 1960s (partly based on wartime experience of large complex projects) to deal with the classified ICBM programme and Apollo. It put man on the moon, then NASA largely abandoned the approach and reverted to being (relative to 1963-9) a normal bureaucracy. Most of Washington has ignored the lessons ever since — look for example at the collapse of ObamaCare’s rollout, after which Insiders said ‘oh, looks like it was a system failure, wonder how we deal with this’, mostly unaware that America had developed a successful approach to such projects half a century earlier. This is particularly interesting given that China also studied Mueller’s approach to systems management in Apollo and, as we speak, is copying it in projects across China. The EU’s bureaucracy is, like Whitehall, an anti-checklist to high-level systems management — i.e. they violate almost every principle of effective action.
  • Buffett and Munger run the most successful investment partnership in world history. Every year for half a century they have explained some basic principles, particularly concerning incentives, behind organisational success. Practically no public companies take their advice, and all around us in Britain we see vast corporate looting while politicians of all parties fail to act — they don’t even read the Buffett/Munger lessons, let alone think about them. Even when given these lessons to read, they won’t read them (I know this because I’ve tried).

Perhaps you’re thinking: well, learning from these brilliant examples might be intrinsically hard, much harder than Cummings thinks. I don’t think this is quite right. Why? Partly because millions of well-educated and normally ethical people don’t learn even from much simpler things.

I will explore this separately soon but I’ll give just one example. The world of healthcare unnecessarily kills and injures people on a vast scale. Two aspects of this are 1) a deep resistance to learning from the success of very simple tools like checklists and 2) a deep resistance to facing the fact that most medical experts do not understand statistics properly and their routine misjudgements cause vast suffering, while warped incentives encourage widespread lies about statistics and irrational management. E.g. people are constantly told things like ‘you’ve tested positive for X therefore you have X’ and they then kill themselves. We KNOW how to practically eliminate certain sorts of medical injury/death. We KNOW how to teach and communicate statistics better. (Cf. Professor Gigerenzer for details. He was the motivation for including things like conditional probabilities in the new National Curriculum.) These are MUCH simpler problems than building ICBMs, putting man on the moon, creating the internet and PC, or being great investors. Yet our societies don’t solve them.
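The statistical mistake Gigerenzer highlights, confusing ‘tested positive’ with ‘have the condition’, takes three lines of arithmetic to correct. The test characteristics below are invented for illustration, not those of any real test:

```python
def posterior_given_positive(prevalence, sensitivity, specificity):
    """P(condition | positive test) by Bayes' rule -- the number that
    'you tested positive therefore you have X' silently replaces with 1."""
    p_positive = (sensitivity * prevalence
                  + (1 - specificity) * (1 - prevalence))
    return sensitivity * prevalence / p_positive

# Invented test characteristics: a condition with a base rate of 1 in
# 1,000, a test with 99% sensitivity and 95% specificity.
p = posterior_given_positive(prevalence=0.001, sensitivity=0.99,
                             specificity=0.95)
# In Gigerenzer's 'natural frequencies' framing: out of 1,000 people,
# about 1 truly has the condition and tests positive, while ~50 of the
# healthy 999 also test positive -- so only roughly 1 positive in 51
# reflects the condition.
```

With these numbers the posterior is around 2%, not the near-certainty the bare phrase ‘tested positive’ suggests, which is exactly why the framing of the statistics matters so much.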

Why?

Because we do not incentivise error-correction and predictive accuracy. People are not incentivised to consider the cost of their noisy judgements. Where incentives and culture are changed, performance magically changes. It is the nature of the systems, not (mostly) the nature of the people, that is the crucial ingredient in learning from proven simple success. In healthcare like in government generally, people are incentivised to engage in wasteful/dangerous signalling to a terrifying degree — not rigorous thinking and not solving problems.

I have experienced the problem with checklists first hand in the Department for Education when trying to get the social worker bureaucracy to think about checklists in the context of avoiding child killings like Baby P. Professionals tend to see checklists as undermining their status, and bureaucracies fight against learning even when some great officials try really hard (as some in the DfE did, such as Pamela Dow and Victoria Woodcock). ‘Social work is not the same as an airline, Dominic.’ No shit. Airlines can handle millions of people without killing any of them because they align incentives with predictive accuracy and error-correction.

Some appalling killings are inevitable but the social work bureaucracy will keep allowing unnecessary killings because it will not align incentives with error-correction. Undoing flawed incentives threatens the system so they’ll keep killing children instead — and they’re not particularly bad people, they’re normal people in a normal bureaucracy. The pilot dies with the passengers. The ‘CEO’ on over £150,000 a year presiding over another unnecessary death despite constantly increasing amounts of taxpayers’ money pouring in? Issue a statement that ‘this must never happen again’, tell the lawyers to redact embarrassing cockups on the grounds of ‘protecting someone’s anonymity’ (the ECHR is a great tool to cover up death by incompetence), fuck off to the golf course, and wait for the media circus to move on.

Why do so many things go wrong? Because usually nobody is incentivised to work relentlessly to suppress entropy, never mind come up with something new.

*

We can see some reasonably clear conclusions from decades of study on expertise and prediction in many fields.

  • Some fields are like extreme sport or physics: genuine expertise emerges because of fast effective feedback on errors.
  • Abstracting human wisdom into models often works better than relying on human experts as models are often more consistent and less noisy.
  • Models are also often cheaper and simpler to use.
  • Models do not have to be complex to be highly effective — quite the opposite, often simpler models outperform more sophisticated and expensive ones.
  • In many fields (which I’ve explored before but won’t go into again here) low tech very simple checklists have been extremely effective: e.g flying aircraft or surgery.
  • Successful individuals like Warren Buffett and Ray Dalio also create cognitive checklists to trap and correct normal cognitive biases that degrade individual and team performance.
  • Fields make progress towards genuine expertise when they make a transition from stories (e.g Icarus) and authority (e.g ‘witch doctor’) to quantitative models (e.g modern aircraft) and evidence/experiment (e.g some parts of modern medicine/surgery).
  • In the intellectual realm, maths and physics are fields dominated by genuine expertise and provide a useful benchmark to compare others against. They are also hierarchical. Social sciences have little in common with this.
  • Even when we have great examples of learning and progress, and we can see the principles behind them are relatively simple and do not require high intelligence to understand, they are so psychologically hard and run so counter to the dynamics of normal big organisations, that almost nobody learns from them. Extreme success is ‘easy to learn from’ in one sense and ‘the hardest thing in the world to learn from’ in another sense.

It is fascinating how remarkably little interest there is in the world of politics/government, and social sciences analysing politics/government, about all this evidence. This is partly because politics/government is an anti-learning and anti-expertise field, partly because the social sciences are swamped by what Feynman called ‘cargo cult science’ with very noisy predictions, little good feedback and learning, and a lot of chippiness at criticism whether it’s from statistics experts or the ‘ignorant masses’. Fields like ‘education research’ and ‘political science’ are particularly dreadful and packed with charlatans but much of economics is not much better (much pro- and anti-Brexit mainstream economics is classic ‘cargo cult’).

I have found there is overwhelmingly more interest in high technology circles than in government circles, but in high technology circles there is also a lot of incredulity and naivety about how government works — many assume politicians are trying and failing to achieve high performance, and don’t realise that in fact nobody is actually trying. This illusion extends to many well-connected businessmen who just can’t internalise the reality of the apex of power. I find that uneducated people on £20k living hundreds of miles from SW1 generally have a more accurate picture of daily No10 work than extremely well-connected billionaires.

This is all sobering and is another reason to be pessimistic about the chances of changing government from ‘normal’ to ‘high performance’ — but, pessimism of the intellect, optimism of the will…

If you are in Whitehall now watching the Brexit farce or abroad looking at similar, you will see from page 26 HERE a checklist for how to manage complex government projects at world class levels (if you find this interesting then read the whole paper). I will elaborate on this. I am also thinking about a project to look at the intersection of (roughly) five fields in order to make large improvements in the quality of people, ideas, tools, and institutions that determine political/government decisions and performance:

  • the science of prediction across different fields (e.g early warning systems, the Tetlock/IARPA project showing dramatic performance improvements),
  • what we know about high performance (individual/team/organisation) in different fields (e.g China’s application of ‘systems management’ to government),
  • technology and tools (e.g Bret Victor’s work, Michael Nielsen’s work on cognitive technologies, work on human-AI ‘minotaur’ teams),
  • political/government decision making affecting millions of people and trillions of dollars (e.g WMD, health), and
  • communication (e.g crisis management, applied psychology).

Progress requires attacking the ‘system of systems’ problem at the right ‘level’. Attacking the problems directly — let’s improve policy X and Y, let’s swap ‘incompetent’ A for ‘competent’ B — cannot touch the core problems, particularly the hardest meta-problem that government systems bitterly fight improvement. Solving the explicit surface problems of politics and government is best approached by a more general focus on applying abstract principles of effective action. We need to surround relatively specific problems with a more general approach. Attack at the right level will see specific solutions automatically ‘pop out’ of the system. One of the most powerful simplicities in all conflict (almost always unrecognised) is: ‘winning without fighting is the highest form of war’. If we approach the problem of government performance at the right level of generality then we have a chance to solve specific problems ‘without fighting’ — or, rather, without fighting nearly so much and the fighting will be more fruitful.

This is not a theoretical argument. If you look carefully at ancient texts and modern case studies, you see that applying a small number of very simple, powerful, but largely unrecognised principles (that are very hard for organisations to operationalise) can produce extremely surprising results.

How to jump from the Idea to Reality? More soon…


Ps. Just as I was about to hit publish on this, the DCMS Select Committee released their report on me. The sentence about the Singapore golf club at the top comes to mind.