On the referendum #33: High performance government, ‘cognitive technologies’, Michael Nielsen, Bret Victor, & ‘Seeing Rooms’


‘People, ideas, machines — in that order!’ Colonel Boyd.

‘The main thing that’s needed is simply the recognition of how important seeing is, and the will to do something about it.’ Bret Victor.

‘[T]he transfer of an entirely new and quite different framework for thinking about, designing, and using information systems … is immensely more difficult than transferring technology.’ Robert Taylor, one of the handful of people most responsible for the creation of the internet and personal computing, and an inspiration to Bret Victor.

‘[M]uch of our intellectual elite who think they have “the solutions” have actually cut themselves off from understanding the basis for much of the most important human progress.’ Michael Nielsen, physicist. 

Introduction

This blog looks at an intersection of decision-making, technology, high performance teams and government. It sketches some ideas of physicist Michael Nielsen about cognitive technologies and of computer visionary Bret Victor about the creation of dynamic tools to help understand complex systems and ‘argue with evidence’, such as ‘tools for authoring dynamic documents’ and ‘Seeing Rooms’ for decision-makers — i.e rooms designed to support decisions in complex environments. It compares normal Cabinet rooms, such as that used in summer 1914 or October 1962, with state-of-the-art Seeing Rooms. There is very powerful feedback between: a) creating dynamic tools to see complex systems deeper (to see inside, see across time, and see across possibilities), thus making it easier to work with reliable knowledge and interactive quantitative models, semi-automating error-correction etc, and b) the potential for big improvements in the performance of political and government decision-making.

It is relevant to Brexit and anybody thinking ‘how on earth do we escape this nightmare’ but 1) these ideas are not at all dependent on whether you support or oppose Brexit, about which reasonable people disagree, and 2) they are generally applicable to how to improve decision-making — for example, they are relevant to problems like ‘how to make decisions during a fast-moving nuclear crisis’, which I blogged about recently, or, if you are a journalist, ‘what future media could look like to help improve the debate of politics’. One of the tools Nielsen discusses makes memory a choice by embedding learning in long-term memory rather than leaving it, as it is for almost all of us, an accident. I know from my days working on education reform in government that it’s almost impossible to exaggerate how little those who work on education policy think about ‘how to improve learning’.

Fields make huge progress when they move from stories (e.g Icarus) and authority (e.g ‘witch doctor’) to evidence/experiment (e.g physics, wind tunnels) and quantitative models (e.g design of modern aircraft). Political ‘debate’ and the processes of government are largely what they have always been: conflict over stories and authorities in which almost nobody even tries to keep track of the facts/arguments/models they’re supposedly arguing about, or tries to learn from evidence, or tries to infer useful principles from examples of extreme success/failure. We can see much better than people could in the past how to shift towards processes of government being ‘partially rational discussion over facts and models and learning from the best examples of organisational success’. But one of the most fundamental and striking aspects of government is that practically nobody involved in it has the faintest interest in, or knowledge of, how to create high performance teams to make decisions amid uncertainty and complexity. This blindness is connected to another fundamental fact: critical institutions (including the senior civil service and the parties) are programmed to fight to stay dysfunctional, they fight to stay closed and avoid learning about high performance, they fight to exclude the most able people.

I wrote about some reasons for this before the referendum (cf. The Hollow Men). The Westminster and Whitehall response was along the lines of ‘natural party of government’, ‘Rolls Royce civil service’ blah blah. But the fact that Cameron, Heywood (the most powerful civil servant) et al did not understand many basic features of how the world works is why I and a few others gambled on the referendum — we knew that the systemic dysfunction of our institutions and the influence of grotesque incompetents provided an opportunity for extreme leverage. 

Since then, after three years in which the parties, No10 and the senior civil service have imploded (after doing the opposite of what Vote Leave said should happen on every aspect of the negotiations) one thing has held steady — Insiders refuse to ask basic questions about the reasons for this implosion, such as: ‘why Heywood didn’t even put together a sane regular weekly meeting schedule and ministers didn’t even notice all the tricks with agendas/minutes etc’, how are decisions really made in No10, why are so many of the people below some cognitive threshold for understanding basic concepts (cf. the current GATT A24 madness), what does it say about Westminster that both the Adonis-Remainers and the Cash-ERGers have become more detached from reality while a large section of the best-educated have effectively run information operations against their own brains to convince themselves of fairy stories about Facebook, Russia and Brexit…

It’s a mix of amusing and depressing — but not surprising to me — to hear Heywood explain HERE how the British state decided it couldn’t match the resources of a single multinational company or a single university in funding people to think about what the future might hold, which is linked to his failure to make serious contingency plans for losing the referendum. And of course Heywood claimed after the referendum that we didn’t need to worry about the civil service because on project management it has ‘nothing to learn’ from the best private companies. The elevation of Heywood in the pantheon of SW1 is the elevation of the courtier-fixer at the expense of the thinker and the manager — the universal praise for him recently is a beautifully eloquent signal that those in charge are the blind leading the blind and SW1 has forgotten skills of high value, the skills of public servants such as Alanbrooke or Michael Quinlan.

This blog is hopefully useful for some of those thinking about a) improving government around the world and/or b) ‘what comes after the coming collapse and reshaping of the British parties, and how to improve drastically the performance of critical institutions?’

Some old colleagues have said ‘Don’t put this stuff on the internet, we don’t want the second referendum mob looking at it.’ Don’t worry! Ideas like this have to be forced down people’s throats practically at gunpoint. Silicon Valley itself has barely absorbed Bret Victor’s ideas so how likely is it that there will be a rush to adopt them by the world of Blair and Grieve?! These guys can’t tell the difference between courtier-fixers and people with models for truly effective action like General Groves (HERE). Not one in a thousand will read a 10,000 word blog on the intersection of management and technology and the few who do will dismiss it as the babbling of a deluded fool, they won’t learn any more than they learned from the 2004 referendum or from Vote Leave. And if I’m wrong? Great. Things will improve fast and a second referendum based on both sides applying lessons from Bret Victor would be dynamite.

NB. Bret Victor’s project, Dynamic Land, is a non-profit. For an amount of money that a government department like the Department for Education loses weekly without any minister realising it’s lost (in the millions per week in my experience because the quality of financial control is so bad), it could provide crucial funding for Victor and help itself. Of course, any minister who proposed such a thing would be told by officials ‘this is illegal under EU procurement law and remember minister that we must obey EU procurement law forever regardless of Brexit’ — something I know from experience officials say to ministers whether it is legal or not when they don’t like something. And after all, ministers meekly accepted the Kafka-esque order from Heywood to prioritise duties of goodwill to the EU under A50 over preparations to leave A50, so habituated had Cameron’s children become to obeying the real deputy prime minister…

Below are 4 sections:

  1. The value found in intersections of fields
  2. Some ideas of Bret Victor
  3. Some ideas of Michael Nielsen
  4. A summary

*

1. Extreme value is often found in the intersection of fields

The legendary Colonel Boyd (he of the ‘OODA loop’) would shout at audiences ‘People, ideas, machines — in that order.’ Fundamental political problems we face require large improvements in the quality of all three and, harder, systems to integrate all three. Such improvements require looking carefully at the intersection of roughly five entangled areas of study. Extreme value is often found at such intersections.

  • Explore what we know about the selection, education and training of people for high performance (individual/team/organisation) in different fields. We should be selecting people much deeper in the tails of the ability curve — people who are +3 (~1:1,000) or +4 (~1:30,000) standard deviations above average on intelligence, relentless effort, operational ability and so on (now practically entirely absent from the ‘50 most powerful people in Britain’). We should train them in the general art of ‘thinking rationally’ and making decisions amid uncertainty (e.g Munger/Tetlock-style checklists, exercises on the SlateStarCodex blog). We should train them in the practical reasons for normal ‘mega-project failure’ and case studies such as the Manhattan Project (General Groves), ICBMs (Bernard Schriever), Apollo (George Mueller), ARPA-PARC (Robert Taylor) that illustrate how the ‘unrecognised simplicities’ of high performance bring extreme success, and make them work on such projects before they are responsible for billions, rather than putting people like Cameron in charge (after no experience other than bluffing through PPE then PR). NB. China’s leaders have studied these episodes intensely while American and British institutions have actively ‘unlearned’ these lessons.
  • Explore the frontiers of the science of prediction across different fields from physics to weather forecasting to finance and epidemiology. For example, ideas from physics about early warning systems in physical systems have application in many fields, including questions like: to what extent is it possible to predict which news will persist over different timescales, or predict wars from news and social media? There is interesting work combining game theory, machine learning, and Red Teams to predict security threats and improve penetration testing (physical and cyber). The Tetlock/IARPA project showed dramatic performance improvements in political forecasting are possible, contra what people such as Kahneman had thought possible. A recent Nature article by Duncan Watts explained fundamental problems with the way normal social science treats prediction and suggested new approaches — which have been almost entirely ignored by mainstream economists/social scientists. There is vast scope for applying ideas and tools from the physical sciences and data science/AI — largely ignored by mainstream social science, political parties, government bureaucracies and media — to social/political/government problems (as Vote Leave showed in the referendum, though this has been almost totally obscured by all the fake news: clue — it was not ‘microtargeting’).
  • Explore technology and tools. For example, Bret Victor’s work and Michael Nielsen’s work on cognitive technologies. The edge of performance in politics/government will be defined by teams that can combine the ancient ‘unrecognised simplicities of high performance’ with edge-of-the-art technology. No10 is decades behind the pace in old technologies like TV, doesn’t understand simple tools like checklists, and is nowhere with advanced technologies.
  • Explore the frontiers of communication (e.g crisis management, applied psychology). Technology enables people to improve communication with unprecedented speed, scale and iterative testing. It also allows people to wreak chaos with high leverage. The technologies are already beyond the ability of traditional government centralised bureaucracies to cope with. They will develop rapidly such that most such centralised bureaucracies lose more and more control while a few high performance governments use the leverage they bring (cf. China’s combination of mass surveillance, AI, genetic identification, cellphone tracking etc as they desperately scramble to keep control). The better educated think that psychological manipulation is something that happens to ‘the uneducated masses’ but they are extremely deluded — in many ways people like FT pundits are much easier to manipulate, their education actually makes them more susceptible to manipulation, and historically they are the ones who fall for things like Russian fake news (cf. the Guardian and New York Times on Stalin/terror/famine in the 1930s) just as now they fall for fake news about fake news. Despite the centrality of communication to politics it is remarkable how little attention Insiders pay to what works — never mind the question ‘what could work much better?’. The fact that so much of the media believes total rubbish about social media and Brexit shows that the media is incapable of analysing the intersection of politics and technology but, although it is obviously bad that the media disinforms the public, the only rational planning assumption is that this problem will continue and even get worse. The media cannot explain well either the use of TV or traditional polling — both have been extremely important for over 70 years — and there is no trend towards improvement, so a sound planning assumption is surely that the media will do even worse with new technologies and data science. This will provide large opportunities for good and evil. A new approach able to adapt to the environment an order of magnitude faster than now would disorient political opponents (desperately scrolling through Twitter) to such a degree — in Boyd’s terms it would ‘collapse their OODA loops’ — that it could create crucial political space for focus on the extremely hard process of rewiring government institutions, which now seems impossible for Insiders to focus on given their psychological/operational immersion in the hysteria of 24 hour rolling news and the constant crises generated by dysfunctional bureaucracies.
  • Explore how to re-program political/government institutions at the apex of decision-making authority so that a) people are more incentivised to optimise things we want them to optimise, like error-correction and predictive accuracy, and less incentivised to optimise bureaucratic process, prestige, and signalling as our institutions now do; b) institutions are incentivised to build high performance teams rather than make this practically illegal at the apex of government; and c) we have ‘immune systems’ based on decentralisation and distributed control to minimise the inevitable failures of even the best people and teams.

Example 1: Red Teams and pre-mortems can combat groupthink and normal cognitive biases but they are practically nowhere in the formal structure of governments. There is huge scope for a Parliament-mandated small and extremely elite Red Team operating next to, and in some senses above, the Cabinet Office to ensure diversity of opinions, fight groupthink and other standard biases, make sure lessons are learned and so on. Cost: a few million that it would recoup within weeks by stopping blunders.

Example 2: prediction tournaments/markets could improve policy and project management, with people able to ‘short’ official delivery timetables — imagine being able to short Grayling’s transport announcements, for example. In many areas new markets could help — e.g markets to allow shorting of house prices to dampen bubbles, as Chris Dillow and others have suggested. The way in which the IARPA/Tetlock work has been ignored in SW1 is proof that MPs and civil servants are not actually interested in — or incentivised to be interested in — who is right, who is actually an ‘expert’, and so on. There are tools available if new people do want to take these things seriously. Cost: a few million at most, possibly thousands, that it would recoup within a year by stopping blunders.
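
To make the Tetlock point concrete: the IARPA tournaments scored forecasters with the Brier score, the mean squared error between stated probabilities and what actually happened. A minimal sketch in Python (the forecasters and all numbers are invented purely for illustration):

```python
# Brier score: mean squared error between probabilistic forecasts and what
# happened. Lower is better; always saying 50% scores 0.25 on binary events.
# Forecasters and numbers below are invented purely for illustration.

def brier_score(forecasts, outcomes):
    """forecasts: probabilities given to 'event happens'; outcomes: 1 if it did, else 0."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 0, 1, 1]  # what actually happened

forecasters = {
    "confident pundit": [0.9, 0.8, 0.7, 0.4, 0.3],   # bold and often wrong
    "superforecaster":  [0.8, 0.2, 0.3, 0.7, 0.75],  # well calibrated
    "coin flipper":     [0.5, 0.5, 0.5, 0.5, 0.5],
}

for name, fs in sorted(forecasters.items(), key=lambda kv: brier_score(kv[1], outcomes)):
    print(f"{name:16s} Brier = {brier_score(fs, outcomes):.3f}")
```

Run over many questions, a scoring rule like this makes ‘who is actually an expert?’ an empirical question rather than a status contest — which is exactly why, I suspect, SW1 ignores it.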

Example 3: we need to consider projects that could bootstrap new international institutions that help solve more general coordination problems such as the risk of accidental nuclear war. The most obvious example of a project like this I can think of is a manned international lunar base which would be useful for a) basic science, b) the practical purposes of building urgently needed near-Earth infrastructure for space industrialisation, and c) to force the creation of new practical international institutions for cooperation between Great Powers. George Mueller’s team that put man on the moon in 1969 developed a plan to do this which would have been built by now if it had not been tragically abandoned in the 1970s. Jeff Bezos is explicitly trying to revive the Mueller vision and Britain should be helping him do it much faster. The old institutions like the UN and EU — built on early 20th Century assumptions about the performance of centralised bureaucracies — are incapable of solving global coordination problems. It seems to me more likely that institutions with the qualities we need will emerge out of solving big problems than out of think tank papers about reforming existing institutions. Cost = 10s/100s of billions, return = trillions, or near infinite if shifting our industrial/psychological frontiers into space drastically reduces the chances of widespread destruction.

A) Some fields have fantastic predictive models and there is a huge amount of high quality research, though there is a lot of low-hanging fruit in bringing methods from one field to another.

B) We know a lot about high performance including ‘systems management’ for complex projects but very few organisations use this knowledge and government institutions overwhelmingly try to ignore and suppress the knowledge we have.

C) Some fields have amazing tools for prediction and visualisation but very few organisations use these tools, and almost nobody in government (where colour photocopying is a major challenge) uses them.

D) We know a lot about successful communication but very few organisations use this knowledge and most base action on false ideas. E.g political parties spend millions on spreading ideas but almost nothing on thinking about whether the messages are psychologically compelling or whether their methods/distribution work, and TV companies spend billions on news but almost nothing on understanding what science says about how to convey complex ideas — hence you see massively overpaid presenters like Evan Davis babbling metaphors like ‘economic takeoff’ in front of an airport while his crew films a plane ‘taking off’, or ‘the economy down the plughole’ with pictures of — a plughole.

E) Many thousands worldwide are thinking about all sorts of big government issues but very few can bring them together into coherent plans that a government can deliver and there is almost no application of things like Red Teams and prediction markets. E.g it is impossible to describe the extent to which politicians in Britain do not even consider ‘the timetable and process for turning announcement X into reality’ as something to think about — for people like Cameron and Blair the announcement IS the only reality and ‘management’ is a dirty word for junior people to think about while they focus on ‘strategy’. As I have pointed out elsewhere, it is fascinating that elite business schools have been collecting billions in fees to teach their students WRONGLY that operational excellence is NOT a source of competitive advantage, so it is no surprise that politicians and bureaucrats get this wrong.

But I can see almost nobody integrating the very best knowledge we have about A+B+C+D with E and I strongly suspect there are trillion dollar bills lying on the ground that could be grabbed for trivial cost — trillion dollar bills that people with power are not thinking about and are incentivised not to think about. I might be wrong but I would remind readers that Vote Leave was itself a bet on this proposition being right and I think its success should make people update their beliefs on the competence of elite political institutions and the possibilities for improvement.

Here I want to explore one set of intersections — the ideas of Bret Victor and Michael Nielsen.

*

2. Bret Victor: Cognitive technologies, dynamic tools, interactive quantitative models, Seeing Rooms — making it as easy to insert facts, data, and models in political discussion as it is to insert emoji 

In the 1960s visionaries such as Joseph Licklider, Robert Taylor and Doug Engelbart developed a vision of networked interactive computing that provided the foundation not just for new technologies (the internet, PC etc) but for whole new industries. Licklider, Sutherland, Taylor et al provided a model (ARPA) for how science funding can work. Taylor provided a model (PARC) of how to manage a team of extremely talented people who turned a profound vision into reality. The original motivation for the vision of networked interactive computing was to help humans make good decisions in a complex world — or, ‘augmenting human intelligence’ and ‘man-machine symbiosis’. This story shows how to make big improvements in the world with very few resources if they are structured right: PARC involved ~25 key people and tens of millions over roughly a decade and generated trillions of dollars in value. If interested in the history and the super-productive processes behind the success of ARPA-PARC read THIS.

It’s fascinating that in many ways the original 1960s Licklider vision has still not been implemented. The Silicon Valley ecosystem developed parts of the vision but not others for complex reasons I don’t understand (cf. The Future of Programming). One of those who is trying to implement parts of the vision that have not been implemented is Bret Victor. Bret Victor is a rare thing: a genuine visionary in the computing world, according to some of those ‘present at the creation’ of ARPA-PARC such as Alan Kay. His ideas lie at critical intersections between the fields sketched above. Watch talks such as Inventing on Principle and Media for Thinking the Unthinkable and explore his current project, Dynamic Land, in Berkeley.

Victor has described, and now demonstrates in Dynamic Land, how existing tools fail and what is possible. His core principle is that creators need an immediate connection to what they are creating. Current programming languages and tools are mostly based on very old ideas before computers even had screens and there was essentially no interactivity — they date from the era of punched cards. They do not allow users to interact dynamically. New dynamic tools enable us to think previously unthinkable thoughts and allow us to see and interact with complex systems: to see inside, see across time, and see across possibilities.

I strongly recommend spending a few days exploring his whole website but I will summarise below his ideas on two things:

  1. His ideas about how to build new dynamic tools for working with data and interactive models.
  2. His ideas about transforming the physical spaces in which teams work so that dynamic tools are embedded in their environment — people work inside a tool.

Applying these ideas would radically improve how people make decisions in government and how the media reports politics/government.

Language and writing were cognitive technologies created thousands of years ago which enabled us to think previously unthinkable thoughts. Mathematical notation did the same over the past 1,000 years. For example, take a mathematics problem described by the 9th Century mathematician al-Khwarizmi (who gave us the word algorithm):

[Image: al-Khwarizmi’s problem, stated entirely in prose]

Once modern notation was invented, this could be written instead as:

x² + 10x = 39
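
To see what the notation buys, here is the standard completing-the-square solution of that equation — a few lines of symbol-pushing that al-Khwarizmi had to carry out entirely in prose:

```latex
\begin{align*}
x^2 + 10x &= 39 \\
x^2 + 10x + 25 &= 64 && \text{add } (10/2)^2 = 25 \text{ to complete the square} \\
(x + 5)^2 &= 64 \\
x + 5 &= \pm 8 \\
x &= 3 && \text{taking the positive root, as al-Khwarizmi did}
\end{align*}
```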

Michael Nielsen uses a similar analogy. Descartes and Fermat demonstrated that equations can be represented on a diagram and a diagram can be represented as an equation. This was a new cognitive technology, a new way of seeing and thinking: algebraic geometry. Changes to the ‘user interface’ of mathematics were critical to its evolution and allowed us to think unthinkable thoughts (Using Artificial Intelligence to Augment Human Intelligence, see below).

[Image: Nielsen’s illustration of equations and diagrams as two representations of the same mathematical object]

Similarly, the 18th Century saw the creation of data graphics to display trade figures. Before this, people could only read huge tables. This is the first data graphic:

[Image: an 18th Century chart of trade figures, the first data graphic]

The Jedi of data visualisation, Edward Tufte, describes this extraordinary graphic of Napoleon’s invasion of Russia as ‘probably the best statistical graphic ever drawn’. It shows the losses of Napoleon’s army: starting from the Polish-Russian border, the thick band shows the size of the army at each position; the path of Napoleon’s winter retreat from Moscow is shown by the dark lower band, which is tied to temperature and time scales (you can see some of the disastrous icy river crossings famously described by Tolstoy). NB. The Cabinet makes life-and-death decisions now with far inferior technology to this from the 19th Century (see below).

[Image: Minard’s graphic of Napoleon’s Russian campaign]

If we look at contemporary scientific papers, they represent extremely compressed information conveyed through a very old-fashioned medium, the scientific journal. Printed journals are centuries old but the ‘modern’ internet versions are usually similarly static. They do not show the behaviour of systems in a visual interactive way so we can see the connections between changing values in the models and changes in behaviour of the system. There is no immediate connection. Everything is pretty much the same as a pencil-and-paper version of the paper. In Media for Thinking the Unthinkable, Victor shows how dynamic tools can transform normal static representations so systems can be explored with immediate feedback. This dramatically shows how much more richly and deeply ideas can be explored. With Victor’s tools we can interact with the systems described and immediately grasp important ideas that are hidden in normal media.

Picture: the very dense writing of a famous paper (by chance the paper itself is at the intersection of politics/technology and Watts has written excellent stuff on fake news but has been ignored because it does not fit what ‘the educated’ want to believe)

[Image: a page of the paper in standard dense journal format]

Picture: the same information presented differently. Victor’s tools make the information less compressed so there’s less work for the brain to do ‘decompressing’. They not only provide visualisations: the little ‘sliders’ over the graphics let you drag and interact with the data so you see the connection between the changing data and the changing model. A dynamic tool transforms a scientific paper from ‘pencil and paper’ technology to modern interactive technology.

[Image: the same paper rendered with Victor’s dynamic tools, with draggable sliders over the graphics]
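
Victor’s actual tools can’t be embedded in a blog like this, but the core mechanic — a parameter bound to a live picture, so you drag a value and see the model respond — can be crudely approximated in a few lines of ordinary Python/matplotlib. This is only a sketch of the idea, not Victor’s code:

```python
# A crude approximation of Victor's 'immediate connection': drag a slider,
# see the model respond instantly. Plain matplotlib, not Victor's tooling.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider

t = np.linspace(0, 10, 500)

def model(growth_rate):
    """Toy exponential-growth model standing in for a real quantitative model."""
    return np.exp(growth_rate * t)

fig, ax = plt.subplots()
fig.subplots_adjust(bottom=0.25)          # leave room for the slider
(line,) = ax.plot(t, model(0.3))
ax.set_xlabel("time")
ax.set_ylabel("quantity")

slider_ax = fig.add_axes([0.15, 0.1, 0.7, 0.04])
rate = Slider(slider_ax, "growth rate", 0.0, 1.0, valinit=0.3)

def update(_):
    line.set_ydata(model(rate.val))       # recompute the model...
    ax.relim()
    ax.autoscale_view()                   # ...and redraw immediately
    fig.canvas.draw_idle()

rate.on_changed(update)
plt.show()
```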

Victor’s essay on climate change

Victor explains in detail how policy analysis and public debate of climate change could be transformed. Leave aside the subject matter: of course it’s extremely important, anybody interested in the issue will gain from reading the whole thing, and it would be great material for a school to use for an integrated science/economics/programming/politics project. But my focus is on his ideas about tools and thinking, not the specific subject matter.

Climate change is a great example to consider because it involves a) a lot of deep scientific knowledge, b) complex computer modelling which is understood in detail by a tiny fraction of 1% (and almost none of the social science trained ‘experts’ who are largely responsible for interpreting such models for politicians/journalists, cf HERE for the science of this), c) many complex political, economic, cultural issues, d) very tricky questions about how policy is discussed in mainstream culture, and e) the problem of how governments try to think about and act on important, complex, and long-term problems. Scientific knowledge is crucial but it cannot by itself answer the question: what to do? The ideas BV describes to transform the debate on climate change apply generally to how we approach all important political issues.

In the section Languages for technical computing, BV describes his overall philosophy (if you look at the original you will see dynamic graphics to help make each point but I can’t make them play on my blog — a good example of the failure of normal tools!):

‘The goal of my own research has been tools where scientists see what they’re doing in realtime, with immediate visual feedback and interactive exploration. I deeply believe that a sea change in invention and discovery is possible, once technologists are working in environments designed around:

  • ubiquitous visualization and in-context manipulation of the system being studied;
  • actively exploring system behavior across multiple levels of abstraction in parallel;
  • visually investigating system behavior by transforming, measuring, searching, abstracting;
  • seeing the values of all system variables, all at once, in context;
  • dynamic notations that embed simulation, and show the effects of parameter changes;
  • visually improvising special-purpose dynamic visualizations as needed.’

He then describes how the community of programming language developers have failed to create appropriate languages for scientists, which I won’t go into but which is fascinating.

He then describes the problem of how someone can usefully get to grips with a complex policy area involving technological elements.

‘How can an eager technologist find their way to sub-problems within other people’s projects where they might have a relevant idea? How can they be exposed to process problems common across many projects?… She wishes she could simply click on “gas turbines”, and explore the space:

  • What are open problems in the field?
  • Who’s working on which projects?
  • What are the fringe ideas?
  • What are the process bottlenecks?
  • What dominates cost? What limits adoption?
  • Why make improvements here? How would the world benefit?

‘None of this information is at her fingertips. Most isn’t even openly available — companies boast about successes, not roadblocks. For each topic, she would have to spend weeks tracking down and meeting with industry insiders. What she’d like is a tool that lets her skim across entire fields, browsing problems and discovering where she could be most useful…

‘Suppose my friend uncovers an interesting problem in gas turbines, and comes up with an idea for an improvement. Now what?

  • Is the improvement significant?
  • Is the solution technically feasible?
  • How much would the solution cost to produce?
  • How much would it need to cost to be viable?
  • Who would use it? What are their needs?
  • What metrics are even relevant?

‘Again, none of this information is at her fingertips, or even accessible. She’d have to spend weeks doing an analysis, tracking down relevant data, getting price quotes, talking to industry insiders.

‘What she’d like are tools for quickly estimating the answers to these questions, so she can fluidly explore the space of possibilities and identify ideas that have some hope of being important, feasible, and viable.

‘Consider the Plethora on-demand manufacturing service, which shows the mechanical designer an instant price quote, directly inside the CAD software, as they design a part in real-time. In what other ways could inventors be given rapid feedback while exploring ideas?’

Victor then describes a public debate over a public policy. Ideas were put forward. Everybody argued.

‘Who to believe? The real question is — why are readers and decision-makers forced to “believe” anything at all? Many claims made during the debate offered no numbers to back them up. Claims with numbers rarely provided context to interpret those numbers. And never — never! — were readers shown the calculations behind any numbers. Readers had to make up their minds on the basis of hand-waving, rhetoric, bombast.’

And there was no progress because nobody could really learn from the debate or even just be clear about exactly what was being proposed. Sound familiar?!! This is absolutely normal and Victor’s description applies to over 99% of public policy debates.

Victor then describes how you can take the policy argument he had sketched and change its nature. Instead of discussing words and stories, DISCUSS INTERACTIVE MODELS. 

Here you need to click to the original to understand the power of what he is talking about as he programs a simple example.

‘The reader can explore alternative scenarios, understand the tradeoffs involved, and come to an informed conclusion about whether any such proposal could be a good decision.

‘This is possible because the author is not just publishing words. The author has provided a model — a set of formulas and algorithms that calculate the consequences of a given scenario… Notice how the model’s assumptions are clearly visible, and can even be adjusted by the reader.

‘Readers are thus encouraged to examine and critique the model. If they disagree, they can modify it into a competing model with their own preferred assumptions, and use it to argue for their position. Model-driven material can be used as grounds for an informed debate about assumptions and tradeoffs.

‘Modeling leads naturally from the particular to the general. Instead of seeing an individual proposal as “right or wrong”, “bad or good”, people can see it as one point in a large space of possibilities. By exploring the model, they come to understand the landscape of that space, and are in a position to invent better ideas for all the proposals to come. Model-driven material can serve as a kind of enhanced imagination.’
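
The kernel of ‘model-driven material’ fits in a few lines. A hypothetical toy model in Python (all numbers are invented placeholders, not real policy data): the assumptions are named and visible, and a critic responds by editing them rather than with rhetoric.

```python
# The kernel of 'model-driven material': assumptions are explicit named values,
# the calculation is inspectable, and a critic argues by editing the model.
# All numbers are invented placeholders, not real policy data.

def payback_years(assumptions):
    """Years for an efficiency upgrade to pay for itself under the given assumptions."""
    annual_saving = assumptions["units_saved_per_year"] * assumptions["price_per_unit"]
    return assumptions["upfront_cost"] / annual_saving

author_view = {
    "upfront_cost": 10_000.0,       # currency units
    "units_saved_per_year": 4_000,  # e.g. kWh of energy saved annually
    "price_per_unit": 0.15,         # price per kWh
}

# A sceptical reader disagrees with one assumption and forks the model:
critic_view = dict(author_view, price_per_unit=0.08)

for name, view in [("author", author_view), ("critic", critic_view)]:
    print(f"{name}: payback in {payback_years(view):.1f} years")
```

The argument is now about a named number, price_per_unit, instead of about who sounds more confident.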

Victor then looks at some standard materials from those encouraging people to take personal action on climate change and concludes:

‘These are lists of proverbs. Little action items, mostly dequantified, entirely decontextualized. How significant is it to “eat wisely” and “trim your waste”? How does it compare to other sources of harm? How does it fit into the big picture? How many people would have to participate in order for there to be appreciable impact? How do you know that these aren’t token actions to assuage guilt?

‘And why trust them? Their rhetoric is catchy, but so is the horrific “denialist” rhetoric from the Cato Institute and similar. When the discussion is at the level of “trust me, I’m a scientist” and “look at the poor polar bears”, it becomes a matter of emotional appeal and faith, a form of religion.

‘Climate change is too important for us to operate on faith. Citizens need and deserve reading material which shows context — how significant suggested actions are in the big picture — and which embeds models — formulas and algorithms which calculate that significance, for different scenarios, from primary-source data and explicit assumptions.’

Even the supposed ‘pros’ — Insiders at the top of research fields in politically relevant areas — have to scramble around typing words into search engines, crawling around government websites, and scrolling through PDFs. Reliable data takes ages to find. Reliable models are even harder to find. Vast amounts of useful data and models exist but they cannot be found and used effectively because we lack the tools.

‘Authoring tools designed for arguing from evidence’

Why don’t we conduct public debates in the way his toy example does with interactive models? Why aren’t paragraphs in supposedly serious online newspapers written like this? Partly because of the culture, including the education of those who run governments and media organisations, but also because the resources for creating this sort of material don’t exist.

‘In order for model-driven material to become the norm, authors will need data, models, tools, and standards…

‘Suppose there were good access to good data and good models. How would an author write a document incorporating them? Today, even the most modern writing tools are designed around typing in words, not facts. These tools are suitable for promoting preconceived ideas, but provide no help in ensuring that words reflect reality, or any plausible model of reality. They encourage authors to fool themselves, and fool others

‘Imagine an authoring tool designed for arguing from evidence. I don’t mean merely juxtaposing a document and reference material, but literally “autocompleting” sourced facts directly into the document. Perhaps the tool would have built-in connections to fact databases and model repositories, not unlike the built-in spelling dictionary. What if it were as easy to insert facts, data, and models as it is to insert emoji and cat photos?

‘Furthermore, the point of embedding a model is that the reader can explore scenarios within the context of the document. This requires tools for authoring “dynamic documents” — documents whose contents change as the reader explores the model. Such tools are pretty much non-existent.’

These sorts of tools for authoring dynamic documents should be seen as foundational technology like the integrated circuit or the internet.

‘Foundational technology appears essential only in retrospect. Looking forward, these things have the character of “unknown unknowns” — they are rarely sought out (or funded!) as a solution to any specific problem. They appear out of the blue, initially seem niche, and eventually become relevant to everything.

‘They may be hard to predict, but they have some common characteristics. One is that they scale well. Integrated circuits and the internet both scaled their “basic idea” from a dozen elements to a billion. Another is that they are purpose-agnostic. They are “material” or “infrastructure”, not applications.’

Victor ends with a very potent comment — that much of what we observe is ‘rearranging app icons on the deck of the Titanic’. Commercial incentives drive people towards trying to create ‘the next Facebook’ — not fixing big social problems. I will address this below.

If you are an arts graduate interested in these subjects but not expert (like me), here is an example that will be more familiar… If you look at any big historical subject, such as ‘why/how did World War I start?’ and examine leading scholarship carefully, you will see that all the leading books on such subjects provide false chronologies and mix facts with errors such that it is impossible for a careful reader to be sure about crucial things. It is routine for famous historians to write that ‘X happened because Y’ when Y happened after X. Part of the problem is culture but this could potentially be improved by tools. A very crude example: why doesn’t Kindle make it possible for readers to log factual errors, with users’ reliability ranked by others, so authors can easily check potential errors and fix them in online versions of books? Even better, this could be part of a larger system to develop gold standard chronologies with each ‘fact’ linked to original sources and so on. This would improve the reliability of historical analysis and it would create an ‘anti-entropy’ ratchet — now, entropy means that errors spread across all books on a subject and there is no mechanism to reverse this…
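
A sketch of how simple the mechanics of such a system could be (hypothetical throughout — no such Kindle feature exists): reader reports accumulate weight according to the reliability other users assign each reporter, so the author sees the most credible corrections first.

```python
# A toy version of the error-logging idea (entirely hypothetical; no such
# Kindle feature exists). Reader reports are weighted by each reporter's
# reliability, as rated by other users, so the author sees the most
# credible corrections first.
from collections import defaultdict

reporter_reliability = {"alice": 0.9, "bob": 0.4, "carol": 0.8}  # 0..1, rated by other users

reports = [
    ("p.112: Y is dated before X, but Y happened after X", "alice"),
    ("p.112: Y is dated before X, but Y happened after X", "carol"),
    ("p.30: spelling of 'Moltke'", "bob"),
]

credibility = defaultdict(float)
for claim, reporter in reports:
    credibility[claim] += reporter_reliability[reporter]  # independent reports accumulate

for claim, weight in sorted(credibility.items(), key=lambda kv: -kv[1]):
    print(f"{weight:.1f}  {claim}")
```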

 

‘Seeing Rooms’: macro-tools to help make decisions

Victor also discusses another fundamental issue: the rooms/spaces in which most modern work and thinking occurs are not well-suited to the problems being tackled and we could do much better. Victor is addressing advanced manufacturing and robotics but his argument applies just as powerfully, perhaps more powerfully, to government analysis and decision-making.

Now, ‘software based tools are trapped in tiny rectangles’. We have very sophisticated tools but they all sit on computer screens on desks, just as you are reading this blog.

In contrast, ‘Real-world tools are in rooms where workers think with their bodies.’ Traditional crafts occur in spatial environments designed for that purpose. Workers walk around, use their hands, and think spatially. ‘The room becomes a macro-tool they’re embedded inside, an extension of the body.’ These rooms act like tools to help them understand their problems in detail and make good decisions.

Picture: rooms designed for the problems being tackled

[Image: traditional workrooms designed around their tasks]

The wave of 3D printing has developed ‘maker rooms’ and ‘Fab Labs’ where people work with a set of tools that would be too expensive for an individual to own. The room is itself a network of tools. This approach is revolutionising manufacturing.

Why is this useful?

‘Modern projects have complex behavior… Understanding requires seeing and the best seeing tools are rooms.’ This is obviously particularly true of politics and government.

Here is a photo of a recent NASA mission control room. The room is set up so that all relevant people can see relevant data and models at different scales and preserve a common picture of what is important. NASA pioneered thinking about such rooms and the technology and tools needed in the 1960s.

[Image: a recent NASA mission control room]

Here are pictures of two control rooms for power grids.

[Image: two control rooms for power grids]

Here is a panoramic photo of the unified control centre for the Large Hadron Collider – the biggest of ‘big data’ projects. Notice details like how they have removed all pillars so nothing interrupts visual communication between teams.

[Image: panorama of the unified control centre for the Large Hadron Collider]

Now contrast these rooms with rooms from politics.

Here is the Cabinet room. I have been in this room. There are effectively no tools. In the 19th Century at least Lord Salisbury used the fireplace as a tool. He would walk around the table, gather sensitive papers, and burn them at the end of meetings. The fire is now blocked. The only other tool, the clock, did not work when I was last there. Over a century, the physical space in which politicians make decisions affecting potentially billions of lives has deteriorated.

British Cabinet room, practically as it was in July 1914

[Image: the Cabinet room]

Here are JFK and EXCOM making decisions during the Cuban Missile Crisis, which moved much faster than July 1914, potentially compressing decisions leading to the destruction of global civilisation into just minutes.

[Image: JFK and EXCOM during the Cuban Missile Crisis]

Here is the only photo in the public domain of the room known as ‘COBRA’ (Cabinet Office Briefing Room) where a shifting set of characters at the apex of power in Britain meet to discuss crises.

[Image: the COBRA briefing room]

Notice how poor it is compared to NASA, the LHC etc. There has clearly been no attempt to learn from our best examples about how to use the room as a tool. The screens at the end are a late add-on to a room that is essentially indistinguishable from the room in which Prime Minister Asquith sat in July 1914 while doodling notes to his girlfriend as he got bored. I would be surprised if the video technology used is as good as what is commercially available more cheaply, the justification will be ‘security’, and I would bet that many of the decisions about the operation of this room would not survive scrutiny from experts in how to construct such rooms.

I have not attended a COBRA meeting but I’ve spoken to many who have. The meetings, as you would expect looking at this room, are often normal political meetings. That is:

  • aims are unclear,
  • assumptions are not made explicit,
  • there is no use of advanced tools,
  • there is no use of quantitative models,
  • discussions are often dominated by lawyers so many actions are deemed ‘unlawful’ without proper scrutiny (and this device is routinely used by officials to stop discussion of options they dislike for non-legal reasons),
  • there is constant confusion between policy, politics and PR, and then the cast disperses without clarity about what was discussed and agreed.

Here is a photo of the American equivalent – the Situation Room.

[Image: the White House Situation Room]

It has a few more screens but the picture is essentially the same: there are no interactive tools beyond the ability to speak to and see someone at a distance, which was invented back in the 1950s/1960s in the pioneering programs of SAGE (automated air defence) and Apollo (man on the moon). Tools to help thinking in powerful ways are not taken seriously. It is largely the same, and decisions are made in the same way, as in the Cuban Missile Crisis. In some ways the use of technology now makes management worse as it encourages Presidents and their staff to try to micromanage things they should not be managing, often in response to, or fear of, the media.

Individual ministers’ offices are also hopeless. The computers are old and rubbish. Even colour printing is often a battle. Walls are for kids’ pictures. In the DfE officials resented even giving us paper maps of where schools were and only did it when bullied by the private office. It was impossible for officials to work on interactive documents. They had no technology even for sharing documents in a way that was then (2011) normal even in low-performing organisations. Using GoogleDocs was ‘against the rules’. (I’m told this has slightly improved.) The whole structure of ‘submissions’ and ‘red boxes’ is hopeless. It is extremely bureaucratic and slow. It prevents serious analysis of quantitative models. It reinforces the lack of proper scientific thinking in policy analysis. It guarantees confusion as ministers scribble notes and private offices interpret rushed comments by exhausted ministers after dinner instead of having proper face-to-face meetings that get to the heart of problems and resolve conflicts quickly. The whole approach reinforces the abject failure of the senior civil service to think about high performance project management.

Of course, most of the problems with the standards of policy and management in the civil service are low or no-tech problems — they involve the ‘unrecognised simplicities’ that are independent of, and prior to, the use of technology — but all these things negatively reinforce each other. Anybody who wants to do things much better is scuppered by Whitehall’s entangled disaster zone of personnel, training, management, incentives and tools.

*

Dynamic Land: ‘amazing’

I won’t go into this in detail. Dynamic Land is in a building in Berkeley. I visited last year. It is Victor’s attempt to turn the ideas above into a sort of living laboratory. It is a large connected set of rooms that have computing embedded in surfaces. For example, you can scribble equations on a bit of paper, cameras in the ceiling read your scribbles automatically, turn them into code, and execute them — for example, by producing graphics. You can then physically interact with models that appear on the table or wall while the cameras watch your hands and instantly turn gestures into new code and change the graphics or whatever you are doing. Victor has put these cutting-edge tools into a space and made it open to the Berkeley community. This is all hard to explain/understand because you haven’t seen anything like it even in sci-fi films (it’s telling that the media still uses the 15-year-old Minority Report as its sci-fi illustration for such things).

This video gives a little taste. I visited with a physicist who works on the cutting edge of data science/AI. I was amazed but I know nothing about such things — I was interested to see his reaction as he scribbled gravitational equations on paper and watched the cameras turn them into models on the table in real-time, then he changed parameters and watched the graphics change in real-time on the table (projected from the ceiling): ‘Ohmygod, this is just obviously the future, absolutely amazing.’ The thought immediately struck us: imagine the implications of having policy discussions with such tools instead of the usual terrible meetings. Imagine discussing HS2 budgets or possible post-Brexit trading arrangements with the models running like this for decision-makers to interact with.

Video of Dynamic Land: the bits of coloured paper are ‘code’, graphics are projected from the ceiling

 

[Images: Dynamic Land in use, with bits of coloured paper as ‘code’ and graphics projected from the ceiling]

*

3. Michael Nielsen and cognitive technologies

Connected to Victor’s ideas are those of the brilliant physicist Michael Nielsen. Nielsen wrote the textbook on quantum computation and a great book, Reinventing Discovery, on the evolution of the scientific method. For example, instead of waiting for coincidences like Grossmann helping out Einstein with some crucial maths, new tools could create a sort of ‘designed serendipity’ to help potential collaborators find each other.

In his essay Thought as a Technology, Nielsen describes the feedback between thought and interfaces:

‘In extreme cases, to use such an interface is to enter a new world, containing objects and actions unlike any you’ve previously seen. At first these elements seem strange. But as they become familiar, you internalize the elements of this world. Eventually, you become fluent, discovering powerful and surprising idioms, emergent patterns hidden within the interface. You begin to think with the interface, learning patterns of thought that would formerly have seemed strange, but which become second nature. The interface begins to disappear, becoming part of your consciousness. You have been, in some measure, transformed.’

He describes how normal language and computer interfaces are cognitive technologies:

‘Language is an example of a cognitive technology: an external artifact, designed by humans, which can be internalized, and used as a substrate for cognition. That technology is made up of many individual pieces – words and phrases, in the case of language – which become basic elements of cognition. These elements of cognition are things we can think with…

‘In a similar way to language, maps etc, a computer interface can be a cognitive technology. To master an interface requires internalizing the objects and operations in the interface; they become elements of cognition. A sufficiently imaginative interface designer can invent entirely new elements of cognition… In general, what makes an interface transformational is when it introduces new elements of cognition that enable new modes of thought. More concretely, such an interface makes it easy to have insights or make discoveries that were formerly difficult or impossible. At the highest level, it will enable discoveries (or other forms of creativity) that go beyond all previous human achievement.’

Nielsen describes how powerful ways of thinking among mathematicians and physicists are hidden from view and not part of textbooks and normal teaching.

‘The reason is that traditional media are poorly adapted to working with such representations… If experts often develop their own representations, why do they sometimes not share those representations? To answer that question, suppose you think hard about a subject for several years… Eventually you push up against the limits of existing representations. If you’re strongly motivated – perhaps by the desire to solve a research problem – you may begin inventing new representations, to provide insights difficult through conventional means. You are effectively acting as your own interface designer. But the new representations you develop may be held entirely in your mind, and so are not constrained by traditional static media forms. Or even if based on static media, they may break social norms about what is an “acceptable” argument. Whatever the reason, they may be difficult to communicate using traditional media. And so they remain private, or are only discussed informally with expert colleagues.’

If we can create interfaces that reify deep principles, then ‘mastering the subject begins to coincide with mastering the interface.’ He gives the example of Photoshop which builds in many deep principles of image manipulation.

‘As you master interface elements such as layers, the clone stamp, and brushes, you’re well along the way to becoming an expert in image manipulation… By contrast, the interface to Microsoft Word contains few deep principles about writing, and as a result it is possible to master Word‘s interface without becoming a passable writer. This isn’t so much a criticism of Word, as it is a reflection of the fact that we have relatively few really strong and precise ideas about how to write well.’

He then describes what he calls ‘the cognitive outsourcing model’: ‘we specify a problem, send it to our device, which solves the problem, perhaps in a way we-the-user don’t understand, and sends back a solution.’ E.g we ask Google a question and Google sends us an answer.

This is how most of us think about the idea of augmenting the human intellect but it is not the best approach. ‘Rather than just solving problems expressed in terms we already understand, the goal is to change the thoughts we can think.’

‘One challenge in such work is that the outcomes are so difficult to imagine. What new elements of cognition can we invent? How will they affect the way human beings think? We cannot know until they’ve been invented.

‘As an analogy, compare today’s attempts to go to Mars with the exploration of the oceans during the great age of discovery. These appear similar, but while going to Mars is a specific, concrete goal, the seafarers of the 15th through 18th centuries didn’t know what they would find. They set out in flimsy boats, with vague plans, hoping to find something worth the risks. In that sense, it was even more difficult than today’s attempts on Mars.

‘Something similar is going on with intelligence augmentation. There are many worthwhile goals in technology, with very specific ends in mind. Things like artificial intelligence and life extension are solid, concrete goals. By contrast, new elements of cognition are harder to imagine, and seem vague by comparison. By definition, they’re ways of thinking which haven’t yet been invented. There’s no omniscient problem-solving box or life-extension pill to imagine. We cannot say a priori what new elements of cognition will look like, or what they will bring. But what we can do is ask good questions, and explore boldly.’

In another essay, Using Artificial Intelligence to Augment Human Intelligence, Nielsen points out that breakthroughs in creating powerful new cognitive technologies such as musical notation or Descartes’ invention of algebraic geometry are rare but ‘modern computers are a meta-medium enabling the rapid invention of many new cognitive technologies’ and, further, AI will help us ‘invent new cognitive technologies which transform the way we think.’

Further, historically powerful new cognitive technologies, such as ‘Feynman diagrams’, have often appeared strange at first. We should not assume that new interfaces should be ‘user friendly’. Powerful interfaces that repay mastery may require sacrifices.

‘The purpose of the best interfaces isn’t to be user-friendly in some shallow sense. It’s to be user-friendly in a much stronger sense, reifying deep principles about the world, making them the working conditions in which users live and create. At that point what once appeared strange can instead become comfortable and familiar, part of the pattern of thought…

‘Unfortunately, many in the AI community greatly underestimate the depth of interface design, often regarding it as a simple problem, mostly about making things pretty or easy-to-use. In this view, interface design is a problem to be handed off to others, while the hard work is to train some machine learning system.

‘This view is incorrect. At its deepest, interface design means developing the fundamental primitives human beings think and create with. This is a problem whose intellectual genesis goes back to the inventors of the alphabet, of cartography, and of musical notation, as well as modern giants such as Descartes, Playfair, Feynman, Engelbart, and Kay. It is one of the hardest, most important and most fundamental problems humanity grapples with.

‘As discussed earlier, in one common view of AI our computers will continue to get better at solving problems, but human beings will remain largely unchanged. In a second common view, human beings will be modified at the hardware level, perhaps directly through neural interfaces, or indirectly through whole brain emulation.

‘We’ve described a third view, in which AIs actually change humanity, helping us invent new cognitive technologies, which expand the range of human thought. Perhaps one day those cognitive technologies will, in turn, speed up the development of AI, in a virtuous feedback cycle:

[Diagram from Nielsen’s essay: the virtuous feedback cycle between AI and new cognitive technologies]

‘It would not be a Singularity in machines. Rather, it would be a Singularity in humanity’s range of thought… The long-term test of success will be the development of tools which are widely used by creators. Are artists using these tools to develop remarkable new styles? Are scientists in other fields using them to develop understanding in ways not otherwise possible?’

I would add: are governments using these tools to help them think in ways we already know are more powerful and to explore new ways of making decisions and shaping the complex systems on which we rely?

Nielsen also wrote this fascinating essay, ‘Augmenting long-term memory’. It describes a computer tool (Anki) that aids long-term memory using ‘spaced repetition’ — i.e testing yourself at growing intervals, which is shown to counter the normal (for most people) process of forgetting. This allows humans to turn memory into a choice: we can decide what to remember and achieve it systematically, rather than treating strong memory as a ‘weird/extreme gift’. (It’s fascinating that educated Greeks 2,500 years ago could build sophisticated mnemonic systems allowing them to remember vast amounts while almost all educated people now have no idea about such techniques.)
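To make the mechanics concrete, here is a minimal sketch of the scheduling idea: the interval before the next self-test grows while you keep recalling a card and resets when you forget. This is an illustrative toy, not Anki’s actual algorithm (which is a more elaborate descendant of SuperMemo’s SM-2).

```python
# Toy spaced-repetition scheduler: an illustrative sketch, NOT Anki's
# actual algorithm (Anki uses a more elaborate SM-2 descendant).
from dataclasses import dataclass

@dataclass
class Card:
    interval_days: float = 1.0  # gap until the next self-test
    ease: float = 2.5           # growth factor applied after each success

def review(card: Card, recalled: bool) -> Card:
    if recalled:
        card.interval_days *= card.ease        # push the next test further out
    else:
        card.interval_days = 1.0               # forgotten: restart the ladder
        card.ease = max(1.3, card.ease - 0.2)  # and grow more cautiously
    return card

card = Card()
for n, recalled in enumerate([True, True, True, False, True], start=1):
    card = review(card, recalled)
    print(f"review {n}: next test in ~{card.interval_days:.0f} days")
```

The point of the exponentially growing intervals is that a well-remembered card costs almost nothing to maintain: a handful of reviews can keep an item in memory for years.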

Connected to this, Nielsen also recently wrote an essay teaching fundamentals of quantum mechanics and quantum computers — but it is an essay with a twist:

‘[It] incorporates new user interface ideas to help you remember what you read… this essay isn’t just a conventional essay, it’s also a new medium, a mnemonic medium which integrates spaced-repetition testing. The medium itself makes memory a choice. This essay will likely take you an hour or two to read. In a conventional essay, you’d forget most of what you learned over the next few weeks, perhaps retaining a handful of ideas. But with spaced-repetition testing built into the medium, a small additional commitment of time means you will remember all the core material of the essay. Doing this won’t be difficult, it will be easier than the initial read. Furthermore, you’ll be able to read other material which builds on these ideas; it will open up an entire world…

‘Mastering new subjects requires internalizing the basic terminology and ideas of the subject. The mnemonic medium should radically speed up this memory step, converting it from a challenging obstruction into a routine step. Frankly, I believe it would accelerate human progress if all the deepest ideas of our civilization were available in a form like this.’

This obviously has very important implications for education policy. It also shows how computers could be used to improve learning — something that has generally been a failure since the great hopes at PARC in the 1970s. I have used Anki since reading Nielsen’s blog and I can feel it making a big difference to my mind/thoughts — how often is this true of things you read? DOWNLOAD ANKI NOW AND USE IT!

We need similarly creative experiments with new mediums that are designed to improve standards of high-stakes decision-making.

*

4. Summary

We could create systems for those making decisions about m/billions of lives and b/trillions of dollars, such as Downing Street or The White House, that integrate inter alia:

  • Cognitive toolkits compressing already existing useful knowledge such as checklists for rational thinking developed by the likes of Tetlock, Munger, Yudkowsky et al.
  • A Nielsen/Victor research program on ‘Seeing Rooms’, interface design, authoring tools, and cognitive technologies. Start with bunging a few million to Victor immediately in return for allowing some people to study what he is doing and apply it in Whitehall, then grow from there.
  • An alpha data science/AI operation — tapping into the world’s best minds including having someone like David Deutsch or Tim Gowers as a sort of ‘chief rationalist’ in the Cabinet (with Scott Alexander as deputy!) — to support rational decision-making where this is possible and explain when it is not possible (just as useful).
  • Tetlock/Hanson prediction tournaments could easily and cheaply be extended to consider ‘clusters’ of issues around themes like Brexit to improve policy and project management.
  • Groves/Mueller style ‘systems management’ integrated with the data science team.
  • Legally entrenched Red Teams where incentives are aligned to overcoming groupthink and error-correction of the most powerful. Warren Buffett points out that public companies considering an acquisition should employ a Red Team whose fees are dependent on the deal NOT going ahead. This is the sort of idea we need in No10.

Researchers could see the real operating environment of decision-makers at the apex of power, the sort of problems they need to solve under pressure, and the constraints of existing centralised systems. They could start with the safe level of ‘tools that we already know work really well’ — i.e things like cognitive toolkits and Red Teams — while experimenting with new tools and new ways of thinking.

Hedge funds like Bridgewater and some other interesting organisations think about such ideas though without the sophistication of Victor’s approach. The world of MPs, officials, the Institute for Government (a cheerleader for ‘carry on failing’), and pundits will not engage with these ideas if left to their own devices.

This is not the place to go into how to change this. We know that the normal approach is doomed to produce the normal results, and normal results applied to things like repeated WMD crises mean disaster sooner or later. As Buffett points out, ‘If there is only one chance in thirty of an event occurring in a given year, the likelihood of it occurring at least once in a century is 96.6%.’ It is not necessary to hope in order to persevere: optimism of the will, pessimism of the intellect…
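Buffett’s arithmetic is easy to verify and worth internalising: a small annual risk compounds into near-certainty over a century.

```python
# Buffett's point: a 1-in-30 chance per year is near-certain over a century.
p_year = 1 / 30
p_century = 1 - (1 - p_year) ** 100  # P(at least one occurrence in 100 years)
print(f"{p_century:.1%}")            # prints 96.6%
```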

*

A final thought…

A very interesting comment that I have heard from some of the most important scientists involved in the creation of advanced technologies is that ‘artists see things first’ — that is, artists glimpse possibilities before most technologists and long before most businessmen and politicians.

Pixar came from a weird combination of George Lucas getting divorced and the visionary Alan Kay suggesting to Steve Jobs that he buy a tiny special effects unit from Lucas, which Jobs did with completely wrong expectations about what would happen. For unexpected reasons this tiny unit turned into a huge success — as Jobs put it later, he was ‘sort of snookered’ into creating Pixar. Now Alan Kay says he struggles to get tech billionaires to understand the importance of Victor’s ideas.

The same story repeats: genuinely new ideas that could create huge value always seem so odd that almost all people in almost all organisations cannot see new possibilities. If this is true in Silicon Valley, how much more true is it in Whitehall or Washington… 

If one were setting up a new party in Britain, one could incorporate some of these ideas. This would of course also require recruiting very different types of people to the norm in politics. The closed nature of Westminster/Whitehall combined with first-past-the-post means it is very hard to solve the coordination problem of how to break into this system with a new way of doing things. Even those interested in principle don’t want to commit to a 10-year (?) project that might get them blasted on the front pages. Vote Leave hacked the referendum but such opportunities are much rarer than VC-funded ‘unicorns’. On the other hand, arguably what is happening now is a once in 50 or 100 year crisis and such crises also are the waves that can be ridden to change things normally unchangeable. A second referendum in 2020 is quite possible (or two referendums under PM Corbyn, propped up by the SNP?) and might be the ideal launchpad for a completely new sort of entity, not least because if it happens the Conservative Party may well not exist in any meaningful sense (whether there is or isn’t another referendum). It’s very hard to create a wave and it’s much easier to ride one. It’s more likely in a few years you will see some of the above ideas in novels or movies or video games than in government — their pickup in places like hedge funds and intelligence services will be discreet — but you never know…

*

Ps. While I have talked to Michael Nielsen and Bret Victor about their ideas, in no way should this blog be taken to imply their involvement in my ideas or plans, or their agreement with anything written above. I did not show this to them or even tell them I was writing about their work, and we do not work together in any way; I have just read and listened to their work over a few years and thought about how their ideas could improve government.

Further Reading

If interested in how to make things work much better, read this (lessons for government from the Apollo project) and this (lessons for government from ARPA-PARC’s creation of the internet and PC).

Links to recent reports on AI/ML.

On the referendum #30: Genetics, genomics, predictions & ‘the Gretzky game’ — a chance for Britain to help the world

Britain could contribute huge value to the world by leveraging existing assets, including scientific talent and how the NHS is structured, to push the frontiers of a rapidly evolving scientific field — genomic prediction — that is revolutionising healthcare in ways that give Britain some natural advantages over Europe and America. We should plan for free universal ‘SNP’ genetic sequencing as part of a shift to genuinely preventive medicine — a shift that will lessen suffering, save money, help British advanced technology companies in genomics and data science/AI, make Britain more attractive for scientists and global investment, and extend human knowledge in a crucial field to the benefit of the whole world.

‘SNP’ sequencing means, crudely, looking at the million or so most informative markers or genetic variants without sequencing every base pair in the genome. SNP sequencing costs ~$50 per person (less at scale), whole genome sequencing costs ~$1,000 per person (less at scale). The former captures most of the predictive power now possible at 1/20th of the cost of the latter.

*

Background: what seemed ‘sci fi’ ~2010-13 is now reality

In my 2013 essay on education and politics, I summarised the view of expert scientists on genetics (HERE between pages 49-51, 72-74, 194-203). Although this was only a small part of the essay, most of the media coverage focused on it, particularly controversies about IQ.

Regardless of political affiliation, most of the policy/media world, as a subset of ‘the educated classes’ in general, tended to hold a broadly ‘blank slate’ view of the world mostly uninformed by decades of scientific progress. Technical terms like ‘heritability’, which refers to the proportion of variance in a population attributable to genetic differences, caused a lot of confusion.

When my essay hit the media, fortunately for me the world’s leading expert, Robert Plomin, told hacks that I had summarised the state of the science accurately. (I never tried to ‘give my views on the science’ as I don’t have ‘views’ — all people like me can try to do with science is summarise the state of knowledge in good faith.) Quite a lot of hacks then spent some time talking to Plomin and some even wrote about how they came to realise that their assumptions about the science had been wrong (e.g Gaby Hinsliff).

Many findings are counterintuitive to say the least. Almost everybody naturally thinks that ‘the shared environment’ in the form of parental influence ‘obviously’ has a big impact on things like cognitive development. The science says this intuition is false. The shared environment is much less important than we assume and has very little measurable effect on cognitive development: e.g an adopted child who does an IQ test in middle age will show on average almost no correlation with the IQ of the parents who brought them up (genes become more influential as you age). People in the political world assumed a story of causation in which, crudely, wealthy people buy better education and this translates into better exam and IQ scores. The science says this story is false. Environmental effects on things like cognitive ability and educational achievement are almost all from what is known as the ‘non-shared environment’, which has proved very hard to pin down (environmental effects that differ between children, like random exposure to chemicals in utero). Further, ‘The case for substantial genetic influence on g [g = general intelligence ≈ IQ] is stronger than for any other human characteristic’ (Plomin) and g/IQ has far more predictive power for future education than class does. All this has been known for years, sometimes decades, by expert scientists but is so contrary to what well-educated people want to believe that it was hardly known at all in ‘educated’ circles that make and report on policy.
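For readers who want to see where such numbers come from, the classical twin-study logic fits in a few lines. The sketch below is Falconer’s textbook decomposition with illustrative correlations invented for the example; modern studies use more sophisticated models and, increasingly, direct DNA measurement.

```python
# Falconer's classical twin-study decomposition, a simplification of what
# modern studies do. The correlations below are illustrative, not real data.
r_mz = 0.75  # trait correlation for identical twins (share ~100% of genes)
r_dz = 0.45  # trait correlation for fraternal twins (share ~50% on average)

h2 = 2 * (r_mz - r_dz)  # heritability: share of variance from genetic differences
c2 = r_mz - h2          # shared environment (family-wide influences)
e2 = 1 - r_mz           # non-shared environment plus measurement error

print(f"heritability ~{h2:.2f}, shared env ~{c2:.2f}, non-shared env ~{e2:.2f}")
```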

Another big problem is that widespread ignorance about genetics extends to social scientists/economists, who are much more influential in politics/government than physical scientists. A useful heuristic is to throw ~100% of what you read from social scientists about ‘social mobility’ in the bin. Report after report repeats the same clichés, repeats factual errors about genetics, and is turned into talking points for MPs as justification for pet projects. ‘Kids who can read well come from homes with lots of books so let’s give families with kids struggling to read more books’ is the sort of argument you read in such reports without any mention of the truth: children and parents share genes that make them good at and enjoy reading, so causation is operating completely differently to the assumptions. It is hard to overstate the extent of this problem. (There are things we can do about ‘social mobility’; my point is that Insider debate is awful.)

A related issue is that really understanding the science requires serious understanding of statistics and, now, AI/machine learning (ML). Many social scientists do not have this training. This problem will get worse as data science/AI invades the field. 

A good example is ‘early years’ and James Heckman. The political world is obsessed with ‘early years’ programmes such as Sure Start (UK) and Head Start (US). Politicians latch onto any ‘studies’ that seem to justify them and few have any idea about the shocking state of the studies usually quoted to justify spending decisions. Heckman has published many papers on early years and they are understandably widely quoted by politicians and the media. Heckman is a ‘Nobel Prize’ winner in economics. One of the world’s leading statisticians, Professor Andrew Gelman, has explained how Heckman has repeatedly made statistical errors in his papers but does not correct them: cf. How does a Nobel-prize-winning economist become a victim of bog-standard selection bias? This really shows the scale of the problem: if a Nobel-winning economist makes ‘bog standard’ statistical errors that confuse him about studies on pre-school, what chance do the rest of us in the political/media world have?
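This sort of selection bias is easy to demonstrate for yourself: if only estimates that clear a statistical-significance filter get published and quoted, the surviving effect sizes are systematically inflated. Below is a hypothetical simulation (invented numbers, nothing to do with Heckman’s actual data).

```python
# Hypothetical demonstration of selection bias: filtering noisy estimates on
# 'statistical significance' inflates the survivors. Invented numbers only.
import random

random.seed(0)
true_effect, se = 0.1, 0.2  # a small real effect, studied with noisy methods
estimates = [random.gauss(true_effect, se) for _ in range(10_000)]

# Only 'significant' results (|estimate| > 1.96 standard errors) get quoted.
quoted = [e for e in estimates if abs(e) > 1.96 * se]

print(f"mean of all estimates:    {sum(estimates) / len(estimates):+.2f}")
print(f"mean of quoted estimates: {sum(quoted) / len(quoted):+.2f}")  # several times larger
```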

Consider further that genomics now sometimes applies very advanced mathematical ideas such as ‘compressed sensing’. Inevitably few social scientists can judge such papers but they are overwhelmingly responsible for interpreting such things for ministers and senior officials. This is compounded by the dominance of social scientists in Whitehall units responsible for data and evidence. Many of these units are unable to provide proper scientific advice to ministers (I have had personal experience of this in the Department for Education). Two excellent articles by Duncan Watts recently explained fundamental problems with social science and what could be done (e.g a much greater focus on successful prediction) but as far as I can tell they have had no impact on economists and sociologists who do not want to face their lack of credibility and whose incentives in many ways push them towards continued failure (Nature paper HERE, Science paper HERE — NB. the Department for Education did not even subscribe to the world’s leading science journals until I insisted in 2011).

1) That the evidence on early years is not what ministers and officials think it is does not mean funding should stop, but I won’t go into this now. 2) This problem is incontrovertible evidence, I think, of the value of an alpha data science unit in Downing Street, able to plug into the best researchers around the world and ensure that policy decisions are taken on the basis of rational thinking and good science or, just as important, that everybody is aware they have to make decisions in the absence of this. This unit would pay for itself in weeks by identifying flawed reasoning and stopping bad projects, gimmicks etc. Of course, this idea has no chance with those now at the top of Government, and the Cabinet Office would crush such a unit as it would threaten the traditional hierarchy. One of the arguments I made in my essay was that we should try to discover useful and reliable benchmarks for what children of different abilities are really capable of learning, and build on things like the landmark Study of Mathematically Precocious Youth (SMPY). This obvious idea is anathema to the education policy world, where there is almost no interest in things like SMPY and almost everybody supports the terrible idea that ‘all children must do the same exams’ (guaranteeing misery for some and boredom/time-wasting for others). NB. Most rigorous large-scale educational RCTs are uninformative. Education research, like psychology, produces a lot of what Feynman called ‘cargo cult science’.

Since 2013, genomics has moved fast and understanding in the UK media has changed probably faster in five years than over the previous 35 years. As with the complexities of Brexit, journalists have caught up with reality much better than MPs. It’s still true that almost everything written by MPs about ‘social mobility’ is junk but you could see from the reviews of Plomin’s recent book, Blueprint, that many journalists have a much better sense of the science than they did in 2013. Rare good news, though much more progress is needed…

*

What’s happening now?

In 2013 it was already the case that the numbers on heritability derived from twin and adoption studies were being confirmed by direct inspection of DNA — therefore many of the arguments about twin/adoption studies were redundant — but this fact was hardly known.

I pointed out that the field would change fast. Both Plomin and another expert, Steve Hsu, made many predictions around 2010-13, some of which I referred to in my 2013 essay. Hsu is a physics professor who is also one of the world’s leading researchers on genomics.

Hsu predicted that very large samples of DNA would allow scientists over the next few years to start identifying the actual genes responsible for complex traits, such as diseases and intelligence, and make meaningful predictions about the fate of individuals. Hsu gave estimates of the sample sizes that would be needed. His 2011 talk contains some of these predictions and also provides a physicist’s explanation of ‘what is IQ measuring’. As he said at Google in 2011, the technology is ‘right on the cusp of being able to answer fundamental questions’ and ‘if in ten years we all meet again in this room there’s a very good chance that some of the key questions we’ll know the answers to’. His 2014 paper explains the science in detail. If you spend a little time looking at this, you will know more than 99% of high status economists gabbling on TV about ‘social mobility’ saying things like ‘doing well on IQ tests just proves you can do IQ tests’.

In 2013, the world of Westminster thought this all sounded like science fiction and many MPs said I sounded like ‘a mad scientist’. Hsu’s predictions have come true and just five years later this is no longer ‘science fiction’. (Also NB. Hsu’s blog was one of the very few places where you would have seen discussion of CDOs and the 2008 financial crash long BEFORE it happened. I have followed his blog since ~2004 and this from 2005, two years before the crash started, was the first time I read about things like ‘synthetic CDOs’: ‘we have yet another ill-understood casino running, with trillions of dollars in play’. The quant-physics network had much better insight into the dynamics behind the 2008 Crash than the high status mainstream economists, like Larry Summers, who were responsible for regulation.)

His group and others have applied machine learning to very large genetic samples and built predictors of complex traits. Complex traits like general intelligence and most diseases are ‘polygenic’ — they depend on many genes each of which contributes a little (unlike diseases caused by a single gene). 

‘There are now ~20 disease conditions for which we can identify, e.g, the top 1% outliers with 5-10x normal risk for the disease. The papers reporting these results have almost all appeared within the last year or so.’

For example, the height predictor ‘captures nearly all of the predicted SNP heritability for this trait — actual heights of most individuals in validation tests are within a few cm of predicted heights.’ Height is similar to IQ — polygenic and similar heritability estimates.

These predictors have been validated with out-of-sample tests. They will get better and better as more and more data is gathered about more and more traits. 
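Mechanically, a polygenic predictor is simple once the per-variant weights have been learned from a huge sample: it is essentially a weighted sum over genetic variants. Below is a toy sketch with made-up weights; real predictors use tens or hundreds of thousands of SNPs and the out-of-sample validation described above.

```python
# Toy polygenic score: a weighted sum over SNP genotypes, where a genotype
# is the number of copies (0/1/2) of the risk allele someone carries. The
# weights are made up; real predictors learn them from very large samples.
weights = {"snp_a": 0.30, "snp_b": -0.12, "snp_c": 0.05}  # hypothetical effect sizes

def polygenic_score(genotype: dict[str, int]) -> float:
    return sum(w * genotype.get(snp, 0) for snp, w in weights.items())

person = {"snp_a": 2, "snp_b": 0, "snp_c": 1}
print(f"score = {polygenic_score(person):+.2f}")

# In practice, scores are computed for a whole cohort and individuals in the
# tails of the distribution (e.g the top 1%) are flagged as outliers, as in
# the 5-10x risk figures quoted above.
```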

This enables us to take DNA from embryos, do SNP genetic sequencing costing ~$50, and make useful predictions about the odds of the embryo being an outlier for diseases like atrial fibrillation, diabetes, breast cancer, or prostate cancer. NB. It is important that we do not need to sequence the whole genome to do this (see below). We will also be able to make predictions about outliers in cognitive abilities (the high and low ends). (My impression is that predicting Alzheimers is still hampered by a lack of data but this will improve as the data improves.)

There are many big implications. This will obviously revolutionise IVF. ~1 million IVF embryos per year are screened worldwide using less sophisticated tests. Instead of picking embryos at random, parents will start avoiding outliers for disease risks and cognitive problems. Rich people will fly to jurisdictions offering the best services.

Forensics is being revolutionised. First, DNA samples can be used to give useful physical descriptions of suspects because you can identify ethnic group, height, hair colour etc. Second, ‘cold cases’ are now routinely being solved because if a DNA sample exists, then the police can search for cousins of the perpetrator in public DNA databases, then use the cousins to identify suspects. Every month or so now in America a cold case murder is solved and many serial killers are being found using this approach — this morning I saw what looks to be another example just announced, the 1973 murder of an 11 year-old. (Some companies are resisting this development but they will, I am very confident, be smashed in court and have their reputations trashed unless they change policy fast. The public will have no sympathy for those who stand in the way.)

Hsu recently attended a conference in the UK where he presented some of these ideas to UK policy makers. He wrote this blog about the great advantages the NHS has in developing this science. 

‘The UK could become the world leader in genomic research by combining population-level genotyping with NHS health records… The US private health insurance system produces the wrong incentives for this kind of innovation: payers are reluctant to fund prevention or early treatment because it is unclear who will capture the ROI [return on investment]… The NHS has the right incentives, the necessary scale, and access to a deep pool of scientific talent. The UK can lead the world into a new era of precision genomic medicine.

‘NHS has already announced an out-of-pocket genotyping service which allows individuals to pay for their own genotyping and to contribute their health + DNA data to scientific research. In recent years NHS has built an impressive infrastructure for whole genome sequencing (cost ~$1k per individual) that is used to treat cancer and diagnose rare genetic diseases. The NHS subsidiary Genomics England recently announced they had reached the milestone of 100k whole genomes…

‘At the meeting, I emphasized the following:

1. NHS should offer both inexpensive (~$50) genotyping (sufficient for risk prediction of common diseases) along with the more expensive $1k whole genome sequencing. This will alleviate some of the negative reaction concerning a “two-tier” NHS, as many more people can afford the former.

2. An in-depth analysis of cost-benefit for population wide inexpensive genotyping would likely show a large net cost savings: the risk predictors are good enough already to guide early interventions that save lives and money. Recognition of this net benefit would allow NHS to replace the $50 out-of-pocket cost with free standard of care.’ (Emphasis added)

NB. In terms of the short-term practicalities it is important that whole genome sequencing costs ~$1,000 (and falling) but is not necessary: a version at 1/20th of the cost, looking just at the most informative genetic variants, captures most of the predictive benefits. Some have incentives to obscure this (e.g companies like Illumina trying to sell expensive whole genome sequencing machines), which can distort policy: let’s hope officials are watching carefully. These costs will, obviously, keep falling.

This connects to an interesting question… Why was the likely trend in genomics clear ~2010 to Plomin, Hsu and others but invisible to most? Obviously this involves lots of elements of expertise and feel for the field but also they identified FAVOURABLE EXPONENTIALS. Here is the fall in the cost of sequencing a genome compared to Moore’s Law, another famous exponential. The drop over ~18 years has been a factor of ~100,000. Hsu and Plomin could extrapolate that over a decade and figure out what would be possible when combined with other trends they could see. Researchers are already exploring what will be possible as this trend continues.

[Chart: the fall in the cost of sequencing a genome compared with Moore’s Law]
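The arithmetic of that exponential is worth doing explicitly: a ~100,000x drop over ~18 years means costs falling nearly 2x every year, a steeper curve than Moore’s Law.

```python
# The sequencing-cost exponential made explicit, using the ~100,000x drop
# over ~18 years quoted above.
factor, years = 100_000, 18
annual = factor ** (1 / years)
print(f"sequencing costs fell ~{annual:.1f}x per year")          # ~1.9x
print(f"Moore's Law equivalent: ~{2 ** (1 / 2):.1f}x per year")  # doubling every ~2 years
```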

Identifying favourable exponentials is extremely powerful. Back in the early 1970s, the greatest team of computer science researchers ever assembled (PARC) looked out into the future and tried to imagine what could be possible if they brought that future back to the present and built it. They were trying to ‘compute in the future’. They created personal computing. (Chart by Alan Kay, one of the key researchers — he called it ‘the Gretzky game’ because of Gretzky’s famous line ‘I skate to where the puck is going to be, not where it has been.’ The computer is the Alto, the first personal computer that stunned Steve Jobs when he saw a demo. The sketch on the right is of children using a tablet device that Kay drew decades before the iPad was launched.)

[Alan Kay’s ‘Gretzky game’ chart: the Alto, and his sketch, decades before the iPad, of children using a tablet]

Hopefully the NHS and Department for Health will play ‘the Gretzky game’, take expert advice from the likes of Plomin and Hsu and take this opportunity to make the UK a world leader in one of the most important frontiers in science.

  • We can imagine everybody in the UK being given valuable information about their health for free, truly preventive medicine where we target resources at those most at risk, and early (even in utero) identification of risks.
  • This would help bootstrap British science into a stronger position with greater resources to study things like CRISPR and the next phase of this revolution — editing genes to fix problems, where clinical trials are already showing success.
  • It would also give a boost to British AI/data science companies — the laws, rules on data etc should be carefully shaped to ensure that British companies (not Silicon Valley or China) capture most of the financial value (though everybody will gain from the basic science).
  • These gains would have positive feedback effects on each other, just as investment in basic AI/ML research will have positive feedback effects in many industries.
  • I have argued many times for the creation of a civilian UK ‘ARPA’ — a centre for high-risk-high-payoff research that has been consistently blocked in Whitehall (see HERE for an account of how ARPA-PARC created the internet and personal computing). This fits naturally with Britain seeking to lead in genomics/AI. Thinking about this is part of a desperately needed overall investigation into the productivity of the British economy and the ecosystem of universities, basic science, venture capital, startups, regulation (data, intellectual property etc) and so on.

There will also be many controversies and problems. The ability to edit genomes — and even edit the germline with ‘gene drives’ so all descendants have the same copy of the gene — is a Promethean power implying extreme responsibilities. On a mundane level, embracing new technology is clearly hard for the NHS given its data infrastructure. Almost everyone I speak to about using the NHS has had the same problems I have had: nightmares with GPs, hospitals, consultants et al being unable to share data and records, things going missing, etc. The NHS will be crippled if it can’t fix this, but this is another reason to integrate data science as a core ‘utility’ for the NHS.

On a political note…

Few scientists and even fewer in the tech world are aware of the EU’s legal framework for regulating technology and the implications of the recent Charter of Fundamental Rights (the EU’s Charter, NOT the ECHR), which gives the Commission/ECJ the power to regulate any advanced technology, a power that will accelerate the EU’s irrelevance and incentivise investors to invest outside the EU. In many areas, the EU regulates to help the worst sort of giant corporate looters defend their position against entrepreneurs. Post-Brexit Britain will be outside this jurisdiction and able to make faster and better decisions about regulating technology like genomics, AI and robotics. Prediction: just as Insiders now talk of how we ‘dodged a bullet’ in staying out of the euro, within ~10 years Insiders will talk in similar terms about being outside the Charter/ECJ and the EU’s regulation of data/AI (assuming Brexit happens and UK politicians even try to do something other than copy the EU’s rules).

China is pushing very hard on genomics/AI and regards such fields as crucial strategic ground in its struggle for supremacy with America. America has political and regulatory barriers holding it back on genomics; those barriers are much weaker here. Britain cannot stop the development of such science. Britain can choose to be a backwater, to ignore such things and listen to MPs telling fairy stories while the Chinese plough ahead, or it can try to lead. But there is no hiding from the truth and ‘for progress there is no cure’ (von Neumann). We will never be the most important manufacturing nation again but we could lead in crucial sub-fields of advanced technology. As ARPA-PARC showed, tiny investments can create entire new industries and trillions of dollars of value.

Sadly most politicians of Left and Right have little interest in science funding, despite its tremendous implications for future growth, or in the broader question of productivity and the ecosystem of science, entrepreneurs, universities, funding, regulation etc, and we desperately need institutions that incentivise politicians and senior officials to ‘play the Gretzky game’. The next few months will be dominated by Brexit and, hopefully, the replacement of the May/Hammond government. Those thinking about the post-May landscape and trying to figure out how to navigate uncharted and turbulent waters should focus on one of the great lessons of politics that is weirdly hard for many MPs to internalise: the public rewards sustained focus on their priorities!

One of the lessons of the 2016 referendum (that many Conservative MPs remain desperate not to face) is the political significance of the NHS. The concept described above is one of those concepts in politics that maximises positive futures for the force that adopts it because it draws on multiple sources of strength. It combines, inter alia, all the political benefits of focus on the NHS, help for domestic technology companies, incentives for global investment, and a demonstration to the world that Britain is (contra the May/Hammond outlook) open to science and high-skilled immigrants. It is based on intrinsic advantages that Europe and America will find hard to overcome over a decade. It supplies (NB. MPs/spads) a never-ending string of heart-wrenching good news stories. And, very rarely in SW1, those pushing it would be seen as leading something of global importance. It will, therefore, obviously be rejected by a section of Conservative MPs who much prefer to live in a parallel world, who hate anything to do with science and who are ignorant about how new industries and wealth are really created. But for anybody trying to orient themselves to reality, connect themselves to sources of power, and thinking ‘how on earth could we clamber out of this horror show’, it is an obvious home run…

NB. It ought to go without saying that turning this idea into a political/government success requires focus on A) the NHS, health, science, NOT getting sidetracked into B) arguments about things like IQ and social mobility. Over time, the educated classes will continue to be dragged to more realistic views on (B) but this will be a complex process entangled with many hysterical episodes. (A) requires ruthless focus…

Please leave comments, fix errors below. I have not shown this blog in draft to Plomin or Hsu who obviously are not responsible for my errors.

Further reading

Plomin’s excellent new book, Blueprint. I would encourage journalists who want to understand this subject to speak to Plomin who works in London and is able to explain complex technical subjects to very confused arts graduates like me.

On the genetic architecture of intelligence and other quantitative traits, Hsu 2014.

Cf. this thread by researcher Paul Pharaoh on breast cancer.

Hsu blogs on genomics.

Some recent developments with AI/ML, links to papers.

On how ARPA-PARC created the modern computer industry and lessons for high-risk-high-payoff science research.

My 2013 essay.

#29 On the referendum & #4c on Expertise: On the ARPA/PARC ‘Dream Machine’, science funding, high performance, and UK national strategy

Post-Brexit Britain should be considering the intersection of 1) ARPA/PARC-style science research and ‘systems management’ for managing complex projects with 2) the reform of government institutions so that high performance teams — with different education/training (‘Tetlock processes’) and tools (including data science and visualisations of interactive models of complex systems) — can make ‘better decisions in a complex world’.  

This paper examines the ARPA/PARC vision for computing and the nature of the two organisations. In the 1960s visionaries such as Joseph Licklider, Robert Taylor and Doug Engelbart developed a vision of networked interactive computing that provided the foundation not just for new technologies but for whole new industries. Licklider, Sutherland, Taylor et al provided a model (ARPA) for how science funding can work. Taylor provided a model (PARC) of how to manage a team of extremely talented people who turned a profound vision into reality. The original motivation for the vision of networked interactive computing was to help humans make good decisions in a complex world.

This story suggests ideas about how to make big improvements in the world with very few resources if they are structured right. From a British perspective it also suggests ideas about what post-Brexit Britain should do to help itself and the world and how it might be possible to force some sort of ‘phase transition’ on the rotten Westminster/Whitehall system.

For the PDF of the paper click HERE. Please correct errors with page numbers below. I will update it after feedback.

Further Reading

The Dream Machine.

Dealers of Lightning.

‘Sketchpad: A man-machine graphical communication system’, Ivan Sutherland 1963.

Oral history interview with Sutherland, head of ARPA’s IPTO division 1963-5.

This link has these seminal papers:

  • Man-Computer Symbiosis, Licklider (1960)
  • The computer as a communications device, Licklider & Taylor (1968)

Watch Alan Kay explain how to invent the future to YCombinator classes HERE and HERE.  

HERE for Kay quotes from emails with Bret Victor.

HERE for Kay’s paper on PARC, The Power of the Context.

Kay’s Early History of Smalltalk.

HERE for a conversation between Kay and Engelbart.

Alan Kay’s tribute to Ted Nelson at “Intertwingled” Fest (an Alto using Smalltalk).

Personal Distributed Computing: The Alto and Ethernet Software, Butler Lampson.

You and Your Research, Richard Hamming.

AI nationalism, essay by Ian Hogarth. This concerns implications of AI for geopolitics.

Drones go to work, Chris Anderson (one of the pioneers of commercial drones). This explains the economics of the drone industry.

Meditations on Moloch, Scott Alexander. This is an extremely good essay in general about deep problems with our institutions.

Intelligence Explosion Microeconomics, Yudkowsky.

Autonomous technology and the greater human good. Omohundro.

Can intelligence explode? Hutter.

For the issue of IQ, genetics and the distribution of talent (and much much more), cf. Steve Hsu’s brilliant blog.

Bret Victor.

Michael Nielsen.

For some pre-history on computers, cf. The birth of computational thinking (some of the history of computing devices before the Turing/von Neumann revolution) and The crisis of mathematical paradoxes, Gödel, Turing and the basis of computing (some of the history of ideas about mathematical foundations and logic, such as the famous papers by Gödel and Turing in the 1930s).

Part I of this series of blogs is HERE.

Part II on the emergence of ‘systems management’, how George Mueller used it to put man on the moon, and a checklist of how successful management of complex projects is systematically different to how Whitehall works is HERE.

Effective action #4a: ‘Expertise’ from fighting and physics to economics, politics and government

‘We learn most when we have the most to lose.’ Michael Nielsen, author of the brilliant book Reinventing Discovery.

‘There isn’t one novel thought in all of how Berkshire [Hathaway] is run. It’s all about … exploiting unrecognized simplicities … Warren [Buffett] and I aren’t prodigies. We can’t play chess blindfolded or be concert pianists. But the results are prodigious, because we have a temperamental advantage that more than compensates for a lack of IQ points.’ Charlie Munger, Warren Buffett’s partner.

I’m going to do a series of blogs on the differences between fields dominated by real expertise (like fighting and physics) and fields dominated by bogus expertise (like macroeconomic forecasting, politics/punditry, active fund management).

Fundamental to real expertise is 1) whether the informational structure of the environment is sufficiently regular that it’s possible to make good predictions and 2) whether it allows high quality feedback and therefore error-correction. Physics and fighting: yes. Predicting recessions, forex trading and politics: not so much. I’ll look at studies comparing expert performance in different fields and the superior performance of relatively very simple models over human experts in many fields.

This is useful background to consider a question I spend a lot of time thinking about: how to integrate a) ancient insights and modern case studies about high performance with b) new technology and tools in order to improve the quality of individual, team, and institutional decision-making in politics and government.

I think that fixing the deepest problems of politics and government requires a more general and abstract approach to principles of effective action than is usually considered in political discussion and such an approach could see solutions to specific problems almost magically appear, just as you see happen in a very small number of organisations — e.g Mueller’s Apollo program (man on the moon), PARC (interactive computing), Berkshire Hathaway (most successful investors in history), all of which have delivered what seems almost magical performance because they embody a few simple, powerful, but largely unrecognised principles. There is no ‘solution’ to the fundamental human problem of decision-making amid extreme complexity and uncertainty but we know a) there are ways to do things much better and b) governments mostly ignore them, so there is extremely valuable low-hanging fruit if, but it’s a big if, we can partially overcome the huge meta-problem that governments tend to resist the institutional changes needed to become a learning system.

This blog presents some basic background ideas and examples…

*

Extreme sports: fast feedback = real expertise 

In the 1980s and early 1990s, there was an interesting case study in how useful new knowledge jumped from a tiny isolated group to the general population with big effects on performance in a community. Expertise in Brazilian jiu-jitsu was taken from Brazil to southern California by the Gracie family. There were many sceptics but they vanished rapidly because the Gracies were empiricists. They issued ‘the Gracie challenge’.

All sorts of tough guys, trained in all sorts of ways, were invited to come to their garage/academy in Los Angeles to fight one of the Gracies or their trainees. Very quickly it became obvious that the Gracie training system was revolutionary and that they were real experts because they always won. There was very fast and clear feedback on predictions. Gracie jiu-jitsu quickly jumped from an LA garage to TV. At the televised UFC 1 event in 1993 Royce Gracie defeated everyone and a multi-billion dollar business was born.

People could see how training in this new skill could transform performance. Unarmed combat changed across the world. Disciplines other than jiu-jitsu have had to make a choice: either isolate themselves and not compete with jiu-jitsu, or learn from it. If interested, watch the first twenty minutes of this documentary (via professor Steve Hsu, physicist, amateur jiu-jitsu practitioner, and predictive genomics expert).

Video: Jiu Jitsu comes to Southern California

Royce Gracie, UFC 1 1993 


Flow, deep in the zone

Another field where there is clear expertise is extreme skiing and snowboarding. One of the leading pioneers, Jeremy Jones, describes how he rides ‘spines’ hurtling down the side of mountains:

‘The snow is so deep you need to use your arms and chest to swim, and your legs to ride. They also collapse underfoot, so you’re riding mini-avalanches and dodging slough slides. Spines have blind rollovers, so you can’t see below. Or to the side. Every time the midline is crossed, it’s a leap into the abyss. Plus, there’s no way to stop and every move is amplified by complicated forces. A tiny hop can easily become a twenty-foot ollie. It’s the absolute edge of chaos. But the easiest way to live in the moment is to put yourself in a situation where there’s no other choice. Spines demand that, they hurl you deep into the zone.’ Emphasis added.

Video: Snowboarder Jeremy Jones

What Jones calls ‘the zone’ is also known as ‘flow’ — a particular mental state, triggered by environmental cues, that brings greatly enhanced performance. It is the object of study in extreme sports and by the military and intelligence services: for example, DARPA is researching whether stimulating the brain can trigger ‘flow’ in snipers.

Flow — or control on ‘the edge of chaos’ where ‘every move is amplified by complicated forces’ — comes from training in which people learn from very rapid feedback between predictions and reality. In ‘flow’, brains very rapidly and accurately process environmental signals and generate hypothetical scenarios/predictions and possible solutions based on experience and training. Jones’s performance is inseparable from developing this fingertip feeling. Similarly, an expert fireman feels the glow of heat on his face in a slightly odd way and runs out of the building just before it collapses without consciously knowing why he did it: his intuition has been trained to learn from feedback and make predictions. Experts operating in ‘flow’ do not follow what is sometimes called the ‘rational model’ of decision-making in which they sequentially interrogate different options — they pattern-match solutions extremely quickly based on experience and intuition.

The video below shows extreme expertise in a state of ‘flow’ with feedback on predictions within milliseconds. This legendary ride is so famous not because of the size of the wave but its odd, and dangerous, nature. If you watch carefully you will see what a true expert in ‘flow’ can do: after committing to the wave Hamilton suddenly realises that unless he reaches back with the opposite hand to normal and drags it against the wall of water behind him, he will get sucked up the wave and might die. (This wave had killed someone a few weeks earlier.) Years of practice and feedback honed the intuition that, when faced with a very dangerous and fast moving problem, almost instantly (few seconds maximum) pattern-matched an innovative solution.

Video: surfer Laird Hamilton in one of the greatest ever rides

 

The faster the feedback cycle, the more likely you are to develop a qualitative improvement in speed that destroys an opponent’s decision-making cycle. If you can reorient yourself faster to the ever-changing environment than your opponent, then you operate inside their ‘OODA loop’ (Observe-Orient-Decide-Act) and the opponent’s performance can quickly degrade and collapse.

This lesson is vital in politics. You can read it in Sun Tzu and see it with Alexander the Great. Everybody can read such lessons and most people will nod along. But it is very hard to apply because most political/government organisations are programmed by their incentives to prioritise seniority, process and prestige over high performance and this slows and degrades decisions. Most organisations don’t do it. Further, political organisations tend to make too slowly those decisions that should be fast and too quickly those decisions that should be slow — they are simultaneously both too sluggish and too impetuous, which closes off favourable branching histories of the future.

Video: Boxer Floyd Mayweather, best fighter of his generation and one of the quickest and best defensive fighters ever

The most extreme example in extreme sports is probably ‘free soloing’ — climbing mountains without ropes where one mistake means instant death. If you want to see an example of genuine expertise and the value of fast feedback then watch Alex Honnold.

Video: Alex Honnold ‘free solos’ El Sendero Luminoso (terrifying)

Music is similar to sport. There is very fast feedback, learning, and a clear hierarchy of expertise.

Video: Glenn Gould playing the Goldberg Variations (slow version)

Our culture treats expertise/high performance in fields like sport and music very differently to maths/science education and politics/government. As Alan Kay observes, music and sport expertise is embedded in the broader culture. Millions of children spend large amounts of time practising hard skills. Attacks on them as ‘elitist’ don’t get the same damaging purchase as in other fields, and the public don’t mind elite selection for sports teams or orchestras.

‘Two ideas about this are that a) these [sport/music] are activities in which the basic act can be seen clearly from the first, and b) are already part of the larger culture. There are levels that can be seen to be inclusive starting with modest skills. I think a very large problem for the learning of both science and math is just how invisible are their processes, especially in schools.’ Kay 

When it comes to maths and science education, the powers-that-be (in America and Britain) try very hard, and mostly successfully, to ignore the question: where are the critical thresholds for valuable skills that develop true expertise? This is even more of a problem with the concept of ‘thinking rationally’, for which some basic logic, probability, and understanding of scientific reasoning is a foundation. Discussion of politics and government almost totally ignores the concept of training people to update their opinions in response to new evidence — i.e to adapt to feedback. The ‘rationalist community’ — people like Scott Alexander, who wrote this fantastic essay (Moloch) about why so much goes wrong, or the recent essays by Eliezer Yudkowsky — are ignored at the apex of power. I will return to the subject of how to create new education and training programmes for elite decision-makers. It is a good time for UK universities to innovate in this field, as places like Stanford are already doing. Instead of training people like Cameron and Adonis to bluff with PPE, we need courses that combine rational thinking with practical training in managing complex projects. We need people who practise really hard making predictions in ways we know work well (cf. Tetlock) and then update in response to errors.
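That Tetlock-style training is concrete, not hand-waving: forecasters attach probabilities to events, are scored with a proper scoring rule, and practise updating. The standard rule is the Brier score (the mean squared error between forecast probability and outcome; lower is better), sketched below with invented forecasts.

```python
# Brier score: mean squared error between forecast probabilities and what
# actually happened (1 = occurred, 0 = didn't). Lower is better; always
# saying 50% scores 0.25. The forecasts below are invented examples.
def brier(forecasts: list[float], outcomes: list[int]) -> float:
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 1, 0]
print(brier([0.9, 0.8, 0.1], outcomes))  # confident and right: 0.02
print(brier([0.5, 0.5, 0.5], outcomes))  # permanent hedging:   0.25
print(brier([0.1, 0.2, 0.9], outcomes))  # confident and wrong: ~0.75
```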

*

A more general/abstract approach to reforming government

If we want to get much higher performance in government, then we need to think rigorously about: the selection of people and teams, their education and training, their tools, and the institutions (incentives and so on) that surround and shape them.

Almost all analysis of politics and government considers relatively surface phenomena. For example, the media briefly blasts headlines about Carillion’s collapse or our comical aircraft carriers but there is almost no consideration of the deep reasons for such failures and therefore nothing tends to happen — the media caravan moves on and the officials and ministers keep failing in the same ways. This is why, for example, the predicted abject failure of the traditional Westminster machinery to cope with Brexit negotiations has not led to self-examination and learning but, instead, mostly to a visible determination across both sides of the Brexit divide in SW1 to double down on long-held delusions.

Progress requires attacking the ‘system of systems’ problem at the right ‘level’. Attacking the problems directly — let’s improve policy X and Y, let’s swap ‘incompetent’ A for ‘competent’ B — cannot touch the core problems, particularly the hardest meta-problem that government systems bitterly fight improvement. Solving the explicit surface problems of politics and government is best approached by a more general focus on applying abstract principles of effective action. We need to surround relatively specific problems with a more general approach. Attack at the right level will see specific solutions automatically ‘pop out’ of the system. One of the most powerful simplicities in all conflict (almost always unrecognised) is: ‘winning without fighting is the highest form of war’. If we approach the problem of government performance at the right level of generality then we have a chance to solve specific problems ‘without fighting’ — or, rather, without fighting nearly so much and the fighting will be more fruitful.

This is not a theoretical argument. If you look carefully at ancient texts and modern case studies, you see that applying a small number of very simple, powerful, but largely unrecognised principles (that are very hard for organisations to operationalise) can produce extremely surprising results.

We have no alternative to trying. Without fundamental changes to government, we will lose our hourly game of Russian roulette with technological progress.

‘The combination of physics and politics could render the surface of the earth uninhabitable… [T]he ever accelerating progress of technology and changes in the mode of human life … gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.’ John von Neumann

As Steve Hsu says: Pessimism of the Intellect, Optimism of the Will.


Ps. There is an interesting connection between the nature of counterfactual reasoning in the fast-moving world of extreme sports and the theoretical paper I posted yesterday on state-of-the-art AI. The human ability to interrogate stored representations of the environment with counterfactual questions is fundamental to the nature of intelligence and to developing expertise in physical and mental skills. It is, for now, absent in machines.

On the referendum #23, a year after victory: ‘a change of perspective is worth 80 IQ points’ & ‘how to capture the heavens’

‘Just like all British governments, they will act more or less in a hand to mouth way on the spur of the moment, but they will not think out and adopt a steady policy.’ Earl Cromer, 1896.

‘Fascinating that the same problems recur time after time, in almost every program, and that the management of the program, whether it happened to be government or industry, continues to avoid reality.’ George Mueller, pioneer of systems management and head of the Apollo programme to put man on the moon.

Traditional cultures, those that all humans lived in until quite recently and which still survive in pockets, don’t realise that they are living inside a particular perspective. They think that what they see is ‘reality’. It is, obviously, not their fault. It is not because they are stupid. It is a historical accident that they did/do not have access to mental models that help more accurate thinking about reality.

Westminster and the other political cultures dotted around the world are similar to these traditional cultures. They think they are living in ‘reality’. The MPs and pundits get up, read each other, tweet at each other, give speeches, send press releases, have dinner, attack, fuck or fight each other, do the same tomorrow and think ‘this is reality’. Like traditional cultures they are wrong. They are living inside a particular perspective that enormously distorts reality.

They are trapped in thinking about today and their careers. They are trapped in thinking about incremental improvements. Almost nobody has ever been part of a high performance team responsible for a complex project. The speciality is a hot take to explain post facto what one cannot predict. They mostly don’t know what they don’t know. They don’t understand the decentralised information processing that allows markets to enable complex coordination. They don’t understand how scientific research works and they don’t value it. Their daily activity is massively constrained by the party and state bureaucracies that incentivise behaviour very different to what humanity needs to create long-term value. As Michael Nielsen (author of Reinventing Discovery) writes:

‘[M]uch of our intellectual elite who think they have “the solutions” have actually cut themselves off from understanding the basis for much of the most important human progress.’

Unlike traditional cultures, our modern political cultures don’t have the excuse of our hunter-gatherer ancestors. We could do better. But it is very very hard to escape the core imperatives that make big bureaucracies — public companies as well as state bureaucracies — so bad at learning. Warren Buffett explained decades ago how institutions actively fight against learning and fight to stay in a closed and vicious feedback loop:

‘My most surprising discovery: the overwhelming importance in business of an unseen force that we might call “the institutional imperative”. In business school, I was given no hint of the imperative’s existence and I did not intuitively understand it when I entered the business world. I thought then that decent, intelligent, and experienced managers would automatically make rational business decisions. But I learned the hard way that isn’t so. Instead, rationality frequently wilts when the institutional imperative comes into play.

‘For example, 1) As if governed by Newton’s First Law, any institution will resist any change in its current direction. 2) … Corporate projects will materialise to soak up available funds. 3) Any business craving of the leader, however foolish, will quickly be supported by … his troops. 4) The behaviour of peer companies … will be mindlessly imitated.’

Almost nobody really learns from the world’s most successful investor about investing and how to run a successful business with good corporate governance. (People read what he writes but almost no investors choose to operate long-term like him, I think it is still true that not a single public company has copied his innovations with corporate governance like ‘no pay for company directors’, and governments have consistently rejected his and Munger’s advice about controlling the looting of public companies by management.) Almost nobody really learns how to do things better from the experience of dealing with this ‘institutional imperative’. We fail over and over again in the same way, trusting in institutions that are programmed to fail.

It is very very hard for humans to lift our eyes from today and to go out into the future and think about what could be done to bring the future back to the present. Like ants crawling around on the leaf, we political people only know our leaf.

Science has shown us a different way. Newton looked up from his leaf, looked far away from today, and created a new perspective — a new model of reality. It took an extreme genius to discover something like calculus but, once discovered, billions of people who are far from being geniuses can use this new perspective. Science advances by turning new ideas into standard ideas so each generation builds on the last.

Politics does the equivalent of constantly trying to reinvent children’s arithmetic and botching it. It does not build reliable foundations of knowledge. Archimedes is no longer cutting edge. Thucydides and Sun Tzu are still cutting edge. Even though Tetlock and others have shown how to start making similar progress with politics, our political cultures fiercely resist learning and fight ferociously to stay in closed and failing feedback loops.

In many ways our political culture has regressed as it has become more and more audio-visual and less and less literate. (Only 31% of US college graduates can read at a ‘proficient’ level. I’d guess it’s similar here. See end.) I’ve experimented with the way Jeff Bezos runs meetings at Amazon: i.e. start the meeting by giving people a 5-10 page memo to read. Impossible in Westminster; nobody will sit and read like that! Officials have tried and failed for a year to get senior ministers to engage with complex written material about the EU negotiations. TV news dominates politics and is extremely low-bandwidth: a bulletin contains a few hundred words and rarely uses graphics properly. Evan Davis illustrates a comment about ‘going down the plughole’ with a picture of water going down a plughole and Nick Robinson illustrates a comment about ‘the economy taking off’ with a picture of a plane taking off. The constant flow of bullshit from the likes of Robert Peston and Jon Snow dominates the medium because competition has been impossible until recently. BUT, although technology is making these charlatans less relevant (good), it also creates new problems and will not necessarily improve the culture.

Watching political news makes you dumber — switch it off and read books! If you work in it, either QUIT or go on holiday and come back determined to subvert it. How? Start with a previous blog which has some ideas, like tracking properly which people have a record of getting things right and wrong. Every editor I’ve suggested this to winces and says ‘impossible’. Insiders fear accountability and competition.

Today, the anniversary of the referendum, is a good day to forget the babble in the bubble and think about lessons from another project that changed the world, the famous ARPA/PARC team of the 1960s and 1970s.

*

ARPA/PARC and ‘capturing the heavens’: The best way to predict the future is to invent it

The panic over Sputnik brought many good things such as a huge increase in science funding. America also created the Advanced Research Projects Agency (ARPA, which later added ‘Defense’ and became DARPA). Its job was to fund high risk / high payoff technology development. In the 1960s and 1970s, a combination of unusual people and unusually wise funding from ARPA created a community that in turn invented the internet, or ‘the intergalactic network’ as Licklider originally called it, and the personal computer. One of the elements of this community was PARC, a research centre working for Xerox. As Bill Gates said, he and Steve Jobs essentially broke into PARC, stole their ideas, and created Microsoft and Apple.

The ARPA/PARC project has created over 35 TRILLION DOLLARS of value for society and counting.

The whole story is fascinating in many ways. I won’t go into the technological aspects. I just want to say something about the process.

What does a process that produces ideas that change the world look like?

One of the central figures was Alan Kay. One of the most interesting things about the project is that not only has almost nobody tried to repeat this sort of research, but the business world has even gone out of its way to spread misinformation about it because it was seen as so threatening to business-as-usual.

I will sketch a few lessons from one of Kay’s pieces but I urge you to read the whole thing.

‘This is what I call “The power of the context” or “Point of view is worth 80 IQ points”. Science and engineering themselves are famous examples, but there are even more striking processes within these large disciplines. One of the greatest works of art from that fruitful period of ARPA/PARC research in the 60s and 70s was the almost invisible context and community that catalysed so many researchers to be incredibly better dreamers and thinkers. That it was a great work of art is confirmed by the world-changing results that appeared so swiftly, and almost easily. That it was almost invisible, in spite of its tremendous success, is revealed by the disheartening fact today that, as far as I’m aware, no governments and no companies do edge-of-the-art research using these principles.’

‘[W]hen I think of ARPA/PARC, I think first of good will, even before brilliant people… Good will and great interest in graduate students as “world-class researchers who didn’t have PhDs yet” was the general rule across the ARPA community.

‘[I]t is no exaggeration to say that ARPA/PARC had “visions rather than goals” and “funded people, not projects”. The vision was “interactive computing as a complementary intellectual partner for people pervasively networked world-wide”. By not trying to derive specific goals from this at the funding side, ARPA/PARC was able to fund rather different and sometimes opposing points of view.

‘The pursuit of Art always sets off plans and goals, but plans and goals don’t always give rise to Art. If “visions not goals” opens the heavens, it is important to find artistic people to conceive the projects.

‘Thus the “people not projects” principle was the other cornerstone of ARPA/PARC’s success. Because of the normal distribution of talents and drive in the world, a depressingly large percentage of organizational processes have been designed to deal with people of moderate ability, motivation, and trust. We can easily see this in most walks of life today, but also astoundingly in corporate, university, and government research. ARPA/PARC had two main thresholds: self-motivation and ability. They cultivated people who “had to do, paid or not” and “whose doings were likely to be highly interesting and important”. Thus conventional oversight was not only not needed, but was not really possible. “Peer review” wasn’t easily done even with actual peers. The situation was “out of control”, yet extremely productive and not at all anarchic.

‘”Out of control” because artists have to do what they have to do. “Extremely productive” because a great vision acts like a magnetic field from the future that aligns all the little iron particle artists to point to “North” without having to see it. They then make their own paths to the future. Xerox often was shocked at the PARC process and declared it out of control, but they didn’t understand that the context was so powerful and compelling and the good will so abundant, that the artists worked happily at their version of the vision. The results were an enormous collection of breakthroughs.

‘Our game is more like art and sports than accounting, in that high percentages of failure are quite OK as long as enough larger processes succeed… [I]n most processes today — and sadly in most important areas of technology research — the administrators seem to prefer to be completely in control of mediocre processes to being “out of control” with superproductive processes. They are trying to “avoid failure” rather than trying to “capture the heavens”.

‘All of these principles came together a little over 30 years ago to eventually give rise to 1500 Altos, Ethernetworked to: each other, Laserprinters, file servers and the ARPAnet, distributed to many kinds of end-users to be heavily used in real situations. This anticipated the commercial availability of this genre by 10-15 years. The best way to predict the future is to invent it.

‘[W]e should realize that many of the most important ARPA/PARC ideas haven’t yet been adopted by the mainstream. For example, it is amazing to me that most of Doug Engelbart’s big ideas about “augmenting the collective intelligence of groups working together” have still not taken hold in commercial systems. What looked like a real revolution twice for end-users, first with spreadsheets and then with Hypercard, didn’t evolve into what will be commonplace 25 years from now, even though it could have. Most things done by most people today are still “automating paper, records and film” rather than “simulating the future”. More discouraging is that most computing is still aimed at adults in business, and that aimed at nonbusiness and children is mainly for entertainment and apes the worst of television. We see almost no use in education of what is great and unique about computer modeling and computer thinking. These are not technological problems but a lack of perspective. Must we hope that the open-source software movements will put things right?

‘The ARPA/PARC history shows that a combination of vision, a modest amount of funding, with a felicitous context and process can almost magically give rise to new technologies that not only amplify civilization, but also produce tremendous wealth for the society. Isn’t it time to do this again by Reason, even with no Cold War to use as an excuse? How about helping children of the world grow up to think much better than most adults do today? This would truly create “The Power of the Context”.’

Note how this story runs contrary to how free market think tanks and pundits describe technological development. The impetus for most of this development came from government funding, not markets.

Also note that every attempt since the 1950s to copy ARPA and JASON (the semi-classified group that partly gave ARPA its direction) in the UK has been blocked by Whitehall. The latest attempt was in 2014 when the Cabinet Office swatted aside the idea. Hilariously, its argument was ‘DARPA has had a lot of failures’, thus demonstrating extreme ignorance about the basic idea — the whole point is you must have failures, and if you don’t have lots of failures then you are failing!

People later claimed that while PARC may have changed the world it never made any money for Xerox. This is ‘absolute bullshit’ (Kay). It made billions from the laser printer alone and overall Xerox made 250 times what they invested in PARC before they went bust. In 1983 they fired Bob Taylor, the manager of PARC and the guy who made it all happen.

‘They hated [Taylor] for the very reason that most companies hate people who are doing something different, because it makes middle and upper management extremely uncomfortable. The last thing they want to do is make trillions, they want to make a few millions in a comfortable way’ (Kay).

Someone finally listened to Kay recently. ‘YC Research’, the research arm of the world’s most successful (by far) technology incubator, is starting to fund people in this way. I am not aware of any similar UK projects though I know that a small network of people are thinking again about how something like this could be done here. If you can help them, take a risk and help them! Someone talk to science minister Jo Johnson but be prepared for the Treasury’s usual ignorant bullshit — ‘what are we buying for our money, and how can we put in place appropriate oversight and compliance?’ they will say!

Why is this relevant to the referendum?

As we ponder the future of the UK-EU relationship shaped amid the farce of modern Whitehall, we should think hard about the ARPA/PARC example: how a small group of people can make a huge breakthrough with little money but the right structure, the right ways of thinking, and the right motives.

Those of us outside the political system thinking ‘we know we can do so much better than this but HOW can we break through the bullshit?’ need to change our perspective and gain 80 IQ points.

This real picture is a metaphor for the political culture: ad hoc solutions that are either bad or don’t scale.

[Photo: a chaotic tangle of wires]

ARPA said ‘Let’s get rid of all the wires’. How do we ‘get rid of all the wires’ and build something different that breaks open the closed and failing political cultures? Winning the referendum was just one step that helps clear away dead wood but we now need to build new things.

The ARPA vision that aligned the artists ‘like little iron filings’ was:

‘Computers are destined to become interactive intellectual amplifiers for everyone in the world universally networked worldwide’ (Licklider).

We need a motivating vision aimed not at tomorrow but at changing the basic wiring of the whole system, a vision that can align ‘the little iron filings’, and then start building for the long-term.

I will go into what I think this vision could be and how to do it another day. I think it is possible to create something new that could scale very fast and enable us to do politics and government extremely differently, as different to today as the internet and PC were to the post-war mainframes. This would enable us to build huge long-term value for humanity in a relatively short time (less than 20 years). To create it we need a process as well suited to the goal as the ARPA/PARC project was and incorporating many of its principles.

We must try to escape the current system with its periodic meltdowns and international crises. These crises move 500-1,000 times faster than the crisis of summer 1914: the July 1914 crisis unfolded over roughly five weeks, while a nuclear-warning decision window is measured in minutes. Our destructive potential is at least a million-fold greater than it was in 1914. Yet we have essentially the same hierarchical command-and-control decision-making systems in place now that could not even cope with 1914 technology and pace. We have dodged nuclear wars by fluke because individuals made snap judgements in minutes. Nobody who reads the history of these episodes can think that this is viable long-term, and we will soon have another wave of innovation to worry about with autonomous robots and genetic engineering. Technology gives us no option but to try to overcome evolved instincts like destroying out-group competitors.

In a previous blog I outlined how the ‘systems management’ approach used to put man on the moon provides principles for a new approach.

*

Ironically, one of the very few people in politics who understood the sort of thinking needed was … Jean Monnet, the architect of the EEC/EU! Monnet understood how to step back from today and build institutions. He worked operationally to prepare the future:

‘If there was stiff competition round the centres of power, there was practically none in the area where I wanted to work – preparing the future.’

Monnet was one of the few people in modern politics who really deserve the label ‘genius’. The story of how he wangled the creation of his institutions through the daily chaos of post-war politics is a lesson to anybody who wants to get things done.

But the institutions he created are in many ways the opposite of what the world needs. Their core operating principle is perpetual centralisation of power in the hands of an all powerful bureaucracy (Commission) and Court (ECJ). Nothing that works well in the world works like this!

Thanks to the prominence of Farage the dominant story among educated people is that those who got us out of the EU want to take us back to the pre-1914 era of hostile competing nation states. Nothing could be further from the truth. The key people in Vote Leave wanted and want not just what is best for Britain but what is best for all humanity. We want more international cooperation, not less. The problem with the EU is not that it is about international cooperation but that it is so bad at it and actually undermines it.

Britain leaving forces those with power to ask: how can all European countries trade freely and cooperate without subscribing to Monnet’s bureaucratic centralism? This will help Europe in the long-term. To those who favour this bureaucratic centralism and uniformity, reflect on the different trajectories of Europe and China post-Renaissance. In Europe, regulatory competition (so Columbus could chase funding in Spain after rejection in Portugal) brought immense gains. In China, centrally directed uniformity led to centuries of stagnation. America’s model of competitive federalism created by the founding fathers has been a far more effective engine of civilisation, growth, and new knowledge than the Monnet-Delors Single Market model.

If Britain were to focus on science and education with huge resources and a new-found seriousness, then this regulatory diversity would help not just Britain but all Europe and the global science community. We could make Britain the best place in the world to be for those who can invent the future. Like Alan Kay and his colleagues, we could create whole new industries. We could call Jeff Bezos and say, ‘OK Jeff, you want a permanent international manned moon base, let’s talk about who does what, but not with that old rocket technology.’ No country on earth funds science as well as we already know it could be done — that is something for Britain to do that would create real long-term value for humanity, instead of the ‘punching above our weight’ and ‘special relationship’ bullshit that passes for strategy in London. How we change our domestic institutions is within our power and will have much, much greater influence on our long-term future than whatever deal is botched together with Brussels. We have the resources. But can we break the system open? If we don’t then we’re likely to go down the path we were already going down inside the EU, like the deluded Norma Desmond in Sunset Boulevard claiming ‘I am big, it’s the pictures that got small.’

*

Vote Leave and ‘good will’

Although Vote Leave was enmeshed in a sort of collective lunacy, we managed, barely, to fend it off from the inner workings of the campaign. Much of my job (sadly) was just trying to maintain a cordon around the core team so they could deliver the campaign with as little disruption as possible. We managed this because among the core people we had great good will. The stories of the campaign focus on the lunacy, but the people who really made it work remember the good will.

A year ago tonight I was sitting alone in a room thinking ‘we’ve won, now…’ when the walls started rumbling. At first I couldn’t make it out; then, as Tim Shipman tells the story in his definitive book on the campaign, I heard ‘Dom, Dom, DOM’ — the team had declared victory. I went next door…

Thanks to everybody who sacrificed something. As I said that night and as I said in my long blog on the campaign, I’ve been given credit I don’t deserve and which rightly belongs to others — Cleo Watson, Richard ‘Ricardo’ Howell, Brother Starkie, Oliver Lewis, Lord Suart et al. Now, let’s think about what should come next…


Watch Alan Kay explain how to invent the future HERE and HERE.


Ps. Kay also points out that the real computer revolution won’t happen until people fulfil the original vision of enabling children to use this powerful way of thinking:

‘The real printing revolution was a qualitative change in thought and argument that lagged the hardware inventions by almost two centuries. The special quality of computers is their ability to rapidly simulate arbitrary descriptions, and the real computer revolution won’t happen until children can learn to read, write, argue and think in this powerful new way. We should all try to make this happen much sooner than 200 or even 20 more years!’

Almost nobody in education policy is aware of the educational context for the ARPA/PARC project which also speaks volumes about the abysmal field of ‘education research/policy’.

* Re the US literacy statistic, cf. A First Look at the Literacy of America’s Adults in the 21st Century, National Assessment of Adult Literacy, U.S. Dept of Education, NCES 2006.


Complexity and Prediction Part V: The crisis of mathematical paradoxes, Gödel, Turing and the basis of computing

Before the referendum I started a series of blogs and notes exploring the themes of complexity and prediction. This was part of a project with two main aims: first, to sketch a new approach to education and training in general but particularly for those who go on to make important decisions in political institutions and, second, to suggest a new approach to political priorities in which progress with education and science becomes a central focus for the British state. The two are entangled: progress with each will hopefully encourage progress with the other.

I was working on this paper when I suddenly got sidetracked by the referendum and have just looked at it again for the first time in about two years.

The paper concerns a fascinating episode in the history of ideas that saw the most esoteric and impractical field, mathematical logic, spawn a revolutionary technology, the modern computer. NB. a lesson for science funders: it is a great mistake to cut funding for theory and assume that you’ll get more bang for your buck from ‘applications’.

Apart from its inherent fascination, knowing something of the history is helpful for anybody interested in the state-of-the-art in predicting complex systems, which involves the intersection of different fields including maths, computer science, economics, cognitive science, and artificial intelligence. The books on it are either technical, and therefore inaccessible to ~100% of the population, or non-chronological, so it is impossible for someone like me to get a clear picture of how the story unfolded.

Further, there are few if any very deep ideas in maths or science that are so misunderstood and abused as Gödel’s results. As Alan Sokal, author of the brilliant hoax exposing post-modernist academics, said, ‘Gödel’s theorem is an inexhaustible source of intellectual abuses.’ I have tried to make clear some of these using the best book available, by Franzén, which explains why almost everything you read about it is wrong. If even Stephen Hawking can cock it up, the rest of us should be particularly careful.
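
For orientation, a minimal careful statement of the First Incompleteness Theorem (roughly following Franzén) is: if F is a consistent formal system that is effectively axiomatised and incorporates a modest amount of arithmetic, then there is a sentence G in the language of F such that F proves neither G nor not-G. Every clause matters: the hypotheses are exactly what popular accounts drop. And it is a theorem about formal systems of arithmetic, nothing more: by itself it licenses no conclusions about minds, physics or ‘the limits of all knowledge’, which are precisely the abuses Franzén dismantles.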

I sketched these notes as I tried to pull together the story from many different books. I hope they are useful, particularly for some 15-25 year-olds who like chronological accounts of ideas. I tried to put the notes together in the way that I wish I had been able to read them at that age. I tried hard to eliminate errors but they are inevitable given how far I am from being competent to write about such things. I wish someone who is competent would do it properly. It would take time I don’t now have to go through and finish it the way I originally intended, so I will just post it as it was 2 years ago when I got calls saying ‘about this referendum…’

The only change I think I have made since May 2015 is to shove in some notes from a great essay later that year by the man who wrote the textbook on quantum computers, Michael Nielsen, which would be useful to read as an introduction or instead, HERE.

As always on this blog there is not a single original thought and any value comes from the time I have spent condensing the work of others to save you the time. Please leave corrections in comments.

The PDF of the paper is HERE (amended since first publication to correct an error, see Comments).


‘Gödel’s achievement in modern logic is singular and monumental – indeed it is more than a monument, it is a landmark which will remain visible far in space and time.’ John von Neumann.

‘Einstein had often told me that in the late years of his life he has continually sought Gödel’s company in order to have discussions with him. Once he said to me that his own work no longer meant much, that he came to the Institute merely in order to have the privilege of walking home with Gödel.’ Oskar Morgenstern (co-author with von Neumann of the first major work on Game Theory).

‘The world is rational’, Kurt Gödel.

Specialist maths schools – some facts

The news reports that the Government will try to promote more ‘specialist maths schools’ similar to the King’s College and Exeter schools.

The idea for these schools came when I read about Perelman, the Russian mathematician who in 2003 suddenly posted on arXiv a solution to the Poincaré Conjecture, one of the most important open problems in mathematics. Perelman went to one of the famous Russian specialist maths schools that were set up by one of the most important mathematicians of the 20th Century, Kolmogorov.
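
(For reference, since popular accounts rarely state it: the conjecture says that every simply connected, closed three-dimensional manifold is homeomorphic to the three-dimensional sphere. Roughly, any finite three-dimensional space without boundary in which every loop can be shrunk to a point is, topologically, a 3-sphere.)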

I thought – a) given the fall in standards in maths and physics because of the corruption of the curriculum and exams started by the Tories and continued by Blair, b) the way in which proper teaching of advanced maths and physics is increasingly limited to a tiny number of schools, many of which are private, and c) the huge gains for our civilisation from the proper education of the unusual small fraction of children who are very gifted in maths and physics – why not try to set up something similar?

Gove’s team therefore pushed the idea through the DfE. Dean Acheson, US Secretary of State, said, ‘I have long been the advocate of the heretical view that, whatever political scientists might say, policy in this country is made, as often as not, by the necessity of finding something to say for an important figure committed to speak without a prearranged subject.’ This is quite true (it also explains a lot about how Monnet created the ECSC and EEC). Many things that the Gove team did relied on this. We prepared the maths school idea and waited our chance. Sure enough, the word came through from Downing Street – ‘the Chancellor needs an announcement for the Budget, something on science’. We gave them this, he announced it, and bureaucratic resistance was largely broken.

If interested in some details, then look at pages 75ff of my 2013 essay for useful links. Other countries have successfully pursued similar ideas, including France for a couple of centuries and Singapore recently.

One of the interesting aspects of trying to get them going was the way in which a) the official ‘education world’ loathed not just the idea but also the idea about the idea – they hated thinking about ‘very high ability’ and specialist teaching; b) when I visited maths departments they all knew about these schools because university departments in the West employ a large number of people who were educated in these schools but they all said ‘we can’t help you with this even though it’s a good idea because we’d be killed politically for supporting “elitism” [fingers doing quote marks in the air], good luck I hope you succeed but we’ll probably attack you on the record.’ They mostly did.

The only reason why the King’s project happened is because Alison Wolf made it a personal crusade to defeat all the entropic forces that elsewhere killed the idea (with the exception of Exeter). Without her it would have had no chance. I found few equivalents elsewhere and where I did they were smashed by their vice-chancellors.

A few points…

1) Kolmogorov-type schools are a particular thing. They undoubtedly work. But they are aimed at a small fraction of the population. Given what the products of these schools go on to contribute to human civilisation they are extraordinarily cheap. They are also often a refuge for children who have a terrible time in normal schools. If they were as different to normal kids in a negative sense as they are in a positive sense then there would be no argument about whether they have ‘special needs’.

2) Don’t believe the rubbish in things like Gladwell’s book about maths and IQ. There is now very good data on this, particularly in the form of the unprecedented SMPY multi-decade study. Even a short crude test at 11-13 gives very good predictions of who is likely to be very good at maths/physics. Further, there is a strong correlation between performance at the top 1% / 1:1,000 / 1:10,000 level and many outcomes in later life such as getting a doctorate, a patent, writing a paper in Science or Nature, high income, health etc. The education world has been ~100% committed to rejecting the science of this subject, though this resistance is cracking.

This chart shows the SMPY results (maths ability at 13) for the top 1% of maths ability broken down into quartiles 1-4: the top quartile of the top 1% clearly outperforms the rest on tenure, publication and patent rates.

[Chart: SMPY results for the top 1% by quartile: tenure, publication and patent rates]

3) The arguments for Kolmogorov schools do not translate to arguments for selection in general – i.e. they are specific to the subject. It is the structure of maths and the nature of the brain that allows very young people to make rapid progress. These features are not there for English, history and so on. I am not wading into the grammar school argument on either side – I am just pointing out that the arguments for such maths schools are clear but should not be confused with the wider arguments over selection that involve complicated trade-offs. People on both sides of the grammar debate should, if rational, be able to support this policy.

4) These schools are not ‘maths hot houses’. Kolmogorov took the children to see Shakespeare plays, music and so on. It is important to note that teaching English and other subjects is normal – other than that you are obviously dealing with unusually bright children. If these children are not in specialist schools, then the solution is a) specialist maths teaching (including help from university-level mathematicians) and b) keeping other aspects of their education normal. Arguably the greatest mathematician in the world, Terry Tao, had wise parents and enjoyed this combination. So it is of course possible to educate such children without specialist schools but the risks are higher that either parents or teachers cock it up.

5) Extended wisely across Britain they could have big benefits not just for those children and elite universities but they could also play an important role in raising standards generally in their area by being a focus for high quality empirical training. One of the worst aspects of the education world is the combination of low quality training and resistance to experiments. This has improved since the Gove reforms but the world of education research continues to be dominated by what Feynman called ‘cargo cult science’.

6) We also worked with a physicist at Cambridge, Professor Mark Warner, to set up a project to improve the quality of 6th form physics. This project has been a great success thanks to his extraordinary efforts and the enthusiasm of young Cambridge physicists. Thousands of questions have been answered on their online platform from many schools. This project gives kids the chance to learn proper problem solving – that is the core skill that the corruption of the exam system has devalued and increasingly pushed into a ghetto of private education. Needless to say, the education world was also hostile to this project. Anything that suggests that we can do much much better is generally hated by all elements of the bureaucracy, including even elements such as the Institute of Physics that supposedly exist to support exactly this. A handful of officials helped us push through projects like this and of course most of them have since left Whitehall in disgust; thus does the system protect itself against improvement while promoting the worst people.

7) This idea connects to a broader idea. Kids anywhere in the state system should be able to apply some form of voucher to buy high quality advanced teaching from outside their school for a wide range of serious subjects from music to physics.

8) One of the few projects that the Gove team tried and failed to get going was to break the grip of GCSEs on state schools (Cameron sided with Clegg and although we cheated a huge amount through the system we hit a wall on this project). It is extremely wasteful for the system, and boring for many children, to be focused on existing exams that do not develop serious skills. Maths already has the STEP paper. There should be equivalents in other subjects at age 16. There is nothing that the bureaucracy will fight harder than this and it will probably only happen if excellent private schools decide to do it themselves and political pressure then forces the Government to allow state schools to do them.

Any journalists who want to speak to people about this should try to speak to Dan Abramson (the head of the King’s school), Alison Wolf, or Alexander Borovik (a mathematician at Manchester University who attended one of these schools in Russia).

It is a hopeful sign that No10 is backing this idea but of course they will face determined resistance. It will only happen if at least one special adviser in the DfE makes it a priority and has the support of No10 so officials know they might as well fight about other things…


This is probably the most interesting comment ever left on this blog and it is much more interesting than the blog itself, so I have copied it below. It is by Borovik, mentioned above, who attended one of these schools in Russia and knows many people who attended similar schools…

‘There is one more aspect of (high level) selective specialist mathematics education that is unknown outside the professional community of mathematicians.

I am not an expert on “gifted and talented” education. On the other hand, I spent my life surrounded by people who got exclusive academically selective education in mathematics and physics, whether it was in the Lavrentiev School in Siberia, or Lycée Louis-le-Grand in Paris, or Fazekas in Budapest, or Galatasaray Lisesi (aka Lycée de Galatasaray) in Istanbul — the list can be continued.

The schools have nothing in common, with the exception of being unique, each one in its own way.

I had research collaborators and co-authors from each of the schools that I listed above. Why was it so easy for us to find a common language?

Well, the explanation can be found in the words of Stanislas Dehaene, the leading researcher in the neurophysiology of mathematical thinking:

“We have to do mathematics using the brain which evolved 30 000 years ago for survival in the African savanna.”

In humans, the speed of totally controlled mental operations is at most 16 bits per second. Standard school maths education trains children to work at that speed.

The visual processing module in the brain crunches 10,000,000,000 bits per second.

I offer a simple thought experiment to the readers who have some knowledge of school level geometry.

Imagine that you are given a triangle; mentally rotate it about the longest side. What is the resulting solid of revolution? Describe it. And then try to reflect: where did the answer come from?

The best kept secret of mathematics: it is done by the subconscious.

Mathematics is a language for communication with the subconscious.

There are four conversants in a conversation between two mathematicians: two people and their two “inner”, “intuitive” brains.

When mathematicians talk about mathematics face-to-face, they
* frequently use language which is very fluid and informal;
* improvised on the spot;
* includes pauses (for a lay observer—very strange and awkwardly timed) for absorption of thought;
* has almost nothing in common with standardised mathematics “in print”.

A mathematician is trying to convey a message from his “intuitive brain” directly to his colleague’s “intuitive brain”.

Alumni of high-level specialist mathematics schools are “birds of a feather” because they have been initiated into this mode of communication at the most susceptible age, as teenagers, at the peak intensity of the socialisation / group-identity-shaping stream of their self-actualisation.

In that aspect, mathematics is not much different from the arts. Part of the skill that children get in music schools, acting schools, dancing schools, and art schools is the ability to talk about music, acting, dancing, art with the intuitive, subconscious parts of their minds — and with their peers, in a secret language which is not recognised (and perhaps not even registered) by the uninitiated.

However, specialist mathematics schools form a continuous spectrum, from ordinary schools with a standard syllabus but good maths teachers, to the likes of Louis-le-Grand and Fazekas. My comments apply mostly to the top end of the spectrum. I have a feeling that the Green Paper is less ambitious and does not call for setting up mathematics boarding schools using Chetham’s School of Music as a model. However, middle-tier maths schools could also be very useful — if they are set up with realistic expectations, properly supported, and have strong connections with universities.’

A Borovik
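
PS. For anyone who tried Borovik’s rotation experiment and wants to check their mental image: the longest side faces the triangle’s largest angle, so the two angles on that side are both acute and the altitude from the opposite vertex lands inside that side. The solid is therefore two cones joined base-to-base along the circle swept by the opposite vertex. Most people ‘see’ this at once, long before they could derive it step by step, which is exactly his point.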