On the referendum #33: High performance government, ‘cognitive technologies’, Michael Nielsen, Bret Victor, & ‘Seeing Rooms’

‘People, ideas, machines — in that order!’ Colonel Boyd.

‘The main thing that’s needed is simply the recognition of how important seeing is, and the will to do something about it.’ Bret Victor.

‘[T]he transfer of an entirely new and quite different framework for thinking about, designing, and using information systems … is immensely more difficult than transferring technology.’ Robert Taylor, one of the handful of people most responsible for the creation of the internet and personal computing, and an inspiration to Bret Victor.

‘[M]uch of our intellectual elite who think they have “the solutions” have actually cut themselves off from understanding the basis for much of the most important human progress.’ Michael Nielsen, physicist. 

Introduction

This blog looks at an intersection of decision-making, technology, high performance teams and government. It sketches some ideas of physicist Michael Nielsen about cognitive technologies and of computer visionary Bret Victor about the creation of dynamic tools to help understand complex systems and ‘argue with evidence’, such as ‘tools for authoring dynamic documents’, and ‘Seeing Rooms’ for decision-makers — i.e. rooms designed to support decisions in complex environments. It compares normal Cabinet rooms, such as that used in summer 1914 or October 1962, with state-of-the-art Seeing Rooms. There is very powerful feedback between: a) creating dynamic tools to see complex systems deeper (to see inside, see across time, and see across possibilities), thus making it easier to work with reliable knowledge and interactive quantitative models, semi-automating error-correction etc, and b) the potential for big improvements in the performance of political and government decision-making.

It is relevant to Brexit and anybody thinking ‘how on earth do we escape this nightmare’ but 1) these ideas are not at all dependent on whether you support or oppose Brexit, about which reasonable people disagree, and 2) they are generally applicable to improving decision-making — for example, they are relevant to problems like ‘how to make decisions during a fast moving nuclear crisis’, which I blogged about recently, or, if you are a journalist, ‘what future media could look like to help improve debate of politics’. One of the tools Nielsen discusses is a tool to make memory a choice by embedding learning in long-term memory rather than, as it is for almost all of us, leaving it to accident. I know from my days working on education reform in government that it’s almost impossible to exaggerate how little those who work on education policy think about ‘how to improve learning’.

Fields make huge progress when they move from stories (e.g Icarus) and authority (e.g ‘witch doctor’) to evidence/experiment (e.g physics, wind tunnels) and quantitative models (e.g design of modern aircraft). Political ‘debate’ and the processes of government are largely what they have always been: conflict over stories and authorities, where almost nobody even tries to keep track of the facts/arguments/models they’re supposedly arguing about, or tries to learn from evidence, or tries to infer useful principles from examples of extreme success/failure. We can see much better than people could in the past how to shift the processes of government towards ‘partially rational discussion over facts and models and learning from the best examples of organisational success’. But one of the most fundamental and striking aspects of government is that practically nobody involved in it has the faintest interest in or knowledge of how to create high performance teams to make decisions amid uncertainty and complexity. This blindness is connected to another fundamental fact: critical institutions (including the senior civil service and the parties) are programmed to fight to stay dysfunctional, they fight to stay closed and avoid learning about high performance, they fight to exclude the most able people.

I wrote about some reasons for this before the referendum (cf. The Hollow Men). The Westminster and Whitehall response was along the lines of ‘natural party of government’, ‘Rolls Royce civil service’ blah blah. But the fact that Cameron, Heywood (the most powerful civil servant) et al did not understand many basic features of how the world works is why I and a few others gambled on the referendum — we knew that the systemic dysfunction of our institutions and the influence of grotesque incompetents provided an opportunity for extreme leverage. 

Since then, after three years in which the parties, No10 and the senior civil service have imploded (after doing the opposite of what Vote Leave said should happen on every aspect of the negotiations) one thing has held steady — Insiders refuse to ask basic questions about the reasons for this implosion, such as: why did Heywood not even put together a sane regular weekly meeting schedule, and why did ministers not even notice all the tricks with agendas/minutes etc? How are decisions really made in No10? Why are so many of the people below some cognitive threshold for understanding basic concepts (cf. the current GATT A24 madness)? What does it say about Westminster that both the Adonis-Remainers and the Cash-ERGers have become more detached from reality, while a large section of the best-educated have effectively run information operations against their own brains to convince themselves of fairy stories about Facebook, Russia and Brexit?

It’s a mix of amusing and depressing — but not surprising to me — to hear Heywood explain HERE how the British state decided it couldn’t match the resources of a single multinational company or a single university in funding people to think about what the future might hold, which is linked to his failure to make serious contingency plans for losing the referendum. And of course Heywood claimed after the referendum that we didn’t need to worry about the civil service because on project management it has ‘nothing to learn’ from the best private companies. The elevation of Heywood in the pantheon of SW1 is the elevation of the courtier-fixer at the expense of the thinker and the manager — the universal praise for him recently is a beautifully eloquent signal that those in charge are the blind leading the blind and SW1 has forgotten skills of high value, the skills of public servants such as Alanbrooke or Michael Quinlan.

This blog is hopefully useful for some of those thinking about a) improving government around the world and/or b) ‘what comes after the coming collapse and reshaping of the British parties, and how to improve drastically the performance of critical institutions?’

Some old colleagues have said ‘Don’t put this stuff on the internet, we don’t want the second referendum mob looking at it.’ Don’t worry! Ideas like this have to be forced down people’s throats practically at gunpoint. Silicon Valley itself has barely absorbed Bret Victor’s ideas so how likely is it that there will be a rush to adopt them by the world of Blair and Grieve?! These guys can’t tell the difference between courtier-fixers and people with models for truly effective action like General Groves (HERE). Not one in a thousand will read a 10,000 word blog on the intersection of management and technology and the few who do will dismiss it as the babbling of a deluded fool, they won’t learn any more than they learned from the 2004 referendum or from Vote Leave. And if I’m wrong? Great. Things will improve fast and a second referendum based on both sides applying lessons from Bret Victor would be dynamite.

NB. Bret Victor’s project, Dynamic Land, is a non-profit. For an amount of money that a government department like the Department for Education loses weekly without any minister realising it’s lost (in the millions per week in my experience because the quality of financial control is so bad), it could provide crucial funding for Victor and help itself. Of course, any minister who proposed such a thing would be told by officials ‘this is illegal under EU procurement law and remember minister that we must obey EU procurement law forever regardless of Brexit’ — something I know from experience officials say to ministers whether it is legal or not when they don’t like something. And after all, ministers meekly accepted the Kafka-esque order from Heywood to prioritise duties of goodwill to the EU under A50 over preparations to leave A50, so habituated had Cameron’s children become to obeying the real deputy prime minister…

Below are 4 sections:

  1. The value found in intersections of fields
  2. Some ideas of Bret Victor
  3. Some ideas of Michael Nielsen
  4. A summary

*

1. Extreme value is often found in the intersection of fields

The legendary Colonel Boyd (he of the ‘OODA loop’) would shout at audiences ‘People, ideas, machines — in that order!’ Fundamental political problems we face require large improvements in the quality of all three and, harder, systems to integrate all three. Such improvements require looking carefully at the intersection of roughly five entangled areas of study. Extreme value is often found at such intersections.

  • Explore what we know about the selection, education and training of people for high performance (individual/team/organisation) in different fields. We should be selecting people much deeper in the tails of the ability curve — people who are +3 (~1:1,000) or +4 (~1:30,000) standard deviations above average on intelligence, relentless effort, operational ability and so on (now practically entirely absent from the ’50 most powerful people in Britain’). We should train them in the general art of ‘thinking rationally’ and making decisions amid uncertainty (e.g Munger/Tetlock-style checklists, exercises on the SlateStarCodex blog). We should train them in the practical reasons for normal ‘mega-project failure’ and in case studies such as the Manhattan Project (General Groves), ICBMs (Bernard Schriever), Apollo (George Mueller), and ARPA-PARC (Robert Taylor) that illustrate how the ‘unrecognised simplicities’ of high performance bring extreme success, and we should make them work on such projects before they are responsible for billions, rather than putting people like Cameron in charge (after no experience other than bluffing through PPE then PR). NB. China’s leaders have studied these episodes intensely while American and British institutions have actively ‘unlearned’ these lessons.
  • Explore the frontiers of the science of prediction across different fields from physics to weather forecasting to finance and epidemiology. For example, ideas from physics about early warning signals in physical systems have application in many fields, including questions like: to what extent is it possible to predict which news will persist over different timescales, or to predict wars from news and social media? There is interesting work combining game theory, machine learning, and Red Teams to predict security threats and improve penetration testing (physical and cyber). The Tetlock/IARPA project showed dramatic performance improvements in political forecasting are possible, contra what people such as Kahneman had thought possible. A recent Nature article by Duncan Watts explained fundamental problems with the way normal social science treats prediction and suggested new approaches — which have been almost entirely ignored by mainstream economists/social scientists. There is vast scope for applying ideas and tools from the physical sciences and data science/AI — largely ignored by mainstream social science, political parties, government bureaucracies and media — to social/political/government problems (as Vote Leave showed in the referendum, though this has been almost totally obscured by all the fake news: clue — it was not ‘microtargeting’).
  • Explore technology and tools. For example, Bret Victor’s work and Michael Nielsen’s work on cognitive technologies. The edge of performance in politics/government will be defined by teams that can combine the ancient ‘unrecognised simplicities of high performance’ with edge-of-the-art technology. No10 is decades behind the pace in old technologies like TV, doesn’t understand simple tools like checklists, and is nowhere with advanced technologies.
  • Explore the frontiers of communication (e.g crisis management, applied psychology). Technology enables people to improve communication with unprecedented speed, scale and iterative testing. It also allows people to wreak chaos with high leverage. The technologies are already beyond the ability of traditional centralised government bureaucracies to cope with. They will develop rapidly such that most such centralised bureaucracies lose more and more control while a few high performance governments use the leverage the technologies bring (cf. China’s combination of mass surveillance, AI, genetic identification, cellphone tracking etc as they desperately scramble to keep control). The better educated think that psychological manipulation is something that happens to ‘the uneducated masses’ but they are extremely deluded — in many ways people like FT pundits are much easier to manipulate, their education actually makes them more susceptible to manipulation, and historically they are the ones who fall for things like Russian fake news (cf. the Guardian and New York Times on Stalin/terror/famine in the 1930s) just as now they fall for fake news about fake news. Despite the centrality of communication to politics it is remarkable how little attention Insiders pay to what works — never mind the question ‘what could work much better?’. The fact that so much of the media believes total rubbish about social media and Brexit shows that the media is incapable of analysing the intersection of politics and technology, but, although it is obviously bad that the media disinforms the public, the only rational planning assumption is that this problem will continue and even get worse. The media cannot explain well even the use of TV or traditional polling, though these have been extremely important for over 70 years, and there is no trend towards improvement, so a sound planning assumption is surely that the media will do even worse with new technologies and data science. This will provide large opportunities for good and evil. A new approach able to adapt to the environment an order of magnitude faster than now would disorient political opponents (desperately scrolling through Twitter) to such a degree — in Boyd’s terms it would ‘collapse their OODA loops’ — that it could create crucial political space for focus on the extremely hard process of rewiring government institutions, which now seems impossible for Insiders given their psychological/operational immersion in the hysteria of 24 hour rolling news and the constant crises generated by dysfunctional bureaucracies.
  • Explore how to re-program political/government institutions at the apex of decision-making authority so that a) people are more incentivised to optimise things we want them to optimise, like error-correction and predictive accuracy, and less incentivised to optimise bureaucratic process, prestige, and signalling as our institutions now do; b) institutions are incentivised to build high performance teams rather than make this practically illegal at the apex of government; and c) we have ‘immune systems’ based on decentralisation and distributed control to minimise the inevitable failures of even the best people and teams.

Example 1: Red Teams and pre-mortems can combat groupthink and normal cognitive biases but they are practically nowhere in the formal structure of governments. There is huge scope for a small, extremely elite, Parliament-mandated Red Team operating next to, and in some senses above, the Cabinet Office to ensure diversity of opinions, fight groupthink and other standard biases, make sure lessons are learned and so on. Cost: a few million that it would recoup within weeks by stopping blunders.

Example 2: prediction tournaments/markets could improve policy and project management, with people able to ‘short’ official delivery timetables — imagine being able to short Grayling’s transport announcements, for example. In many areas new markets could help — e.g markets to allow shorting of house prices to dampen bubbles, as Chris Dillow and others have suggested. The way in which the IARPA/Tetlock work has been ignored in SW1 is proof that MPs and civil servants are not actually interested in — or incentivised to be interested in — who is right, who is actually an ‘expert’, and so on. There are tools available if new people do want to take these things seriously. Cost: a few million at most, possibly thousands, that it would recoup within a year by stopping blunders.
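
To make the mechanics concrete: one standard way to run such a market is Hanson’s logarithmic market scoring rule (LMSR), the usual automated market maker in the prediction-market literature. Here is a minimal sketch in Python — the question, liquidity parameter and trades are purely illustrative, not a proposal for a specific system:

```python
import math

class LMSRMarket:
    """Toy prediction market for a yes/no question, e.g.
    'Will department X hit its official delivery date?'
    Implements Hanson's logarithmic market scoring rule."""

    def __init__(self, liquidity=100.0):
        self.b = liquidity        # higher b = deeper market, slower price moves
        self.q = [0.0, 0.0]       # outstanding shares: [no, yes]

    def _cost(self, q):
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, outcome):
        """Probability the market currently assigns to an outcome."""
        total = sum(math.exp(x / self.b) for x in self.q)
        return math.exp(self.q[outcome] / self.b) / total

    def trade(self, outcome, shares):
        """Buy shares in an outcome (negative shares = short).
        Returns the trader's cost; losses fund accurate forecasters."""
        new_q = self.q[:]
        new_q[outcome] += shares
        cost = self._cost(new_q) - self._cost(self.q)
        self.q = new_q
        return cost

market = LMSRMarket()
print(market.price(1))               # 0.5: no information yet
market.trade(1, -50)                 # someone 'shorts' the official timetable
print(round(market.price(1), 2))     # ~0.38: the market now signals scepticism
```

A standing market maker like this has a bounded worst-case subsidy (at most b·ln 2 for a binary question), one reason such markets are cheap to run relative to the blunders they could flag.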

Example 3: we need to consider projects that could bootstrap new international institutions that help solve more general coordination problems such as the risk of accidental nuclear war. The most obvious example of such a project I can think of is a manned international lunar base, which would be useful for a) basic science, b) the practical purpose of building urgently needed near-Earth infrastructure for space industrialisation, and c) forcing the creation of new practical international institutions for cooperation between Great Powers. George Mueller’s team that put man on the moon in 1969 developed a plan for such a base that would have been built by now had it not been tragically abandoned in the 1970s. Jeff Bezos is explicitly trying to revive the Mueller vision and Britain should be helping him do it much faster. The old institutions like the UN and EU — built on early 20th Century assumptions about the performance of centralised bureaucracies — are incapable of solving global coordination problems. It seems to me that institutions with the qualities we need are much more likely to emerge out of solving big problems than out of think tank papers about reforming existing institutions. Cost = 10s/100s of billions, return = trillions, or near infinite if shifting our industrial/psychological frontiers into space drastically reduces the chances of widespread destruction.

A) Some fields have fantastic predictive models and there is a huge amount of high quality research, though there is a lot of low-hanging fruit in bringing methods from one field to another.

B) We know a lot about high performance including ‘systems management’ for complex projects but very few organisations use this knowledge and government institutions overwhelmingly try to ignore and suppress the knowledge we have.

C) Some fields have amazing tools for prediction and visualisation but very few organisations use these tools and almost nobody in government (where colour photocopying is a major challenge).

D) We know a lot about successful communication but very few organisations use this knowledge and most base action on false ideas. E.g political parties spend millions on spreading ideas but almost nothing on thinking about whether the messages are psychologically compelling or whether their methods/distribution work, and TV companies spend billions on news but almost nothing on understanding what science says about how to convey complex ideas — hence you see massively overpaid presenters like Evan Davis babbling metaphors like ‘economic takeoff’ in front of an airport while his crew films a plane ‘taking off’, or ‘the economy down the plughole’ with pictures of — a plughole.

E) Many thousands worldwide are thinking about all sorts of big government issues but very few can bring them together into coherent plans that a government can deliver and there is almost no application of things like Red Teams and prediction markets. E.g it is impossible to describe the extent to which politicians in Britain do not even consider ‘the timetable and process for turning announcement X into reality’ as something to think about — for people like Cameron and Blair the announcement IS the only reality and ‘management’ is a dirty word for junior people to think about while they focus on ‘strategy’. As I have pointed out elsewhere, it is fascinating that elite business schools have been collecting billions in fees to teach their students WRONGLY that operational excellence is NOT a source of competitive advantage, so it is no surprise that politicians and bureaucrats get this wrong.

But I can see almost nobody integrating the very best knowledge we have about A+B+C+D with E and I strongly suspect there are trillion dollar bills lying on the ground that could be grabbed for trivial cost — trillion dollar bills that people with power are not thinking about and are incentivised not to think about. I might be wrong but I would remind readers that Vote Leave was itself a bet on this proposition being right and I think its success should make people update their beliefs on the competence of elite political institutions and the possibilities for improvement.

Here I want to explore one set of intersections — the ideas of Bret Victor and Michael Nielsen.

*

2. Bret Victor: Cognitive technologies, dynamic tools, interactive quantitative models, Seeing Rooms — making it as easy to insert facts, data, and models in political discussion as it is to insert emoji 

In the 1960s visionaries such as Joseph Licklider, Robert Taylor and Doug Engelbart developed a vision of networked interactive computing that provided the foundation not just for new technologies (the internet, PC etc) but for whole new industries. Licklider, Sutherland, Taylor et al provided a model (ARPA) for how science funding can work. Taylor provided a model (PARC) of how to manage a team of extremely talented people who turned a profound vision into reality. The original motivation for the vision of networked interactive computing was to help humans make good decisions in a complex world — ‘augmenting human intelligence’ and ‘man-machine symbiosis’. This story shows how to make big improvements in the world with very few resources if they are structured right: PARC involved ~25 key people and tens of millions of dollars over roughly a decade and generated trillions of dollars in value. If interested in the history and the super-productive processes behind the success of ARPA-PARC read THIS.

It’s fascinating that in many ways the original 1960s Licklider vision has still not been implemented. The Silicon Valley ecosystem developed parts of the vision but not others, for complex reasons I don’t understand (cf. The Future of Programming). One of those trying to implement the parts of the vision that remain unbuilt is Bret Victor. Victor is a rare thing: a genuine visionary in the computing world, according to some of those ‘present at the creation’ of ARPA-PARC such as Alan Kay. His ideas lie at critical intersections between the fields sketched above. Watch talks such as Inventing on Principle and Media for Thinking the Unthinkable and explore his current project, Dynamic Land, in Berkeley.

Victor has described, and now demonstrates in Dynamic Land, how existing tools fail and what is possible. His core principle is that creators need an immediate connection to what they are creating. Current programming languages and tools are mostly based on very old ideas from before computers even had screens, when there was essentially no interactivity — they date from the era of punched cards. They do not allow users to interact dynamically. New dynamic tools enable us to think previously unthinkable thoughts and allow us to see and interact with complex systems: to see inside, see across time, and see across possibilities.

I strongly recommend spending a few days exploring his whole website but I will summarise below his ideas on two things:

  1. His ideas about how to build new dynamic tools for working with data and interactive models.
  2. His ideas about transforming the physical spaces in which teams work so that dynamic tools are embedded in their environment — people work inside a tool.

Applying these ideas would radically improve how people make decisions in government and how the media reports politics/government.

Language and writing were cognitive technologies created thousands of years ago which enabled us to think previously unthinkable thoughts. Mathematical notation did the same over the past 1,000 years. For example, take a mathematics problem described by the 9th Century mathematician al-Khwarizmi (who gave us the word algorithm):

[Image: al-Khwarizmi’s problem stated in words]

Once modern notation was invented, this could be written instead as:

x² + 10x = 39
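
To see what the notation buys you, here is the same problem worked through in modern symbols — a short sketch of the standard ‘completing the square’ method that al-Khwarizmi’s prose describes step by step (he considered only the positive root):

```latex
\begin{align*}
x^2 + 10x &= 39 \\
x^2 + 10x + 25 &= 39 + 25 && \text{add } (10/2)^2 \text{ to both sides} \\
(x + 5)^2 &= 64 \\
x + 5 &= 8 && \text{taking the positive root} \\
x &= 3
\end{align*}
```

A few lines of symbols replace a paragraph of prose; that compression is the cognitive technology.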

Michael Nielsen uses a similar analogy. Descartes and Fermat demonstrated that equations can be represented on a diagram and a diagram can be represented as an equation. This was a new cognitive technology, a new way of seeing and thinking: algebraic geometry. Changes to the ‘user interface’ of mathematics were critical to its evolution and allowed us to think unthinkable thoughts (Using Artificial Intelligence to Augment Human Intelligence, see below).

[Image: illustration from Nielsen’s ‘Using Artificial Intelligence to Augment Human Intelligence’]

Similarly, in the 18th Century data graphics were created to demonstrate trade figures. Before this, people could only read huge tables. This is the first data graphic:

[Image: the first data graphic, charting 18th Century trade figures]

The Jedi of data visualisation, Edward Tufte, describes this extraordinary graphic of Napoleon’s invasion of Russia as ‘probably the best statistical graphic ever drawn’. It shows the losses of Napoleon’s army: starting from the Polish-Russian border, the thick band shows the size of the army at each position; the path of Napoleon’s winter retreat from Moscow is shown by the dark lower band, which is tied to temperature and time scales (you can see some of the disastrous icy river crossings famously described by Tolstoy). NB. The Cabinet makes life-and-death decisions now with technology far inferior to this 19th Century graphic (see below).

[Image: Minard’s graphic of Napoleon’s invasion of Russia]

Contemporary scientific papers represent extremely compressed information conveyed through a very old-fashioned medium, the scientific journal. Printed journals are centuries old but the ‘modern’ internet versions are usually similarly static. They do not show the behaviour of systems in a visual interactive way, so we cannot see the connections between changing values in the models and changes in the behaviour of the system. There is no immediate connection. Everything is pretty much the same as a pencil-and-paper version of the paper. In Media for Thinking the Unthinkable, Victor shows how dynamic tools can transform normal static representations so systems can be explored with immediate feedback. This dramatically shows how much more richly and deeply ideas can be explored. With Victor’s tools we can interact with the systems described and immediately grasp important ideas that are hidden in normal media.

Picture: the very dense writing of a famous paper (by chance the paper itself is at the intersection of politics/technology and Watts has written excellent stuff on fake news but has been ignored because it does not fit what ‘the educated’ want to believe)

[Image: the dense text of the paper]

Picture: the same information presented differently. Victor’s tools make the information less compressed so there’s less work for the brain to do ‘decompressing’. They not only provide visualisations: the little ‘sliders’ over the graphics let you drag and interact with the data so you see the connection between changing data and changing model. A dynamic tool transforms a scientific paper from ‘pencil and paper’ technology to modern interactive technology.

[Image: the same paper presented with Victor’s dynamic tools]
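
The machinery behind such a ‘dynamic document’ is simple to sketch. In the toy Python version below (all names and numbers are illustrative, not Victor’s actual implementation), the text is rendered from a model, and dragging a slider just means re-running the model with a new parameter and re-rendering — the reader sees the connection between assumption and conclusion instantly:

```python
def model(params):
    """A toy quantitative model of the kind a paper might publish:
    simple compound growth of cases over a number of days."""
    cases = params["initial_cases"]
    for _ in range(params["days"]):
        cases *= 1 + params["growth_rate"]
    return round(cases)

def render(params):
    """Re-render the document text from the model.
    In a dynamic document this runs on every drag of a slider."""
    return (f"With a daily growth rate of {params['growth_rate']:.0%}, "
            f"{params['initial_cases']} cases become "
            f"{model(params)} after {params['days']} days.")

params = {"initial_cases": 100, "growth_rate": 0.10, "days": 14}
print(render(params))          # the reader sees one scenario...
params["growth_rate"] = 0.05   # ...drags the 'growth rate' slider...
print(render(params))          # ...and the conclusion updates instantly
```

The whole trick is that the document’s sentences are outputs of the model, not hand-typed claims about it.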

Victor’s essay on climate change

Victor explains in detail how policy analysis and public debate of climate change could be transformed. Leave aside the subject matter: of course it’s extremely important, anybody interested in the issue will gain from reading the whole thing, and it would be great material for a school to use for an integrated science/economics/programming/politics project. But my focus here is on his ideas about tools and thinking, not the specific subject matter.

Climate change is a great example to consider because it involves a) a lot of deep scientific knowledge, b) complex computer modelling which is understood in detail by a tiny fraction of 1% (and almost none of the social-science-trained ‘experts’ who are largely responsible for interpreting such models for politicians/journalists, cf. HERE for the science of this), c) many complex political, economic, cultural issues, d) very tricky questions about how policy is discussed in mainstream culture, and e) the problem of how governments try to think about and act on important, complex, and long-term problems. Scientific knowledge is crucial but it cannot by itself answer the question: what to do? The ideas Victor describes to transform the debate on climate change apply generally to how we approach all important political issues.

In the section Languages for technical computing, Victor describes his overall philosophy (if you look at the original you will see dynamic graphics to help make each point but I can’t make them play on my blog — a good example of the failure of normal tools!):

‘The goal of my own research has been tools where scientists see what they’re doing in realtime, with immediate visual feedback and interactive exploration. I deeply believe that a sea change in invention and discovery is possible, once technologists are working in environments designed around:

  • ubiquitous visualization and in-context manipulation of the system being studied;
  • actively exploring system behavior across multiple levels of abstraction in parallel;
  • visually investigating system behavior by transforming, measuring, searching, abstracting;
  • seeing the values of all system variables, all at once, in context;
  • dynamic notations that embed simulation, and show the effects of parameter changes;
  • visually improvising special-purpose dynamic visualizations as needed.’

He then describes how the community of programming language developers has failed to create appropriate languages for scientists, which I won’t go into but which is fascinating.

He then describes the problem of how someone can usefully get to grips with a complex policy area involving technological elements.

‘How can an eager technologist find their way to sub-problems within other people’s projects where they might have a relevant idea? How can they be exposed to process problems common across many projects?… She wishes she could simply click on “gas turbines”, and explore the space:

  • What are open problems in the field?
  • Who’s working on which projects?
  • What are the fringe ideas?
  • What are the process bottlenecks?
  • What dominates cost? What limits adoption?
  • Why make improvements here? How would the world benefit?

‘None of this information is at her fingertips. Most isn’t even openly available — companies boast about successes, not roadblocks. For each topic, she would have to spend weeks tracking down and meeting with industry insiders. What she’d like is a tool that lets her skim across entire fields, browsing problems and discovering where she could be most useful…

‘Suppose my friend uncovers an interesting problem in gas turbines, and comes up with an idea for an improvement. Now what?

  • Is the improvement significant?
  • Is the solution technically feasible?
  • How much would the solution cost to produce?
  • How much would it need to cost to be viable?
  • Who would use it? What are their needs?
  • What metrics are even relevant?

‘Again, none of this information is at her fingertips, or even accessible. She’d have to spend weeks doing an analysis, tracking down relevant data, getting price quotes, talking to industry insiders.

‘What she’d like are tools for quickly estimating the answers to these questions, so she can fluidly explore the space of possibilities and identify ideas that have some hope of being important, feasible, and viable.

‘Consider the Plethora on-demand manufacturing service, which shows the mechanical designer an instant price quote, directly inside the CAD software, as they design a part in real-time. In what other ways could inventors be given rapid feedback while exploring ideas?’

Victor then describes a public debate over a public policy. Ideas were put forward. Everybody argued.

‘Who to believe? The real question is — why are readers and decision-makers forced to “believe” anything at all? Many claims made during the debate offered no numbers to back them up. Claims with numbers rarely provided context to interpret those numbers. And never — never! — were readers shown the calculations behind any numbers. Readers had to make up their minds on the basis of hand-waving, rhetoric, bombast.’

And there was no progress because nobody could really learn from the debate or even just be clear about exactly what was being proposed. Sound familiar?!! This is absolutely normal and Victor’s description applies to over 99% of public policy debates.

Victor then describes how you can take the policy argument he had sketched and change its nature. Instead of discussing words and stories, DISCUSS INTERACTIVE MODELS. 

Here you need to click to the original to understand the power of what he is talking about as he programs a simple example.

‘The reader can explore alternative scenarios, understand the tradeoffs involved, and come to an informed conclusion about whether any such proposal could be a good decision.

‘This is possible because the author is not just publishing words. The author has provided a model — a set of formulas and algorithms that calculate the consequences of a given scenario… Notice how the model’s assumptions are clearly visible, and can even be adjusted by the reader.

‘Readers are thus encouraged to examine and critique the model. If they disagree, they can modify it into a competing model with their own preferred assumptions, and use it to argue for their position. Model-driven material can be used as grounds for an informed debate about assumptions and tradeoffs.

‘Modeling leads naturally from the particular to the general. Instead of seeing an individual proposal as “right or wrong”, “bad or good”, people can see it as one point in a large space of possibilities. By exploring the model, they come to understand the landscape of that space, and are in a position to invent better ideas for all the proposals to come. Model-driven material can serve as a kind of enhanced imagination.’

Victor then looks at some standard materials from those encouraging people to take personal action on climate change and concludes:

‘These are lists of proverbs. Little action items, mostly dequantified, entirely decontextualized. How significant is it to “eat wisely” and “trim your waste”? How does it compare to other sources of harm? How does it fit into the big picture? How many people would have to participate in order for there to be appreciable impact? How do you know that these aren’t token actions to assuage guilt?

‘And why trust them? Their rhetoric is catchy, but so is the horrific “denialist” rhetoric from the Cato Institute and similar. When the discussion is at the level of “trust me, I’m a scientist” and “look at the poor polar bears”, it becomes a matter of emotional appeal and faith, a form of religion.

‘Climate change is too important for us to operate on faith. Citizens need and deserve reading material which shows context — how significant suggested actions are in the big picture — and which embeds models — formulas and algorithms which calculate that significance, for different scenarios, from primary-source data and explicit assumptions.’

Even the supposed ‘pros’ — Insiders at the top of research fields in politically relevant areas — have to scramble around typing words into search engines, crawling around government websites, and scrolling through PDFs. Reliable data takes ages to find. Reliable models are even harder to find. Vast amounts of useful data and models exist but they cannot be found and used effectively because we lack the tools.

‘Authoring tools designed for arguing from evidence’

Why don’t we conduct public debates in the way his toy example does with interactive models? Why aren’t paragraphs in supposedly serious online newspapers written like this? Partly because of the culture, including the education of those who run governments and media organisations, but also because the resources for creating this sort of material don’t exist.

‘In order for model-driven material to become the norm, authors will need data, models, tools, and standards…

‘Suppose there were good access to good data and good models. How would an author write a document incorporating them? Today, even the most modern writing tools are designed around typing in words, not facts. These tools are suitable for promoting preconceived ideas, but provide no help in ensuring that words reflect reality, or any plausible model of reality. They encourage authors to fool themselves, and fool others.

‘Imagine an authoring tool designed for arguing from evidence. I don’t mean merely juxtaposing a document and reference material, but literally “autocompleting” sourced facts directly into the document. Perhaps the tool would have built-in connections to fact databases and model repositories, not unlike the built-in spelling dictionary. What if it were as easy to insert facts, data, and models as it is to insert emoji and cat photos?

‘Furthermore, the point of embedding a model is that the reader can explore scenarios within the context of the document. This requires tools for authoring “dynamic documents” — documents whose contents change as the reader explores the model. Such tools are pretty much non-existent.’
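
A toy sketch of the ‘autocompleting sourced facts’ idea (in Python; the facts, keys and sources below are illustrative placeholders, not a real database): the author types a key, the tool inserts the value with its source attached, so every number in the document stays traceable to evidence:

```python
import re

# An illustrative 'fact database' of the kind such a tool could query,
# analogous to the built-in spelling dictionary (all entries made up).
FACTS = {
    "uk_population": {"value": "about 66 million",
                      "source": "ONS mid-year estimate"},
    "uk_gdp":        {"value": "about £2 trillion",
                      "source": "ONS national accounts"},
}

def expand(draft: str) -> str:
    """Replace {fact:key} placeholders with sourced values,
    so claims are inserted from evidence rather than typed from memory."""
    def sub(match):
        fact = FACTS[match.group(1)]
        return f"{fact['value']} [{fact['source']}]"
    return re.sub(r"\{fact:(\w+)\}", sub, draft)

draft = "The UK has a population of {fact:uk_population} and GDP of {fact:uk_gdp}."
print(expand(draft))
```

A real authoring tool would add autocomplete, model repositories and live recalculation, but the basic contract is the same: words in the document are bound to facts outside it.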

These sorts of tools for authoring dynamic documents should be seen as foundational technology like the integrated circuit or the internet.

‘Foundational technology appears essential only in retrospect. Looking forward, these things have the character of “unknown unknowns” — they are rarely sought out (or funded!) as a solution to any specific problem. They appear out of the blue, initially seem niche, and eventually become relevant to everything.

‘They may be hard to predict, but they have some common characteristics. One is that they scale well. Integrated circuits and the internet both scaled their “basic idea” from a dozen elements to a billion. Another is that they are purpose-agnostic. They are “material” or “infrastructure”, not applications.’

Victor ends with a very potent comment — that much of what we observe is ‘rearranging app icons on the deck of the Titanic’. Commercial incentives drive people towards trying to create ‘the next Facebook’ — not fixing big social problems. I will address this below.

If you are an arts graduate interested in these subjects but not expert (like me), here is an example that will be more familiar… If you look at any big historical subject, such as ‘why/how did World War I start?’ and examine leading scholarship carefully, you will see that all the leading books on such subjects provide false chronologies and mix facts with errors such that it is impossible for a careful reader to be sure about crucial things. It is routine for famous historians to write that ‘X happened because Y’ when Y happened after X. Part of the problem is culture but this could potentially be improved by tools. A very crude example: why doesn’t Kindle make it possible for readers to log factual errors, with users’ reliability ranked by others, so authors can easily check potential errors and fix them in online versions of books? Even better, this could be part of a larger system to develop gold standard chronologies with each ‘fact’ linked to original sources and so on. This would improve the reliability of historical analysis and it would create an ‘anti-entropy’ ratchet — now, entropy means that errors spread across all books on a subject and there is no mechanism to reverse this…
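
A sketch of how crude such a tool could be and still start the ratchet (Python; the reports, reliability scores and passage ids are invented for illustration): readers file potential errors against a passage, readers are themselves ranked for reliability by others, and the author sees the most credible reports first:

```python
from collections import defaultdict

# Illustrative reliability scores, in reality earned through others'
# votes on a reader's past reports (all data here is made up).
reader_reliability = {"alice": 0.9, "bob": 0.4}

reports = defaultdict(list)  # passage id -> list of (reader, claim)

def file_report(passage_id, reader, claim):
    """A reader logs a potential factual error against a passage."""
    reports[passage_id].append((reader, claim))

def ranked_reports(passage_id):
    """Reports ordered by the reliability of the reader who filed them,
    so authors can check the most credible ones first."""
    return sorted(reports[passage_id],
                  key=lambda r: reader_reliability[r[0]],
                  reverse=True)

file_report("ch3-p114", "bob", "date of the mobilisation order looks wrong")
file_report("ch3-p114", "alice", "Y is said to cause X but happened after X")
for reader, claim in ranked_reports("ch3-p114"):
    print(f"[{reader_reliability[reader]:.1f}] {reader}: {claim}")
```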

 

‘Seeing Rooms’: macro-tools to help make decisions

Victor also discusses another fundamental issue: the rooms/spaces in which most modern work and thinking occurs are not well-suited to the problems being tackled and we could do much better. Victor is addressing advanced manufacturing and robotics but his argument applies just as powerfully, perhaps more powerfully, to government analysis and decision-making.

Now, ‘software based tools are trapped in tiny rectangles’. We have very sophisticated tools but they all sit on computer screens on desks, just as you are reading this blog.

In contrast, ‘Real-world tools are in rooms where workers think with their bodies.’ Traditional crafts occur in spatial environments designed for that purpose. Workers walk around, use their hands, and think spatially. ‘The room becomes a macro-tool they’re embedded inside, an extension of the body.’ These rooms act like tools to help them understand their problems in detail and make good decisions.

Picture: rooms designed for the problems being tackled

[Image: rooms designed for the problems being tackled]

The wave of 3D printing has produced ‘maker rooms’ and ‘Fab Labs’ where people work with a set of tools that would be too expensive for an individual to own. The room is itself a network of tools. This approach is revolutionising manufacturing.

Why is this useful?

‘Modern projects have complex behavior… Understanding requires seeing and the best seeing tools are rooms.’ This is obviously particularly true of politics and government.

Here is a photo of a recent NASA mission control room. The room is set up so that all relevant people can see relevant data and models at different scales and preserve a common picture of what is important. NASA pioneered thinking about such rooms and the technology and tools needed in the 1960s.

[Image: a NASA mission control room]

Here are pictures of two control rooms for power grids.

[Image: control rooms for power grids]

Here is a panoramic photo of the unified control centre for the Large Hadron Collider – the biggest of ‘big data’ projects. Notice details like how they have removed all pillars so nothing interrupts visual communication between teams.

[Image: the unified control centre for the Large Hadron Collider]

Now contrast these rooms with rooms from politics.

Here is the Cabinet room. I have been in this room. There are effectively no tools. In the 19th Century at least Lord Salisbury used the fireplace as a tool. He would walk around the table, gather sensitive papers, and burn them at the end of meetings. The fire is now blocked. The only other tool, the clock, did not work when I was last there. Over a century, the physical space in which politicians make decisions affecting potentially billions of lives has deteriorated.

British Cabinet room practically as it was July 1914

[Image: the Cabinet room]

Here are JFK and EXCOM making decisions during the Cuban Missile Crisis, a crisis that moved much faster than July 1914, potentially compressing decisions that could lead to the destruction of global civilisation into just minutes.

[Image: JFK and EXCOM during the Cuban Missile Crisis]

Here is the only photo in the public domain of the room known as ‘COBRA’ (Cabinet Office Briefing Room) where a shifting set of characters at the apex of power in Britain meet to discuss crises.

[Image: the COBRA briefing room]

Notice how poor it is compared to NASA, the LHC etc. There has clearly been no attempt to learn from our best examples about how to use the room as a tool. The screens at the end are a late add-on to a room that is essentially indistinguishable from the room in which Prime Minister Asquith sat in July 1914 while doodling notes to his girlfriend as he got bored. I would be surprised if the video technology used is as good as what is commercially available for less; the justification will be ‘security’; and I would bet that many of the decisions about the operation of this room would not survive scrutiny from experts in how to construct such rooms.

I have not attended a COBRA meeting but I’ve spoken to many who have. The meetings, as you would expect looking at this room, are often normal political meetings. That is:

  • aims are unclear,
  • assumptions are not made explicit,
  • there is no use of advanced tools,
  • there is no use of quantitative models,
  • discussions are often dominated by lawyers so many actions are deemed ‘unlawful’ without proper scrutiny (and this device is routinely used by officials to stop discussion of options they dislike for non-legal reasons),
  • there is constant confusion between policy, politics and PR, then the cast disperses without clarity about what was discussed and agreed.

Here is a photo of the American equivalent – the Situation Room.

[Image: the White House Situation Room]

It has a few more screens but the picture is essentially the same: there are no interactive tools beyond the ability to speak to and see someone at a distance, which was invented back in the 1950s/1960s in the pioneering programs of SAGE (automated air defence) and Apollo (man on the moon). Tools to help thinking in powerful ways are not taken seriously. The room is largely the same, and decisions are made in the same way, as during the Cuban Missile Crisis. In some ways the use of technology now makes management worse, as it encourages Presidents and their staff to try to micromanage things they should not be managing, often in response to, or fear of, the media.

Individual ministers’ offices are also hopeless. The computers are old and rubbish. Even colour printing is often a battle. Walls are for kids’ pictures. In the DfE officials resented even giving us paper maps of where schools were and only did it when bullied by the private office. It was impossible for officials to work on interactive documents. They had no technology even for sharing documents in a way that was then (2011) normal even in low-performing organisations. Using GoogleDocs was ‘against the rules’. (I’m told this has slightly improved.) The whole structure of ‘submissions’ and ‘red boxes’ is hopeless. It is extremely bureaucratic and slow. It prevents serious analysis of quantitative models. It reinforces the lack of proper scientific thinking in policy analysis. It guarantees confusion as ministers scribble notes and private offices interpret rushed comments by exhausted ministers after dinner, instead of having proper face-to-face meetings that get to the heart of problems and resolve conflicts quickly. The whole approach reinforces the abject failure of the senior civil service to think about high performance project management.

Of course, most of the problems with the standards of policy and management in the civil service are low or no-tech problems — they involve the ‘unrecognised simplicities’ that are independent of, and prior to, the use of technology — but all these things negatively reinforce each other. Anybody who wants to do things much better is scuppered by Whitehall’s entangled disaster zone of personnel, training, management, incentives and tools.

*

Dynamic Land: ‘amazing’

I won’t go into this in detail. Dynamic Land is in a building in Berkeley. I visited last year. It is Victor’s attempt to turn the ideas above into a sort of living laboratory. It is a large connected set of rooms that have computing embedded in surfaces. For example, you can scribble equations on a bit of paper, and cameras in the ceiling read your scribbles automatically, turn them into code, and execute them — for example, by producing graphics. You can then physically interact with models that appear on the table or wall while the cameras watch your hands and instantly turn gestures into new code, changing the graphics or whatever you are doing. Victor has put these cutting-edge tools into a space and made it open to the Berkeley community. This is all hard to explain/understand because you haven’t seen anything like it, even in sci-fi films (it’s telling that the media still uses the 15-year-old Minority Report as its sci-fi illustration for such things).

This video gives a little taste. I visited with a physicist who works on the cutting edge of data science/AI. I was amazed but I know nothing about such things — I was interested to see his reaction as he scribbled gravitational equations on paper and watched the cameras turn them into models on the table in real-time, then he changed parameters and watched the graphics change in real-time on the table (projected from the ceiling): ‘Ohmygod, this is just obviously the future, absolutely amazing.’ The thought immediately struck us: imagine the implications of having policy discussions with such tools instead of the usual terrible meetings. Imagine discussing HS2 budgets or possible post-Brexit trading arrangements with the models running like this for decision-makers to interact with.

Video of Dynamic Land: the bits of coloured paper are ‘code’, graphics are projected from the ceiling

 

[Video stills: Dynamic Land in use]

*

3. Michael Nielsen and cognitive technologies

Connected to Victor’s ideas are those of the brilliant physicist Michael Nielsen. Nielsen wrote the textbook on quantum computation and a great book, Reinventing Discovery, on the evolution of the scientific method. One of its themes: instead of waiting for the coincidence of Grossmann helping out Einstein with some crucial maths, new tools could create a sort of ‘designed serendipity’ to help potential collaborators find each other.

In his essay Thought as a Technology, Nielsen describes the feedback between thought and interfaces:

‘In extreme cases, to use such an interface is to enter a new world, containing objects and actions unlike any you’ve previously seen. At first these elements seem strange. But as they become familiar, you internalize the elements of this world. Eventually, you become fluent, discovering powerful and surprising idioms, emergent patterns hidden within the interface. You begin to think with the interface, learning patterns of thought that would formerly have seemed strange, but which become second nature. The interface begins to disappear, becoming part of your consciousness. You have been, in some measure, transformed.’

He describes how normal language and computer interfaces are cognitive technologies:

‘Language is an example of a cognitive technology: an external artifact, designed by humans, which can be internalized, and used as a substrate for cognition. That technology is made up of many individual pieces – words and phrases, in the case of language – which become basic elements of cognition. These elements of cognition are things we can think with…

‘In a similar way to language, maps etc, a computer interface can be a cognitive technology. To master an interface requires internalizing the objects and operations in the interface; they become elements of cognition. A sufficiently imaginative interface designer can invent entirely new elements of cognition… In general, what makes an interface transformational is when it introduces new elements of cognition that enable new modes of thought. More concretely, such an interface makes it easy to have insights or make discoveries that were formerly difficult or impossible. At the highest level, it will enable discoveries (or other forms of creativity) that go beyond all previous human achievement.’

Nielsen describes how powerful ways of thinking among mathematicians and physicists are hidden from view and not part of textbooks and normal teaching.

‘The reason is that traditional media are poorly adapted to working with such representations… If experts often develop their own representations, why do they sometimes not share those representations? To answer that question, suppose you think hard about a subject for several years… Eventually you push up against the limits of existing representations. If you’re strongly motivated – perhaps by the desire to solve a research problem – you may begin inventing new representations, to provide insights difficult through conventional means. You are effectively acting as your own interface designer. But the new representations you develop may be held entirely in your mind, and so are not constrained by traditional static media forms. Or even if based on static media, they may break social norms about what is an “acceptable” argument. Whatever the reason, they may be difficult to communicate using traditional media. And so they remain private, or are only discussed informally with expert colleagues.’

If we can create interfaces that reify deep principles, then ‘mastering the subject begins to coincide with mastering the interface.’ He gives the example of Photoshop which builds in many deep principles of image manipulation.

‘As you master interface elements such as layers, the clone stamp, and brushes, you’re well along the way to becoming an expert in image manipulation… By contrast, the interface to Microsoft Word contains few deep principles about writing, and as a result it is possible to master Word‘s interface without becoming a passable writer. This isn’t so much a criticism of Word, as it is a reflection of the fact that we have relatively few really strong and precise ideas about how to write well.’

He then describes what he calls ‘the cognitive outsourcing model’: ‘we specify a problem, send it to our device, which solves the problem, perhaps in a way we-the-user don’t understand, and sends back a solution.’ E.g we ask Google a question and Google sends us an answer.

This is how most of us think about the idea of augmenting the human intellect but it is not the best approach. ‘Rather than just solving problems expressed in terms we already understand, the goal is to change the thoughts we can think.’

‘One challenge in such work is that the outcomes are so difficult to imagine. What new elements of cognition can we invent? How will they affect the way human beings think? We cannot know until they’ve been invented.

‘As an analogy, compare today’s attempts to go to Mars with the exploration of the oceans during the great age of discovery. These appear similar, but while going to Mars is a specific, concrete goal, the seafarers of the 15th through 18th centuries didn’t know what they would find. They set out in flimsy boats, with vague plans, hoping to find something worth the risks. In that sense, it was even more difficult than today’s attempts on Mars.

‘Something similar is going on with intelligence augmentation. There are many worthwhile goals in technology, with very specific ends in mind. Things like artificial intelligence and life extension are solid, concrete goals. By contrast, new elements of cognition are harder to imagine, and seem vague by comparison. By definition, they’re ways of thinking which haven’t yet been invented. There’s no omniscient problem-solving box or life-extension pill to imagine. We cannot say a priori what new elements of cognition will look like, or what they will bring. But what we can do is ask good questions, and explore boldly.’

In another essay, Using Artificial Intelligence to Augment Human Intelligence, Nielsen points out that breakthroughs in creating powerful new cognitive technologies such as musical notation or Descartes’ invention of algebraic geometry are rare but ‘modern computers are a meta-medium enabling the rapid invention of many new cognitive technologies‘ and, further, AI will help us ‘invent new cognitive technologies which transform the way we think.’

Further, powerful new cognitive technologies, such as ‘Feynman diagrams’, have historically often appeared strange at first. We should not assume that new interfaces should be ‘user-friendly’. Powerful interfaces that repay mastery may require sacrifices.

‘The purpose of the best interfaces isn’t to be user-friendly in some shallow sense. It’s to be user-friendly in a much stronger sense, reifying deep principles about the world, making them the working conditions in which users live and create. At that point what once appeared strange can instead become comfortable and familiar, part of the pattern of thought…

‘Unfortunately, many in the AI community greatly underestimate the depth of interface design, often regarding it as a simple problem, mostly about making things pretty or easy-to-use. In this view, interface design is a problem to be handed off to others, while the hard work is to train some machine learning system.

‘This view is incorrect. At its deepest, interface design means developing the fundamental primitives human beings think and create with. This is a problem whose intellectual genesis goes back to the inventors of the alphabet, of cartography, and of musical notation, as well as modern giants such as Descartes, Playfair, Feynman, Engelbart, and Kay. It is one of the hardest, most important and most fundamental problems humanity grapples with.

‘As discussed earlier, in one common view of AI our computers will continue to get better at solving problems, but human beings will remain largely unchanged. In a second common view, human beings will be modified at the hardware level, perhaps directly through neural interfaces, or indirectly through whole brain emulation.

‘We’ve described a third view, in which AIs actually change humanity, helping us invent new cognitive technologies, which expand the range of human thought. Perhaps one day those cognitive technologies will, in turn, speed up the development of AI, in a virtuous feedback cycle:

[Diagram: the virtuous feedback cycle between AI and new cognitive technologies]

It would not be a Singularity in machines. Rather, it would be a Singularity in humanity’s range of thought… The long-term test of success will be the development of tools which are widely used by creators. Are artists using these tools to develop remarkable new styles? Are scientists in other fields using them to develop understanding in ways not otherwise possible?’

I would add: are governments using these tools to help them think in ways we already know are more powerful and to explore new ways of making decisions and shaping the complex systems on which we rely?

Nielsen also wrote this fascinating essay ‘Augmenting long-term memory’. This involves a computer tool (Anki) to aid long-term memory using ‘spaced repetition’ — i.e testing yourself at expanding intervals, which is shown to counter the normal (for almost everyone) process of forgetting. This allows humans to turn memory into a choice: we can decide what to remember and achieve it systematically, without the ‘weird/extreme gift’ that memory is normally treated as. (It’s fascinating that educated Greeks 2,500 years ago could build sophisticated mnemonic systems allowing them to remember vast amounts while almost all educated people now have no idea about such techniques.)
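
To make the mechanism concrete, here is a minimal sketch of the spaced-repetition idea — a simplified cousin of the SM-2 family of algorithms from which Anki descends; the starting interval and multiplier below are illustrative, not Anki’s actual parameters:

```python
from dataclasses import dataclass

@dataclass
class Card:
    interval_days: float = 1.0   # days until the next review
    ease: float = 2.5            # growth factor applied after each success

def review(card: Card, remembered: bool) -> None:
    """Expand the interval on success, reset it on failure.
    Testing yourself just before you would otherwise forget is
    what makes the repetition 'spaced'."""
    if remembered:
        card.interval_days *= card.ease   # 1 -> 2.5 -> 6.25 -> ~16 days...
    else:
        card.interval_days = 1.0          # forgotten: restart the ladder

card = Card()
for _ in range(5):
    review(card, remembered=True)
print(f"next review in ~{card.interval_days:.0f} days")  # ~98 days
```

The point is the exponential ladder: each successful recall pushes the next test further into the future, so a handful of reviews can hold an item in memory for years.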

Connected to this, Nielsen also recently wrote an essay teaching fundamentals of quantum mechanics and quantum computers — but it is an essay with a twist:

‘[It] incorporates new user interface ideas to help you remember what you read… this essay isn’t just a conventional essay, it’s also a new medium, a mnemonic medium which integrates spaced-repetition testing. The medium itself makes memory a choice. This essay will likely take you an hour or two to read. In a conventional essay, you’d forget most of what you learned over the next few weeks, perhaps retaining a handful of ideas. But with spaced-repetition testing built into the medium, a small additional commitment of time means you will remember all the core material of the essay. Doing this won’t be difficult, it will be easier than the initial read. Furthermore, you’ll be able to read other material which builds on these ideas; it will open up an entire world…

‘Mastering new subjects requires internalizing the basic terminology and ideas of the subject. The mnemonic medium should radically speed up this memory step, converting it from a challenging obstruction into a routine step. Frankly, I believe it would accelerate human progress if all the deepest ideas of our civilization were available in a form like this.’

This obviously has very important implications for education policy. It also shows how computers could be used to improve learning — something that has generally been a failure since the great hopes at PARC in the 1970s. I have used Anki since reading Nielsen’s blog and I can feel it making a big difference to my mind/thoughts — how often is this true of things you read? DOWNLOAD ANKI NOW AND USE IT!

We need similarly creative experiments with new mediums designed to improve standards of high-stakes decision-making.

*

4. Summary

We could create systems for those making decisions about m/billions of lives and b/trillions of dollars, such as Downing Street or the White House, that integrate inter alia:

  • Cognitive toolkits compressing already existing useful knowledge such as checklists for rational thinking developed by the likes of Tetlock, Munger, Yudkowsky et al.
  • A Nielsen/Victor research program on ‘Seeing Rooms’, interface design, authoring tools, and cognitive technologies. Start with bunging a few million to Victor immediately in return for allowing some people to study what he is doing and apply it in Whitehall, then grow from there.
  • An alpha data science/AI operation — tapping into the world’s best minds including having someone like David Deutsch or Tim Gowers as a sort of ‘chief rationalist’ in the Cabinet (with Scott Alexander as deputy!) — to support rational decision-making where this is possible and explain when it is not possible (just as useful).
  • Tetlock/Hanson prediction tournaments could easily and cheaply be extended to consider ‘clusters’ of issues around themes like Brexit to improve policy and project management.
  • Groves/Mueller style ‘systems management’ integrated with the data science team.
  • Legally entrenched Red Teams where incentives are aligned to overcoming groupthink and error-correction of the most powerful. Warren Buffett points out that public companies considering an acquisition should employ a Red Team whose fees are dependent on the deal NOT going ahead. This is the sort of idea we need in No10.

Researchers could see the real operating environment of decision-makers at the apex of power, the sort of problems they need to solve under pressure, and the constraints of existing centralised systems. They could start with the safe level of ‘tools that we already know work really well’ — i.e things like cognitive toolkits and Red Teams — while experimenting with new tools and new ways of thinking.

Hedge funds like Bridgewater and some other interesting organisations think about such ideas though without the sophistication of Victor’s approach. The world of MPs, officials, the Institute for Government (a cheerleader for ‘carry on failing’), and pundits will not engage with these ideas if left to their own devices.

This is not the place to go into how to change this. We know that the normal approach is doomed to produce the normal results and normal results applied to things like repeated WMD crises mean disaster sooner or later. As Buffett points out, ‘If there is only one chance in thirty of an event occurring in a given year, the likelihood of it occurring at least once in a century is 96.6%.’ It is not necessary to hope in order to persevere: optimism of the will, pessimism of the intellect…
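
Buffett’s arithmetic checks out — the chance of avoiding a 1-in-30-per-year event for 100 consecutive years is (29/30)^100, roughly 3.4%:

```latex
P(\text{at least one occurrence in a century}) = 1 - \left(1 - \tfrac{1}{30}\right)^{100} \approx 1 - 0.034 = 96.6\%
```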

*

A final thought…

A very interesting comment that I have heard from some of the most important scientists involved in the creation of advanced technologies is that ‘artists see things first’ — that is, artists glimpse possibilities before most technologists and long before most businessmen and politicians.

Pixar came from a weird combination of George Lucas getting divorced and the visionary Alan Kay suggesting to Steve Jobs that he buy a tiny special effects unit from Lucas, which Jobs did with completely wrong expectations about what would happen. For unexpected reasons this tiny unit turned into a huge success — as Jobs put it later, he was ‘sort of snookered’ into creating Pixar. Now Alan Kay says he struggles to get tech billionaires to understand the importance of Victor’s ideas.

The same story repeats: genuinely new ideas that could create huge value always seem so odd that almost all people in almost all organisations cannot see new possibilities. If this is true in Silicon Valley, how much more true is it in Whitehall or Washington… 

If one were setting up a new party in Britain, one could incorporate some of these ideas. This would of course also require recruiting very different types of people to the norm in politics. The closed nature of Westminster/Whitehall combined with first-past-the-post means it is very hard to solve the coordination problem of how to break into this system with a new way of doing things. Even those interested in principle don’t want to commit to a 10-year (?) project that might get them blasted on the front pages. Vote Leave hacked the referendum but such opportunities are much rarer than VC-funded ‘unicorns’. On the other hand, arguably what is happening now is a once-in-50-or-100-year crisis and such crises are also the waves that can be ridden to change things normally unchangeable. A second referendum in 2020 is quite possible (or two referendums under PM Corbyn, propped up by the SNP?) and might be the ideal launchpad for a completely new sort of entity, not least because if it happens the Conservative Party may well not exist in any meaningful sense (whether there is or isn’t another referendum). It’s very hard to create a wave and it’s much easier to ride one. It’s more likely in a few years you will see some of the above ideas in novels or movies or video games than in government — their pickup in places like hedge funds and intelligence services will be discreet — but you never know…

*

Ps. While I have talked to Michael Nielsen and Bret Victor about their ideas, in no way should this blog be taken as implying their involvement in anything to do with my ideas or plans, or their agreement with anything written above. I did not show this to them or even tell them I was writing about their work; we do not work together in any way; I have just read and listened to their work over a few years and thought about how their ideas could improve government.

Further Reading

If interested in how to make things work much better, read this (lessons for government from the Apollo project) and this (lessons for government from ARPA-PARC’s creation of the internet and PC).

Links to recent reports on AI/ML.

On the referendum #32: Science/productivity — a) small teams are more disruptive, b) ‘science is becoming far less efficient’

This blog considers two recent papers on the dynamics of scientific research: one in Nature and one by the brilliant physicist Michael Nielsen and the brilliant founder of Stripe, Patrick Collison, who is a very unusual CEO. These findings are very important to the question: how can we make economies more productive and what is the relationship between basic science and productivity? The papers are also interesting for those interested in the general question of high performance teams.

These issues are also crucial to the debate about what on earth Britain focuses on now the 2016 referendum has destroyed the Insiders’ preferred national strategy of ‘influencing the EU project’.

For as long as I have watched British politics carefully (sporadically since about 1998) these issues about science, technology and productivity have been almost totally ignored in the Insider debate because the incentives + culture of Westminster programs this behaviour: people with power are not incentivised to optimise for ‘improve science research and productivity’. E.g everything Vote Leave said about funding science research during the referendum (including cooperation with EU programs) was treated as somewhere between eccentric, irrelevant and pointless by Insiders.

This recent Nature paper gives evidence that a) small teams are more disruptive in science research and b) solo researchers/small teams are significantly underfunded.

‘One of the most universal trends in science and technology today is the growth of large teams in all areas, as solitary researchers and small teams diminish in prevalence. Increases in team size have been attributed to the specialization of scientific activities, improvements in communication technology, or the complexity of modern problems that require interdisciplinary solutions. This shift in team size raises the question of whether and how the character of the science and technology produced by large teams differs from that of small teams. Here we analyse more than 65 million papers, patents and software products that span the period 1954–2014, and demonstrate that across this period smaller teams have tended to disrupt science and technology with new ideas and opportunities, whereas larger teams have tended to develop existing ones. Work from larger teams builds on more recent and popular developments, and attention to their work comes immediately. By contrast, contributions by smaller teams search more deeply into the past, are viewed as disruptive to science and technology and succeed further into the future — if at all. Observed differences between small and large teams are magnified for higher impact work, with small teams known for disruptive work and large teams for developing work. Differences in topic and research design account for a small part of the relationship between team size and disruption; most of the effect occurs at the level of the individual, as people move between smaller and larger teams. These results demonstrate that both small and large teams are essential to a flourishing ecology of science and technology, and suggest that, to achieve this, science policies should aim to support a diversity of team sizes

‘Although much has been demonstrated about the professional and career benefits of team size for team members, there is little evidence that supports the notion that larger teams are optimized for knowledge discovery and technological invention. Experimental and observational research on groups reveals that individuals in large groups … generate fewer ideas, recall less learned information, reject external perspectives more often and tend to neutralize each other’s viewpoints

‘Small teams disrupt science and technology by exploring and amplifying promising ideas from older and less-popular work. Large teams develop recent successes, by solving acknowledged problems and refining common designs. Some of this difference results from the substance of science and technology that small versus large teams tackle, but the larger part appears to emerge as a consequence of team size itself. Certain types of research require the resources of large teams, but large teams demand an ongoing stream of funding and success to ‘pay the bills’, which makes them more sensitive to the loss of reputation and support that comes from failure. Our findings are consistent with field research on teams in other domains, which demonstrate that small groups with more to gain and less to lose are more likely to undertake new and untested opportunities that have the potential for high growth and failure

‘In contrast to Nobel Prize papers, which have an average disruption among the top 2% of all contemporary papers, funded papers rank near the bottom 31%. This could result from a conservative review process, proposals designed to anticipate such a process or a planning effect whereby small teams lock themselves into large-team inertia by remaining accountable to a funded proposal. When we compare two major policy incentives for science (funding versus awards), we find that Nobel-prize-winning articles significantly oversample small disruptive teams, whereas those that acknowledge US National Science Foundation funding oversample large developmental teams. Regardless of the dominant driver, these results paint a unified portrait of underfunded solo investigators and small teams who disrupt science and technology by generating new directions on the basis of deeper and wider information search. These results suggest the need for government, industry and non-profit funders of science and technology to investigate the critical role that small teams appear to have in expanding the frontiers of knowledge, even as large teams rapidly develop them.’

Recently Michael Nielsen and Patrick Collison published some research on the question:

‘are we getting a proportional increase in our scientific understanding [for increased investment]? Or are we investing vastly more merely to sustain (or even see a decline in) the rate of scientific progress?’

They explored, inter alia, ‘how scientists think the quality of Nobel Prize–winning discoveries has changed over the decades.’

They conclude:

‘The picture this survey paints is bleak: Over the past century, we’ve vastly increased the time and money invested in science, but in scientists’ own judgement, we’re producing the most important breakthroughs at a near-constant rate. On a per-dollar or per-person basis, this suggests that science is becoming far less efficient.’

It’s also interesting that:

‘In fact, just three [physics] discoveries made since 1990 have been awarded Nobel Prizes. This is too few to get a good quality estimate for the 1990s, and so we didn’t survey those prizes. However, the paucity of prizes since 1990 is itself suggestive. The 1990s and 2000s have the dubious distinction of being the decades over which the Nobel Committee has most strongly preferred to skip, and instead award prizes for earlier work. Given that the 1980s and 1970s themselves don’t look so good, that’s bad news for physics.’

There is a similar story in chemistry.

Why has science got so much more expensive without commensurate gains in understanding?

‘A partial answer to this question is suggested by work done by the economists Benjamin Jones and Bruce Weinberg. They’ve studied how old scientists are when they make their great discoveries. They found that in the early days of the Nobel Prize, future Nobel scientists were 37 years old, on average, when they made their prizewinning discovery. But in recent times that has risen to an average of 47 years, an increase of about a quarter of a scientist’s working career.

‘Perhaps scientists today need to know far more to make important discoveries. As a result, they need to study longer, and so are older, before they can do their most important work. That is, great discoveries are simply getting harder to make. And if they’re harder to make, that suggests there will be fewer of them, or they will require much more effort.

‘In a similar vein, scientific collaborations now often involve far more people than they did a century ago. When Ernest Rutherford discovered the nucleus of the atom in 1911, he published it in a paper with just a single author: himself. By contrast, the two 2012 papers announcing the discovery of the Higgs particle had roughly a thousand authors each. On average, research teams nearly quadrupled in size over the 20th century, and that increase continues today. For many research questions, it requires far more skills, expensive equipment, and a large team to make progress today.

They suggest that ‘the optimistic view is that science is an endless frontier, and we will continue to discover and even create entirely new fields, with their own fundamental questions’. If science is slowing now, then perhaps it ‘is because science has remained too focused on established fields, where it’s becoming ever harder to make progress. We hope the future will see a more rapid proliferation of new fields, giving rise to major new questions. This is an opportunity for science to accelerate.’ They give the example of the birth of computer science after Gödel’s and Turing’s papers in the 1930s.

They also consider the arguments among economists concerning productivity slowdown. Tyler Cowen and others have argued that the breakthroughs in the 19th and early 20th centuries were more significant than recent discoveries: e.g the large-scale deployment of powerful general-purpose technologies such as electricity, the internal-combustion engine, radio, telephones, air travel, the assembly line, fertiliser and so on. Productivity growth in the 1950s was ‘roughly six times higher than today. That means we see about as much change over a decade today as we saw in 18 months in the 1950s.’ Yes the computer and internet have been fantastic but they haven’t, so far, contributed as much as all those powerful technologies like electricity.

They also argue ‘there has been little institutional response’ from either the scientific community or government.

‘Perhaps this lack of response is in part because some scientists see acknowledging diminishing returns as betraying scientists’ collective self-interest. Most scientists strongly favor more research funding. They like to portray science in a positive light, emphasizing benefits and minimizing negatives. While understandable, the evidence is that science has slowed enormously per dollar or hour spent. That evidence demands a large-scale institutional response. It should be a major subject in public policy, and at grant agencies and universities. Better understanding the cause of this phenomenon is important, and identifying ways to reverse it is one of the greatest opportunities to improve our future.’

Slate Star Codex also discussed these issues recently. We often look at charts of exponential progress like Moore’s Law but:

‘There are eighteen times more people involved in transistor-related research today than in 1971. So if in 1971 it took 1000 scientists to increase transistor density 35% per year, today it takes 18,000 scientists to do the same task. So apparently the average transistor scientist is eighteen times less productive today than fifty years ago. That should be surprising and scary.’

Similar arguments seem to apply in many areas.

‘All of these lines of evidence lead me to the same conclusion: constant growth rates in response to exponentially increasing inputs is the null hypothesis. If it wasn’t, we should be expecting 50% year-on-year GDP growth, easily-discovered-immortality, and the like.’

SSC also argues that the explanation for this phenomenon is the ‘low-hanging fruit’ argument:

‘For example, element 117 was discovered by an international collaboration who got an unstable isotope of berkelium from the single accelerator in Tennessee capable of synthesizing it, shipped it to a nuclear reactor in Russia where it was attached to a titanium film, brought it to a particle accelerator in a different Russian city where it was bombarded with a custom-made exotic isotope of calcium, sent the resulting data to a global team of theorists, and eventually found a signature indicating that element 117 had existed for a few milliseconds. Meanwhile, the first modern element discovery, that of phosphorus in the 1670s, came from a guy looking at his own piss. We should not be surprised that discovering element 117 needed more people than discovering phosphorus

‘I worry even this isn’t dismissive enough. My real objection is that constant progress in science in response to exponential increases in inputs ought to be our null hypothesis, and that it’s almost inconceivable that it could ever be otherwise.’

How likely is it that this will change radically?

‘At the end of the conference, the moderator asked how many people thought that it was possible for a concerted effort by ourselves and our institutions to “fix” the “problem”… Almost the entire room raised their hands. Everyone there was smarter and more prestigious than I was (also richer, and in many cases way more attractive), but with all due respect I worry they are insane. This is kind of how I imagine their worldview looking:

[Screenshot: Slate Star Codex’s illustration of this worldview]

*

I don’t know what the answers are to the tricky questions explored above. I do know that the existing systems for funding science are bad and we already have great ideas about how to improve our chances of making dramatic breakthroughs, even if we cannot escape the general problem that a lot of low-hanging fruit in traditional subjects like high energy physics is gone.

I have repeated this theme ad nauseam on this blog:

1) We KNOW how effective the very unusual funding for computer science was in the 1960s/1970s — ARPA-PARC created the internet and personal computing — and there are other similar case studies but

2) almost no science is funded in this way and

3) there is practically no debate about this even among scientists, many of whom are wholly ignorant of it. As Alan Kay has observed, there is an amazing contrast between the huge amount of interest in the internet/PC revolution and the near-zero interest in what created the super-productive processes that sparked this revolution.

One of the reasons is the usual problem of bad incentives reinforcing a dysfunctional equilibrium: successful scientists have a lot of power and have a strong personal interest in preserving current funding systems that let them build empires. These empires often include bad treatment of young postdocs, who are abused as cheap labour. This is connected to the point above about the average age of Nobel-winners growing. Much of the 1920s/30s quantum revolution was done by people aged ~20-35 and so was the internet/PC revolution in the 1960s/1970s. The latter was deliberate: Licklider et al deliberately funded not short-term projects but the creation of whole new departments and institutions for young people. They funded a healthy ecosystem: ‘people not projects’ was one of the core principles. People in their twenties now have very little power or money in the research ecosystem. Further, they have to operate in an appalling time-wasting-grant-writing bureaucracy that Heisenberg, Dirac et al did not face in the 1920s/30s. The politicians and officials don’t care so there is no force to push sensible experiments with new ideas. Almost all ‘reform’ from the central bureaucracy pushes in the direction of more power for the central bureaucracy, not fixing problems.

For example, for an amount of money that the Department for Education loses every week without ministers/officials even noticing it’s lost — I know from experience this is single figure millions — we could transform the funding of masters and PhDs in maths, physics, chemistry, biology, and computer science. There is so much good that could be done for trivial money that isn’t even wasted in the normal sense of ‘spent on rubbish gimmicks and procurement disasters’, it just disappears into the aether without anybody noticing.

The government spends about 250 billion pounds a year with extreme systematic incompetence. If we ‘just’ applied what we know about high performance project management and procurement we could take savings from this budget and put it into ARPA-PARC style high-risk-high-payoff visions including creating whole new fields. This would create powerful self-reinforcing dynamics that would give Britain real assets of far, far greater value than the chimerical ‘influence’ in Brussels meeting rooms where ‘economic and monetary union’ is the real focus.

A serious government or a serious new party (not TIG obviously, which is business as usual with the usual suspects) would focus on these things. Under Major, Blair, Brown, Cameron and May these issues have been neglected for a quarter of a century. The Conservative Party now has almost no intellectual connection to crucial debates about the ecosystem of science, productivity, universities, funding, startups and so on. I know from personal experience that even billionaire entrepreneurs whose donations are vital to the survival of CCHQ cannot get people like Hammond to listen to anything about all this — Hammond’s focus is obeying his orders from Goldman Sachs. Downing Street is much more interested in protecting corporate looting by large banks and companies and protecting rent-seekers than in productivity and entrepreneurs. Having an interest in this subject is seen as a sign of eccentricity to say the least, while the ambitious focus on ‘strategy’, speeches, interviews and all the other parts of their useless implicit ‘model for effective action’. The Tories are reduced to slogans about ‘freedom’, ‘deregulation’ and so on which provide no answers to our productivity problem and, ironically, are somewhere between pointless and self-destructive for them politically — but, crucially, they play in the self-referential world of Parliament, ‘think tanks’ and pundit-world, which debates ‘the next leader’ and provides the real incentives that drive behaviour.

There is no force in British politics that prioritises science and productivity. Hopefully soon someone will say ‘there is such a party’…

Further reading

If interested in practical ideas for changing science funding in the UK, read my paper last year, which has a lot of links to important papers, or this by two brilliant young neuroscientists who have experienced the funding system’s problems.

For example:

  • Remove bureaucracy like the multi-stage procurement processes for buying a lightbulb. ‘Rather than invigilate every single decision, we should do spot checks retrospectively, as is done with tax returns.’
  • ‘We should return to funding university departments more directly, allowing more rapid, situation-aware decision-making of the kind present in start-ups, and create a diversity of funding systems.’
  • There are many simple steps like guaranteed work visas for spouses that could make the UK a magnet for talented young scientists.

Effective action #4b: ‘Expertise’, prediction and noise, from the NHS killing people to Brexit

In part A I looked at extreme sports as some background to the question of true expertise and the crucial nature of fast high quality feedback.

This blog looks at studies comparing expertise in many fields over decades, including work by Tetlock and Kahneman, and problems like — why people don’t learn to use even simple tools to stop children dying unnecessarily. There is a summary of some basic lessons at the end.

The reason for writing about this is that we will only improve the performance of government (at individual, team and institutional levels) if we reflect on:

  • what expertise really is and why some very successful fields cultivate it effectively while others, like government, do not;
  • how to select much higher quality people (it’s insane that people as ignorant and limited as me can have the influence we do in the way we do — us limited duffers can help in limited ways but why do we deliberately exclude ~100% of the most intelligent, talented, relentless, high performing people from fields with genuine expertise, why do we not have people like Fields Medallist Tim Gowers or Michael Nielsen as Chief Scientist sitting ex officio in Cabinet?);
  • how to train people effectively to develop true expertise in skills relevant to government: it needs different intellectual content (PPE/economics are NOT good introductory degrees) and practice in practical skills (project management, making predictions and in general ‘thinking rationally’) with lots of fast, accurate feedback;
  • how to give them effective tools: e.g the Cabinet Room is worse in this respect than it was in July 1914 — at least then the clock and fireplace worked, and Lord Salisbury in the 1890s would walk round the Cabinet table gathering papers to burn in the grate — while today No10 is decades behind the state-of-the-art in old technologies like TV, doesn’t understand simple tools like checklists, and is nowhere with advanced technologies;
  • and how to ‘program’ institutions differently so that 1) people are more incentivised to optimise things we want them to optimise, like error-correction and predictive accuracy, and less incentivised to optimise bureaucratic process, prestige, and signalling as our institutions now do to a dangerous extent, and, connected, so that 2) institutions are much better at building high performance teams rather than continue normal rules that make this practically illegal, and so that 3) we have ‘immune systems’ to minimise the inevitable failures of even the best people and teams.

In SW1 now, those at the apex of power practically never think in a serious way about the reasons for the endemic dysfunctional decision-making that constitutes most of their daily experience or how to change it. What looks like omnishambles to the public and high performers in technology or business is seen by Insiders, always implicitly and often explicitly, as ‘normal performance’. ‘Crises’ such as the collapse of Carillion or our farcical multi-decade multi-billion ‘aircraft carrier’ project occasionally provoke a few days of headlines but it’s very rare anything important changes in the underlying structures and there is no real reflection on system failure.

This fact is why, for example, a startup created in a few months could win a referendum that should have been unwinnable. It was the systemic and consistent dysfunction of Establishment decision-making systems over a long period, with very poor mechanisms for good accurate feedback from reality, that created the space for a guerrilla operation to exploit.

This makes it particularly ironic that even after Westminster and Whitehall have allowed their internal consensus about UK national strategy to be shattered by the referendum, there is essentially no serious reflection on this system failure. It is much more psychologically appealing for Insiders to blame ‘lies’ (Blair and Osborne really say this without blushing), devilish use of technology to twist minds and so on. Perhaps the most profound aspect of broken systems is that they cannot reflect on the reasons why they’re broken — never mind take effective action. Instead of serious thought, we have high status Insiders like Campbell reduced to bathos, whining on social media about Brexit ‘impacting mental health’. This lack of reflection is why Remain-dominated Insiders lurched from failure over the referendum to failure over negotiations. OODA loops across SW1 are broken and this is very hard to fix — if you can’t orient to reality how do you even see your problem well? (NB. It should go without saying that there is a faction of pro-Brexit MPs, ‘campaigners’ and ‘pro-Brexit economists’ who are at least as disconnected from reality as the May/Hammond bunker, often more so.)

In the commercial world, big companies mostly die within a few decades because they cannot maintain an internal system to keep them aligned to reality, while startups pop up. These two factors create learning at a system level — there is lots of micro failure but macro productivity/learning in which useful information is compressed and abstracted. In the political world, big established failing systems control the rules, suck in more and more resources rather than go bust, make it almost impossible for startups to contribute and so on. Even failures on the scale of the 2008 Crash or the 2016 referendum do not necessarily make broken systems face reality, at least quickly. Watching Parliament’s obsession with trivia in the face of the Cabinet’s and Whitehall’s contemptible failure to protect the interests of millions in the farcical Brexit negotiations is like watching the secretary to the Singapore Golf Club objecting to guns being placed on the links as the Japanese troops advanced.

Neither of the main parties has internalised the reality of these two crises. The Tories won’t face reality on things like corporate looting and the NHS, Labour won’t face reality on things like immigration and the limits of bureaucratic centralism. Neither can cope with the complexity of Brexit and both just look like I would in the ring with a professional fighter — baffled, terrified and desperate for a way to escape. There are so many simple ways to improve performance — and their own popularity! — but the system is stuck in such a closed loop it wilfully avoids seeing even the most obvious things and suppresses Insiders who want to do things differently…

But… there is a network of almost entirely younger people inside or close to the system thinking ‘we could do so much better than this’. Few senior Insiders are interested in these questions but that’s OK — few of them listened before the referendum either. It’s not the people now in power and running the parties and Whitehall who will determine whether we make Brexit a platform to contribute usefully to humanity’s biggest challenges but those that take over.

Doing better requires reflecting on what we know about real expertise…

*

How to distinguish between fields dominated by real expertise and those dominated by confident ‘experts’ who make bad predictions?

We know a lot about the distinction between fields in which there is real expertise and fields dominated by bogus expertise. Daniel Kahneman, who has published some of the most important research about expertise and prediction, summarises the two fundamental tests to apply to a field: 1) is there enough informational structure in the environment to allow good predictions, and 2) is there timely and effective feedback that enables error-correction and learning.

‘To know whether you can trust a particular intuitive judgment, there are two questions you should ask: Is the environment in which the judgment is made sufficiently regular to enable predictions from the available evidence? The answer is yes for diagnosticians, no for stock pickers. Do the professionals have an adequate opportunity to learn the cues and the regularities? The answer here depends on the professionals’ experience and on the quality and speed with which they discover their mistakes. Anesthesiologists have a better chance to develop intuitions than radiologists do. Many of the professionals we encounter easily pass both tests, and their off-the-cuff judgments deserve to be taken seriously. In general, however, you should not take assertive and confident people at their own evaluation unless you have independent reason to believe that they know what they are talking about.’ (Emphasis added.)

In fields where these two elements are present there is genuine expertise and people build new knowledge on the reliable foundations of previous knowledge. Some fields make a transition from stories (e.g Icarus) and authority (e.g ‘witch doctor’) to quantitative models (e.g modern aircraft) and evidence/experiment (e.g some parts of modern medicine/surgery). As scientists have said since Newton, they stand on the shoulders of giants.

How do we assess predictions / judgement about the future?

‘Good judgment is often gauged against two gold standards – coherence and correspondence. Judgments are coherent if they demonstrate consistency with the axioms of probability theory or propositional logic. Judgments are correspondent if they agree with ground truth. When gold standards are unavailable, silver standards such as consistency and discrimination can be used to evaluate judgment quality. Individuals are consistent if they assign similar judgments to comparable stimuli, and they discriminate if they assign different judgments to dissimilar stimuli.

‘Coherence violations range from base rate neglect and confirmation bias to overconfidence and framing effects (Gilovich, Griffith & Kahneman, 2002; Kahneman, Slovic & Tversky, 1982). Experts are not immune. Statisticians (Christensen-Szalanski & Bushyhead, 1981), doctors (Eddy, 1982), and nurses (Bennett, 1980) neglect base rates. Physicians and intelligence professionals are susceptible to framing effects and financial investors are prone to overconfidence.

‘Research on correspondence tells a similar story. Numerous studies show that human predictions are frequently inaccurate and worse than simple linear models in many domains (e.g. Meehl, 1954; Dawes, Faust & Meehl, 1989). Once again, expertise doesn’t necessarily help. Inaccurate predictions have been found in parole officers, court judges, investment managers in the US and Taiwan, and politicians. However, expert predictions are better when the forecasting environment provides regular, clear feedback and there are repeated opportunities to learn (Kahneman & Klein, 2009; Shanteau, 1992). Examples include meteorologists, professional bridge players, and bookmakers at the racetrack, all of whom are well-calibrated in their own domains.’ (Tetlock, How generalizable is good judgment?, 2017.)

In another 2017 piece Tetlock explored the studies further. In the 1920s researchers built simple models based on expert assessments of 500 ears of corn and the price they would fetch in the market. They found that ‘to everyone’s surprise, the models that mimicked the judges’ strategies nearly always performed better than the judges themselves’ (Tetlock, cf. ‘What Is in the Corn Judge’s Mind?’, Journal of American Society for Agronomy, 1923). Banks found the same when they introduced models for credit decisions.

‘In other fields, from predicting the performance of newly hired salespeople to the bankruptcy risks of companies to the life expectancies of terminally ill cancer patients, the experience has been essentially the same. Even though experts usually possess deep knowledge, they often do not make good predictions

‘When humans make predictions, wisdom gets mixed with “random noise.”… Bootstrapping, which incorporates expert judgment into a decision-making model, eliminates such inconsistencies while preserving the expert’s insights. But this does not occur when human judgment is employed on its own…

‘In fields ranging from medicine to finance, scores of studies have shown that replacing experts with models of experts produces superior judgments. In most cases, the bootstrapping model performed better than experts on their own. Nonetheless, bootstrapping models tend to be rather rudimentary in that human experts are usually needed to identify the factors that matter most in making predictions. Humans are also instrumental in assigning scores to the predictor variables (such as judging the strength of recommendation letters for college applications or the overall health of patients in medical cases). What’s more, humans are good at spotting when the model is getting out of date and needs updating…

‘Human experts typically provide signal, noise, and bias in unknown proportions, which makes it difficult to disentangle these three components in field settings. Whether humans or computers have the upper hand depends on many factors, including whether the tasks being undertaken are familiar or unique. When tasks are familiar and much data is available, computers will likely beat humans by being data-driven and highly consistent from one case to the next. But when tasks are unique (where creativity may matter more) and when data overload is not a problem for humans, humans will likely have an advantage…

‘One might think that humans have an advantage over models in understanding dynamically complex domains, with feedback loops, delays, and instability. But psychologists have examined how people learn about complex relationships in simulated dynamic environments (for example, a computer game modeling an airline’s strategic decisions or those of an electronics company managing a new product). Even after receiving extensive feedback after each round of play, the human subjects improved only slowly over time and failed to beat simple computer models. This raises questions about how much human expertise is desirable when building models for complex dynamic environments. The best way to find out is to compare how well humans and models do in specific domains and perhaps develop hybrid models that integrate different approaches.’ (Tetlock)
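
‘Bootstrapping’ here means Goldberg-style judgment modelling, not statistical resampling. A minimal sketch under invented assumptions (the cues and ratings are made up): fit a simple linear model to an expert’s own past judgments, then let the model judge new cases with the expert’s average policy but none of the case-to-case inconsistency:

```python
import numpy as np

# Hypothetical past cases: the cue values an expert saw and the
# ratings the expert gave (all numbers are illustrative only).
cues = np.array([
    [7, 2, 5],
    [3, 8, 4],
    [9, 1, 6],
    [4, 5, 5],
    [6, 3, 8],
], dtype=float)                    # e.g. exam score, interview, references
expert_ratings = np.array([8.0, 5.0, 9.0, 6.0, 7.5])

# Fit a linear model OF THE EXPERT: weights that best reproduce the
# expert's own past judgments (least squares, with an intercept).
X = np.column_stack([cues, np.ones(len(cues))])
weights, *_ = np.linalg.lstsq(X, expert_ratings, rcond=None)

# The model now applies the expert's average policy consistently.
new_case = np.array([5.0, 6.0, 7.0, 1.0])  # final 1.0 = intercept term
print(new_case @ weights)
```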

Kahneman also recently published new work relevant to this.

‘Research has confirmed that in many tasks, experts’ decisions are highly variable: valuing stocks, appraising real estate, sentencing criminals, evaluating job performance, auditing financial statements, and more. The unavoidable conclusion is that professionals often make decisions that deviate significantly from those of their peers, from their own prior decisions, and from rules that they themselves claim to follow.’

In general organisations spend almost no effort figuring out how noisy the predictions made by senior staff are and how much this costs. Kahneman has done some ‘noise audits’ and shown companies that management make MUCH more variable predictions than people realise.

‘What prevents companies from recognizing that the judgments of their employees are noisy? The answer lies in two familiar phenomena: Experienced professionals tend to have high confidence in the accuracy of their own judgments, and they also have high regard for their colleagues’ intelligence. This combination inevitably leads to an overestimation of agreement. When asked about what their colleagues would say, professionals expect others’ judgments to be much closer to their own than they actually are. Most of the time, of course, experienced professionals are completely unconcerned with what others might think and simply assume that theirs is the best answer. One reason the problem of noise is invisible is that people do not go through life imagining plausible alternatives to every judgment they make.

‘High skill develops in chess and driving through years of practice in a predictable environment, in which actions are followed by feedback that is both immediate and clear. Unfortunately, few professionals operate in such a world. In most jobs people learn to make judgments by hearing managers and colleagues explain and criticize—a much less reliable source of knowledge than learning from one’s mistakes. Long experience on a job always increases people’s confidence in their judgments, but in the absence of rapid feedback, confidence is no guarantee of either accuracy or consensus.’
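
A minimal sketch of what a noise audit measures, with invented numbers: several professionals each judge the same cases, and we quantify how widely their judgments spread around the mean for an identical case:

```python
import statistics

# Hypothetical audit: five professionals each price the same three cases.
# Rows = cases, columns = professionals (figures are illustrative only).
judgments = [
    [9_500, 16_000, 12_000, 8_000, 14_000],
    [405, 610, 540, 700, 480],
    [21_000, 19_500, 34_000, 26_000, 23_500],
]

for i, case in enumerate(judgments, start=1):
    mean = statistics.mean(case)
    spread = statistics.stdev(case)
    # 'Noise' = dispersion of peers judging an identical case,
    # here expressed relative to the mean judgment.
    print(f"case {i}: mean={mean:,.0f}, relative spread={spread / mean:.0%}")
```

This is the gap Kahneman describes: professionals expect their colleagues’ judgments to sit close to their own, and the audit shows otherwise.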

Reviewing the point that Tetlock makes about simple models beating experts in many fields, Kahneman summarises the evidence:

‘People have competed against algorithms in several hundred contests of accuracy over the past 60 years, in tasks ranging from predicting the life expectancy of cancer patients to predicting the success of graduate students. Algorithms were more accurate than human professionals in about half the studies, and approximately tied with the humans in the others. The ties should also count as victories for the algorithms, which are more cost-effective…

‘The common assumption is that algorithms require statistical analysis of large amounts of data. For example, most people we talk to believe that data on thousands of loan applications and their outcomes is needed to develop an equation that predicts commercial loan defaults. Very few know that adequate algorithms can be developed without any outcome data at all — and with input information on only a small number of cases. We call predictive formulas that are built without outcome data “reasoned rules,” because they draw on commonsense reasoning.

‘The construction of a reasoned rule starts with the selection of a few (perhaps six to eight) variables that are incontrovertibly related to the outcome being predicted. If the outcome is loan default, for example, assets and liabilities will surely be included in the list. The next step is to assign these variables equal weight in the prediction formula, setting their sign in the obvious direction (positive for assets, negative for liabilities). The rule can then be constructed by a few simple calculations.

‘The surprising result of much research is that in many contexts reasoned rules are about as accurate as statistical models built with outcome data. Standard statistical models combine a set of predictive variables, which are assigned weights based on their relationship to the predicted outcomes and to one another. In many situations, however, these weights are both statistically unstable and practically unimportant. A simple rule that assigns equal weights to the selected variables is likely to be just as valid. Algorithms that weight variables equally and don’t rely on outcome data have proved successful in personnel selection, election forecasting, predictions about football games, and other applications.

‘The bottom line here is that if you plan to use an algorithm to reduce noise, you need not wait for outcome data. You can reap most of the benefits by using common sense to select variables and the simplest possible rule to combine them…

‘Uncomfortable as people may be with the idea, studies have shown that while humans can provide useful input to formulas, algorithms do better in the role of final decision maker. If the avoidance of errors is the only criterion, managers should be strongly advised to overrule the algorithm only in exceptional circumstances.’
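
A minimal sketch of such a ‘reasoned rule’, using hypothetical variable names and no outcome data: pick a few variables that are incontrovertibly related to the outcome, standardise them so equal weights are meaningful, set each sign by common sense, and sum:

```python
from statistics import mean, stdev

# Hypothetical loan applications; field names and values are illustrative.
applicants = [
    {"assets": 120_000, "liabilities": 80_000, "years_trading": 12},
    {"assets": 45_000, "liabilities": 60_000, "years_trading": 2},
    {"assets": 300_000, "liabilities": 90_000, "years_trading": 20},
]

# Signs chosen by common sense, not fitted to outcome data:
# more assets and experience = safer, more liabilities = riskier.
signs = {"assets": +1, "liabilities": -1, "years_trading": +1}

def zscores(values):
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Standardise each variable, then combine with equal weights.
cols = {k: zscores([a[k] for a in applicants]) for k in signs}
scores = [sum(signs[k] * cols[k][i] for k in signs) for i in range(len(applicants))]
print(scores)  # higher = predicted safer; the rank order is the output
```

No outcomes were used anywhere: the rule is built entirely from common sense about which way each variable points.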

Jim Simons is a mathematician and founder of the world’s most successful ‘quant fund’, Renaissance Technologies. While market prices appear close to random and are therefore extremely hard to predict, they are not quite random and the right models/technology can exploit these small and fleeting opportunities. One of the lessons he learned early was: don’t turn off the model and go with your gut. At Renaissance, they trust models over instincts. The Bridgewater hedge fund led by Ray Dalio is similar. After near destruction early in his career, Dalio turned explicitly towards model building as the basis for decisions, combined with radical attempts to create an internal system that incentivises the optimisation of error-correction. It works.

*

People fail to learn from even the great examples of success and the simplest lessons

One of the most interesting meta-lessons of studying high performance, though, is that simply demonstrating extreme success does NOT lead to much learning. For example:

  • ARPA and PARC created the internet and PC. The PARC research team was an extraordinary collection of about two dozen people who were managed in a very unusual way that created super-productive processes extremely different to normal bureaucracies. XEROX, which owned PARC, had the entire future of the computer industry in its own hands, paid for by its own budgets, yet it simultaneously let Bill Gates and Steve Jobs steal everything and then shut down the research team that did it. And then, as Silicon Valley grew on the back of these efforts, almost nobody, including most of the billionaires who got rich from the dynamics created by ARPA-PARC, studied the nature of the organisation and processes and copied it. Even today, those trying to do edge-of-the-art research in a similar way to PARC right at the heart of the Valley ecosystem are struggling for long-term patient funding. As Alan Kay, one of the PARC team, said, ‘The most interesting thing has been the contrast between appreciation/exploitation of the inventions/contributions [of PARC] versus the almost complete lack of curiosity and interest in the processes that produced them.’ ARPA survived attempts to abolish it in the 1970s but it was significantly changed and is no longer the freewheeling place that it was in the 1960s when it funded the internet. In many ways DARPA’s approach now is explicitly different to the old ARPA (the addition of the ‘D’ was a sign of internal bureaucratic changes).

  • ‘Systems management’ was invented in the 1950s and 1960s (partly based on wartime experience of large complex projects) to deal with the classified ICBM project and Apollo. It put man on the moon; then NASA largely abandoned the approach and reverted to being (relative to 1963-9) a normal bureaucracy. Most of Washington has ignored the lessons ever since — look for example at the collapse of ObamaCare’s rollout, after which Insiders said ‘oh, looks like it was a system failure, wonder how we deal with this’, mostly unaware that America had developed a successful approach to such projects half a century earlier. This is particularly interesting given that China also studied Mueller’s approach to systems management in Apollo and, as we speak, is copying it in projects across China. The EU’s bureaucracy is, like Whitehall, an anti-checklist to high level systems management — i.e they violate almost every principle of effective action.
  • Buffett and Munger are the most successful investment partnership in world history. Every year for half a century they have explained some basic principles, particularly concerning incentives, behind organisational success. Practically no public companies take their advice and all around us in Britain we see vast corporate looting and politicians of all parties failing to act — they don’t even read the Buffett/Munger lessons and think about them. Even when given these lessons to read, they won’t read them (I know this because I’ve tried).

Perhaps you’re thinking — well, learning from these brilliant examples might be intrinsically really hard, much harder than Cummings thinks. I don’t think this is quite right. Why? Partly because millions of well-educated and normally-ethical people don’t learn even from much simpler things.

I will explore this separately soon but I’ll give just one example. The world of healthcare unnecessarily kills and injures people on a vast scale. Two aspects of this are 1) a deep resistance to learning from the success of very simple tools like checklists and 2) a deep resistance to facing the fact that most medical experts do not understand statistics properly and their routine misjudgements cause vast suffering, plus warped incentives encourage widespread lies about statistics and irrational management. E.g people are constantly told things like ‘you’ve tested positive for X, therefore you have X’ and they then kill themselves. We KNOW how to practically eliminate certain sorts of medical injury/death. We KNOW how to teach and communicate statistics better. (Cf. Professor Gigerenzer for details. He was the motivation for including things like conditional probabilities in the new National Curriculum.) These are MUCH simpler than building ICBMs, putting man on the moon, creating the internet and PC, or being great investors. Yet our societies don’t do them.
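
The statistical point is elementary conditional probability of the kind Gigerenzer teaches with ‘natural frequencies’. A worked example with illustrative numbers — prevalence 1 in 1,000, sensitivity 99%, false-positive rate 5%: of 100,000 people tested, about 99 true positives but roughly 4,995 false positives, so a positive result means about a 2% chance of having the disease, not certainty:

```latex
P(\text{disease} \mid \text{positive})
  = \frac{0.99 \times 0.001}{0.99 \times 0.001 + 0.05 \times 0.999}
  \approx \frac{99}{99 + 4995}
  \approx 2\%
```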

Why?

Because we do not incentivise error-correction and predictive accuracy. People are not incentivised to consider the cost of their noisy judgements. Where incentives and culture are changed, performance magically changes. It is the nature of the systems, not (mostly) the nature of the people, that is the crucial ingredient in learning from proven simple success. In healthcare, as in government generally, people are incentivised to engage in wasteful/dangerous signalling to a terrifying degree — not rigorous thinking and not solving problems.

I have experienced the problem with checklists first hand in the Department for Education when trying to get the social worker bureaucracy to think about checklists in the context of avoiding child killings like Baby P. Professionals tend to see them as undermining their status and bureaucracies fight against learning, even when some great officials try really hard (as some in the DfE did, such as Pamela Dow and Victoria Woodcock). ‘Social work is not the same as an airline, Dominic’. No shit. Airlines can handle millions of people without killing one of them because they align incentives with predictive accuracy and error-correction.

Some appalling killings are inevitable but the social work bureaucracy will keep allowing unnecessary killings because they will not align incentives with error-correction. Undoing flawed incentives threatens the system so they’ll keep killing children instead — and they’re not particularly bad people, they’re normal people in a normal bureaucracy. The pilot dies with the passengers. The ‘CEO’ on over £150,000 a year presiding over another unnecessary death despite constantly increasing taxpayers’ money pouring in? Issue a statement that ‘this must never happen again’, tell the lawyers to redact embarrassing cockups on the grounds of ‘protecting someone’s anonymity’ (the ECHR is a great tool to cover up death by incompetence), fuck off to the golf course, and wait for the media circus to move on.

Why do so many things go wrong? Because usually nobody is incentivised to work relentlessly to suppress entropy, never mind come up with something new.

*

We can see some reasonably clear conclusions from decades of study on expertise and prediction in many fields.

  • Some fields are like extreme sport or physics: genuine expertise emerges because of fast effective feedback on errors.
  • Abstracting human wisdom into models often works better than relying on human experts as models are often more consistent and less noisy.
  • Models are also often cheaper and simpler to use.
  • Models do not have to be complex to be highly effective — quite the opposite, often simpler models outperform more sophisticated and expensive ones.
  • In many fields (which I’ve explored before but won’t go into again here) low tech very simple checklists have been extremely effective: e.g flying aircraft or surgery.
  • Successful individuals like Warren Buffett and Ray Dalio also create cognitive checklists to trap and correct normal cognitive biases that degrade individual and team performance.
  • Fields make progress towards genuine expertise when they make a transition from stories (e.g Icarus) and authority (e.g ‘witch doctor’) to quantitative models (e.g modern aircraft) and evidence/experiment (e.g some parts of modern medicine/surgery).
  • In the intellectual realm, maths and physics are fields dominated by genuine expertise and provide a useful benchmark to compare others against. They are also hierarchical. Social sciences have little in common with this.
  • Even when we have great examples of learning and progress, and we can see the principles behind them are relatively simple and do not require high intelligence to understand, they are so psychologically hard and run so counter to the dynamics of normal big organisations, that almost nobody learns from them. Extreme success is ‘easy to learn from’ in one sense and ‘the hardest thing in the world to learn from’ in another sense.
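As a concrete illustration of the ‘simple models’ bullet above, here is a minimal simulation (my toy numbers, not from any particular study) in the spirit of Dawes’ ‘improper linear models’: a judge who knows the right cues but applies them inconsistently is out-predicted by a crude equal-weight sum of the same cues.

```python
# A noisy 'expert' vs an 'improper' equal-weight model, Dawes-style.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
cues = rng.normal(size=(n, 3))                  # three genuinely valid cues
true_w = np.array([0.5, 0.3, 0.2])
outcome = cues @ true_w + rng.normal(scale=0.5, size=n)

# 'Expert': right weights on average, but noisy from case to case.
noisy_w = true_w + rng.normal(scale=0.4, size=(n, 3))
expert_pred = (cues * noisy_w).sum(axis=1)

# Improper model: just add the cues up with equal weights.
model_pred = cues.sum(axis=1)

for name, pred in [("noisy expert", expert_pred), ("equal-weight model", model_pred)]:
    print(name, round(float(np.corrcoef(pred, outcome)[0, 1]), 2))
# noisy expert ~0.52, equal-weight model ~0.73
```

The judge loses not through ignorance but through noise: the model applies the same weights every single time, which is exactly what checklists enforce.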

It is fascinating how remarkably little interest there is in the world of politics/government, and social sciences analysing politics/government, about all this evidence. This is partly because politics/government is an anti-learning and anti-expertise field, partly because the social sciences are swamped by what Feynman called ‘cargo cult science’ with very noisy predictions, little good feedback and learning, and a lot of chippiness at criticism whether it’s from statistics experts or the ‘ignorant masses’. Fields like ‘education research’ and ‘political science’ are particularly dreadful and packed with charlatans but much of economics is not much better (much pro- and anti-Brexit mainstream economics is classic ‘cargo cult’).

I have found there is overwhelmingly more interest in high technology circles than in government circles, but in high technology circles there is also a lot of incredulity and naivety about how government works — many assume politicians are trying and failing to achieve high performance and don’t realise that in fact nobody is actually trying. This illusion extends to many well-connected businessmen who just can’t internalise the reality of the apex of power. I find that uneducated people on 20k living hundreds of miles from SW1 generally have a more accurate picture of daily No10 work than extremely well-connected billionaires.

This is all sobering and is another reason to be pessimistic about the chances of changing government from ‘normal’ to ‘high performance’ — but, pessimism of the intellect, optimism of the will…

If you are in Whitehall now watching the Brexit farce or abroad looking at similar, you will see from page 26 HERE a checklist for how to manage complex government projects at world class levels (if you find this interesting then read the whole paper). I will elaborate on this. I am also thinking about a project to look at the intersection of (roughly) five fields in order to make large improvements in the quality of people, ideas, tools, and institutions that determine political/government decisions and performance:

  • the science of prediction across different fields (e.g early warning systems, the Tetlock/IARPA project showing dramatic performance improvements),
  • what we know about high performance (individual/team/organisation) in different fields (e.g China’s application of ‘systems management’ to government),
  • technology and tools (e.g Bret Victor’s work, Michael Nielsen’s work on cognitive technologies, work on human-AI ‘minotaur’ teams),
  • political/government decision making affecting millions of people and trillions of dollars (e.g WMD, health), and
  • communication (e.g crisis management, applied psychology).

Progress requires attacking the ‘system of systems’ problem at the right ‘level’. Attacking the problems directly — let’s improve policy X and Y, let’s swap ‘incompetent’ A for ‘competent’ B — cannot touch the core problems, particularly the hardest meta-problem: that government systems bitterly fight improvement. The explicit surface problems of politics and government are best approached by surrounding them with a more general focus on applying abstract principles of effective action; attack at the right level and specific solutions automatically ‘pop out’ of the system. One of the most powerful simplicities in all conflict (almost always unrecognised) is: ‘winning without fighting is the highest form of war’. If we approach the problem of government performance at the right level of generality then we have a chance to solve specific problems ‘without fighting’ — or, rather, with far less fighting, and the fighting will be more fruitful.

This is not a theoretical argument. If you look carefully at ancient texts and modern case studies, you see that applying a small number of very simple, powerful, but largely unrecognised principles (that are very hard for organisations to operationalise) can produce extremely surprising results.

How to jump from the Idea to Reality? More soon…


Ps. Just as I was about to hit publish on this, the DCMS Select Committee released their report on me. The sentence about the Singapore golf club at the top comes to mind.

On the referendum #23, a year after victory: ‘a change of perspective is worth 80 IQ points’ & ‘how to capture the heavens’

‘Just like all British governments, they will act more or less in a hand to mouth way on the spur of the moment, but they will not think out and adopt a steady policy.’ Earl Cromer, 1896.

‘Fascinating that the same problems recur time after time, in almost every program, and that the management of the program, whether it happened to be government or industry, continues to avoid reality.’ George Mueller, pioneer of systems management and head of the Apollo programme to put man on the moon.

Traditional cultures, those that all humans lived in until quite recently and which still survive in pockets, don’t realise that they are living inside a particular perspective. They think that what they see is ‘reality’. It is, obviously, not their fault. It is not because they are stupid. It is a historical accident that they did/do not have access to mental models that help more accurate thinking about reality.

Westminster and the other political cultures dotted around the world are similar to these traditional cultures. They think they are living in ‘reality’. The MPs and pundits get up, read each other, tweet at each other, give speeches, send press releases, have dinner, attack, fuck or fight each other, do the same tomorrow and think ‘this is reality’. Like traditional cultures they are wrong. They are living inside a particular perspective that enormously distorts reality.

They are trapped in thinking about today and their careers. They are trapped in thinking about incremental improvements. Almost nobody has ever been part of a high performance team responsible for a complex project. The speciality is a hot take to explain post facto what one cannot predict. They mostly don’t know what they don’t know. They don’t understand the decentralised information processing that allows markets to enable complex coordination. They don’t understand how scientific research works and they don’t value it. Their daily activity is massively constrained by the party and state bureaucracies that incentivise behaviour very different to what humanity needs to create long-term value. As Michael Nielsen (author of Reinventing Discovery) writes:

‘[M]uch of our intellectual elite who think they have “the solutions” have actually cut themselves off from understanding the basis for much of the most important human progress.’

Unlike traditional cultures, our modern political cultures don’t have the excuse of our hunter-gatherer ancestors. We could do better. But it is very very hard to escape the core imperatives that make big bureaucracies — public companies as well as state bureaucracies — so bad at learning. Warren Buffett explained decades ago how institutions actively fight against learning and fight to stay in a closed and vicious feedback loop:

‘My most surprising discovery: the overwhelming importance in business of an unseen force that we might call “the institutional imperative”. In business school, I was given no hint of the imperative’s existence and I did not intuitively understand it when I entered the business world. I thought then that decent, intelligent, and experienced managers would automatically make rational business decisions. But I learned the hard way that isn’t so. Instead, rationality frequently wilts when the institutional imperative comes into play.

‘For example, 1) As if governed by Newton’s First Law, any institution will resist any change in its current direction. 2) … Corporate projects will materialise to soak up available funds. 3) Any business craving of the leader, however foolish, will quickly be supported by … his troops. 4) The behaviour of peer companies … will be mindlessly imitated.’

Almost nobody really learns from the world’s most successful investor about investing and how to run a successful business with good corporate governance. (People read what he writes but almost no investors choose to operate long-term like him, I think it is still true that not a single public company has copied his innovations with corporate governance like ‘no pay for company directors’, and governments have consistently rejected his and Munger’s advice about controlling the looting of public companies by management.) Almost nobody really learns how to do things better from the experience of dealing with this ‘institutional imperative’. We fail over and over again in the same way, trusting in institutions that are programmed to fail.

It is very very hard for humans to lift our eyes from today and to go out into the future and think about what could be done to bring the future back to the present. Like ants crawling around on the leaf, we political people only know our leaf.

Science has shown us a different way. Newton looked up from his leaf, looked far away from today, and created a new perspective — a new model of reality. It took an extreme genius to discover something like calculus but once discovered billions of people who are far from being geniuses can use this new perspective. Science advances by turning new ideas into standard ideas so each generation builds on the last.

Politics does the equivalent of constantly trying to reinvent children’s arithmetic and botching it. It does not build reliable foundations of knowledge. Archimedes is no longer cutting edge. Thucydides and Sun Tzu are still cutting edge. Even though Tetlock and others have shown how to start making similar progress with politics, our political cultures fiercely resist learning and fight ferociously to stay in closed and failing feedback loops.

In many ways our political culture has regressed as it has become more and more audio-visual and less and less literate. (Only 31% of US college graduates can read at a basic level. I’d guess it’s similar here. See end.) I’ve experimented with the way Jeff Bezos runs meetings at Amazon: i.e start the meeting by giving people a 5-10 page memo to read. Impossible in Westminster: nobody will sit and read like that! Officials have tried and failed for a year to get senior ministers to engage with complex written material about the EU negotiations. TV news dominates politics and is extremely low-bandwidth: it contains a few hundred words and rarely uses graphics properly. Evan Davis illustrates a comment about ‘going down the plughole’ with a picture of water down a plughole and Nick Robinson illustrates a comment about ‘the economy taking off’ with a picture of a plane taking off. The constant flow of bullshit from the likes of Robert Peston and Jon Snow dominates the medium because competition has been impossible until recently. BUT, although technology is making these charlatans less relevant (good), it also creates new problems and will not necessarily improve the culture.

Watching political news makes you dumber — switch it off and read books! If you work in it, either QUIT or go on holiday and come back determined to subvert it. How? Start with a previous blog which has some ideas, like tracking properly which people have a record of getting things right and wrong. Every editor I’ve suggested this to winces and says ‘impossible’. Insiders fear accountability and competition.
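‘Tracking properly’ is a solved technical problem: Tetlock’s forecasting tournaments use the Brier score, the mean squared error of probabilistic forecasts, to turn a pundit’s record into a number. A minimal sketch (the pundits and numbers here are hypothetical):

```python
# Brier score: 0 is perfect, lower is better; confident wrong calls are
# penalised hardest.
def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1, 1, 0]            # what actually happened
pundit_a = [1.0, 1.0, 0.0, 1.0, 0.0]  # always certain, twice dead wrong
pundit_b = [0.8, 0.3, 0.6, 0.7, 0.2]  # hedged but roughly calibrated
print(brier(pundit_a, outcomes))      # 0.4
print(brier(pundit_b, outcomes))      # ~0.084
```

Run this over a few years of published predictions and the ‘impossible’ accountability editors wince at becomes a single league table.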

Today, the anniversary of the referendum, is a good day to forget the babble in the bubble and think about lessons from another project that changed the world, the famous ARPA/PARC team of the 1960s and 1970s.

*

ARPA/PARC and ‘capturing the heavens’: The best way to predict the future is to invent it

The panic over Sputnik brought many good things such as a huge increase in science funding. America also created the Advanced Research Projects Agency (ARPA, which later added ‘Defense’ and became DARPA). Its job was to fund high risk / high payoff technology development. In the 1960s and 1970s, a combination of unusual people and unusually wise funding from ARPA created a community that in turn invented the internet, or ‘the intergalactic network’ as Licklider originally called it, and the personal computer. One of the elements of this community was PARC, a research centre working for Xerox. As Bill Gates said, he and Steve Jobs essentially broke into PARC, stole their ideas, and created Microsoft and Apple.

The ARPA/PARC project has created over 35 TRILLION DOLLARS of value for society and counting.

The whole story is fascinating in many ways. I won’t go into the technological aspects. I just want to say something about the process.

What does a process that produces ideas that change the world look like?

One of the central figures was Alan Kay. One of the most interesting things about the project is that not only has almost nobody tried to repeat this sort of research, but the business world has even gone out of its way to spread misinformation about it because it was seen as so threatening to business-as-usual.

I will sketch a few lessons from one of Kay’s pieces but I urge you to read the whole thing.

‘This is what I call “The power of the context” or “Point of view is worth 80 IQ points”. Science and engineering themselves are famous examples, but there are even more striking processes within these large disciplines. One of the greatest works of art from that fruitful period of ARPA/PARC research in the 60s and 70s was the almost invisible context and community that catalysed so many researchers to be incredibly better dreamers and thinkers. That it was a great work of art is confirmed by the world-changing results that appeared so swiftly, and almost easily. That it was almost invisible, in spite of its tremendous success, is revealed by the disheartening fact today that, as far as I’m aware, no governments and no companies do edge-of-the-art research using these principles.’

‘[W]hen I think of ARPA/PARC, I think first of good will, even before brilliant people… Good will and great interest in graduate students as “world-class researchers who didn’t have PhDs yet” was the general rule across the ARPA community.

‘[I]t is no exaggeration to say that ARPA/PARC had “visions rather than goals” and “funded people, not projects”. The vision was “interactive computing as a complementary intellectual partner for people pervasively networked world-wide”. By not trying to derive specific goals from this at the funding side, ARPA/PARC was able to fund rather different and sometimes opposing points of view.

‘The pursuit of Art always sets off plans and goals, but plans and goals don’t always give rise to Art. If “visions not goals” opens the heavens, it is important to find artistic people to conceive the projects.

‘Thus the “people not projects” principle was the other cornerstone of ARPA/PARC’s success. Because of the normal distribution of talents and drive in the world, a depressingly large percentage of organizational processes have been designed to deal with people of moderate ability, motivation, and trust. We can easily see this in most walks of life today, but also astoundingly in corporate, university, and government research. ARPA/PARC had two main thresholds: self-motivation and ability. They cultivated people who “had to do, paid or not” and “whose doings were likely to be highly interesting and important”. Thus conventional oversight was not only not needed, but was not really possible. “Peer review” wasn’t easily done even with actual peers. The situation was “out of control”, yet extremely productive and not at all anarchic.

‘”Out of control” because artists have to do what they have to do. “Extremely productive” because a great vision acts like a magnetic field from the future that aligns all the little iron particle artists to point to “North” without having to see it. They then make their own paths to the future. Xerox often was shocked at the PARC process and declared it out of control, but they didn’t understand that the context was so powerful and compelling and the good will so abundant, that the artists worked happily at their version of the vision. The results were an enormous collection of breakthroughs.

‘Our game is more like art and sports than accounting, in that high percentages of failure are quite OK as long as enough larger processes succeed… [I]n most processes today — and sadly in most important areas of technology research — the administrators seem to prefer to be completely in control of mediocre processes to being “out of control” with superproductive processes. They are trying to “avoid failure” rather than trying to “capture the heavens”.

‘All of these principles came together a little over 30 years ago to eventually give rise to 1500 Altos, Ethernetworked to: each other, Laserprinters, file servers and the ARPAnet, distributed to many kinds of end-users to be heavily used in real situations. This anticipated the commercial availability of this genre by 10-15 years. The best way to predict the future is to invent it.

‘[W]e should realize that many of the most important ARPA/PARC ideas haven’t yet been adopted by the mainstream. For example, it is amazing to me that most of Doug Engelbart’s big ideas about “augmenting the collective intelligence of groups working together” have still not taken hold in commercial systems. What looked like a real revolution twice for end-users, first with spreadsheets and then with Hypercard, didn’t evolve into what will be commonplace 25 years from now, even though it could have. Most things done by most people today are still “automating paper, records and film” rather than “simulating the future”. More discouraging is that most computing is still aimed at adults in business, and that aimed at nonbusiness and children is mainly for entertainment and apes the worst of television. We see almost no use in education of what is great and unique about computer modeling and computer thinking. These are not technological problems but a lack of perspective. Must we hope that the open-source software movements will put things right?

‘The ARPA/PARC history shows that a combination of vision, a modest amount of funding, with a felicitous context and process can almost magically give rise to new technologies that not only amplify civilization, but also produce tremendous wealth for the society. Isn’t it time to do this again by Reason, even with no Cold War to use as an excuse? How about helping children of the world grow up to think much better than most adults do today? This would truly create “The Power of the Context”.’

Note how this story runs contrary to how free market think tanks and pundits describe technological development. The impetus for most of this development came from government funding, not markets.

Also note that every attempt since the 1950s to copy ARPA and JASON (the semi-classified group that partly gave ARPA its direction) in the UK has been blocked by Whitehall. The latest attempt was in 2014 when the Cabinet Office swatted aside the idea. Hilariously, its argument was ‘DARPA has had a lot of failures’, thus demonstrating extreme ignorance of the basic idea — the whole point is that you must have failures, and if you don’t have lots of failures then you are failing!

People later claimed that while PARC may have changed the world it never made any money for Xerox. This is ‘absolute bullshit’ (Kay). It made billions from the laser printer alone, and overall Xerox made 250 times what they invested in PARC before they went bust. In 1983 they fired Bob Taylor, the manager of PARC and the guy who made it all happen.

‘They hated [Taylor] for the very reason that most companies hate people who are doing something different, because it makes middle and upper management extremely uncomfortable. The last thing they want to do is make trillions, they want to make a few millions in a comfortable way’ (Kay).

Someone finally listened to Kay recently. ‘YC Research’, the research arm of the world’s most successful (by far) technology incubator, is starting to fund people in this way. I am not aware of any similar UK projects though I know that a small network of people are thinking again about how something like this could be done here. If you can help them, take a risk and help them! Someone talk to science minister Jo Johnson but be prepared for the Treasury’s usual ignorant bullshit — ‘what are we buying for our money, and how can we put in place appropriate oversight and compliance?’ they will say!

Why is this relevant to the referendum?

As we ponder the future of the UK-EU relationship shaped amid the farce of modern Whitehall, we should think hard about the ARPA/PARC example: how a small group of people can make a huge breakthrough with little money but the right structure, the right ways of thinking, and the right motives.

Those of us outside the political system thinking ‘we know we can do so much better than this but HOW can we break through the bullshit?’ need to change our perspective and gain 80 IQ points.

This real picture is a metaphor for the political culture: ad hoc solutions that are either bad or don’t scale.

[Image: photograph of a chaotic tangle of wires]

ARPA said ‘Let’s get rid of all the wires’. How do we ‘get rid of all the wires’ and build something different that breaks open the closed and failing political cultures? Winning the referendum was just one step that helps clear away dead wood but we now need to build new things.

The ARPA vision that aligned the artists ‘like little iron filings’ was:

‘Computers are destined to become interactive intellectual amplifiers for everyone in the world universally networked worldwide’ (Licklider).

We need a motivating vision aimed not at tomorrow but at changing the basic wiring of the whole system, a vision that can align ‘the little iron filings’, and then start building for the long-term.

I will go into what I think this vision could be and how to do it another day. I think it is possible to create something new that could scale very fast and enable us to do politics and government extremely differently, as different to today as the internet and PC were to the post-war mainframes. This would enable us to build huge long-term value for humanity in a relatively short time (less than 20 years). To create it we need a process as well suited to the goal as the ARPA/PARC project was and incorporating many of its principles.

We must try to escape the current system with its periodic meltdowns and international crises. These crises move 500-1,000 times faster than those of summer 1914. Our destructive potential is at least a million-fold greater than it was in 1914. Yet we have essentially the same hierarchical command-and-control decision-making systems in place now that could not even cope with 1914 technology and pace. We have dodged nuclear wars by fluke because individuals made snap judgements in minutes. Nobody who reads the history of these episodes can think that this is viable long-term, and we will soon have another wave of innovation to worry about with autonomous robots and genetic engineering. Technology gives us no option but to try to overcome evolved instincts like destroying out-group competitors.

In a previous blog I outlined how the ‘systems management’ approach used to put man on the moon provides principles for a new approach.

*

Ironically, one of the very few people in politics who understood the sort of thinking needed was … Jean Monnet, the architect of the EEC/EU! Monnet understood how to step back from today and build institutions. He worked operationally to prepare the future:

‘If there was stiff competition round the centres of power, there was practically none in the area where I wanted to work – preparing the future.’

Monnet was one of the few people in modern politics who really deserve the label ‘genius’. The story of how he wangled the creation of his institutions through the daily chaos of post-war politics is a lesson to anybody who wants to get things done.

But the institutions he created are in many ways the opposite of what the world needs. Their core operating principle is perpetual centralisation of power in the hands of an all-powerful bureaucracy (Commission) and Court (ECJ). Nothing that works well in the world works like this!

Thanks to the prominence of Farage the dominant story among educated people is that those who got us out of the EU want to take us back to the pre-1914 era of hostile competing nation states. Nothing could be further from the truth. The key people in Vote Leave wanted and want not just what is best for Britain but what is best for all humanity. We want more international cooperation, not less. The problem with the EU is not that it is about international cooperation but that it is so bad at it and actually undermines it.

Britain leaving forces those with power to ask: how can all European countries trade freely and cooperate without subscribing to Monnet’s bureaucratic centralism? This will help Europe in the long-term. To those who favour this bureaucratic centralism and uniformity, reflect on the different trajectories of Europe and China post-Renaissance. In Europe, regulatory competition (so Columbus could chase funding in Spain after rejection in Portugal) brought immense gains. In China, centrally directed uniformity led to centuries of stagnation. America’s model of competitive federalism created by the founding fathers has been a far more effective engine of civilisation, growth, and new knowledge than the Monnet-Delors Single Market model.

If Britain were to focus on science and education with huge resources and a new-found seriousness, then this regulatory diversity would help not just Britain but all Europe and the global science community. We could make Britain the best place in the world to be for those who can invent the future. Like Alan Kay and his colleagues, we could create whole new industries. We could call Jeff Bezos and say, ‘Ok Jeff, you want a permanent international manned moon base, let’s talk about who does what, but not with that old rocket technology.’ No country on earth funds science as well as we already know how it could be done — that is something for Britain to do that would create real long-term value for humanity, instead of the ‘punching above our weight’ and ‘special relationship’ bullshit that passes for strategy in London. How we change our domestic institutions is within our power and will have much much greater influence on our long-term future than whatever deal is botched together with Brussels. We have the resources. But can we break the system open? If we don’t then we’re likely to go down the path we were already going down inside the EU, like the deluded Norma Desmond in Sunset Boulevard claiming ‘I am big, it’s the pictures that got small.’

*

Vote Leave and ‘good will’

Although Vote Leave was enmeshed in a sort of collective lunacy we managed, barely, to fend it off from the inner workings of the campaign. Much of my job (sadly) was just trying to maintain a cordon around the core team so they could deliver the campaign with as little disruption as possible. We managed this because among the core people we had great good will. The stories of the campaign focus on the lunacy, but the people who really made it work remember the goodwill.

A year ago tonight I was sitting alone in a room thinking ‘we’ve won, now…’ when the walls started rumbling. At first I couldn’t make it out; then, as Tim Shipman tells the story in his definitive book on the campaign, I heard ‘Dom, Dom, DOM’ — the team had declared victory. I went next door…

Thanks to everybody who sacrificed something. As I said that night and as I said in my long blog on the campaign, I’ve been given credit I don’t deserve and which rightly belongs to others — Cleo Watson, Richard ‘Ricardo’ Howell, Brother Starkie, Oliver Lewis, Lord Suart et al. Now, let’s think about what should come next…


Watch Alan Kay explain how to invent the future HERE and HERE.


Ps. Kay also points out that the real computer revolution won’t happen until people fulfil the original vision of enabling children to use this powerful way of thinking:

‘The real printing revolution was a qualitative change in thought and argument that lagged the hardware inventions by almost two centuries. The special quality of computers is their ability to rapidly simulate arbitrary descriptions, and the real computer revolution won’t happen until children can learn to read, write, argue and think in this powerful new way. We should all try to make this happen much sooner than 200 or even 20 more years!’

Almost nobody in education policy is aware of the educational context for the ARPA/PARC project which also speaks volumes about the abysmal field of ‘education research/policy’.

* Re the US literacy statistic, cf. A First Look at the Literacy of America’s Adults in the 21st Century, National Assessment of Adult Literacy, U.S. Dept of Education, NCES 2006.


Complexity and Prediction Part V: The crisis of mathematical paradoxes, Gödel, Turing and the basis of computing

Before the referendum I started a series of blogs and notes exploring the themes of complexity and prediction. This was part of a project with two main aims: first, to sketch a new approach to education and training in general but particularly for those who go on to make important decisions in political institutions and, second, to suggest a new approach to political priorities in which progress with education and science becomes a central focus for the British state. The two are entangled: progress with each will hopefully encourage progress with the other.

I was working on this paper when I suddenly got sidetracked by the referendum and have just looked at it again for the first time in about two years.

The paper concerns a fascinating episode in the history of ideas that saw the most esoteric and impractical field, mathematical logic, spawn a revolutionary technology, the modern computer. NB. A great lesson for science funders: it is a great mistake to cut funding for theory and assume that you’ll get more bang for your buck from ‘applications’.

Apart from its inherent fascination, knowing something of the history is helpful for anybody interested in the state-of-the-art in predicting complex systems, which involves the intersection between different fields including: maths, computer science, economics, cognitive science, and artificial intelligence. The books on it are either technical, and therefore inaccessible to ~100% of the population, or non-chronological, so it is impossible for someone like me to get a clear picture of how the story unfolded.

Further, there are few if any very deep ideas in maths or science that are so misunderstood and abused as Gödel’s results. As Alan Sokal, author of the brilliant hoax exposing post-modernist academics, said, ‘Gödel’s theorem is an inexhaustible source of intellectual abuses.’ I have tried to make clear some of these using the best book available by Franzen, which explains why almost everything you read about it is wrong. If even Stephen Hawking can cock it up, the rest of us should be particularly careful.

I sketched these notes as I tried to pull together the story from many different books. I hope they are useful particularly for some 15-25 year-olds who like chronological accounts of ideas. I tried to put the notes together in the way that I wish I had been able to read at that age. I tried hard to eliminate errors but they are inevitable given how far I am from being competent to write about such things. I wish someone who is competent would do it properly. It would take time I don’t now have to go through and finish it the way I originally intended, so I will just post it as it was two years ago when I got calls saying ‘about this referendum…’

The only change I think I have made since May 2015 is to shove in some notes from a great essay later that year by the man who wrote the textbook on quantum computers, Michael Nielsen, which would be useful to read as an introduction or instead, HERE.

As always on this blog there is not a single original thought and any value comes from the time I have spent condensing the work of others to save you the time. Please leave corrections in comments.

The PDF of the paper is HERE (amended since first publication to correct an error, see Comments).


‘Gödel’s achievement in modern logic is singular and monumental – indeed it is more than a monument, it is a landmark which will remain visible far in space and time.’ John von Neumann.

‘Einstein had often told me that in the late years of his life he has continually sought Gödel’s company in order to have discussions with him. Once he said to me that his own work no longer meant much, that he came to the Institute merely in order to have the privilege of walking home with Gödel.’ Oskar Morgenstern (co-author with von Neumann of the first major work on Game Theory).

‘The world is rational’, Kurt Gödel.

Unrecognised simplicities of effective action #1: expertise and a quadrillion dollar business

‘The combination of physics and politics could render the surface of the earth uninhabitable.’ John von Neumann.

Introduction

This series of blogs considers:

  • the difference between fields with genuine expertise, such as fighting and physics, and fields dominated by bogus expertise, such as politics and economic forecasting;
  • the big big problem we face – the world is ‘undersized and underorganised’ because of a collision between four forces: 1) our technological civilisation is inherently fragile and vulnerable to shocks, 2) the knowledge it generates is inherently dangerous, 3) our evolved instincts predispose us to aggression and misunderstanding, and 4) there is a profound mismatch between the scale and speed of destruction our knowledge can cause and the quality of individual and institutional decision-making in ‘mission critical’ institutions – our institutions are similar to those that failed so spectacularly in summer 1914 yet they face crises moving at least ~10^3 times faster and involving ~10^6 times more destructive power able to kill ~10^10 people;
  • what classic texts and case studies suggest about the unrecognised simplicities of effective action to improve the selection, education, training, and management of vital decision-makers to improve dramatically, reliably, and quantifiably the quality of individual and institutional decisions (particularly a) the ability to make accurate predictions and b) the quality of feedback);
  • how we can change incentives to aim a much bigger fraction of the most able people at the most important problems;
  • what tools and technologies can help decision-makers cope with complexity.

[I’ve tweaked a couple of things in response to this blog by physicist Steve Hsu.]

*

Summary of the big big problem

The investor Peter Thiel (founder of PayPal and Palantir, early investor in Facebook) asks people in job interviews: what billion (10^9) dollar business is nobody building? The most successful investor in world history, Warren Buffett, illustrated what a quadrillion (10^15) dollar business might look like in his 50th anniversary letter to Berkshire Hathaway investors.

‘There is, however, one clear, present and enduring danger to Berkshire against which Charlie and I are powerless. That threat to Berkshire is also the major threat our citizenry faces: a “successful” … cyber, biological, nuclear or chemical attack on the United States… The probability of such mass destruction in any given year is likely very small… Nevertheless, what’s a small probability in a short period approaches certainty in the longer run. (If there is only one chance in thirty of an event occurring in a given year, the likelihood of it occurring at least once in a century is 96.6%.) The added bad news is that there will forever be people and organizations and perhaps even nations that would like to inflict maximum damage on our country. Their means of doing so have increased exponentially during my lifetime. “Innovation” has its dark side.

‘There is no way for American corporations or their investors to shed this risk. If an event occurs in the U.S. that leads to mass devastation, the value of all equity investments will almost certainly be decimated.

‘No one knows what “the day after” will look like. I think, however, that Einstein’s 1949 appraisal remains apt: “I know not with what weapons World War III will be fought, but World War IV will be fought with sticks and stones.”’
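Buffett’s arithmetic is easy to verify: for an annual probability p, the chance of at least one occurrence in n years is 1 − (1 − p)^n.

```python
# A 1-in-30 annual risk compounds to near-certainty over a century.
p_year = 1 / 30
p_century = 1 - (1 - p_year) ** 100
print(f"{p_century:.1%}")  # 96.6%
```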

Politics is profoundly nonlinear. (I have written a series of blogs about complexity and prediction HERE which are useful background for those interested.) Changing the course of European history via the referendum only involved about 10 crucial people controlling ~£10^7 while its effects over ten years could be on the scale of ~10^8 – 10^9 people and ~£10^12: like many episodes in history the resources put into it are extremely nonlinear in relation to the potential branching histories it creates. Errors dealing with Germany in 1914 and 1939 were costly on the scale of ~100,000,000 (10^8) lives. If we carry on with normal human history – that is, international relations defined as out-groups competing violently – and combine this with modern technology then it is extremely likely that we will have a disaster on the scale of billions (10^9) or even all humans (~10^10). The ultimate disaster would kill about 100 times more people than our failure with Germany. Our destructive power is already much more than 100 times greater than it was then: nuclear weapons increased destructiveness by roughly a factor of a million.

Even if we dodge this particular bullet there are many others lurking. New genetic engineering techniques such as CRISPR allow radical possibilities for re-engineering organisms including humans in ways thought of as science fiction only a decade ago. We will soon be able to remake human nature itself. CRISPR-enabled ‘gene drives’ enable us to make changes to the germ-line of organisms permanent such that changes spread through the entire wild population, including making species extinct on demand. Unlike nuclear weapons such technologies are not complex, expensive, and able to be kept secret for a long time. The world’s leading experts predict that people will be making them cheaply at home soon – perhaps they already are. These developments have been driven by exponential progress much faster than Moore’s Law, reducing the cost of DNA sequencing per genome from ~$10^8 to ~$10^3 in roughly 15 years.

[Image: chart of the falling cost of DNA sequencing per genome]

It is already practically possible to deploy a cheap, autonomous, and anonymous drone with facial-recognition software and a one-gram shaped charge to identify a relevant face and blow it up. Military logic is driving autonomy. For example, 1) the explosion in the volume of drone surveillance video (from 71 hours in 2004 to 300,000 hours in 2011 to millions of hours now) requires automated analysis, and 2) jamming and spoofing of drones strongly incentivise a push for autonomy. It is unlikely that promises to ‘keep humans in the loop’ will be kept. It is likely that state and non-state actors will deploy low-cost drone swarms using machine learning to automate the ‘find-fix-finish’ cycle now controlled by humans. (See HERE for a video just released for one such program and imagine the capability when they carry their own communication and logistics network with them.)

In the medium-term, many billions are being spent on finding the secrets of general intelligence. We know this secret is encoded somewhere in the roughly 125 million ‘bits’ of information that separate the genome that produces the human brain from the genome that produces the chimp brain. This search space is remarkably small – the equivalent of just 25 million English words or 30 copies of the King James Bible. There is no fundamental barrier to decoding this information and it is possible that the ultimate secret could be described relatively simply (cf. this great essay by physicist Michael Nielsen). One of the world’s leading experts has told me they think a large proportion of this problem could be solved in about a decade with a few tens of billions and something like an Apollo programme level of determination.
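A quick check of those equivalences (the bits-per-word rate is implied by the figures above; the King James word count is my assumption):

```python
# 125 million bits of genome difference, translated into words and Bibles.
genome_gap_bits = 125e6
bits_per_word = 5                # implied by the 25-million-word figure (assumption)
kjv_words = 783_000              # approximate KJV word count (assumption)

words = genome_gap_bits / bits_per_word
print(f"{words / 1e6:.0f} million words")         # 25 million words
print(f"{words / kjv_words:.0f} copies of KJV")   # ~32, i.e. roughly 30
```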

Not only is our destructive and disruptive power still getting bigger quickly – it is also getting cheaper and faster every year. The change in speed adds another dimension to the problem. In the period between the Archduke’s murder and the outbreak of World War I a month later it is striking how general failures of individuals and institutions were compounded by the way in which events moved much faster than the ‘mission critical’ institutions could cope with, such that soon everyone was behind the pace, telegrams were read in the wrong order, and so on. The crisis leading to World War I was about 30 days from the assassination to the start of general war – about 700 hours. The timescale for deciding what to do between receiving a warning of nuclear missile launch and deciding to launch yourself is less than half an hour, and the President’s decision time is less than this, maybe just minutes. This is a speedup factor of at least 10^3.

Economic crises already occur far faster than human brains can cope with. The financial system has made a transition from people shouting at each other to a system dominated by high frequency ‘algorithmic trading’ (HFT), i.e. machine intelligence applied to robot trading with vast volumes traded on a global spatial scale and a microsecond (10^-6 second) temporal scale, far beyond the monitoring, understanding, or control of regulators and politicians. There is even competition for computer trading bases in specific locations based on calculations of Special Relativity as the speed of light becomes a factor in minimising trade delays (cf. Relativistic statistical arbitrage, Wissner-Gross). ‘The Flash Crash’ of 6 May 2010 saw the Dow lose hundreds of points in minutes. Mini ‘flash crashes’ now blow up and die out faster than humans can notice. Given our institutions cannot cope with economic decisions made at ‘human speed’, a fortiori they cannot cope with decisions made at ‘robot speed’. There is scope for worse disasters than 2008 which would further damage the moral credibility of decentralised markets and provide huge chances for extremist political entrepreneurs to exploit. (* See endnote.)

What about the individuals and institutions that are supposed to cope with all this?

Our brains have not evolved much in thousands of years and are subject to all sorts of constraints including evolved heuristics that lead to misunderstanding, delusion, and violence, particularly under pressure. There is a terrible mismatch between the sort of people who routinely dominate mission critical political institutions and the sort of people we need: high-ish IQ (we need more people >145 (+3SD) while almost everybody important is between 115 and 130 (+1 or 2SD)), a robust toolkit for not fooling yourself including quantitative problem-solving (almost totally absent at the apex of relevant institutions), determination, management skills, relevant experience, and ethics. While our ancestor chiefs at least had some intuitive feel for important variables like agriculture and cavalry, our contemporary chiefs (and those in the media responsible for scrutiny of decisions) generally do not understand their equivalents, and are often less experienced in managing complex organisations than their predecessors.

The national institutions we have to deal with such crises are pretty similar to those that failed so spectacularly in summer 1914 yet they face crises moving at least ~10^3 times faster and involving ~10^6 times more destructive power able to kill ~10^10 people. The international institutions developed post-1945 (UN, EU etc) contribute little to solving the biggest problems and in many ways make them worse. These institutions fail constantly and do not – cannot – learn much.

If we keep having crises like we have experienced over the past century then this combination of problems pushes the probability of catastrophe towards ‘overwhelmingly likely’.

*

What Is To be Done? There’s plenty of room at the top

‘In a knowledge-rich world, progress does not lie in the direction of reading information faster, writing it faster, and storing more of it. Progress lies in the direction of extracting and exploiting the patterns of the world… And that progress will depend on … our ability to devise better and more powerful thinking programs for man and machine.’ Herbert Simon, Designing Organizations for an Information-rich World, 1969.

‘Fascinating that the same problems recur time after time, in almost every program, and that the management of the program, whether it happened to be government or industry, continues to avoid reality.’ George Mueller, pioneer of ‘systems engineering’ and ‘systems management’ and the man most responsible for the success of the 1969 moon landing.

Somehow the world has to make a series of extremely traumatic and dangerous transitions over the next 20 years. The main transition needed is:

Embed reliably the unrecognised simplicities of high performance teams (HPTs), including personnel selection and training, in ‘mission critical’ institutions while simultaneously developing a focused project that radically improves the prospects for international cooperation and new forms of political organisation beyond competing nation states.

Big progress on this problem would automatically and for free bring big progress on other big problems. It could improve (even save) billions of lives and save a quadrillion dollars (~$10^15). If we avoid disasters then the error-correcting institutions of markets and science will, patchily, spread peace, prosperity, and learning. We will make big improvements with public services and other aspects of ‘normal’ government. We will have a healthier political culture in which representative institutions, markets serving the public (not looters), and international cooperation are stronger.

Can a big jump in performance – ‘better and more powerful thinking programs for man and machine’ – somehow be systematised?

Feynman once gave a talk titled ‘There’s plenty of room at the bottom’ about the huge performance improvements possible if we could learn to do engineering at the atomic scale – what is now called nanotechnology. There is also ‘plenty of room at the top’ of political structures for huge improvements in performance. As I explained recently, the victory of the Leave campaign owed more to the fundamental dysfunction of the British Establishment than it did to any brilliance from Vote Leave. Despite having the support of practically every force with power and money in the world (including the main broadcasters) and controlling the timing and legal regulation of the referendum, they blew it. This was good if you support Leave but just how easily the whole system could be taken down should be frightening for everybody.

Creating high performance teams is obviously hard but in what ways is it really hard? It is not hard in the sense that discovering profound new mathematical knowledge is hard. HPTs do not require profound new knowledge. We have been able to read the basic lessons in classics for over two thousand years. We can see relevant examples all around us of individuals and teams showing huge gains in effectiveness.

The real obstacle is not financial. The financial resources needed are remarkably low and the return on small investments could be incalculably vast. We could significantly improve the decisions of the most powerful 100 people in the UK or the world for less than a million dollars (~£10^6) and a decade-long project on a scale of just ~£10^7 could have dramatic effects.

The real obstacle is not a huge task of public persuasion – quite the opposite. A government that tried in a disciplined way to do this would attract huge public support. (I’ve polled some ideas and am confident about this.) Political parties are locked in a game in which trying to win in conventional ways leads the public to despise them. Ironically, if a party (established or new) forgets this game and makes the public the target of extreme intelligent focus then it would not only make the world better but would trounce its opponents.

The real obstacle is not a need for breakthrough technologies though technology could help. As Colonel Boyd used to shout, ‘People, ideas, machines – in that order!’

The real obstacle is that although we can all learn and study HPTs it is extremely hard to put this learning to practical use and sustain it against all the forces of entropy that constantly operate to degrade high performance once the original people have gone. HPTs are episodic. They seem to come out of nowhere, shock people, then vanish with the rare individuals. People write about them and many talk about learning from them but in fact almost nobody ever learns from them – apart, perhaps, from those very rare people who did not need to learn – and nobody has found a method to embed this learning reliably and systematically in institutions that can maintain it. The Prussian General Staff remained operationally brilliant but in other ways went badly wrong after the death of the elder Moltke. When George Mueller left NASA it reverted to what it had been before he arrived – management chaos. All the best companies quickly go downhill after the departure of people like Bill Gates – even when such very able people have tried very very hard to avoid exactly this problem.

Charlie Munger, half of the most successful investment team in world history, has a great phrase he uses to explain their success that gets to the heart of this problem:

‘There isn’t one novel thought in all of how Berkshire [Hathaway] is run. It’s all about … exploiting unrecognized simplicities… It’s a community of like-minded people, and that makes most decisions into no-brainers. Warren [Buffett] and I aren’t prodigies. We can’t play chess blindfolded or be concert pianists. But the results are prodigious, because we have a temperamental advantage that more than compensates for a lack of IQ points.’

The simplicities that bring high performance in general, not just in investing, are largely unrecognised because they conflict with many evolved instincts and are therefore psychologically very hard to implement. The principles of the Buffett-Munger success are clear – they have even gone to great pains to explain them and what the rest of us should do – and the results are clear, yet still almost nobody really listens to them and above-average-intelligence people instead constantly put their money into active fund management that is proven to destroy wealth every year!

Most people think they are already implementing these lessons and usually strongly reject the idea that they are not. This means that just explaining things is very unlikely to work:

‘I’d say the history that Charlie [Munger] and I have had of persuading decent, intelligent people who we thought were doing unintelligent things to change their course of action has been poor.’ Buffett.

Even more worrying, it is extremely hard to take over organisations that are not run right and make them excellent.

‘We really don’t believe in buying into organisations to change them.’ Buffett.

If people won’t listen to the world’s most successful investor in history on his own subject, and even he finds it too hard to take over failing businesses and turn them around, how likely is it that politicians and officials incentivised to keep things as they are will listen to ideas about how to do things better? How likely is it that a team can take over broken government institutions and make them dramatically better in a way that outlasts the people who do it? Bureaucracies are extraordinarily resistant to learning. Even after the debacles of 9/11 and the Iraq War, costing many lives and trillions of dollars, and even after the 2008 Crash, the security and financial bureaucracies in America and Europe are essentially the same and operate on the same principles.

Buffett’s success is partly due to his discipline in sticking within what he and Munger call their ‘circle of competence’. Within this circle they have proved the wisdom of avoiding trying to persuade people to change their minds and avoiding trying to fix broken institutions.

This option is not available in politics. The Enlightenment and the scientific revolution give us no choice but to try to persuade people and try to fix or replace broken institutions. In general ‘it is better to undertake revolution than undergo it’. How might we go about it? What can people who do not have any significant power inside the system do? What international projects are most likely to spark the sort of big changes in attitude we urgently need?

This is the first of a series. I will keep it separate from the series on the EU referendum though it is connected in the sense that I spent a year on the referendum in the belief that winning it was a necessary though not sufficient condition for Britain to play a part in improving the quality of government dramatically and improving the probability of avoiding the disasters that will happen if politics follows a normal path. I intended to implement some of these ideas in Downing Street if the Boris-Gove team had not blown up. The more I study this issue the more confident I am that dramatic improvements are possible and the more pessimistic I am that they will happen soon enough.

Please leave comments and corrections…

* A new transatlantic cable recently opened for financial trading. Its cost? £300 million. Its advantage? It shaves 2.6 milliseconds off the latency of financial trades. Innovative groups are discussing the application of military laser technology, unmanned drones circling the earth acting as routers, and even the use of neutrino communication (because neutrinos can go straight through the earth just as zillions pass through your body every second without colliding with its atoms) – cf. this recent survey in Nature.
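To see why 2.6 milliseconds is worth £300 million, compare against the physical floor. A rough sketch (the distance and fibre refractive index are my figures, not the post’s):

```python
# Light-speed limits on transatlantic latency (rough, assumed figures).
c = 299_792_458                 # m/s, speed of light in vacuum
d = 5_570_000                   # m, ~great-circle distance New York-London
print(f"{d / c * 1e3:.1f} ms one-way in vacuum")          # ~18.6 ms
print(f"{d / (c / 1.47) * 1e3:.1f} ms one-way in fibre")  # ~27.3 ms
```

The gap between the fibre path and the vacuum straight line is the margin that the lasers, drones, and neutrinos are trying to claw back.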

Times op-ed: What Is To Be Done? An answer to Dean Acheson’s famous quip

On Tuesday 2 December, the Times ran an op-ed by me you can see HERE. It got cut slightly for space. Below is the original version that makes a few other points.

I will use this as a start of a new series on what can be done to improve the system including policy, institutions, and management.

NB1. The article is not about the election or party politics. My suggested answer to Acheson is, I think, powerful partly because it is something that could be agreed upon, in various dimensions, across the political spectrum. I left the DfE in January partly because I wanted to have nothing to do with the election and this piece should not be seen as advocating ‘something Tories should say for the election’. I do not think any of the three leaders are interested in or could usefully pursue this goal – I am suggesting something for the future when they are all gone, and they could quite easily all be gone by summer 2016.

NB2. My view is not – ‘public bad, private good’. As I explained in The Hollow Men II, a much more accurate and interesting distinction is between a) large elements of state bureaucracies, dreadful NGOs like the CBI, and many large companies (that have many of the same HR and incentive problems as bureaucracies), where very similar types rise to power because the incentives encourage political skills rather than problem-solving skills, and b) start-ups, where entrepreneurs and technically trained problem-solvers can create organisations that operate extremely differently, move extremely fast, create huge value, and so on.

(For a great insight into start-up world I recommend two books. 1. Peter Thiel’s new book ‘Zero To One’. 2. An older book telling the story of a mid-90s start-up that was embroiled in the Netscape/Microsoft battle and ended up selling itself to the much better organised Bill Gates – ‘High Stakes, No Prisoners’ by Charles Ferguson. This blog, Creators and Rulers, by physicist Steve Hsu also summarises some crucial issues excellently.)

Some parts of government can work like start-ups, but the rest of the system tries to smother them. For example, DARPA (originally ARPA) was set up as part of the US panic about Sputnik. It operates on very different principles from the rest of the Pentagon’s R&D system. Because it is organised differently, it has repeatedly produced revolutionary breakthroughs (e.g. the internet) despite a relatively tiny budget. Note, though, that DARPA has been around for decades and its operating principles are clear, yet nobody else has managed to create an equivalent (openly, at least). Note too that despite its track record, D.C. vultures constantly circle, trying to make it conform to the normal rules or otherwise clip its wings. (Another interesting case study would be the alternative paths taken by a) the US government developing computers with one genius mathematician, von Neumann, post-1945 (a lot of ‘start-up’ culture) and b) the UK government’s awful decisions in the same field with another genius mathematician, Turing, post-1945.)

When I talk about new and different institutions below, this is one of the things I mean. I will write a separate blog just on DARPA but I think there are two clear action points:

1. We should create a civilian version of DARPA aimed at high-risk/high-impact breakthroughs in fundamental fields such as energy science and quantum information and computing that clearly have world-changing potential. For it to work, it would have to operate outside all existing Whitehall HR rules, EU procurement rules, and so on – otherwise it would be as dysfunctional as the rest of the system (defence procurement is in a much worse state than the DfE – hence, for example, the billions spent on aircraft carriers that classified war-games show cannot be deployed to warzones). We could easily afford this if we could prioritise – UK politicians spend far more than DARPA’s budget on gimmicks every year – and it would provide huge value with cascading effects through universities and businesses.

2. The lessons of why and how DARPA works – such as incentivising goals rather than micromanaging methods – apply generally and are useful when we think about Whitehall reform.

Finally, government institutions also operate to exclude scientists, mathematicians, and people from the start-up world – the Creators, in Hsu’s term – from power. We need to think very hard about how to use their rare and valuable skills as a counterweight to the psychological type that politics will always tend to promote.

Please leave comments, corrections etc below.

DC



What Is to Be Done?

There is growing and justified contempt for Westminster. Number Ten has become a tragi-comic press office, with the prime minister acting as Über Pundit. Cameron, Miliband, and Clegg see only the news’s flickering shadows on their cave wall – they cannot see the real world behind them. Officials watch the floundering MPs, knowing they will stay in charge regardless of an election that won’t significantly change Britain’s trajectory.

Our institutions failed pre-1914, pre-1939, and with Europe. They are now failing to deal with a combination of debts, bad public services, security threats, and profound transitions in geopolitics, economics, and technology. They fail in crises because they are programmed to fail. The public knows we need to reorient national policy and reform these institutions. How?

First, we need a new goal. In 1962, Dean Acheson quipped that Britain had failed to find a post-imperial role. The romantic pursuit of ‘the special relationship’ and the deluded pursuit of a leading EU role have both failed. Our new role should be to make Britain the best country for education and science. Pericles described Athens as ‘the school of Greece’; we could be the school of the world, because this role depends on thought and organisation, not size.

This would give us a central role in tackling humanity’s biggest problems and in shaping the new institutions that will emerge, displacing the EU and UN, as the world makes painful transitions in coming decades. It would provide a focus for financial priorities and for Whitehall’s urgent organisational surgery. It is a goal that could mobilise very large efforts across political divisions, as the pursuit of knowledge is an extremely powerful motive.

Second, we must train aspirant leaders very differently so that they have basic quantitative skills and experience of managing complex projects. We should stop selecting leaders from a subset of Oxbridge egomaniacs with a humanities degree and a spell as a spin doctor.

In 2012, Fields Medallist Tim Gowers sketched a ‘maths for presidents’ course to teach 16-18 year-olds crucial maths skills, including probability and statistics, that can help solve real problems. It starts next year. [NB. The DfE funded MEI to turn Gowers’s blog sketch into a real course.] A version should be developed for MPs and officials. (A similar ‘Physics for Presidents’ course has been a smash hit at Berkeley.) Similarly, pioneering work by Philip Tetlock on ‘The Good Judgement Project’ has shown that training can reduce common cognitive errors and sharply improve the quality of political predictions, hitherto characterised by great self-confidence and constant failure.
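To make the Tetlock point concrete: forecasting tournaments like the Good Judgement Project score probabilistic predictions with rules such as the Brier score (the mean squared error between the stated probability and the 0/1 outcome), which is exactly what makes forecasting skill measurable and trainable. A minimal sketch – the forecasts below are invented for illustration:

```python
# The Brier score: mean squared error between stated probabilities and
# what actually happened (0 = perfect; higher is worse).

def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A hedging pundit who says '60%' about everything...
pundit   = [0.6, 0.6, 0.6, 0.6]
# ...versus a trained forecaster who commits to calibrated probabilities.
trained  = [0.9, 0.2, 0.8, 0.1]
outcomes = [1, 0, 1, 0]  # what actually happened

print(f"pundit:  {brier_score(pundit, outcomes):.3f}")   # 0.260
print(f"trained: {brier_score(trained, outcomes):.3f}")  # 0.025
```

Because every forecast gets a number, forecasters get feedback and improve – the opposite of punditry, where nobody keeps score.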

New interdisciplinary degrees such as ‘World history and maths for presidents’ would improve on PPE, but theory isn’t enough. If we want leaders to make good decisions amid huge complexity, and to learn how to build great teams, then we should send them to learn from people who have proved they can do it. Instead of long summer holidays, embed aspirant leaders with Larry Page or James Dyson so they can experience successful leadership.

Third, because better training can only do so much, we must open political institutions to people and ideas from outside SW1.

A few people repeatedly prove able to solve hard problems in theoretical and practical fields, creating important new ideas and huge value. Whitehall and Westminster operate to exclude them from influence, tending instead to promote hacks and apparatchiks and to incentivise psychopathic narcissism and bureaucratic infighting skills – not the pursuit of the public interest.

How to open up the system? First, a Prime Minister should be able to appoint Secretaries of State from outside Parliament. [How? A quick and dirty solution would be: a) shove them in the Lords, b) give Lords ministers ‘rights of audience’ in the Commons, c) strengthen the Select Committee system.]

Second, the 150-year experiment with a permanent civil service should end and Whitehall must open to outsiders. The role of Permanent Secretary should go, and ministers should appoint departmental chief executives so that ministers are really responsible for policy and implementation. Expertise should be brought in as needed, with no restrictions from the destructive civil service ‘human resources’ system that programmes government to fail. Mass collaborations are revolutionising science [cf. Michael Nielsen’s brilliant book]; they could revolutionise policy. Real openness would bring urgent focus to Whitehall’s disastrous lack of skill in basic functions such as budgeting, contracts, procurement, legal advice, and project management.

Third, Whitehall’s functions should be amputated. The Department for Education improved as Gove shrank it; other departments would benefit from extreme focus, simplification, and firing thousands of overpaid people. If the bureaucracy ceases to be ‘permanent’, it can adapt quickly. Instead of obsessing over process, distorting targets, and micromanaging methods, it could shift to incentivising goals and decentralising methods.

Fourth, existing legal relationships with the EU and ECHR must change: they are incompatible with democratic and effective government.

Fifth, Number Ten must be reoriented from ‘government by punditry’ to a focus on the operational planning and project management needed to convert priorities into reality over months and years.

Technological changes such as genetic engineering and machine intelligence are bringing revolution. It would be better to undertake it than undergo it.