The unrecognised simplicities of effective action #3: lessons on ‘capturing the heavens’ from the ARPA/PARC project that created the internet & PC

Below is a short summary of some basic principles of the ARPA/PARC project that created the internet and the personal computer. I wrote it originally as part of an anniversary blog on the referendum but it is also really part of this series on effective action.

One of the most interesting aspects of this project, like Mueller's reforms of NASA, is the contrast between 1) its extreme effectiveness, changing the world in a profound way, and 2) the general reaction to its methods: not only a failure to learn but widespread hostility inside established bureaucracies (public and private) to the successful approach. NASA dropped Mueller's approach when he left and has never been the same, and Xerox closed PARC and fired Bob Taylor. Changing the world in a profound and beneficial way is not enough to put a dent in bureaucracies which operate on their own dynamics.

Warren Buffett explained decades ago how institutions actively fight against learning and fight to stay in a closed and vicious feedback loop:

‘My most surprising discovery: the overwhelming importance in business of an unseen force that we might call “the institutional imperative”. In business school, I was given no hint of the imperative’s existence and I did not intuitively understand it when I entered the business world. I thought then that decent, intelligent, and experienced managers would automatically make rational business decisions. But I learned the hard way that isn’t so. Instead, rationality frequently wilts when the institutional imperative comes into play.

‘For example, 1) As if governed by Newton’s First Law, any institution will resist any change in its current direction. 2) … Corporate projects will materialise to soak up available funds. 3) Any business craving of the leader, however foolish, will quickly be supported by … his troops. 4) The behaviour of peer companies … will be mindlessly imitated.’

Many of the principles behind ARPA/PARC could be applied to politics and government but they will not be learned ‘naturally’ from inside the system. Dramatic improvements will only happen if a group of people force ‘system’ changes on how government works so it is open to learning.

I have modified the below very slightly and added some references.

*

ARPA/PARC and ‘capturing the heavens’: The best way to predict the future is to invent it

The panic over Sputnik brought many good things such as a huge increase in science funding. America also created the Advanced Research Projects Agency (ARPA, which later added ‘Defense’ and became DARPA). Its job was to fund high risk / high payoff technology development. In the 1960s and 1970s, a combination of unusual people and unusually wise funding from ARPA created a community that in turn invented the internet, or ‘the intergalactic network’ as Licklider originally called it, and the personal computer. One of the elements of this community was PARC, a research centre working for Xerox. As Bill Gates said, he and Steve Jobs essentially broke into PARC, stole their ideas, and created Microsoft and Apple.

The ARPA/PARC project is an example of how if something is set up properly then a tiny number of people can do extraordinary things.

  • PARC had about 25 people and about $12 million per year in today’s money.
  • The breakthroughs from the ARPA/PARC project created over 35 TRILLION DOLLARS of value for society and counting.
  • The internet architecture they built, based on decentralisation and distributed control, has scaled up over ten orders of magnitude (10^10) without ever breaking and without ever being taken down for maintenance since 1969.

The whole story is fascinating in many ways. I won’t go into the technological aspects. I just want to say something about the process.

What does a process that produces ideas that change the world look like?

One of the central figures was Alan Kay. One of the most interesting things about the project is that not only has almost nobody tried to repeat this sort of research, but the business world has even gone out of its way to spread misinformation about it because it was seen as so threatening to business-as-usual.

I will sketch a few lessons from one of Kay’s pieces but I urge you to read the whole thing.

‘This is what I call “The power of the context” or “Point of view is worth 80 IQ points”. Science and engineering themselves are famous examples, but there are even more striking processes within these large disciplines. One of the greatest works of art from that fruitful period of ARPA/PARC research in the 60s and 70s was the almost invisible context and community that catalysed so many researchers to be incredibly better dreamers and thinkers. That it was a great work of art is confirmed by the world-changing results that appeared so swiftly, and almost easily. That it was almost invisible, in spite of its tremendous success, is revealed by the disheartening fact today that, as far as I’m aware, no governments and no companies do edge-of-the-art research using these principles.’

‘[W]hen I think of ARPA/PARC, I think first of good will, even before brilliant people… Good will and great interest in graduate students as “world-class researchers who didn’t have PhDs yet” was the general rule across the ARPA community.

‘[I]t is no exaggeration to say that ARPA/PARC had “visions rather than goals” and “funded people, not projects”. The vision was “interactive computing as a complementary intellectual partner for people pervasively networked world-wide”. By not trying to derive specific goals from this at the funding side, ARPA/PARC was able to fund rather different and sometimes opposing points of view.

‘The pursuit of Art always sets off plans and goals, but plans and goals don’t always give rise to Art. If “visions not goals” opens the heavens, it is important to find artistic people to conceive the projects.

‘Thus the “people not projects” principle was the other cornerstone of ARPA/PARC’s success. Because of the normal distribution of talents and drive in the world, a depressingly large percentage of organizational processes have been designed to deal with people of moderate ability, motivation, and trust. We can easily see this in most walks of life today, but also astoundingly in corporate, university, and government research. ARPA/PARC had two main thresholds: self-motivation and ability. They cultivated people who “had to do, paid or not” and “whose doings were likely to be highly interesting and important”. Thus conventional oversight was not only not needed, but was not really possible. “Peer review” wasn’t easily done even with actual peers. The situation was “out of control”, yet extremely productive and not at all anarchic.

‘”Out of control” because artists have to do what they have to do. “Extremely productive” because a great vision acts like a magnetic field from the future that aligns all the little iron particle artists to point to “North” without having to see it. They then make their own paths to the future. Xerox often was shocked at the PARC process and declared it out of control, but they didn’t understand that the context was so powerful and compelling and the good will so abundant, that the artists worked happily at their version of the vision. The results were an enormous collection of breakthroughs.

‘Our game is more like art and sports than accounting, in that high percentages of failure are quite OK as long as enough larger processes succeed… [I]n most processes today — and sadly in most important areas of technology research — the administrators seem to prefer to be completely in control of mediocre processes to being “out of control” with superproductive processes. They are trying to “avoid failure” rather than trying to “capture the heavens”.

‘All of these principles came together a little over 30 years ago to eventually give rise to 1500 Altos, Ethernetworked to: each other, Laserprinters, file servers and the ARPAnet, distributed to many kinds of end-users to be heavily used in real situations. This anticipated the commercial availability of this genre by 10-15 years. The best way to predict the future is to invent it.

‘[W]e should realize that many of the most important ARPA/PARC ideas haven’t yet been adopted by the mainstream. For example, it is amazing to me that most of Doug Engelbart’s big ideas about “augmenting the collective intelligence of groups working together” have still not taken hold in commercial systems. What looked like a real revolution twice for end-users, first with spreadsheets and then with Hypercard, didn’t evolve into what will be commonplace 25 years from now, even though it could have. Most things done by most people today are still “automating paper, records and film” rather than “simulating the future”. More discouraging is that most computing is still aimed at adults in business, and that aimed at nonbusiness and children is mainly for entertainment and apes the worst of television. We see almost no use in education of what is great and unique about computer modeling and computer thinking. These are not technological problems but a lack of perspective. Must we hope that the open-source software movements will put things right?

‘The ARPA/PARC history shows that a combination of vision, a modest amount of funding, with a felicitous context and process can almost magically give rise to new technologies that not only amplify civilization, but also produce tremendous wealth for the society. Isn’t it time to do this again by Reason, even with no Cold War to use as an excuse? How about helping children of the world grow up to think much better than most adults do today? This would truly create “The Power of the Context”.’

Note how this story runs contrary to how free market think tanks and pundits describe technological development. The impetus for most of this development came from government funding, not markets.

Also note that every attempt since the 1950s to copy ARPA and JASON (the semi-classified group that partly gave ARPA its direction) in the UK has been blocked by Whitehall. The latest attempt was in 2014 when the Cabinet Office swatted aside the idea. Hilariously, its argument was ‘DARPA has had a lot of failures’, thus demonstrating extreme ignorance about the basic idea — the whole point is you must have failures, and if you don’t have lots of failures then you are failing!

People later claimed that while PARC may have changed the world it never made any money for Xerox. This is ‘absolute bullshit’ (Kay). Xerox made billions from the laser printer alone and overall made 250 times what it invested in PARC before it went bust. In 1983 it fired Bob Taylor, the manager of PARC and the guy who made it all happen.

‘They hated [Taylor] for the very reason that most companies hate people who are doing something different, because it makes middle and upper management extremely uncomfortable. The last thing they want to do is make trillions, they want to make a few millions in a comfortable way’ (Kay).

Someone finally listened to Kay recently. ‘YC Research’, the research arm of the world’s most successful (by far) technology incubator, is starting to fund people in this way. I am not aware of any similar UK projects though I know that a small network of people are thinking again about how something like this could be done here. If you can help them, take a risk and help them! Someone talk to science minister Jo Johnson but be prepared for the Treasury’s usual ignorant bullshit — ‘what are we buying for our money, and how can we put in place appropriate oversight and compliance?’ they will say!

*

As we ponder the future of the UK-EU relationship shaped amid the farce of modern Whitehall, we should think hard about the ARPA/PARC example: how a small group of people can make a huge breakthrough with little money but the right structure, the right ways of thinking, and the right motives.

Those of us outside the political system thinking ‘we know we can do so much better than this but HOW can we break through the bullshit?’ need to change our perspective and gain 80 IQ points.

This real picture is a metaphor for the political culture: ad hoc solutions that are either bad or don’t scale.

[Image: screenshot, 2017-06-14]

ARPA said ‘Let’s get rid of all the wires’. How do we ‘get rid of all the wires’ and build something different that breaks open the closed and failing political cultures? Winning the referendum was just one step that helps clear away dead wood but we now need to build new things.

The ARPA vision that aligned the artists ‘like little iron filings’ was:

‘Computers are destined to become interactive intellectual amplifiers for everyone in the world universally networked worldwide’ (Licklider).

We need a motivating vision aimed not at tomorrow but at changing the basic wiring of the whole system, a vision that can align ‘the little iron filings’, and then start building for the long-term.

I will go into what I think this vision could be and how to do it another day. I think it is possible to create something new that could scale very fast and enable us to do politics and government extremely differently, as different to today as the internet and PC were to the post-war mainframes. This would enable us to build huge long-term value for humanity in a relatively short time (less than 20 years). To create it we need a process as well suited to the goal as the ARPA/PARC project was and incorporating many of its principles.

We must try to escape the current system with its periodic meltdowns and international crises. These crises move 500-1,000 times faster than those of summer 1914. Our destructive potential is at least a million-fold greater than it was in 1914. Yet we have essentially the same hierarchical command-and-control decision-making systems in place now that could not even cope with 1914 technology and pace. We have dodged nuclear wars by fluke because individuals made snap judgements in minutes. Nobody who reads the history of these episodes can think that this is viable long-term, and we will soon have another wave of innovation to worry about with autonomous robots and genetic engineering. Technology gives us no option but to try to overcome evolved instincts like destroying out-group competitors.

Watch Alan Kay explain how to invent the future HERE and HERE.

This link has these seminal papers:

  • Man-Computer Symbiosis, Licklider (1960)
  • The Computer as a Communication Device, Licklider & Taylor (1968)

Part I of this series is HERE.

Part II on the emergence of ‘systems management’, how George Mueller used it to put man on the moon, and a checklist of how successful management of complex projects is systematically different to how Whitehall (and other state bureaucracies) work HERE.


PS. Kay also points out that the real computer revolution won’t happen until people fulfil the original vision of enabling children to use this powerful way of thinking:

‘The real printing revolution was a qualitative change in thought and argument that lagged the hardware inventions by almost two centuries. The special quality of computers is their ability to rapidly simulate arbitrary descriptions, and the real computer revolution won’t happen until children can learn to read, write, argue and think in this powerful new way. We should all try to make this happen much sooner than 200 or even 20 more years!’

Almost nobody in education policy is aware of the educational context for the ARPA/PARC project, which also speaks volumes about the abysmal field of ‘education research/policy’. People rightly say ‘education tech has largely failed’ but very few are aware that many of the original ideas from Licklider, Engelbart et al have never been tried and the Apple and MS versions are not the original vision.

 

Complexity and Prediction Part V: The crisis of mathematical paradoxes, Gödel, Turing and the basis of computing

Before the referendum I started a series of blogs and notes exploring the themes of complexity and prediction. This was part of a project with two main aims: first, to sketch a new approach to education and training in general but particularly for those who go on to make important decisions in political institutions and, second, to suggest a new approach to political priorities in which progress with education and science becomes a central focus for the British state. The two are entangled: progress with each will hopefully encourage progress with the other.

I was working on this paper when I suddenly got sidetracked by the referendum and have just looked at it again for the first time in about two years.

The paper concerns a fascinating episode in the history of ideas that saw the most esoteric and impractical field, mathematical logic, spawn a revolutionary technology: the modern computer. NB a lesson for science funders: it is a great mistake to cut funding for theory and assume that you’ll get more bang for your buck from ‘applications’.

Apart from its inherent fascination, knowing something of the history is helpful for anybody interested in the state-of-the-art in predicting complex systems which involves the intersection between different fields including: maths, computer science, economics, cognitive science, and artificial intelligence. The books on it are either technical, and therefore inaccessible to ~100% of the population, or non-chronological so it is impossible for someone like me to get a clear picture of how the story unfolded.

Further, there are few if any very deep ideas in maths or science that are so misunderstood and abused as Gödel’s results. As Alan Sokal, author of the brilliant hoax exposing post-modernist academics, said, ‘Gödel’s theorem is an inexhaustible source of intellectual abuses.’ I have tried to make some of these clear using the best book available, by Torkel Franzén, which explains why almost everything you read about it is wrong. If even Stephen Hawking can cock it up, the rest of us should be particularly careful.

I sketched these notes as I tried to pull together the story from many different books. I hope they are useful, particularly for some 15-25 year-olds who like chronological accounts of ideas. I tried to put the notes together in the way I wish I had been able to read them at that age. I tried hard to eliminate errors but they are inevitable given how far I am from being competent to write about such things. I wish someone who is competent would do it properly. It would take time I don’t now have to go through and finish it the way I originally intended, so I will just post it as it was two years ago when I got calls saying ‘about this referendum…’

The only change I think I have made since May 2015 is to shove in some notes from a great essay later that year by Michael Nielsen, co-author of the standard textbook on quantum computers, which would be useful to read as an introduction or instead, HERE.

As always on this blog there is not a single original thought and any value comes from the time I have spent condensing the work of others to save you the time. Please leave corrections in comments.

The PDF of the paper is HERE (amended since first publication to correct an error, see Comments).

 

‘Gödel’s achievement in modern logic is singular and monumental – indeed it is more than a monument, it is a landmark which will remain visible far in space and time.’ John von Neumann.

‘Einstein had often told me that in the late years of his life he has continually sought Gödel’s company in order to have discussions with him. Once he said to me that his own work no longer meant much, that he came to the Institute merely in order to have the privilege of walking home with Gödel.’ Oskar Morgenstern (co-author with von Neumann of the first major work on Game Theory).

‘The world is rational’, Kurt Gödel.

Unrecognised simplicities of effective action #2(b): the Apollo programme, the Tory train wreck, and advice to spads starting work today

A few months ago I put a paper on my blog: The unrecognised simplicities of effective action #2: ‘Systems engineering’ and ‘systems management’ — ideas from the Apollo programme for a ‘systems politics’.

It examined the history of the classified programme to build ICBMs and the way in which George Mueller turned the failing NASA bureaucracy into an organisation that could put man on the moon. The heart of the paper is about the principles behind effective management of complex projects. These principles are relevant to Government, politics, and campaigns.

The paper is long as I thought it worthwhile to tell some of the detailed story. At the suggestion of various spads, ministers, hacks and so on I have cut and pasted the conclusion below particularly for those starting new jobs today. This is in the form of a crude checklist that compares a) the principles of Mueller’s systems management and b) how Whitehall actually works.

You will see that Whitehall operates on exactly opposite principles to those organisations where high performance creates real value. You will also soon see that you are now in a culture in which almost nobody is aware of this and anybody who suggests it sinks their career. In your new department, failure is so normal it is not defined as ‘failure’. Officials lose millions and get a gong. There is little spirit of public service or culture of responsibility. The most political people are promoted and the most competent people, like Victoria Woodcock, leave. The very worst officials are often put in charge of training the next generation. For most powerful officials, the most important thing is preserving the system, closed and impregnable. Unlike for ministers, the TV blaring with DISASTER is of no concern – provided it is the Minister in the firing line not them – and the responsible officials will happily amble to the tube at 4pm while political careers hang in the balance and you draft statements taking ‘full responsibility’ for things you knew nothing about and would have been prohibited from fixing if you had.

For all those spads in particular who are moving into new jobs, it is worth reflecting on the deep principles that actually determine why things work and do not work. Nobody will explain these to you or talk to you about them. Sadly, few MPs these days understand the crucial role of management – they tend to think of it, like science, as a rather lowly skill beneath their Olympian status – so you will also probably have to cope with the fact that your minister is more interested in keeping one step ahead of Simon Walters (they won’t). The thing that officials will try hardest to do is convey to you that you have no role in personnel decisions and/or management.

If you accept that, you are accepting at the start that you will achieve very little. The reason why Gove’s team got much more done than ANY insider thought was possible – including Cameron and the Perm Sec – was because we bent or broke the rules and focused very hard on a) replacing rubbish officials and bringing in people from outside and b) project management.

You cannot reform the way the civil service works. Only a PM can do that and there is no chance of May doing it – she blew her chance and her reward is to be pushed around by Heywood and Sue Gray until her colleagues pull the plug and start the leadership campaign. You should assume that won’t be long so focus, manage a few priorities with daily and weekly timetables, and use embarrassing errors to negotiate secret deals with the Perm Sec to move rubbish officials out of your priority areas – trust me, Perm Secs understand this game and will do deals with alacrity to make their lives easier. Officials are less politically biased than you probably have been told – they are much more concerned with avoiding hard work and protecting the system than in resisting specific policies, and you can exploit this. Make alliances with the good officials who still have hope and have not been broken by the system, there are surprisingly many who will pop up if they think you actually care about the public rather than party interests.

You will also notice that fundamental issues of organisational culture described below explain the shambles of CCHQ over the past 8 weeks: the lack of information sharing, the lack of orientation, the culture of blaming juniors for the failures of overpaid senior people, bottlenecks preventing fast decisions, endless small errors compounding into a broken organisation because nobody knows who is responsible for what and so on. Every failing organisation has the same stories, people find it very hard to learn from the most successful organisations and people.

To the extent Vote Leave was successful, it was partly because I consciously tried to copy Mueller in various ways, though given my own severe limitations this was patchy. If you ever get the chance to exercise leadership, try to copy people like Mueller who tried to make the world better and build an organisation that people were proud to serve.

Finally, consider the basic condition that allows Westminster and Whitehall to be so rubbish and get away with it: they are not just monopolies, they set the rules of the game, and both the civil service and the parties make it almost impossible for outsiders to influence anything. But a) the combination of the 2008 crisis, Brexit, and extreme unhappiness about politics as usual provides a potentially powerful fuel for an insurgency, and b) technology provides opportunities for startups to catch public imagination and scale extremely fast. I’ve always been sceptical of the idea of a new UK party of any sort but I increasingly think there is a chance that a handful of entrepreneurs could start a sort of anti-party to exploit the broken system and create something which confounds the right/centre/left broken mental model that dominates SW1 and which combines Mueller’s principles with Silicon Valley technology.

If the Tory Party does not make some profound changes fast, then it faces being blamed for the disintegration of Brexit talks and the election of Corbyn after which it is possible that, rather than attempting a coup to take them over, entrepreneurs may decide it is more rational to build something that ploughs them into the earth next to Corbyn.

I said since last summer that if the Tory Party tried to carry on with Brexit and government using the same broken Downing Street operation, which spends its time on crap spin and has almost no capacity for serious management, and the same broken political operation, dominated by people who have failed to persuade the country convincingly for many years, then they would blow up. They failed to change Downing Street and they ran yet another fundamentally misconceived campaign that blew massive structural advantages. Kaboom.

[[Within minutes of publishing this blog I got the following email from someone I haven’t met but who I know was inside CCHQ with the para above highlighted and these words: ‘This is exactly my depressing experience – shit show run by people who don’t care about anything other than their jobs.’]]

MPs of all parties need to realise that the referendum makes it impossible to carry on with your usual bullshit – it forces changes upon you even though you want to carry on with the old games. The first set of MPs that realise this and change their operating principles will quickly overwhelm the others: there is a huge first-mover advantage especially in a field characterised by institutional incompetence that is susceptible to external shocks (terror, financial collapse) and which is opening up to technological disruption. And you will only get on top of Brexit if you realise that leaving the EU is a systems problem requiring a systems response and this means a radically different organisation of the UK negotiating team. The challenge is not far short of the political equivalent of the Apollo programme and it needs similarly imaginative management.

For those who do want to do something better, the below will be useful. I encourage you to read the whole history HERE but for those rushing through a sandwich on Day 1 this summary will help you think of the big picture. If you want a detailed tutorial on how the civil service works then read The Hollow Men HERE.

[Added later… It is also very instructive that despite the triumph of Mueller’s methods, NASA itself abandoned them after he left and has never recovered. Even spectacular success on a world-changing project is not enough to beat bureaucratic inertia. Also, the US Government passed so many laws that Mueller himself said in later life it would be impossible to repeat Apollo without making it a classified ‘black’ project to evade the regulations. JSOC, US classified special forces, has to run a lot of its standard procurement via ‘black’ procurement processes just to get anything done. The abysmal procurement rules imposed under the Single Market are just one of the good reasons for us to get out of the SM as well as the EU. I had to deal with them a lot in the DfE and had to find ways to cheat them a lot to get things done faster and cheaper. They add billions to costs every year and Whitehall refused for years even to assess this huge area to avoid undermining support for the EU.]

*

Excerpt from The unrecognised simplicities of effective action #2 (p.28ff) 

Core lessons [of Mueller’s systems management] for politics?

Finally, I will summarise some of the core lessons of systems management that could be applied to re-engineering political institutions such as Downing Street. Mueller’s approach meant an extreme focus on some core principles:

  • Organisation-wide orientation. Everybody in a large organisation must understand as much about the goals and plans as possible. Whitehall now works on opposite principles: I doubt a single department has proper orientation across most of the organisation (few will have it even across the top 10 people), never mind a whole government. This is partly because most ministers fail at the first hurdle — developing coherent goals — so effective orientation is inherently impossible.
  • Integration. There must be an overall approach in which the most important elements fit together, including in policy, management, and communications. Failures in complex projects, from renovating your house to designing a new welfare system, often occur at interfaces between parts. Whitehall now works on opposite principles: for example, Cameron and Osborne approached important policy on immigration/welfare in the opposite way by 1) promising to reduce immigration to less than 100,000 while simultaneously 2) having no legal tools to do this (and even worse promising to change this then failing in the EU renegotiation) and 3) having welfare policies that incentivised more immigration then 4) announcing a new living wage thus increasing incentives further for immigration. They emphasised each element as part of short-term political games and got themselves into a long-term inescapable mess.
  • Extreme transparency and communication, horizontally as well as hierarchically. More, richer, deeper communication so that ‘all of us understand what was going on throughout the program… [C]ommunications on a level that is free and easy and not constrained by the fact that you’re the boss… [This was] the secret of the success of the program, because so many programs fail because everybody doesn’t know what it is they are supposed to do’ (Mueller). Break information and management silos — a denser network of information and commands is necessary and much of it must be decentralised and distributed between different teams, but with leadership having fast and clear information flow at the centre so problems are seen and tackled fast (a virtuous circle). There is very little that needs to be kept secret in government and different processes can easily be developed for that very small number of things. As McChrystal says of special forces operations generally the advantages of communication hugely outweigh the dangers of leaks. Whitehall now works on opposite principles: it keeps information secret that does not need to be secret in order to hide its own internal processes from scrutiny, thus adding to its own management failures and distrust (a vicious circle).
  • ‘Configuration management’. There must be a process whereby huge efforts go into the initial design of a complex system then there is a process whereby changes are made in a disciplined way such that a) interdependencies are tested where possible by relevant people before a change is agreed and b) then everybody relevant knows about the change. This ties together design, engineering, management, scheduling, cost, contracts, and allows the coordination of interdisciplinary teams. Test, learn, communicate results, change where needed, communicate… Whitehall now works on opposite principles: it does not put enough effort into the initial design then makes haphazard changes then fails to communicate changes effectively.
  • Physical and information structures should reinforce open communication. From Mueller’s NASA to JSOC, organisations that have coped well with complexity have built novel control centres to reinforce extreme communication. Spend money and time on new technologies and processes to help spread orientation and learning through the organisation. Whitehall now works on opposite principles: e.g. its antiquated committee structure and ‘red box’ system are ludicrously inefficient regarding management but are kept because they give officials huge control over ministers.
  • Long-term budgets. Long-term budgets save money. Whitehall now works on opposite principles: normal government budget processes do not value speed and savings from doing things fast. They are focused on what Parliament thinks this year. This makes it very hard to plan wisely and wastes money in the long-term (see below).
  • You need a complex mix of centralisation and decentralisation. While overall vision, goals, and strategy usually come from the top, it is vital that extreme decentralisation dominates operationally so that decisions are fast and unbureaucratic. Information must be shared centrally and horizontally across the organisation — it is not either/or. Big complex projects must empower people throughout the network and cannot rely on issuing orders through a hierarchy. Whitehall now works on opposite principles: it is a centralising ratchet. E.g. Budgets and spending reviews are the exact opposite of Mueller’s approach. 1) They are short-term with almost no long-term elements. 2) They do not balance off priorities in any serious way. 3) They involve totally fake numbers: every department lies to the Treasury, Treasury officials dig into the numbers, and there are rounds of these games. Officials never stop lying. To maintain the charade the Chancellor never says to the SoS ‘stop your officials lying to us’ — candour would break the system. 4) The Treasury does not have the expertise to evaluate most of what they are looking at. The idea it is a department staffed by brilliant whiz kids is a joke. I saw DfE officials with very modest abilities routinely cheat the Treasury.
  • Extreme focus on errors. Schriever had ‘Black Saturdays’ and Mueller had similar meetings focused not on ‘reporting progress’ but making clear the problems. Simple as it sounds this is very unusual. Whitehall now works on opposite principles: routinely nobody is held responsible for errors and most management works on the basis of ‘give me good news not bad news’. Neither the culture nor incentives focus effort on eliminating errors. Most don’t care and you see those responsible for disaster ambling to the tube at 4pm or going on holiday amid meltdown.
  • Spending on redundancy to improve resilience. Whitehall now works on opposite principles: it tends to treat redundancy as ‘waste’ and its short-term budget processes reinforce decisions that mean out-of-control long-term budgets. By the time the long-term happens, the responsible people have all moved on to better paid jobs and nobody is accountable.
  • Important knowledge is discovered but then the innovation is standardised and codified so it can be easily learned and used by others. Whitehall now works on opposite principles: for example, officials in the Department for Education systematically destroyed the department’s own library, and the DfE operated with almost no institutional memory. By the time I left in 2014 (after David Cameron banned me from entering any department), officials would ask to meet me outside to find out why decisions had been taken in 2011, because three years later almost everybody had moved on to other things. The Foreign Office similarly destroyed its own library.
  • Systems management means lots of process and documentation but at its best it is fluid and purposeful — it is not process for ass-covering. The crucial ‘Gillette Procedures’ swept away red tape and Schriever battled the system to maintain freedom from normal government processes. When asked how he would do a similar programme to Apollo now (1990s) Mueller responded that the only way to do it would be as a classified ‘black’ project to escape the law on issues like procurement. Whitehall now works on opposite principles: its obsession is bullshit process for buck-passing and it fights with all its might against simplification and focus.
  • Saving time saves money. Schriever and Mueller focused on speed and saving time. Whitehall now works on opposite principles: its default mode is to go slower and those who advocate speed are denounced as reckless. Repeatedly in the DfE I was told it was ‘impossible’ to do things in the period I demanded — often less than half what senior officials wanted — yet we often achieved this, and there was practically no example of failure caused by my time demands being inherently unreasonable. The system naturally pushes for the longest periods it can get away with, to give itself what it thinks of as a chance to beat ‘expectations’, but then it often fails even on absurdly long timetables. In the DfE we often had a better record of hitting timetables that were ‘impossibly’ short than those that were traditionally long. Also, in many areas there is no downside to pushing fast — the worst that happens is minor and irrelevant embarrassment, while the cumulative gains from trying to go fast are huge.
  • The ‘systems’ approach is inherently interdisciplinary ‘because its function is to integrate the specialized separate pieces of a complex of apparatus and people — the system — into a harmonious ensemble that optimally achieves the desired end’ (Ramo). Whitehall now works on opposite principles: it is hopeless at assembling interdisciplinary teams and elevates legal advice over everything in relation to practically any problem, causing huge delays and cost overruns.
  • The ‘matrix management’ system allowed coordination across different departments and different projects.  Whitehall now works on opposite principles. It is stuck with antiquated departments, an antiquated Cabinet Office system, and antiquated project management. Anything ‘cross-government’ is an immediate clue to the savvy that it is doomed and rarely worth wasting time on. A ‘matrix’ approach could and should be applied to break existing hierarchies and speed everything up.
  • People and ideas were more important than technology. Computers and other technologies can help but the main ideas came in the 1950s before personal computers. JSOC applied all sorts of technologies but Colonel Boyd’s dictum holds: people, ideas, technology — in that order. Whitehall now works on opposite principles: for example, the former Cabinet Secretary, Gus O’Donnell, recently blamed a ‘lack of investment’ in IT and a shortage of staff for a huge range of Whitehall blunders. This is really deluded. The central problem is known to all experts and is shown in almost every inquiry: IT projects fail repeatedly in the same ways because of failures of management, not ‘lack of investment’, and adding people to flawed projects is not a solution.

Ministers have little grip on departments and little power to change their direction. They can’t hire or fire and they can’t set incentives. They are almost never in a job long enough to acquire much useful knowledge and they almost never have the sort of management skills that provide alternative value to specific knowledge. They have little chance to change anything and officials ensure this little chance becomes almost no chance.

This story shows how to do things much better than normal. It shows that the principles underlying Mueller’s success are naturally in extreme competition with the principles of management that dominate all normal bureaucracies, public or private. People have been able to read about these principles for decades yet today in Whitehall almost everything runs on exactly the opposite principles: incentives operate to suppress learning. The institutional and policy changes inherent in leaving the EU are a systems problem requiring a systems response. Implementing Mueller’s principles would mean changes to most of the antiquated and failing foundations of Whitehall and bring big improvements and cost savings. Such changes are likely to be resisted by most MPs as well as Whitehall given few of them understand or have experience in high performance teams and would regard Mueller’s approach as a threat to their career prospects.

Because Whitehall is a system failure in which different failures are entangled, its inhabitants tend to potter around in an uncomprehending fog of confusion without understanding why things fail every day and therefore they do not support changes that could improve things even though these changes would be personally advantageous particularly for the first mover.

What is the minimum needed to break bureaucratic resistance and spark a virtuous circle?

How can people outside the system affect mission critical political institutions protected from market competition and resistant to major reforms?

How can we replace many traditional centralised bureaucracies with institutions that mimic successful biological systems such as the immune system that a) use distributed information processing to identify useful structure in the environment, b) find ‘good enough’ solutions in a vast search space of possibilities, and c) move at least ten times faster than existing systems?

[If you find this interesting and/or useful, then the PDF of the whole story is here. It involves some of the cleverest people of the 20th Century, such as John von Neumann.]

Unrecognised simplicities of effective action #2: ‘Systems’ thinking — ideas from the Apollo programme for a ‘systems politics’

This is the second in a series: click this link 201702-effective-action-2-systems-engineering-to-systems-politics. The first is HERE.

This paper concerns a very interesting story combining politics, management, institutions, science and technology. When high technology projects passed a threshold of complexity post-1945 amid the extreme pressure of the early Cold War, new management ideas emerged. These ideas were known as ‘systems engineering’ and ‘systems management’. These ideas were particularly connected to the classified program to build the first Intercontinental Ballistic Missiles (ICBMs) in the 1950s and successful ideas were transplanted into a failing NASA by George Mueller and others from 1963 leading to the successful moon landing in 1969.

These ideas were then applied in other mission critical teams and could be used to improve government performance. Urgently needed projects to lower the probability of catastrophes for humanity will benefit from considering why Mueller’s approach was 1) so successful and 2) so un-influential in politics. Could we develop a ‘systems politics’ that applies the unrecognised simplicities of effective action?

For those interested, it also looks briefly at an interesting element of the story – the role of John von Neumann, the brilliant mathematician who was deeply involved in the Manhattan Project, the project to build ICBMs, the first digital computers, and subjects like artificial intelligence, artificial life, possibilities for self-replicating machines made from unreliable components, and the basic problem that technological progress ‘gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we have known them, cannot continue.’

An obvious project with huge inherent advantages for humanity is the development of an international manned lunar base as part of developing space for commerce and science. It is the sort of thing that might change political dynamics on earth and could generate enormous support across international boundaries. After 23 June 2016, the UK has to reorient national policy on many dimensions. Developing basic science is one of the most important dimensions (for example, as I have long argued we urgently need a civilian version of DARPA similarly operating outside normal government bureaucratic systems including procurement and HR). Supporting such an international project would be a great focus for UK efforts and far more productive than our largely wasted decades of focus on the dysfunctional bureaucracy in Brussels, which is dominated by institutions that fail the most important test – the capacity for error-correction, the importance of which has been demonstrated over long periods and through many problems by the Anglo-American political system and its common law.

Please leave comments or email dmc2.cummings at gmail.com

 

Unrecognised simplicities of effective action #1: expertise and a quadrillion dollar business

‘The combination of physics and politics could render the surface of the earth uninhabitable.’ John von Neumann.

Introduction

This series of blogs considers:

  • the difference between fields with genuine expertise, such as fighting and physics, and fields dominated by bogus expertise, such as politics and economic forecasting;
  • the big big problem we face – the world is ‘undersized and underorganised’ because of a collision between four forces: 1) our technological civilisation is inherently fragile and vulnerable to shocks, 2) the knowledge it generates is inherently dangerous, 3) our evolved instincts predispose us to aggression and misunderstanding, and 4) there is a profound mismatch between the scale and speed of destruction our knowledge can cause and the quality of individual and institutional decision-making in ‘mission critical’ institutions – our institutions are similar to those that failed so spectacularly in summer 1914 yet they face crises moving at least ~10^3 times faster and involving ~10^6 times more destructive power able to kill ~10^10 people;
  • what classic texts and case studies suggest about the unrecognised simplicities of effective action to improve the selection, education, training, and management of vital decision-makers to improve dramatically, reliably, and quantifiably the quality of individual and institutional decisions (particularly 1) the ability to make accurate predictions and 2) the quality of feedback);
  • how we can change incentives to aim a much bigger fraction of the most able people at the most important problems;
  • what tools and technologies can help decision-makers cope with complexity.

[I’ve tweaked a couple of things in response to this blog by physicist Steve Hsu.]

*

Summary of the big big problem

The investor Peter Thiel (founder of PayPal and Palantir, early investor in Facebook) asks people in job interviews: what billion (10^9) dollar business is nobody building? The most successful investor in world history, Warren Buffett, illustrated what a quadrillion (10^15) dollar business might look like in his 50th anniversary letter to Berkshire Hathaway investors.

‘There is, however, one clear, present and enduring danger to Berkshire against which Charlie and I are powerless. That threat to Berkshire is also the major threat our citizenry faces: a “successful” … cyber, biological, nuclear or chemical attack on the United States… The probability of such mass destruction in any given year is likely very small… Nevertheless, what’s a small probability in a short period approaches certainty in the longer run. (If there is only one chance in thirty of an event occurring in a given year, the likelihood of it occurring at least once in a century is 96.6%.) The added bad news is that there will forever be people and organizations and perhaps even nations that would like to inflict maximum damage on our country. Their means of doing so have increased exponentially during my lifetime. “Innovation” has its dark side.

‘There is no way for American corporations or their investors to shed this risk. If an event occurs in the U.S. that leads to mass devastation, the value of all equity investments will almost certainly be decimated.

‘No one knows what “the day after” will look like. I think, however, that Einstein’s 1949 appraisal remains apt: “I know not with what weapons World War III will be fought, but World War IV will be fought with sticks and stones.”’
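Buffett’s 96.6% figure is straightforward compounding — the complement of a century of misses. A minimal sketch of the arithmetic (the 1-in-30 annual probability is his illustrative assumption, not a real estimate):

```python
# Chance of at least one '1 in 30 per year' event occurring over a century.
p_year = 1 / 30
p_century = 1 - (1 - p_year) ** 100
print(f"{p_century:.1%}")  # → 96.6%
```

The point generalises: any small-but-persistent annual risk approaches certainty over a long enough horizon.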

Politics is profoundly nonlinear. (I have written a series of blogs about complexity and prediction HERE which are useful background for those interested.) Changing the course of European history via the referendum only involved about 10 crucial people controlling ~£10^7 while its effects over ten years could be on the scale of ~10^8–10^9 people and ~£10^12: like many episodes in history the resources put into it are extremely nonlinear in relation to the potential branching histories it creates. Errors dealing with Germany in 1914 and 1939 were costly on the scale of ~100,000,000 (10^8) lives. If we carry on with normal human history – that is, international relations defined as out-groups competing violently – and combine this with modern technology then it is extremely likely that we will have a disaster on the scale of billions (10^9) or even all humans (~10^10). The ultimate disaster would kill about 100 times more people than our failure with Germany. Our destructive power is already much more than 100 times greater than it was then: nuclear weapons increased destructiveness by roughly a factor of a million.

Even if we dodge this particular bullet there are many others lurking. New genetic engineering techniques such as CRISPR allow radical possibilities for re-engineering organisms including humans in ways thought of as science fiction only a decade ago. We will soon be able to remake human nature itself. CRISPR-enabled ‘gene drives’ enable us to make changes to the germ-line of organisms permanent such that changes spread through the entire wild population, including making species extinct on demand. Unlike nuclear weapons such technologies are not complex, expensive, and able to be kept secret for a long time. The world’s leading experts predict that people will be making them cheaply at home soon – perhaps they already are. These developments have been driven by exponential progress much faster than Moore’s Law, reducing the cost of DNA sequencing per genome from ~$10^8 to ~$10^3 in roughly 15 years.


It is already practically possible to deploy a cheap, autonomous, and anonymous drone with facial-recognition software and a one gram shaped-charge to identify a relevant face and blow it up. Military logic is driving autonomy. For example, 1) the explosion in the volume of drone surveillance video (from 71 hours in 2004 to 300,000 hours in 2011 to millions of hours now) requires automated analysis, and 2) jamming and spoofing of drones strongly incentivise a push for autonomy. It is unlikely that promises to ‘keep humans in the loop’ will be kept. It is likely that state and non-state actors will deploy low-cost drone swarms using machine learning to automate the ‘find-fix-finish’ cycle now controlled by humans. (See HERE for a video just released for one such program and imagine the capability when they carry their own communication and logistics network with them.)

In the medium-term, many billions are being spent on finding the secrets of general intelligence. We know this secret is encoded somewhere in the roughly 125 million ‘bits’ of information that is the rough difference between the genome that produces the human brain and the genome that produces the chimp brain. This search space is remarkably small – the equivalent of just 25 million English words or 30 copies of the King James Bible. There is no fundamental barrier to decoding this information and it is possible that the ultimate secret could be described relatively simply (cf. this great essay by physicist Michael Nielsen). One of the world’s leading experts has told me they think a large proportion of this problem could be solved in about a decade with a few tens of billions and something like an Apollo programme level of determination.
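The ‘25 million words / 30 Bibles’ equivalence can be sanity-checked. A sketch, assuming Shannon’s classic estimate of roughly 1 bit of entropy per English character and ~5 characters per word — both my rough assumptions, not figures from the text:

```python
# Convert the ~125 million bits of genome difference into English-text terms.
bits = 125_000_000
bits_per_word = 5        # assumption: ~1 bit/char (Shannon) * ~5 chars/word
kjv_words = 780_000      # approximate word count of the King James Bible

words = bits // bits_per_word
print(words)             # 25000000 — '25 million English words'
print(words / kjv_words) # ~32 — 'about 30 copies'
```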

Not only is our destructive and disruptive power still getting bigger quickly – it is also getting cheaper and faster every year. The change in speed adds another dimension to the problem. In the period between the Archduke’s murder and the outbreak of World War I a month later it is striking how general failures of individuals and institutions were compounded by the way in which events moved much faster than the ‘mission critical’ institutions could cope with such that soon everyone was behind the pace, telegrams were read in the wrong order and so on. The crisis leading to World War I was about 30 days from the assassination to the start of general war – about 700 hours. The timescale for deciding what to do between receiving a warning of nuclear missile launch and deciding to launch yourself is less than half an hour and the President’s decision time is less than this, maybe just minutes. This is a speedup factor of at least 10^3.
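The ‘at least 10^3’ speedup is simple division over the figures above — a sketch:

```python
import math

crisis_1914_hours = 30 * 24    # ~a month from assassination to general war
nuclear_window_hours = 0.5     # warning-to-decision window, under half an hour

speedup = crisis_1914_hours / nuclear_window_hours
print(speedup)                 # 1440.0
print(math.log10(speedup))     # ~3.16, i.e. 'at least 10^3'
```

Counting the President’s minutes rather than the full half-hour window pushes the factor another order of magnitude higher.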

Economic crises already occur far faster than human brains can cope with. The financial system has made a transition from people shouting at each other to a system dominated by high frequency ‘algorithmic trading’ (HFT), i.e. machine intelligence applied to robot trading with vast volumes traded on a global spatial scale and a microsecond (10^-6) temporal scale far beyond the monitoring, understanding, or control of regulators and politicians. There is even competition for computer trading bases in specific locations based on calculations of Special Relativity as the speed of light becomes a factor in minimising trade delays (cf. Relativistic statistical arbitrage, Wissner-Gross). ‘The Flash Crash’ of 6 May 2010 saw the Dow lose hundreds of points in minutes. Mini ‘flash crashes’ now blow up and die out faster than humans can notice. Given our institutions cannot cope with economic decisions made at ‘human speed’, a fortiori they cannot cope with decisions made at ‘robot speed’. There is scope for worse disasters than 2008 which would further damage the moral credibility of decentralised markets and provide huge chances for extremist political entrepreneurs to exploit. (* See endnote.)

What about the individuals and institutions that are supposed to cope with all this?

Our brains have not evolved much in thousands of years and are subject to all sorts of constraints including evolved heuristics that lead to misunderstanding, delusion, and violence particularly under pressure. There is a terrible mismatch between the sort of people that routinely dominate mission critical political institutions and the sort of people we need: high-ish IQ (we need more people >145 (+3SD) while almost everybody important is between 115-130 (+1 or 2SD)), a robust toolkit for not fooling yourself including quantitative problem-solving (almost totally absent at the apex of relevant institutions), determination, management skills, relevant experience, and ethics. While our ancestor chiefs at least had some intuitive feel for important variables like agriculture and cavalry our contemporary chiefs (and those in the media responsible for scrutiny of decisions) generally do not understand their equivalents, and are often less experienced in managing complex organisations than their predecessors.

The national institutions we have for dealing with such crises are pretty similar to those that failed so spectacularly in summer 1914 yet they face crises moving at least ~10^3 times faster and involving ~10^6 times more destructive power able to kill ~10^10 people. The international institutions developed post-1945 (UN, EU etc) contribute little to solving the biggest problems and in many ways make them worse. These institutions fail constantly and do not – cannot – learn much.

If we keep having crises like we have experienced over the past century then this combination of problems pushes the probability of catastrophe towards ‘overwhelmingly likely’.

*

What Is To Be Done? There’s plenty of room at the top

‘In a knowledge-rich world, progress does not lie in the direction of reading information faster, writing it faster, and storing more of it. Progress lies in the direction of extracting and exploiting the patterns of the world… And that progress will depend on … our ability to devise better and more powerful thinking programs for man and machine.’ Herbert Simon, Designing Organizations for an Information-rich World, 1969.

‘Fascinating that the same problems recur time after time, in almost every program, and that the management of the program, whether it happened to be government or industry, continues to avoid reality.’ George Mueller, pioneer of ‘systems engineering’ and ‘systems management’ and the man most responsible for the success of the 1969 moon landing.

Somehow the world has to make a series of extremely traumatic and dangerous transitions over the next 20 years. The main transition needed is:

Embed reliably the unrecognised simplicities of high performance teams (HPTs), including personnel selection and training, in ‘mission critical’ institutions while simultaneously developing a focused project that radically improves the prospects for international cooperation and new forms of political organisation beyond competing nation states.

Big progress on this problem would automatically and for free bring big progress on other big problems. It could improve (even save) billions of lives and save a quadrillion dollars (~$10^15). If we avoid disasters then the error-correcting institutions of markets and science will, patchily, spread peace, prosperity, and learning. We will make big improvements with public services and other aspects of ‘normal’ government. We will have a healthier political culture in which representative institutions, markets serving the public (not looters), and international cooperation are stronger.

Can a big jump in performance – ‘better and more powerful thinking programs for man and machine’ – somehow be systematised?

Feynman once gave a talk titled ‘There’s plenty of room at the bottom’ about the huge performance improvements possible if we could learn to do engineering at the atomic scale – what is now called nanotechnology. There is also ‘plenty of room at the top’ of political structures for huge improvements in performance. As I explained recently, the victory of the Leave campaign owed more to the fundamental dysfunction of the British Establishment than it did to any brilliance from Vote Leave. Despite having the support of practically every force with power and money in the world (including the main broadcasters) and controlling the timing and legal regulation of the referendum, they blew it. This was good if you support Leave but just how easily the whole system could be taken down should be frightening for everybody.

Creating high performance teams is obviously hard but in what ways is it really hard? It is not hard in the same sense that some things are hard like discovering profound new mathematical knowledge. HPTs do not require profound new knowledge. We have been able to read the basic lessons in classics for over two thousand years. We can see relevant examples all around us of individuals and teams showing huge gains in effectiveness.

The real obstacle is not financial. The financial resources needed are remarkably low and the return on small investments could be incalculably vast. We could significantly improve the decisions of the most powerful 100 people in the UK or the world for less than a million dollars (~£10^6) and a decade-long project on a scale of just ~£10^7 could have dramatic effects.

The real obstacle is not a huge task of public persuasion – quite the opposite. A government that tried in a disciplined way to do this would attract huge public support. (I’ve polled some ideas and am confident about this.) Political parties are locked in a game that in trying to win in conventional ways leads to the public despising them. Ironically if a party (established or new) forgets this game and makes the public the target of extreme intelligent focus then it would not only make the world better but would trounce their opponents.

The real obstacle is not a need for breakthrough technologies though technology could help. As Colonel Boyd used to shout, ‘People, ideas, machines – in that order!’

The real obstacle is that although we can all learn and study HPTs it is extremely hard to put this learning to practical use and sustain it against all the forces of entropy that constantly operate to degrade high performance once the original people have gone. HPTs are episodic. They seem to come out of nowhere, shock people, then vanish with the rare individuals. People write about them and many talk about learning from them but in fact almost nobody ever learns from them – apart, perhaps, from those very rare people who did not need to learn – and nobody has found a method to embed this learning reliably and systematically in institutions that can maintain it. The Prussian General Staff remained operationally brilliant but in other ways went badly wrong after the death of the elder Moltke. When George Mueller left NASA it reverted to what it had been before he arrived – management chaos. All the best companies quickly go downhill after the departure of people like Bill Gates – even when such very able people have tried very very hard to avoid exactly this problem.

Charlie Munger, half of the most successful investment team in world history, has a great phrase he uses to explain their success that gets to the heart of this problem:

‘There isn’t one novel thought in all of how Berkshire [Hathaway] is run. It’s all about … exploiting unrecognized simplicities… It’s a community of like-minded people, and that makes most decisions into no-brainers. Warren [Buffett] and I aren’t prodigies. We can’t play chess blindfolded or be concert pianists. But the results are prodigious, because we have a temperamental advantage that more than compensates for a lack of IQ points.’

The simplicities that bring high performance in general, not just in investing, are largely unrecognised because they conflict with many evolved instincts and are therefore psychologically very hard to implement. The principles of the Buffett-Munger success are clear – they have even gone to great pains to explain them and what the rest of us should do – and the results are clear yet still almost nobody really listens to them and above average intelligence people instead constantly put their money into active fund management that is proved to destroy wealth every year!

Most people think they are already implementing these lessons and usually strongly reject the idea that they are not. This means that just explaining things is very unlikely to work:

‘I’d say the history that Charlie [Munger] and I have had of persuading decent, intelligent people who we thought were doing unintelligent things to change their course of action has been poor.’ Buffett.

Even more worrying, it is extremely hard to take over organisations that are not run right and make them excellent.

‘We really don’t believe in buying into organisations to change them.’ Buffett.

If people won’t listen to the world’s most successful investor in history on his own subject, and even he finds it too hard to take over failing businesses and turn them around, how likely is it that politicians and officials incentivised to keep things as they are will listen to ideas about how to do things better? How likely is it that a team can take over broken government institutions and make them dramatically better in a way that outlasts the people who do it? Bureaucracies are extraordinarily resistant to learning. Even after the debacles of 9/11 and the Iraq War, costing many lives and trillions of dollars, and even after the 2008 Crash, the security and financial bureaucracies in America and Europe are essentially the same and operate on the same principles.

Buffett’s success is partly due to his discipline in sticking within what he and Munger call their ‘circle of competence’. Within this circle they have proved the wisdom of avoiding trying to persuade people to change their minds and avoiding trying to fix broken institutions.

This option is not available in politics. The Enlightenment and the scientific revolution give us no choice but to try to persuade people and try to fix or replace broken institutions. In general ‘it is better to undertake revolution than undergo it’. How might we go about it? What can people who do not have any significant power inside the system do? What international projects are most likely to spark the sort of big changes in attitude we urgently need?

This is the first of a series. I will keep it separate from the series on the EU referendum, though it is connected: I spent a year on the referendum in the belief that winning it was a necessary though not sufficient condition for Britain to play a part in dramatically improving the quality of government and the probability of avoiding the disasters that will happen if politics follows its normal path. I intended to implement some of these ideas in Downing Street if the Boris-Gove team had not blown up. The more I study this issue, the more confident I am that dramatic improvements are possible and the more pessimistic I am that they will happen soon enough.

Please leave comments and corrections…

* A new transatlantic cable recently opened for financial trading. Its cost? £300 million. Its advantage? It shaves 2.6 milliseconds off the latency of financial trades. Innovative groups are discussing the application of military laser technology, unmanned drones circling the earth acting as routers, and even the use of neutrino communication (because neutrinos can go straight through the earth just as zillions pass through your body every second without colliding with its atoms) – cf. this recent survey in Nature.

A review of Tetlock’s ‘Superforecasting’ (2015)

Spectator Review, October 2015

Forecasts have been fundamental to mankind’s journey from a small tribe on the African savannah to a species that can sling objects across the solar system with extreme precision. In physics, we developed models that are extremely accurate across vastly different scales from the sub-atomic to the visible universe. In politics we bumbled along making the same sort of errors repeatedly.

Until the 20th century, medicine was more like politics than physics. Its forecasts were often bogus and its record grim. In the 1920s, statisticians invaded medicine and devised randomised controlled trials. Doctors, hating the challenge to their prestige, resisted but lost. Evidence-based medicine became routine and saved millions of lives. A similar battle has begun in politics. The result could be more dramatic.

In 1984, Philip Tetlock, a political scientist, did something new – he considered how to assess the accuracy of political forecasts in a scientific way. In politics, it is usually impossible to make progress because forecasts are so vague as to be useless. People don’t do what is normal in physics – use precise measurements – so nobody can make a scientific judgement in the future about whether, say, George Osborne or Ed Balls is ‘right’.

Tetlock established a precise measurement system to track political forecasts made by experts to gauge their accuracy. After twenty years he published the results. The average expert was no more accurate than the proverbial dart-throwing chimp on many questions. Few could beat simple rules like ‘always predict no change’.

Tetlock also found that a small fraction did significantly better than average. Why? The worst forecasters were those with great self-confidence who stuck to their big ideas (‘hedgehogs’). They were often worse than the dart-throwing chimp. The most successful were those who were cautious, humble, numerate, actively open-minded, looked at many points of view, and updated their predictions (‘foxes’). TV programmes recruit hedgehogs, so the more likely an expert was to appear on TV, the less accurate he was. Tetlock then dug further: how much could training improve performance?

In the aftermath of disastrous intelligence forecasts about Iraq’s WMD, an obscure American intelligence agency explored Tetlock’s ideas. They created an online tournament in which thousands of volunteers would make many predictions. They framed specific questions with specific timescales, required forecasts using numerical probability scales, and created a robust statistical scoring system. Tetlock created a team – the Good Judgement Project (GJP) – to compete in the tournament.
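The ‘robust statistical scoring system’ at the heart of such tournaments is the Brier score: the squared gap between a probability forecast and what actually happened. A minimal sketch of the one-sided form for a binary question (Tetlock’s book reports the two-sided variant, which for binary questions is simply double this):

```python
def brier_score(forecast_prob, outcome):
    """Squared error between a probability forecast for a binary event
    and the outcome (1 = it happened, 0 = it did not).
    0 is perfect; 0.25 is what permanent 50% hedging scores."""
    return (forecast_prob - outcome) ** 2

print(brier_score(0.9, 1))  # confident and right: close to 0
print(brier_score(0.9, 0))  # confident and wrong: close to 1
print(brier_score(0.5, 1))  # always hedging at 50%: exactly 0.25
```

Averaged over many questions, this is what separates the foxes from the dart-throwing chimps: the chimp’s random confident guesses average out far worse than a calibrated forecaster’s.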

The results? GJP beat the official control group by 60% in year 1 and by 78% in year 2. GJP beat all competitors so easily the tournament was shut down early.

How did they do it? GJP recruited a team of hundreds, aggregated the forecasts, gave extra weight to the most successful, and applied a simple statistical rule. A few hundred ordinary people and simple maths outperformed a bureaucracy costing tens of billions.
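A hedged sketch of that kind of rule. The weights and the ‘extremizing’ constant below are illustrative, not GJP’s published values, but the shape – a weighted mean, then a push away from 50% to offset individual forecasters’ hedging – follows the approach described above:

```python
def aggregate(forecasts, weights=None, a=2.5):
    """Combine several probability forecasts for one binary question:
    take a (weighted) mean, then 'extremize' it - push it away from 0.5
    to offset individuals' hedging. 'a' is an illustrative tuning
    constant, not a value published by the Good Judgement Project."""
    if weights is None:
        weights = [1.0] * len(forecasts)
    mean = sum(w * p for w, p in zip(weights, forecasts)) / sum(weights)
    odds = (mean / (1.0 - mean)) ** a   # extremize in odds space
    return odds / (1.0 + odds)

crowd = [0.6, 0.65, 0.7, 0.55, 0.72]
print(aggregate(crowd))  # noticeably more extreme than the 0.644 mean
```

The design point is that aggregation and extremizing are trivial arithmetic: the performance came from many independent forecasts and honest scoring, not from exotic mathematics.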

Tetlock also found ‘superforecasters’. These individuals outperformed others by 60% and also, despite a lack of subject-specific knowledge, comfortably beat the average of professional intelligence analysts using classified data (the size of the difference is secret but was significant).

Superforecasting explores the nature of these unusual individuals. Crucially, Tetlock has shown that training programmes can yield big improvements: even a mere sixty-minute tutorial on some basics of statistics improves performance by 10%. The cost:benefit ratio of training forecasting is huge.

It would be natural to assume that this work must be the focus of intense thought and funding in Whitehall. Wrong. Whitehall has ignored this entire research programme. It experiences repeated, predictable failure while seeing no alternative to its antiquated methods – like 1950s doctors resisting the randomised controlled trials that threatened their prestige.

This may change. Early adopters could use Tetlock’s techniques to improve performance. Success sparks mimicry. Everybody reading this could do one simple thing: ask their MP whether they have done Tetlock’s training programme. A website could track candidates’ answers before the next election. News programmes could require quantifiable predictions from their pundits and record their accuracy.

We now expect that every medicine is tested before it is used. We ought to expect that everybody who aspires to high office is trained to understand why they are so likely to make mistakes forecasting complex events. The cost is tiny. The potential benefits run to trillions of pounds and millions of lives. Politics is harder than physics but Tetlock has shown that it doesn’t have to be like astrology.

Superforecasting: the art and science of prediction, by Philip Tetlock (Random House, 352 pages)

PS. When I wrote this (August/September 2015) I was assembling the team to fight the referendum. One of the things I did was hire people with very high quantitative skills, as I describe in this blog HERE.

Complexity, ‘fog and moonlight’, prediction, and politics III – von Neumann and economics as a science

The two previous blogs in this series were:

Part I HERE.

Part II HERE.

All page references unless otherwise stated are to my essay, HERE.

Since the financial crisis, there has been a great deal of media and Westminster discussion about why so few people predicted it and what the problems are with economics and financial theory.

Absent from most of this discussion is the history of the subject and its intellectual origins. Economics is clearly a vital area of prediction for people in politics, so I will explore some of this intellectual history to provide context for contemporary discussions about ‘what is wrong with economics and what should be done about it’.

*

It has often been argued that the ‘complexity’ of human behaviour renders precise mathematical treatment of economics impossible, or that the undoubted errors of modern economics in applying the tools of mathematical physics are evidence of the irredeemable hopelessness of the goal.

For example, Kant wrote in Critique of Judgement:

‘For it is quite certain that in terms of merely mechanical principles of nature we cannot even adequately become familiar with, much less explain, organized beings and how they are internally possible. So certain is this that we may boldly state that it is absurd for human beings even to attempt it, or to hope that perhaps some day another Newton might arise who would explain to us, in terms of natural laws unordered by any intention, how even a mere blade of grass is produced. Rather, we must absolutely deny that human beings have such insight.’

In the middle of the 20th century, one of its great minds turned to this question. John von Neumann was one of the leading mathematicians of the century. He was also a major contributor to the mathematisation of quantum mechanics, created the field of ‘quantum logic’ (1936), worked as a consultant to the Manhattan Project and other wartime technological projects, and was one of the two most important creators of modern computer science and artificial intelligence (with Turing), which he developed partly for immediate problems he was working on (e.g. the hydrogen bomb and ICBMs) and partly to probe the general problem of understanding complex nonlinear systems. In an Endnote of my essay I discuss some of these things.

Von Neumann was regarded as an extraordinary phenomenon even by the cleverest people in the world. The Nobel-winning physicist and mathematician Wigner said of von Neumann:

‘I have known a great many intelligent people in my life. I knew Planck, von Laue and Heisenberg. Paul Dirac was my brother in law; Leo Szilard and Edward Teller have been among my closest friends; and Albert Einstein was a good friend, too. But none of them had a mind as quick and acute as Jancsi von Neumann. I have often remarked this in the presence of those men and no one ever disputed me… Perhaps the consciousness of animals is more shadowy than ours and perhaps their perceptions are always dreamlike. On the opposite side, whenever I talked with the sharpest intellect whom I have known – with von Neumann – I always had the impression that only he was fully awake, that I was halfway in a dream.’

Von Neumann also had a big impact on economics. During breaks from pressing wartime business, he wrote ‘Theory of Games and Economic Behaviour’ (TGEB) with Morgenstern. This practically created the field of ‘game theory’, to which one now sees so many references. TGEB was one of the most influential books ever written on economics. (The movie A Beautiful Mind gave a false impression of Nash’s contribution.) In the Introduction, his explanation of some foundational issues concerning economics, mathematics, and prediction is clearer for non-specialists than anything else I have seen on the subject and cuts through a vast amount of contemporary discussion which fogs the issues.

This documentary on von Neumann is also interesting.

*

There are some snippets from pre-20th century figures explaining concepts recognisable through the prism of Game Theory. For example, Ampère wrote ‘Considérations sur la théorie mathématique du jeu’ in 1802, crediting Buffon’s 1777 essay on ‘moral arithmetic’ (Buffon figured out many elements that Darwin would later harmonise in his theory of evolution). Cournot’s 1838 analysis of duopoly discussed what would later be recognised as a specific example of a ‘Nash equilibrium’. The French mathematician Émile Borel also contributed early ideas.

However, Game Theory really was born with von Neumann. In December 1926, he presented the paper ‘Zur Theorie der Gesellschaftsspiele’ (On the Theory of Parlour Games, published in 1928, translated version here) while working on the Hilbert Programme [cf. Endnote on Computing] and quantum mechanics. The connection between the Hilbert Programme and the intellectual origins of Game Theory can perhaps first be traced in a 1912 lecture by one of the world’s leading mathematicians and founders of modern set theory, Zermelo, titled ‘On the Application of Set Theory to Chess’ which stated of its purpose:

‘… it is not dealing with the practical method for games, but rather is simply giving an answer to the following question: can the value of a particular feasible position in a game for one of the players be mathematically and objectively decided, or can it at least be defined without resorting to more subjective psychological concepts?’

He presented a theorem that chess is strictly determined: that is, either (i) white can force a win, or (ii) black can force a win, or (iii) both sides can force at least a draw. Which of these is the actual solution to chess remains unknown. (Cf. ‘Zermelo and the Early History of Game Theory’, by Schwalbe & Walker (1997), which argues that modern scholarship is full of errors about this paper. According to Leonard (2006), Zermelo’s paper was part of a general interest in the game of chess among intellectuals in the first third of the 20th century. Lasker (world chess champion 1897–1921) knew Zermelo and both were taught by Hilbert.)
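Zermelo’s determinacy argument is, in modern terms, backward induction over the finite game tree: each position’s value is the best of its successors’ values for the player to move. A toy sketch – the tree and payoffs are invented for illustration, and real chess is utterly beyond this brute force:

```python
def value(node, maximising):
    """Backward induction on a finite two-player zero-sum game tree:
    a node is either a terminal payoff (a number) or a list of children.
    Returns the game's determined value under best play by both sides."""
    if isinstance(node, (int, float)):
        return node
    child_values = [value(child, not maximising) for child in node]
    return max(child_values) if maximising else min(child_values)

# A toy tree: +1 = first player forces a win, 0 = draw, -1 = loss.
tree = [[+1, -1],          # branch where the opponent can punish
        [0, [0, -1]]]      # branch where best play leads to a draw
print(value(tree, True))   # 0 - with best play, both sides force a draw
```

The theorem guarantees such a value exists for chess; computing it is another matter entirely, which is exactly the gap von Neumann describes below.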

Von Neumann later wrote:

‘[I]f the theory of Chess were really fully known there would be nothing left to play.  The theory would show which of the three possibilities … actually holds, and accordingly the play would be decided before it starts…  But our proof, which guarantees the validity of one (and only one) of these three alternatives, gives no practically usable method to determine the true one. This relative, human difficulty necessitates the use of those incomplete, heuristic methods of playing, which constitute ‘good’ Chess; and without it there would be no element of ‘struggle’ and ‘surprise’ in that game.’ (p.125)

Elsewhere, he said:

‘Chess is not a game. Chess is a well-defined computation. You may not be able to work out the answers, but in theory there must be a solution, a right procedure in any position. Now, real games are not like that at all. Real life is not like that. Real life consists of bluffing, of little tactics of deception, of asking yourself what is the other man going to think I mean to do. And that is what games are about in my theory.’

Von Neumann’s 1928 paper proved that there is a rational solution to every two-person zero-sum game. That is, in a rigorously defined game with precise payoffs, there is a mathematically rational strategy for both sides – an outcome upon which neither party can hope to improve. This introduced the concept of the minimax: choose the strategy that minimises the possible maximum loss.

Zero-sum games are those where the payoffs ‘sum’ to zero. For example, chess or Go are zero-sum games because the gain (+1) and the loss (-1) sum to zero; one person’s win is another’s loss. The famous Prisoners’ Dilemma is a non-zero-sum game because the payoffs do not sum to zero: it is possible for both players to make gains. In some games the payoffs to the players are symmetrical (e.g. Prisoners’ Dilemma); in others, the payoffs are asymmetrical (e.g. the Dictator or Ultimatum games). Sometimes the strategies can be completely stated without the need for probabilities (‘pure’ strategies); sometimes, probabilities have to be assigned for particular actions (‘mixed’ strategies).

While the optimal minimax strategy might be a ‘pure’ strategy, von Neumann showed it would often have to be a ‘mixed’ strategy, and this means a spontaneous return of probability even where the game itself involves none.

‘Although … chance was eliminated from the games of strategy under consideration (by introducing expected values and eliminating ‘draws’), it has now made a spontaneous reappearance. Even if the rules of the game do not contain any elements of ‘hazard’ … in specifying the rules of behaviour for the players it becomes imperative to reconsider the element of ‘hazard’. The dependence on chance (the ‘statistical’ element) is such an intrinsic part of the game itself (if not of the world) that there is no need to introduce it artificially by way of the rules of the game itself: even if the formal rules contain no trace of it, it still will assert itself.’
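To make the minimax with mixed strategies concrete: in matching pennies neither pure strategy is safe, but a 50:50 mixture guarantees an expected payoff of zero whatever the opponent does. A small grid-search sketch (the game and the method are illustrative, not from TGEB, which solves this analytically):

```python
def guaranteed(p, A):
    """Row player's expected payoff when mixing row 0 with probability p,
    assuming the column player makes the reply worst for the row player."""
    return min(p * A[0][j] + (1 - p) * A[1][j] for j in range(2))

def minimax_mixture(A, steps=1000):
    """Grid-search the row mixture maximising the guaranteed payoff."""
    p = max((i / steps for i in range(steps + 1)),
            key=lambda q: guaranteed(q, A))
    return p, guaranteed(p, A)

# Matching pennies: zero-sum, and no pure strategy is safe.
A = [[1, -1],
     [-1, 1]]
p, v = minimax_mixture(A)
print(p, v)  # 0.5 0.0 - randomise 50:50, guaranteeing expected payoff 0
```

The ‘hazard’ in von Neumann’s sense appears in the answer, not the question: the rules of matching pennies contain no chance, yet the optimal policy is to randomise.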

In 1932, he gave a lecture titled ‘On Certain Equations of Economics and A Generalization of Brouwer’s Fixed-Point Theorem’. It was published in German in 1938 but not in English until 1945 when it was published as ‘A Model of General Economic Equilibrium’. This paper developed what is sometimes called von Neumann’s Expanding Economic Model and has been described as the most influential article in mathematical economics. It introduced the use of ‘fixed-point theorems’. (Brouwer’s ‘fixed point theorem’ in topology proved that, in crude terms, if you lay a map of the US on the ground anywhere in the US, one point on the map will lie precisely over the point it represents on the ground beneath.)

‘The mathematical proof is possible only by means of a generalisation of Brouwer’s Fix-Point Theorem, i.e. by the use of very fundamental topological facts… The connection with topology may be very surprising at first, but the author thinks that it is natural in problems of this kind. The immediate reason for this is the occurrence of a certain ‘minimum-maximum’ problem… It is closely related to another problem occurring in the theory of games.’

Von Neumann’s application of this topological proof to economics was very influential in post-war mathematical economics and in particular was used by Arrow and Debreu in their seminal 1954 paper on general equilibrium, perhaps the central paper in modern traditional economics.
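The ‘map on the ground’ version of Brouwer can be made concrete. The particular map below is a contraction, so naive iteration finds the fixed point; Brouwer’s theorem is far more general – it needs only continuity on a compact convex set – but a contraction makes the guaranteed point easy to exhibit (the scale and offset are invented for illustration):

```python
# A toy 'map laid on the ground': the map scales the territory by s and
# shifts it by (dx, dy); f sends each ground point to the ground point
# directly beneath its image on the map. s, dx, dy are invented values.
s, dx, dy = 0.001, 10.0, 20.0
f = lambda x, y: (s * x + dx, s * y + dy)

# This f is a contraction, so iterating from anywhere converges to the
# unique point lying exactly over the spot it represents.
x, y = 0.0, 0.0
for _ in range(50):
    x, y = f(x, y)
print((x, y))  # the fixed point, approximately (10.01, 20.02)
```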

*

In the late 1930s, von Neumann – based at the IAS in Princeton, to which Gödel and Einstein had also fled to escape the Nazis – met the economist Oskar Morgenstern, who was deeply dissatisfied with the state of economics. In 1940, while working on war business including the Manhattan Project and computers, von Neumann began the collaboration with Morgenstern that became The Theory of Games and Economic Behavior (TGEB). By December 1942 he had finished his work on it, though it was not published until 1944.

In the Introduction of TGEB, von Neumann explained the real problems in applying mathematics to economics and why Kant was wrong.

‘It is not that there exists any fundamental reason why mathematics should not be used in economics.  The arguments often heard that because of the human element, of the psychological factors etc., or because there is – allegedly – no measurement of important factors, mathematics will find no application, can all be dismissed as utterly mistaken.  Almost all these objections have been made, or might have been made, many centuries ago in fields where mathematics is now the chief instrument of analysis [e.g. physics in the 16th Century or chemistry and biology in the 18th]…

‘As to the lack of measurement of the most important factors, the example of the theory of heat is most instructive; before the development of the mathematical theory the possibilities of quantitative measurements were less favorable there than they are now in economics.  The precise measurements of the quantity and quality of heat (energy and temperature) were the outcome and not the antecedents of the mathematical theory…

‘The reason why mathematics has not been more successful in economics must be found elsewhere… To begin with, the economic problems were not formulated clearly and are often stated in such vague terms as to make mathematical treatment a priori appear hopeless because it is quite uncertain what the problems really are. There is no point using exact methods where there is no clarity in the concepts and issues to which they are applied. [Emphasis added] Consequently the initial task is to clarify the knowledge of the matter by further careful descriptive work. But even in those parts of economics where the descriptive problem has been handled more satisfactorily, mathematical tools have seldom been used appropriately. They were either inadequately handled … or they led to mere translations from a literary form of expression into symbols…

‘Next, the empirical background of economic science is definitely inadequate. Our knowledge of the relevant facts of economics is incomparably smaller than that commanded in physics at the time when mathematization of that subject was achieved.  Indeed, the decisive break which came in physics in the seventeenth century … was possible only because of previous developments in astronomy. It was backed by several millennia of systematic, scientific, astronomical observation, culminating in an observer of unparalleled calibre, Tycho de Brahe. Nothing of this sort has occurred in economics. It would have been absurd in physics to expect Kepler and Newton without Tycho – and there is no reason to hope for an easier development in economics…

‘Very frequently the proofs [in economics] are lacking because a mathematical treatment has been attempted in fields which are so vast and so complicated that for a long time to come – until much more empirical knowledge is acquired – there is hardly any reason at all to expect progress more mathematico. The fact that these fields have been attacked in this way … indicates how much the attendant difficulties are being underestimated. They are enormous and we are now in no way equipped for them.

‘[We will need] changes in mathematical technique – in fact, in mathematics itself…  It must not be forgotten that these changes may be very considerable. The decisive phase of the application of mathematics to physics – Newton’s creation of a rational discipline of mechanics – brought about, and can hardly be separated from, the discovery of the infinitesimal calculus…

‘The importance of the social phenomena, the wealth and multiplicity of their manifestations, and the complexity of their structure, are at least equal to those in physics.  It is therefore to be expected – or feared – that mathematical discoveries of a stature comparable to that of calculus will be needed in order to produce decisive success in this field… A fortiori, it is unlikely that a mere repetition of the tricks which served us so well in physics will do for the social phenomena too.  The probability is very slim indeed, since … we encounter in our discussions some mathematical problems which are quite different from those which occur in physical science.’

Von Neumann therefore exhorted economists to humility and the task of ‘careful, patient description’, a ‘task of vast proportions’. He stressed that economics could not attack the ‘big’ questions – much more modesty is needed to establish an exact theory for very simple problems, and build on those foundations.

‘The everyday work of the research physicist is … concerned with special problems which are “mature”… Unifications of fields which were formerly divided and far apart may alternate with this type of work. However, such fortunate occurrences are rare and happen only after each field has been thoroughly explored. Considering the fact that economics is much more difficult, much less understood, and undoubtedly in a much earlier stage of its evolution as a science than physics, one should clearly not expect more than a development of the above type in economics either…

‘The great progress in every science came when, in the study of problems which were modest as compared with ultimate aims, methods were developed which could be extended further and further. The free fall is a very trivial physical example, but it was the study of this exceedingly simple fact and its comparison with astronomical material which brought forth mechanics. It seems to us that the same standard of modesty should be applied in economics… The sound procedure is to obtain first utmost precision and mastery in a limited field, and then to proceed to another, somewhat wider one, and so on.’

Von Neumann therefore aims in TGEB at ‘the behavior of the individual and the simplest forms of exchange’ with the hope that this can be extended to more complex situations.

‘Economists frequently point to much larger, more ‘burning’ questions…  The experience of … physics indicates that this impatience merely delays progress, including that of the treatment of the ‘burning’ questions. There is no reason to assume the existence of shortcuts…

‘It is a well-known phenomenon in many branches of the exact and physical sciences that very great numbers are often easier to handle than those of medium size. An almost exact theory of a gas, containing about 10^25 freely moving particles, is incomparably easier than that of the solar system, made up of 9 major bodies… This is … due to the excellent possibility of applying the laws of statistics and probabilities in the first case.

‘This analogy, however, is far from perfect for our problem. The theory of mechanics for 2,3,4,… bodies is well known, and in its general theoretical …. form is the foundation of the statistical theory for great numbers. For the social exchange economy – i.e. for the equivalent ‘games of strategy’ – the theory of 2,3,4… participants was heretofore lacking. It is this need that … our subsequent investigations will endeavor to satisfy. In other words, only after the theory for moderate numbers of participants has been satisfactorily developed will it be possible to decide whether extremely great numbers of participants simplify the situation.’

[This last bit has changed slightly as I forgot to include a few things.]

While some of von Neumann’s ideas were extremely influential on economics, his general warning here about the right approach to the use of mathematics was not widely heeded.

Most economists initially ignored von Neumann’s ideas. Martin Shubik, then a graduate student at Princeton, recounted the scene he found:

‘The contrast of attitudes between the economics department and mathematics department was stamped on my mind… The former projected an atmosphere of dull-business-as-usual conservatism… The latter was electric with ideas… When von Neumann gave his seminar on his growth model, with a few exceptions, the serried ranks of Princeton economists could scarce forebear to yawn.’

However, a small but influential number, including mathematicians at the RAND Corporation (the first recognisable modern ‘think tank’) led by John Williams, applied it to nuclear strategy as well as economics. For example, Albert Wohlstetter published his Selection and Use of Strategic Air Bases (RAND, R-266, sometimes referred to as The Basing Study) in 1954. Williams persuaded the RAND Board and the infamous SAC General Curtis LeMay to develop a social science division at RAND that could include economists and psychologists to explore the practical potential of Game Theory further. He also hired von Neumann as a consultant; when the latter said he was too busy, Williams told him he only wanted the time it took von Neumann to shave in the morning. (Kubrick’s Dr Strangelove satirised RAND’s use of game theory.)

In the 1990s, the movie A Beautiful Mind brought John Nash into pop culture, giving the misleading impression that he was the principal developer of Game Theory. Nash’s fame rests principally on work he did in 1950-1 that became known as ‘the Nash Equilibrium’. In Non-Cooperative Games (1950), he wrote:

‘[TGEB] contains a theory of n-person games of a type which we would call cooperative. This theory is based on an analysis of the interrelationships of the various coalitions which can be formed by the players of the game. Our theory, in contradistinction, is based on the absence of coalitions in that it is assumed each participant acts independently, without collaboration or communication with any of the others… [I have proved] that a finite non-cooperative game always has at least one equilibrium point.’

Von Neumann remarked of Nash’s results, ‘That’s trivial, you know. It’s just a fixed point theorem.’ Nash himself said that von Neumann was a ‘European gentleman’ who was not impressed by his results.

In 1949-50, Merrill Flood, another RAND researcher, began experimenting with staff at RAND (and his own children) playing various games. Nash’s results prompted Flood to create what became known as the ‘Prisoners’ Dilemma’, the most famous and most studied game in Game Theory. It was initially known as ‘a non-cooperative pair’; the name ‘Prisoners’ Dilemma’ was given to it later in 1950 by Tucker who, needing a way to explain the concept to a psychology class at Stanford, hit on an anecdote casting the payoff matrix as two prisoners in separate cells weighing the pros and cons of ratting on each other.

The game was discussed and played at RAND without publishing. Flood wrote up the results in 1952 as an internal RAND memo accompanied by the real-time comments of the players. In 1958, Flood published the results formally (Some Experimental Games). Flood concluded that ‘there was no tendency to seek as the final solution … the Nash equilibrium point.’ Prisoners’ Dilemma has been called ‘the E. coli of social psychology’ by Axelrod, so popular has it become in so many different fields. Many studies of Iterated Prisoners’ Dilemma games have shown that generally neither human nor evolved genetic algorithm players converge on the Nash equilibrium but choose to cooperate far more than Nash’s theory predicts.
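The gap between the Nash equilibrium and observed cooperation can be checked directly with the conventional textbook payoffs (T=5, R=3, P=1, S=0 – the text gives no numbers, so these are illustrative, not Flood’s):

```python
# Conventional illustrative payoffs (T=5, R=3, P=1, S=0); the text gives
# no numbers, so these are standard textbook values, not Flood's data.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
FLIP = {'C': 'D', 'D': 'C'}

def is_nash(a, b):
    """True if neither player can gain by unilaterally switching."""
    pa, pb = PAYOFF[(a, b)]
    return pa >= PAYOFF[(FLIP[a], b)][0] and pb >= PAYOFF[(a, FLIP[b])][1]

# Mutual defection is the one-shot game's unique Nash equilibrium...
print([pair for pair in PAYOFF if is_nash(*pair)])  # [('D', 'D')]
# ...yet over 100 repetitions two cooperators (3 a round) far outscore
# two equilibrium players (1 a round) - the pattern Flood observed.
print(100 * PAYOFF[('C', 'C')][0], 100 * PAYOFF[('D', 'D')][0])  # 300 100
```

This is the sense in which experimental and evolved players ‘beat’ Nash: they forgo the one-shot best reply and capture the larger cooperative payoff over repeated play.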

Section 7 of my essay discusses some recent breakthroughs, particularly the paper by Press & Dyson. This is also a good example of how mathematicians can invade fields. Dyson’s professional fields are maths and physics. He was persuaded to look at the Prisoners’ Dilemma. He very quickly saw that there was a previously unseen class of strategies that has opened up a whole new field for exploration. This article HERE is a good summary of recent developments.

Von Neumann’s brief forays into economics were very much a minor sideline for him but there is no doubt of his influence. Despite von Neumann’s reservations about neoclassical economics, Paul Samuelson admitted that, ‘He darted briefly into our domain, and it has never been the same since.’

In 1987, the Santa Fe Institute, founded by Gell-Mann and others, organised a ten-day meeting to discuss economics. On one side, they invited leading economists such as Kenneth Arrow and Larry Summers; on the other, physicists, biologists, and computer scientists such as the Nobel-winning Philip Anderson and John Holland (inventor of genetic algorithms). When the economists explained their assumptions, Anderson said to them, ‘You guys really believe that?’

One physicist later described the meeting as like visiting Cuba: the cars are all from the 1950s, so on one hand you admire the ingenuity that keeps them going, but on the other they are old technology. Similarly, the economists were ingeniously applying 19th-century maths and physics to very out-of-date models. The physicists were shocked that the economists were content with simplifying assumptions obviously contradicted by reality, and surprised that they seemed unconcerned about how poor their predictions were.

Twenty-seven years later, this problem is more acute. Some economists are listening to the physicists about fundamental problems with the field. Some are angrily rejecting the physicists’ incursions into their field.

Von Neumann explained the scientifically accurate approach to economics and mathematics. [Inserted later. I mean – the first part of his comments above that discusses maths, prediction, models, and economics and physics. As far as I know, nobody seriously disputes these comments – i.e. that Kant and the general argument that ‘maths cannot make inroads into economics’ are wrong. The later comments about building up economic theories from theories of 2, 3, 4 agents etc is a separate topic. See comments.] In other blogs in this series I will explore some of the history of economic thinking as part of a description of the problem for politicians and other decision-makers who need to make predictions.

Please leave corrections and comments below.