On the referendum #28: Some interesting stuff on AI/ML with, hopefully, implications for post-May/Hammond decisions

Here are a few interesting papers I’ve read over the past few months.

Bear in mind that Shane Legg, co-founder and chief scientist of DeepMind, said publicly a few years ago that there’s a 50% probability that we will achieve human-level AI by 2028 and a 90% probability by 2050. Given all that has happened since, including at DeepMind, it’s surely unlikely he now thinks this forecast is too optimistic. Also bear in mind that the US-China AI arms race is already underway, the UK lost its main asset before almost any MPs even knew its name, and the EU in general (outside London) is decreasingly relevant as progress at the edge of the field is driven by coastal America and coastal China, spurred by commercial and national security dynamics. This will get worse as the EU Commission and the ECJ use the Charter of Fundamental Rights to grab the power to regulate all high technology fields from AI to genomics — a legal/power dynamic still greatly under-appreciated in London’s technology world. If you think GDPR is a mess, wait for the ECJ to spend three years deciding crucial cases on autonomous drones and genetic engineering before upending research in the field…

Vote Leave argued during the referendum that a Leave victory should deliver the huge changes that the public wanted and the UK should make science and technology the focus of a profound process of national renewal. On this as on everything else, from Article 50 to how to conduct the negotiations to budget priorities to immigration policy, SW1 in general and the Conservative Party in particular did the opposite of what Vote Leave said. They have driven the country into the ditch and the only upside is they have exposed the rottenness of Westminster and Whitehall and forced many who wanted to keep the duvet over their eyes to face reality — the first step in improvement.

After the abysmal May/Hammond interlude is over, hopefully some time between October 2018 and July 2019, its replacement will need to change course on almost every front from the NHS to how SW1 pours billions into the greedy paws of corporate looters via its appallingly managed >£200 BILLION annual contracting/procurement budget — ‘there’s no money’ bleats most of SW1 as it unthinkingly shovels it at the demimonde of Carillion/BAE-like companies that prop up its MPs with donations.

May’s replacement could decide to take seriously the economic and technological forces changing the world. The UK could, with a very different vision of the future to anything now proposed in Whitehall, improve its own security and prosperity and help the world but this will require 1) substantially changing the wiring of power in Whitehall so decisions are better (new people, training, ideas, tools, and institutions), and 2) making scientific research and technology projects important at the apex of power. We could build real assets with much greater real influence than the chimerical ‘influence’ in Brussels meeting rooms that SW1 has used as an excuse to give away power to Brussels where thinking is much closer to the 1970s than to today’s coastal China or Silicon Valley. Brushing aside Corbyn would be child’s play for a government that could focus on important questions and take project management — an undiscussable subject in SW1 — seriously.

The whole country — the whole world — can see our rotten parties have failed us. The parties ally with the civil service to keep new ideas and people excluded. SW1 has tried to resist the revolutionary implications of the referendum but this resistance has to crack: one way or the other the old ways are doomed. The country voted for profound change in 2016. The Tories didn’t understand this hence, partly, the worst campaign in modern history. This dire Cabinet, doomed to merciless judgement in the history books, is visibly falling: let’s ‘push what is falling’…

For specific proposals on improving the appalling science funding system, see below.

*

The Sam Altman co-founded non-profit, OpenAI, made major progress with its Dota-playing AI last week: follow @gdb for updates. DeepMind is similarly working on StarCraft. It is a major advance to shift from perfect-information games like Go to imperfect-information strategy games like Dota and StarCraft. If AIs shortly beat the best humans at full versions of such games, then it means they can outperform at least parts of human reasoning in ways that have been assumed to be many years away. As OpenAI says, it is a major step ‘towards advanced AI systems which can handle the complexity and uncertainty of the real world.’

https://blog.openai.com/openai-five-benchmark-results/

RAND paper on how AI affects the chances of nuclear catastrophe:

https://www.rand.org/content/dam/rand/pubs/perspectives/PE200/PE296/RAND_PE296.pdf

The Malicious Use of Artificial Intelligence:

https://img1.wsimg.com/blobby/go/3d82daa4-97fe-4096-9c6b-376b92c619de/downloads/1c6q2kc4v_50335.pdf

Defense Science Board: ‘Summer Study on Autonomy’ (2016):

http://www.acq.osd.mil/dsb/reports/2010s/DSBSS15.pdf

JASON: ‘Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD’ (2017)

https://fas.org/irp/agency/dod/jason/ai-dod.pdf

Artificial Intelligence and National Security, by Greg Allen and Taniel Chan (for IARPA):


Some predictions on driverless cars and other automation milestones: http://rodneybrooks.com/my-dated-predictions/

Project Maven (very relevant to politicians/procurement): https://thebulletin.org/project-maven-brings-ai-fight-against-isis11374

Chris Anderson on drones changing business sectors:

https://hbr.org/cover-story/2017/05/drones-go-to-work

On the trend in AI compute and economic sustainability (NB. I think the author is wrong that the Manhattan Project is a good upper bound for what a country will spend in an arms race; the share of US GDP spent on the DoD at the height of the Cold War would be a better metric): https://aiimpacts.org/interpreting-ai-compute-trends/

Read this excellent essay on ‘AI Nationalism’ by Ian Hogarth, directly relevant to arms race arguments and UK policy.

Read ‘Intelligence Explosion Microeconomics’ by Yudkowsky.

Read ‘Autonomous technology and the greater human good’ by Omohundro — one of the best things about the dangers of AGI and ideas about safety I’ve seen by one of the most respected academics working in this field.

Existential Risk: Diplomacy and Governance (Future of Humanity Institute, 2017).

If you haven’t already, you should also read this 1955 essay by von Neumann, ‘Can we survive technology?’. It is relevant beyond any specific technology. VN was regarded by the likes of Einstein and Dirac as the smartest person they’d ever met. He was involved in the Manhattan Project and in inventing computer science, game theory and much more. This essay explored the essential problem that the scale and speed of technological change suddenly blew up assumptions about political institutions’ ability to cope. Much of it reads as if it were written yesterday. ‘For progress there is no cure…’

I blogged on a paper by Judea Pearl a few months ago HERE. He is the leading scholar of causation. He argues that current ML approaches are inherently limited and that further advances require giving machines causal reasoning:

‘If we want machines to reason about interventions (“What if we ban cigarettes?”) and introspection (“What if I had finished high school?”), we must invoke causal models. Associations are not enough — and this is a mathematical fact, not opinion.’

I also recently wrote this on science funding. It links to a great piece by two young neuroscientists about how post-Brexit Britain should improve science, and is also relevant to how the UK could set up an ARPA-like entity to fund AI/ML and other fields:

https://dominiccummings.com/2018/06/08/on-the-referendum-25-how-to-change-science-funding-post-brexit/

 

On the referendum #26: How to change science funding post-Brexit [updated with comment by Alan Kay]

There was an excellent piece in the Telegraph yesterday by two young neuroscientists on how SW1 should be thinking about science post-Brexit. The byline says that James Phillips works at Janelia, a US lab that has explicitly tried to learn about how to fund science research from the famous successes of Bell Labs, the ARPA-PARC project that invented the internet and PC, and similar efforts. He must see every day how science funding can work so much better than is normal in Britain.

Today, the UK a) ties research up in appalling bureaucracy, such as requiring multi-stage procurement processes literally to change a lightbulb, and b) does not fund it enough. The bureaucracy around basic science is so crazy that a glitch in paperwork means thousands of animals are quietly destroyed, something the public would be appalled to learn.

Few in SW1 take basic science research seriously. And in all the debates over Brexit, practically the entire focus is on 1980s arguments over the mechanism for regulating product markets created by Delors to centralise power in Brussels — the Internal Market (aka Single Market). Thirty years after they committed to this mechanism and two years after the referendum that blew it up, most MPs still don’t understand what it is and how it works. Dismally, the last two years have been a sort of remedial education programme and there has been practically zero discussion about how Britain could help create the future.

During the referendum, Vote Leave argued that the dreadful Cameron/Osborne immigration policy (including the net migration target) was damaging and said we should make Britain MORE welcoming to scientists. Obviously Remain-SW1 likes to pretend that the May/Hammond Remain team’s shambles is the only possible version of Brexit. Nothing could be further from the truth. If the government had funded the NHS, ditched the ‘tens of thousands’ absurdity, and, for example, given maths, physics and computer science PhDs ‘free movement’ then things would be very different now — and Corbyn would probably be a historical footnote.

Regardless of how you voted in the referendum, reasonable people outside the rancid environment of SW1 should pressure their MPs to take their responsibilities to science x100 more seriously than they do.

I strongly urge you to read it all, send it to your MP, and politely ask for action…

(Their phrase ‘creating the future’ invokes Alan Kay’s famous line — the best way to predict the future is to invent it.)


Science holds the key, by James & Matthew Phillips

The 2008 crisis should have led us to reshape how our economy works. But a decade on, what has really changed? The public knows that the same attitude that got us into the previous economic crisis will not bring us long-term prosperity, yet there is little vision from our leaders of what the future should look like. Our politicians are sleeping, yet have no dreams. To solve this, we must change emphasis from creating “growth” to creating the future: the former is an inevitable product of the latter.

Britain used to create the future, and we must return to this role by turning to scientists and engineers. Science defined the last century by creating new industries. It will define this century too: robotics, clean energy, artificial intelligence, cures for disease and other unexpected advances lie in wait. The country that gives birth to these industries will lead the world, and yet we seem incapable of action.

So how can we create new industries quickly? A clue lies in a small number of institutes that produced a strikingly large number of key advances. Bell Labs produced much of the technology underlying computing. The Palo Alto Research Center did the same for the internet. There are simple rules of thumb about how great science arises, embodied in such institutes. They provided ambitious long-term funding to scientists, avoided unnecessary bureaucracy and chased high-risk, high-reward projects.

Today, scientists spend much of their time completing paperwork. A culture of endless accountability has arisen out of a fear of misspending a single pound. We’ve seen examples of routine purchases of LEDs that cost under £10 having to go through a nine-step bureaucratic review process.

Scientists on the cusp of great breakthroughs can be slowed by years mired in review boards and waiting on a decision from on high. Their discoveries are thus made, and capitalised on, elsewhere. We waste money, miss patents, lose cures and drive talented scientists away to high-paid jobs. You don’t cure cancer with paperwork. Rather than invigilate every single decision, we should do spot checks retrospectively, as is done with tax returns.

A similar risk aversion is present in the science funding process. Many scientists are forced to specify years in advance what they intend to do, and spend their time continually applying for very short, small grants. However, it is the unexpected, the failures and the accidental which are the inevitable cost of, and source of fruit in, the scientific pursuit. It takes time, it takes long-term thinking, it takes flexibility. Peter Higgs, the Nobel laureate who predicted the Higgs boson, says he wouldn’t stand a chance of being funded today for lack of a track record. This leads scientists collectively to pursue incremental, low-risk, low-payoff work.

The current funding system is also top-down, prescriptive and homogenous, administered centrally from London. It is slow to respond to change and cut off from the real world.

We should return to funding university departments more directly, allowing more rapid, situation-aware decision-making of the kind present in start-ups, and create a diversity of funding systems. This is how the best research facilities in history operated, yet we do not learn their key lesson: that science cannot be managed by central edict, but flourishes through independent inquiry.

While Britain built much of modern science, today it neglects it, lagging behind other comparable nations in funding, and instead prioritising a financial industry prone to blowing up. Consider that we spent more money bailing out the banks in a single year than we have on science in the entirety of history.

We scarcely pause to consider the difference in return on investment. Rather than prop up old industries, we should invest in world-leading research institutes with a specific emphasis on high-risk, high-payoff research.

Those who say this is not government’s role fail the test of history. Much great science has come from government investment in times of crisis. Without Nasa, there would be no SpaceX. These government investments were used to provide a long-term, transformative vision on a scale that cannot be achieved through private investment alone – especially where there is a high risk of failure but high reward in success. The payoff of previous investments was enormous, so why not replicate the defence funding agencies that led to them with peacetime civilian equivalents?

In order to be the nation where new discoveries are made, we must take decisive steps to make the UK a magnet for talented young scientists.

However, a recent report on ensuring a successful UK research endeavour scarcely mentioned young scientists at all. An increased focus on this goal, alongside simple steps like long-term funding and guaranteed work visas for their spouses, would go a long way. In short, we should be to scientific innovation what we are to finance: a highly connected nerve centre for the global economy.

The political candidate that can leverage a pro-science platform to combine economic stimulus with the reality of economic pragmatism will transform the UK. We should lead the future by creating it.

James Phillips is a PhD student in neuroscience at the HHMI Janelia Research Campus in the US and the University of Cambridge. 
Matthew Phillips is a PhD student in neuroscience at the Sainsbury Wellcome Centre, University College London


UPDATE

Alan Kay, the brilliant researcher I mentioned above, happened to read this blog and posted this comment, which I paste below…

[From Alan Kay]

Good advice! However, I’m afraid that currently in the US there is nothing like the fabled Bell Labs or ARPA-PARC funding, at least in computing where I’m most aware of what is and is not happening (I’m the “Alan Kay” of the famous quote).

It is possible that things were still better a few years ago in the US than in the UK (I live in London half the year and in Los Angeles the other half). But I have some reasons to doubt. Since the new “president”, the US does not even have a science advisor, nor is there any sign of desire for one.

A visit to the classic Bell Labs of its heyday would reveal many things. One of the simplest was a sign posted randomly around: “Either do something very useful, or very beautiful”. Funders today won’t fund the second at all, and are afraid to fund at the risk level needed for the first.

It is difficult to sum up ARPA-PARC, but one interesting perspective on this kind of funding was that it was both long range and stratospherically visionary, and part of the vision was that good results included “better problems” (i.e. “problem finding” was highly valued and funded well) and good results included “good people” (i.e. long range funding should also create the next generations of researchers). In fact, virtually all of the researchers at Xerox PARC had their degrees funded by ARPA; they were “research results” who were able to get better research results.

Since the “D” was put on ARPA in the early 70s, it was then not able to do what it did in the 60s. NSF in the US never did this kind of funding. I spent quite a lot of time on some of the NSF Advisory Boards and it was pretty much impossible to bridge the gap between what was actually needed and the difficulties the Foundation has with congressional oversight (and some of the stipulations of their mission).

Bob Noyce (one of the founders of Intel) used to say “Wealth is created by Scientists, Engineers and Artists, everyone else just moves it around”.

Einstein said “We cannot solve important problems of the world using the same level of thinking we used to create them”.

A nice phrase by Vi Hart is “We must insure human wisdom exceeds human power”.

To make it to the 22nd century at all, and especially in better shape than we are now, we need to heed all three of these sayings, and support them as the civilization we are sometimes trying to become. It’s the only context in which “The best way to predict the future is to invent it” makes any useful sense.

State-of-the-art in AI #1: causality, hypotheticals, and robots with free will & capacity for evil (UPDATED)

Judea Pearl is one of the most important scholars in the field of causal reasoning. His book Causality is the leading textbook in the field.

This blog has two short parts — a paper he wrote a few months ago and an interview he gave a few days ago.

*

He recently wrote a very interesting (to the very limited extent I understand it) short paper about the limits of state-of-the-art AI systems using ‘deep learning’ neural networks — such as the AlphaGo system which recently conquered the game of Go and AlphaZero which blew past centuries of human knowledge of chess in 24 hours — and how these systems could be improved.

The human ability to interrogate stored representations of the environment with counterfactual questions is fundamental and, for now, absent in machines. (All bold added by me.)

‘If we examine the information that drives machine learning today, we find that it is almost entirely statistical. In other words, learning machines improve their performance by optimizing parameters over a stream of sensory inputs received from the environment. It is a slow process, analogous in many respects to the evolutionary survival-of-the-fittest process that explains how species like eagles and snakes have developed superb vision systems over millions of years. It cannot explain however the super-evolutionary process that enabled humans to build eyeglasses and telescopes over barely one thousand years. What humans possessed that other species lacked was a mental representation, a blue-print of their environment which they could manipulate at will to imagine alternative hypothetical environments for planning and learning…

‘[T]he decisive ingredient that gave our homo sapiens ancestors the ability to achieve global dominion, about 40,000 years ago, was their ability to sketch and store a representation of their environment, interrogate that representation, distort it by mental acts of imagination and finally answer “What if?” kind of questions. Examples are interventional questions: “What if I act?” and retrospective or explanatory questions: “What if I had acted differently?” No learning machine in operation today can answer such questions about actions not taken before. Moreover, most learning machines today do not utilize a representation from which such questions can be answered.

‘We postulate that the major impediment to achieving accelerated learning speeds as well as human level performance can be overcome by removing these barriers and equipping learning machines with causal reasoning tools. This postulate would have been speculative twenty years ago, prior to the mathematization of counterfactuals. Not so today. Advances in graphical and structural models have made counterfactuals computationally manageable and thus rendered meta-statistical learning worthy of serious exploration.’

Figure: the ladder of causation


‘An extremely useful insight unveiled by the logic of causal reasoning is the existence of a sharp classification of causal information, in terms of the kind of questions that each class is capable of answering. The classification forms a 3-level hierarchy in the sense that questions at level i (i = 1, 2, 3) can only be answered if information from level j (j ≥ i) is available. [See figure]… Counterfactuals are placed at the top of the hierarchy because they subsume interventional and associational questions. If we have a model that can answer counterfactual queries, we can also answer questions about interventions and observations… The translation does not work in the opposite direction… No counterfactual question involving retrospection can be answered from purely interventional information, such as that acquired from controlled experiments; we cannot re-run an experiment on subjects who were treated with a drug and see how they would have behaved had they not been given the drug. The hierarchy is therefore directional, with the top level being the most powerful one. Counterfactuals are the building blocks of scientific thinking as well as legal and moral reasoning…

‘This hierarchy, and the formal restrictions it entails, explains why statistics-based machine learning systems are prevented from reasoning about actions, experiments and explanations. It also suggests what external information needs to be provided to, or assumed by, a learning system, and in what format, in order to circumvent those restrictions.’

[He describes his approach to giving machines the ability to reason in more advanced ways (‘intent-specific optimization’) than standard approaches and the success of some experiments on real problems.]

[T]he value of intent-based optimization … contains … the key by which counterfactual information can be extracted out of experiments. The key is to have agents who pause, deliberate, and then act, possibly contrary to their original intent. The ability to record the discrepancy between outcomes resulting from enacting one’s intent and those resulting from acting after a deliberative pause, provides the information that renders counterfactuals estimable. It is this information that enables us to cross the barrier between layer 2 and layer 3 of the causal hierarchy… Every child undergoes experiences where he/she pauses and thinks: Can I do better? If mental records are kept of those experiences, we have experimental semantics for counterfactual thinking in the form of regret sentences “I could have done better.” The practical implications of this new semantics are worth exploring.’

The paper is here: http://web.cs.ucla.edu/~kaoru/theoretical-impediments.pdf.
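Pearl’s three-level hierarchy can be made concrete with a toy simulation. The model below is my own invented example, not from the paper: a structural causal model in which an unobserved severity U confounds drug-taking X and recovery Y, so the three kinds of question get three different answers.

```python
import random

random.seed(0)

# Toy structural causal model (invented for illustration):
#   U = unobserved illness severity, Bernoulli(0.5)
#   X = drug taken: severe patients always take it, others with prob 0.2
#   Y = recovery: the drug helps (+0.3), severity hurts (-0.4)

def f_y(x, u, noise):
    # Structural equation for recovery: Y = 1 iff noise < 0.5 + 0.3*x - 0.4*u
    return 1 if noise < 0.5 + 0.3 * x - 0.4 * u else 0

def sample():
    u = 1 if random.random() < 0.5 else 0
    x = 1 if (u == 1 or random.random() < 0.2) else 0
    noise = random.random()
    return u, x, noise, f_y(x, u, noise)

data = [sample() for _ in range(100_000)]

# Level 1 (association): P(Y=1 | X=1), read straight off the observed data.
treated = [y for u, x, noise, y in data if x == 1]
assoc = sum(treated) / len(treated)

# Level 2 (intervention): P(Y=1 | do(X=1)). Force x=1 for every unit,
# leaving U's natural distribution alone.
interv = sum(f_y(1, u, noise) for u, x, noise, y in data) / len(data)

# Level 3 (counterfactual): one severe patient (u=1, noise=0.2) took the
# drug and recovered. Would they have recovered without it? Abduct the
# unit-level background (u, noise), change the action, re-run the mechanism.
y_factual = f_y(1, 1, 0.2)   # recovered with the drug
y_cf = f_y(0, 1, 0.2)        # would not have recovered without it

print(assoc, interv, y_factual, y_cf)
```

Because the sicker patients take the drug more often, the associational answer (about 0.47 here) understates the interventional one (0.60), and the counterfactual needs the unit-level background that neither of the first two levels uses. That is exactly the directionality of Pearl’s hierarchy.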

*

By chance this evening I came across this interview with Pearl in which he discusses some of the ideas above less formally, HERE.

‘The problems that emerged in the early 1980s were of a predictive or diagnostic nature. A doctor looks at a bunch of symptoms from a patient and wants to come up with the probability that the patient has malaria or some other disease. We wanted automatic systems, expert systems, to be able to replace the professional — whether a doctor, or an explorer for minerals, or some other kind of paid expert. So at that point I came up with the idea of doing it probabilistically.

‘Unfortunately, standard probability calculations required exponential space and exponential time. I came up with a scheme called Bayesian networks that required polynomial time and was also quite transparent.

‘[A]s soon as we developed tools that enabled machines to reason with uncertainty, I left the arena to pursue a more challenging task: reasoning with cause and effect.

‘All the machine-learning work that we see today is conducted in diagnostic mode — say, labeling objects as “cat” or “tiger.” They don’t care about intervention; they just want to recognize an object and to predict how it’s going to evolve in time.

‘I felt an apostate when I developed powerful tools for prediction and diagnosis knowing already that this is merely the tip of human intelligence. If we want machines to reason about interventions (“What if we ban cigarettes?”) and introspection (“What if I had finished high school?”), we must invoke causal models. Associations are not enough — and this is a mathematical fact, not opinion.

‘As much as I look into what’s being done with deep learning, I see they’re all stuck there on the level of associations. Curve fitting. That sounds like sacrilege, to say that all the impressive achievements of deep learning amount to just fitting a curve to data. From the point of view of the mathematical hierarchy, no matter how skillfully you manipulate the data and what you read into the data when you manipulate it, it’s still a curve-fitting exercise, albeit complex and nontrivial.

‘I’m very impressed, because we did not expect that so many problems could be solved by pure curve fitting. It turns out they can. But I’m asking about the future — what next? Can you have a robot scientist that would plan an experiment and find new answers to pending scientific questions? That’s the next step. We also want to conduct some communication with a machine that is meaningful, and meaningful means matching our intuition.

‘If a machine does not have a model of reality, you cannot expect the machine to behave intelligently in that reality. The first step, one that will take place in maybe 10 years, is that conceptual models of reality will be programmed by humans. The next step will be that machines will postulate such models on their own and will verify and refine them based on empirical evidence. That is what happened to science; we started with a geocentric model, with circles and epicycles, and ended up with a heliocentric model with its ellipses.

‘We’re going to have robots with free will, absolutely. We have to understand how to program them and what we gain out of it. For some reason, evolution has found this sensation of free will to be computationally desirable… Evidently, it serves some computational function.

‘I think the first evidence will be if robots start communicating with each other counterfactually, like “You should have done better.” If a team of robots playing soccer starts to communicate in this language, then we’ll know that they have a sensation of free will. “You should have passed me the ball — I was waiting for you and you didn’t!” “You should have” means you could have controlled whatever urges made you do what you did, and you didn’t.

[When will robots be evil?] When it appears that the robot follows the advice of some software components and not others, when the robot ignores the advice of other components that are maintaining norms of behavior that have been programmed into them or are expected to be there on the basis of past learning. And the robot stops following them.’
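The ‘diagnostic mode’ Pearl describes is level-one, associational reasoning: from P(symptom | disease) and a prior, compute P(disease | symptom). A minimal sketch, with a two-node network whose probabilities are invented purely for illustration:

```python
# Two-node Bayesian network, Disease -> Symptom.
# All numbers are invented for illustration, not medical data.
p_disease = 0.01                  # prior P(D=1)
p_symptom = {1: 0.9, 0: 0.05}     # P(S=1 | D=d)

def posterior_disease_given_symptom():
    """P(D=1 | S=1) by Bayes' rule: the diagnostic direction Pearl describes."""
    joint_d1 = p_symptom[1] * p_disease
    joint_d0 = p_symptom[0] * (1 - p_disease)
    return joint_d1 / (joint_d1 + joint_d0)

print(round(posterior_disease_given_symptom(), 3))  # ~0.154: a rare disease stays fairly unlikely
```

Everything here lives on rung one of the hierarchy: the network predicts and diagnoses, but by itself says nothing about what would happen under an intervention, which is Pearl’s point.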

Please leave links to significant critiques of this paper or work that has developed the ideas in it.

If interested in the pre-history of the computer age and internet, this paper explores it.

Review of Allison’s book on US/China & nuclear destruction, and some connected thoughts on technology, the EU, and space

‘The combination of physics and politics could render the surface of the earth uninhabitable… [Technological progress] gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we have known them, cannot continue.’ John von Neumann, one of the 20th Century’s most important mathematicians, one of the two most responsible for developing digital computers, central to Manhattan Project etc.

‘Politics is always like visiting a country one does not know with people whom one does not know and whose reactions one cannot predict. When one person puts a hand in his pocket, the other person is already drawing his gun, and when he pulls the trigger the first one fires and it is too late then to ask whether the requirements of common law with regard to self-defence apply, and since common law is not effective in politics people are very, very quick to adopt an aggressive defence.’ Bismarck, 1879.

*

Below is a review of Graham Allison’s book, Destined for War: Can America and China Escape Thucydides’s Trap? Allison’s book is particularly interesting given what is happening with North Korea and Trump. It is partly about the most urgent question: whether and how humanity can survive the collision between science and politics.

Beneath the review are a few other thoughts on the book and its themes. I will also post some notes on stuff connecting ideas about advanced technology and strategy (conventional and nuclear) including notes from the single best book on nuclear strategy, Payne’s The Great American Gamble: deterrence theory and practice from the Cold War to the twenty-first century. If you want to devote your life to a cause with maximum impact, then studying this book is a good start and it also connects to debates on other potential existential threats such as biological engineering and AI.

Payne’s book connects directly to Allison’s. Allison focuses a lot on the circumstances in which crises could spin out of control and end in US-China war. Payne’s book is the definitive account of nuclear strategy and its intellectual and practical problems. Payne’s book in a nutshell: 1) politicians and most senior officials operate with the belief that there is a dependable ‘rational’ basis for successful deterrence in which ‘rational’ US opponents will respond prudently and cautiously to US nuclear deterrence threats; 2) the re-evaluation of nuclear strategy in expert circles since the Cold War exposes the deep flaws of Cold War thinking in general and the concept of ‘rational’ deterrence in particular (partly because strategy was dangerously influenced by ideas about rationality from economics). Expert debate has not reached most of those responsible, nor the media. Trump’s language over North Korea and the media debate about it are stuck in the language of Cold War deterrence.

I would bet that no UK Defence Secretary has read Payne’s book. (Have the MoD PermSecs? The era of Michael Quinlan has long gone as the Iraq inquiries revealed.) What emerges from UK Ministers suggests they are operating with Cold War illusions. If you think I’m probably too pessimistic, then ponder this comment by Professor Allison who has spent half a century in these circles: ‘Over the past decade, I have yet to meet a senior member of the US national security team who had so much as read the official national security strategies’ (emphasis added). NB. he is referring to reading the official strategies, not the explanations of why they are partly flawed!

This of course relates to the theme of much I have written: the dangers created by the collision of science and markets with dysfunctional individuals and political institutions, and the way the political-media system actively suppresses thinking about, and focus on, what’s important.

Priorities are fundamental to politics because of inevitable information bottlenecks: these bottlenecks can be transformed by rare good organisation but they cannot be eradicated. People are always asking 'how could the politicians let X happen with Y?' where Y is something important. People find it hard to believe that Y is not the focus of serious attention, and therefore things like X are bound to happen all the time. People like Osborne and Clegg are focused on some magazine profile, not Y. The subject of nuclear command and control ought to make people realise that their mental models for politics are deeply wrong. It is beyond doubt that politicians do not even take the question of accidental nuclear war seriously, so a fortiori there is no reason to have confidence in their general approach to priorities.

If you think of politics as 'serious people focusing seriously on the most important questions', which is the default model of most educated people and the media (but not of the less-educated public, which has better instincts), then your model of reality is badly wrong. A more accurate model is: politics is a system that 1) selects against the skills needed for rigorous thinking and selects for qualities such as groupthink and confirmation bias, 2) incentivises a badly selected set of people to consider their careers, not the public interest, 3) drops them into dysfunctional institutions with no relevant training and poor tools, 4) centralises vast amounts of power in the hands of these people and institutions in ways we know are bound to cause huge errors, and 5) provides very weak (and often damaging) feedback, so facing reality is rare, learning is practically impossible, and system reform is seen as a hostile act by political parties and civil services worldwide.

I meant to publish this a few days ago on 'Petrov day', the anniversary of 26 September 1983, when Petrov saw US nuclear missiles heading for Russia on his screen but, in a snap decision without consultation, decided not to inform his superiors, guessing it was some sort of technical error and not wanting to risk catastrophic escalation. (Petrov died a few weeks ago.) I forgot to post, but my point is: we will not keep getting lucky like that, and our odds worsen with every week that the political system works as it does. The cumulative probability of disaster grows alarmingly even if you assume a small annual chance of it. For example, a 1% chance of wipeout per year means the probability of wipeout is about 20% within 20 years, about 50% within 70 years, and about two-thirds within a century. Given what we now know it's reasonable to plan on the basis that the chance of a nuclear accident of some sort leading to mass destruction is at least 1% per year. A 1:30 chance per year means a ~97% chance of wipeout in a century…
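The arithmetic behind these cumulative figures is just compounding of annual survival probabilities: the chance of avoiding disaster for N years is (1 − p) raised to the power N. A minimal sketch (not from the original text, merely illustrating the numbers quoted above):

```python
def cumulative_risk(annual_prob: float, years: int) -> float:
    """Probability of at least one 'wipeout' event within `years`,
    assuming an independent annual probability of `annual_prob`."""
    return 1 - (1 - annual_prob) ** years

# 1% chance of wipeout per year:
print(round(cumulative_risk(0.01, 20), 2))    # ~0.18 (roughly 20%)
print(round(cumulative_risk(0.01, 70), 2))    # ~0.51 (roughly 50%)
print(round(cumulative_risk(0.01, 100), 2))   # ~0.63 (about two-thirds)

# 1-in-30 chance per year:
print(round(cumulative_risk(1 / 30, 100), 2))  # ~0.97
```

The exact outputs (18%, 51%, 63%, 97%) are consistent with the approximate figures quoted in the text.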

*

Review of Destined for War: Can America and China Escape Thucydides’s Trap?, by Graham Allison

Every day on his way to work at Harvard, Professor Allison wondered how the reconstruction of the bridge over Boston’s Charles River could take years while in China bigger bridges are replaced in days. His book tells the extraordinary story of China’s transformation since Deng abandoned Mao’s catastrophic Stalinism, and considers whether the story will end in war between China and America.

China erects skyscrapers in weeks while Parliament delays Heathrow expansion for over a decade. The EU discusses dumb rules made 60 years ago while China produces a Greece-sized economy every 16 weeks. China’s economy doubles roughly every seven years; it is already the size of America’s and will likely dwarf it in 20 years. More serious than Europe, it invests this growth in education and technology from genetic engineering to artificial intelligence.

Allison analyses the formidable President Xi, who has known real suffering and is very different to western leaders obsessed with the frivolous spin cycles of domestic politics. Xi's goal is to ensure that China's renaissance returns it to its position as the richest, strongest and most advanced culture on earth. Allison asks: will the US-China relationship repeat the dynamics between Athens and Sparta that led to war in 431 BC or might it resemble the story of the British-American alliance in the 20th century?

In Thucydides’ history the dynamic growth of Athens caused such fear that, amid confusing signals in an escalating crisis, Sparta gambled on preventive war. Similarly, after Bismarck unified Germany in 1870-71, Europe’s balance of power was upended. In summer 1914, the leaderships of all Great Powers were overwhelmed by confusing signals amid a rapidly escalating crisis. The prime minister doodled love letters to his girlfriend as the cabinet discussed Ireland, and European civilisation tottered over the brink.

Allison discusses how America, China and Taiwan [or Korea] might play the roles of Britain, Germany and Belgium. China has invested in weapons with powerful asymmetric advantages: cheap missiles can sink an aircraft carrier costing billions, and cyber weapons could negate America’s powerful space and communication infrastructure. American war-games often involve bombing Chinese coastal installations. How far might it escalate?

Nuclear weapons increase destructive power a million-fold and give a leader just minutes to decide whether a (possibly false) warning justifies firing weapons that would destroy civilisation, while relying on the same sort of hierarchical decision-making processes that failed in the much slower 1914 crisis.

Terrifying near misses have already happened, and we have been saved by individuals' snap judgments. They have occurred, luckily, during episodes of relative calm. Similar incidents during an intense crisis could spark catastrophe. The Pentagon hoped that technology would bring 'information dominance': instead, technology accelerates crises and overwhelms decision-makers. Real and virtual robots will fight battles and influence minds faster than traditional institutions can follow.

Allison hopes Washington will rediscover its 1940s seriousness, when it built a strategy and institutions to contain Stalin. He suggests abandoning ‘containment’, which is unlikely to work in the same way against capitalist China as it did against Soviet Russia. It could drop security guarantees to Taiwan to lower escalation risks. It could promote new institutions to tackle destructive technology and terrorism. Since China will upend post-1945 institutions anyway, why not try to shape what comes next together? Perhaps, channelling Sun Tzu, the West could avoid defeat by not trying to ‘win’.

It is hard to see how the necessary leadership might emerge.

We need government teams capable of the rare high performance we see in George Mueller's Nasa, which put man on the moon, or in Silicon Valley entrepreneurs such as Sam Altman and Patrick Collison. This means senior politicians and officials of singular ability and with different education, training and experience. It means extremely adaptive institutions and state-of-the-art tools, not the cabinet processes that failed in 1914. It means breaking the power of self-absorbed parties and bureaucracies that evolved before nuclear physics and the internet.

New leaders must build institutions for global cooperation that can transcend Thucydides’ dynamics. For example, the plan of Jeff Bezos, Amazon’s CEO, to build a permanent moon base in which countries work together to harness the resources of the solar system is the sort of project that could create an alternative focus to nationalist antagonism.

The scale of change seems impossible, yet technology gives us no choice — we must try to escape our evolutionary origins, since we cannot survive repeated roulette with advanced technology. Churchill wrote how in 1914 governments drifted into ‘fathomless catastrophe’ in ‘a kind of dull cataleptic trance’. Western leaders are in another such trance. Unless new forces evolve outside closed political systems and force change we will suffer greater catastrophe; it’s just a matter of when.

I hope people like Jeff Bezos read this timely book and resolve to build the political forces we need.

(Originally appeared in The Spectator.)

*

A few other thoughts

I’ve got some quibbles, such as interpretations of Thucydides, but I won’t go into those.

There are many issues in it I did not have time to mention in a short review…

1. Nuclear crises / accidents

In the context of US-China crises, it is very instructive to consider some of the most dangerous episodes of the Cold War that remained secret at the time.

Here are some of the near misses that have been declassified (see this timeline from Future of Life Institute).

  • 24 January 1961. A US bomber broke up and dropped two hydrogen bombs on North Carolina. Five of six safety devices failed. 'By the slightest margin of chance, literally the failure of two wires to cross, a nuclear explosion was averted' (Defense Secretary Robert McNamara).
  • 25 October 1962, during the Cuban Missile Crisis. A sabotage alarm was triggered at a US base. Faulty wiring meant that the alarm triggered the take-off of nuclear armed US planes. Fortunately they made contact with the ground and were stood down. The alarm had been triggered by a bear — yes, a bear, like in a Simpsons episode — pottering around outside the base. This was one of many incidents during this crisis, including one base where missiles and codes were mishandled such that a single person could have launched.
  • 27 October 1962, during the Cuban Missile Crisis. A Soviet submarine armed with nuclear weapons was cornered by US ships which dropped depth charges. It had had no contact with Moscow for days and had no idea if war had already broken out. Malfunctioning systems caused carbon dioxide to build up and crew members were fainting. In panic the captain ordered a nuclear torpedo fired. Orders said that three officers had to agree. Only two did. Vasili Arkhipov said No. It was not known until after the collapse of the Soviet Union that there were also tactical nuclear missiles deployed to Cuba and, for the only time, under the direct authority of field commanders who could fire without further authority from Moscow, so if the US had decided to attack Cuba, as many urged JFK to do, there is a reasonable chance that local commanders would have begun a nuclear exchange. Castro wanted these missiles, unknown to America, transferred to Cuban control. Fortunately Mikoyan, the Soviet in charge on the ground, witnessed Castro's unstable character and decided not to transfer the missiles to his control. They were secretly returned to Russia shortly after.
  • 1 August 1974. A warning of the danger of allowing one person to give a catastrophic order: Nixon was depressed, drinking heavily, and unstable, so Defense Secretary Schlesinger told the Joint Chiefs to come to him in the event of any order to fire nuclear weapons.
  • 9 November 1979. NORAD thought there was a large-scale Soviet nuclear attack. Planes were scrambled and ICBM crews put on highest alert. The National Security Adviser was called at home. He looked at his wife asleep, decided not to wake her as they would shortly both be dead, and turned his mind to calling President Carter about plans for massive retaliation before he died. After six minutes no satellite data had confirmed any launches, and decisions were delayed. It turned out that a technician had accidentally input a training program which played through the computer system as if it were a real attack. (There were other similar incidents.)
  • 26 September 1983. A month after the Soviet Union shot down a Korean passenger jet, and at a time of international tension, a Soviet satellite showed America had launched five nuclear missiles. The data suggested the satellite was working properly but the officer on duty, Stanislav Petrov, decided to report it to his superiors as a false alarm without knowing if it was true. It turned out to be an odd effect of sun glinting off clouds that fooled the system.
  • 2-11 November 1983. NATO ran a large wargame simulating DEFCON 1 and a coordinated attack on the Soviet Union. The wargame was so realistic that Soviet intelligence thought it was cover for a real attack and Soviet missiles were placed on high alert. On 11 November the Soviets intercepted a message saying US missiles had launched. Fortunately, incidents such as that of 26 September 1983 did not randomly occur during these 10 days.
  • 25 January 1995. The Russian system detected the launch of a missile off the coast of Norway that was thought to be a US submarine launch. The warning went to Yeltsin, who activated his 'nuclear football' and retrieved launch codes. There was no corroboration from satellites. Norway had actually launched a scientific rocket and somehow notification of this had not been passed on properly within Russia.
  • 29-30 August 2007. Six US nuclear weapons were accidentally loaded onto a B-52, which was left unguarded overnight, then flown to another base where it was left unguarded for another nine hours before ground crew realised what they were looking at. For 36 hours nobody realised the weapons were missing.
  • 23 October 2010. The US command and control system responsible for detecting and stopping unauthorised launches lost all control of 50 ICBMs for an hour because of communication failure caused by a dodgy component.
  • A 2013 monitoring exercise found the US nuclear command and control system generally shambolic. Staff were found to be on drugs and otherwise unsuitable, the system was deemed unfit to cope with a major hack, and the commander of the ICBM force was compromised by a classic KGB ‘honey trap’ (when I lived in Moscow I met some of the women who worked on such operations and I’d bet >90% of male UK Ministers/PermSecs would throw themselves at them faster than you can say ‘honey trap’).

This is just a sample. The full list still understates the scale of luck we have had in at least two ways. First, the data is mostly from America because America is a more open society. The most sensible assumption is that there have been more incidents in Russia than we know about. Second, there is a selection bias towards older incidents that have been declassified.

Right now there are hundreds of missiles on ‘hair-trigger’ alert for launch within minutes. Decisions about how reliable a warning is and whether to fire must all be taken within minutes. This makes the whole world vulnerable to accidents, unauthorised use, unhinged leaders, and false alarms. This situation could get worse. China’s missiles are not on hair-trigger alert but the Chinese military is pushing to change this. Adding a third country operating like this would make the system even more unstable. It also seems very likely that proliferation will continue to spread. The West preaches non-proliferation at non-nuclear countries but this unsurprisingly is not persuasive.

2. China’s weaknesses, including the tension between informational openness needed for growth and its political dangers

During the Cold War, many people from different political perspectives were agreed on one thing: that the Soviet Union was much stronger than it later turned out to have been. This view was so powerful that people like Andy Marshall, the founder and multi-decade head of the Office of Net Assessment, struggled to find support for his argument that the CIA and Pentagon were systematically overstating the strength of the Soviet economy and understating the burden of defence spending. They had, of course, strong bureaucratic reasons to do so: a more dangerous enemy was the best argument for more funding. It is important to keep in mind this potential error viz China.

1929 and 2008 each had profound effects on US politics. China, interestingly, was not as badly hit by 2008 as the West. What is the probability that it will continue to avoid an economic crisis somewhere between a serious recession and a 1929/2008-style event over, say, the next 20 years? If it does experience such a shock, how effective will its political institutions be in coping, relative to those of America and Britain, over the long term? Might debt and bad financial institutions create a political crisis serious enough to threaten the legitimacy of the regime? Might other problems such as secession movements (perhaps combined with terrorism) cause an equivalently serious political crisis? After all, historically the country has fallen apart repeatedly and this is the great fear of its leaders.

China also has serious resource vulnerabilities. It has to import most of its energy. It has serious water shortages. It has serious ecological crises. It has serious corruption problems. It has a rapidly ageing population. Although it, unlike the EU, has built brilliant private companies to rival Google et al, its state-owned enterprises (with special phones on CEO desks for Communist Party instructions) control gigantic resources and are not run as well as Google or Alibaba. There has been significant emigration of educated Chinese particularly to America where they buy houses and educate their children (Xi himself quietly sent a daughter to Harvard). Many of these tensions result in occasional public outcries that the regime carefully appeases. These problems are not trivial to solve even for very competent people who don’t have to worry about elections.

In terms of the risks of war and escalation over flashpoints like Korea or Taiwan, major internal crises like a financial crash might easily make it more likely that an external crisis escalates out of control. When regimes face crises of legitimacy they often, for obvious evolutionary reasons, resort to picking fights with out-groups to divert people. Much of Germany’s military-industrial elite saw nationalist diversions as crucial to escape the terrifying spread of socialism before 1914.

I’m ignorant about all these dynamics in China but if forced to bet I would bet that Allison underplays these weaknesses, and I would bet against another 20 years of straight-line growth. In the spirit of Tetlock, I’ll put a number on it and say an 80% probability that, within 20 years, China suffers a bad recession or some other internal crisis bad enough to be considered ‘the worst domestic crisis for the leadership since Tiananmen and a prelude to major political change’, resulting in either a Tiananmen-style clampdown or big political change. (I have not formulated this well; suggestions from Superforecasters welcome in comments.)

Part of my reason for thinking China will not be able to avoid such crises is a fundamental dynamic that Fukuyama discussed in his much-misunderstood ‘The End of History’: economic development requires openness and the protection of individual rights in various dimensions, and this creates an inescapable tension between an elite desire for economic dynamism and technological progress viz competitor Powers, and an elite fear of openness and what it brings politically/culturally.

The KGB and Soviet military realised this in the late 1970s as they watched the microelectronics revolution in America, but they could never develop a response that worked: they were very successful at stealing technology but could not develop domestic companies because of the political constraints, as Marshal Ogarkov admitted (off the record!) to the New York Times in 1983. China watched the Soviet Union implode and chose a different path: economic liberalisation combined with greater economic and information rights, but no Gorbachev-style political opening up. This caution has worked so far but does not solve the problem.

Singapore and China could not develop economically as they have without also allowing much greater individual freedom in some domains than Soviet Russia. Developing hi-tech businesses cannot be done without a degree of openness to the rest of the world that is politically risky for China. If there is too much arbitrary seizure of property, as in the KGB-mafia state of Russia, then people will focus on theft and moving assets offshore rather than building long-term value. Chinese entrepreneurs have to be able to download software, read scientific and technical papers, and access most of the internet if they are not to be seriously disadvantaged. China knows that its path to greatness must include continued growth and greater productivity. If it does not, then like other oligarchies it will rapidly lose legitimacy and risks collapse. This is inconsistent with all-out repression. It will therefore have to tread a fine line of allowing social unhappiness to be expressed and adapting to it without letting it spin out of control. Given social movements are inherently complex and nonlinear, plus social media already seethes with unhappiness in China, there will be a constant danger that this dynamic tension breaks free of centralised control.

This is, obviously, one of the many reasons why the leadership is so interested in advanced technology and particularly AI. Such tools may help the leadership tread this tightrope without tumbling off, though maintaining a culture at the edge-of-the-art in technologies like AI simultaneously exacerbates the very turbulence that the AI needs to monitor — there are many tricky feedback loops to navigate and many reasons to suspect that eventually the centralised leadership will blunder, be overwhelmed, collapse internally and so on. Can China’s leaders maintain this dynamic tension for another 20 years? As Andy Grove always said, only the paranoid survive…

3. Contrast between the EU and China

High-tech breakthroughs are increasingly focused in North East America (around Harvard), West Coast America (around Stanford), and coastal China (e.g. Shenzhen). When the UK leaves the EU, the EU will have zero universities in the global top 20. EU politicians are much more interested in vindictive legal action against Silicon Valley giants than asking themselves why Europe cannot match America or China. On issues such as CRISPR and genetic engineering the EU is regulating itself out of the competition and many businesspeople are unaware that this will get much worse once the ECJ starts using the Charter of Fundamental Rights to seize control of such regulation for itself, which will mean not just more anti-science regulation but also damaging uncertainty as scientists and companies face the ECJ suddenly pulling a human rights ‘top trump’ out of the deck whenever they fancy (one of the many arguments Vote Leave made during the referendum that we could not get the media to report, partly because of persistent confusion between the COFR and the ECHR). Organisations like YCombinator provide a welcoming environment for talented and dynamic young Europeans in California while the EU’s regulatory structure is dominated by massive incumbent multinationals like Goldman Sachs that use the Single Market to crush startup competitors.

If you watch this documentary on Shenzhen, you will see parts of China with the same or even greater dynamism than Silicon Valley and far, far beyond the EU. The contrast between the reality of Shenzhen and the rhetoric of blowhards like Macron is one of the reasons why many massive institutional investors do not share CBI-style conventional wisdom on Brexit. The young vote with their feet. If they want to be involved in world-leading projects, they head to coastal China or coastal America, few go to Paris, Rome, or Berlin. The Commission publishes figures on this but never faces the logic.

Chart: notice how irrelevant the EU is

We are escaping the Single Market / ECJ / Charter of Fundamental Rights quagmire that will deepen the EU’s stagnation (despite Whitehall’s best efforts to scupper the referendum). The UK should now be thinking about how we provide the most dynamic environment in Europe for scientists and entrepreneurs. After 50 years of wasting time in dusty meeting rooms failing to ‘influence’ the EU to ditch its Monnet-Delors plan, we could start building things with real value and thereby acquire real, rather than the Foreign Office’s chimerical, influence. Let Macron et al continue with the same antiquated rhetoric: we know what will happen, we’ve seen it since all the pro-euro forces in the UK babbled about the ‘Lisbon Agenda’ in 2000 — rhetoric about ‘reform’ always turns into just more centralisation in Brussels institutions, it does not produce dynamic forces that create breakthroughs and real value. Economic, technological, and political power will continue to shift away from an EU that cannot and will not adapt to the forces changing the world: its legal model of Single Market plus ECJ make fast adaptation impossible. We will soon be out of Monnet’s house and Whitehall’s comfortable delusions (‘special relationship’, ‘punching above our weight’) will fade. Contra the EU’s logic, in a world increasingly defined by information and computation the winning asset is not size — it is institutional adaptability.

Those on the pro-EU side who disagree with this analysis have to face a fact: people like Mandelson, Adair Turner, the FT, and the Economist have been repeatedly wrong in their predictions for 20 years about ‘EU reform’, and people like me who have made the same arguments for 20 years, and called bullshit on ‘EU reform’, have been repeatedly vindicated by actual EU Treaties, growth rates, unemployment trends, euro crises and so on. (The Commission itself doesn’t even produce fake reports showing big gains from the Single Market, the gains it claims are relatively trivial even if you believe them.) What is happening in the EU now to suggest to reasonable observers that this will change over the next 20 years? Every sign from Juncker to Macron is that yet again Brussels will double down on Monnet’s mid-20th Century vision and the entire institutional weight of the Commission and legal system exerts an inescapable gravitational pull that way.

4. ‘Anti-access / area denial’ (A2/AD)

One aspect of China’s huge conventional buildup is what is known as A2/AD: i.e. building forces to prevent America intervening near China, using missiles, submarines, cyber, anti-space and other weapons. The US response is known as ‘AirSea Battle’.

I won’t go into this here but it is an interesting topic that is also relevant to UK defence debates. The transformation of US forces goes back to a mid-1970s DARPA project known as Assault Breaker, which began a series of breakthroughs in ‘precision strike’ in which computerised command and control combined with sensors, radar, GPS and so on to provide the capability for precise conventional strikes. The first public demonstration of all this was the famous footage from the first Gulf War of bombs dropping down chimneys. This development was central to the last phase of the Cold War and the intolerable pressure put on Soviet defence expenditure. The Soviets led the thinking but could not build the technology.

One of the consequences of these developments is that aircraft carriers are no longer safe from cheap missiles. I started making these arguments in 2004 when it was already clear that the UK Ministry of Defence carrier project was a disaster. Since then it has been a multi-billion pound case study in Whitehall incompetence, the MoD’s appalling ‘planning’ system and corrupt procurement, and Westminster’s systemic inability to think about complex long-term issues. Someone at the MoD told me last year that in NATO wargames the UK carriers immediately bug out for the edge of the game to avoid being sunk. Of course they do. Carriers cannot be deployed against top-tier forces because of the vast and increasing asymmetry between their cost and their vulnerability to cheap sinking. Soon they will not be deployable even against Third World forces because of the combination of cheap cruise missiles and the exponential price collapse and performance improvement of guidance systems (piggybacking on the commercial drone industry). Soon an intelligent terrorist with a cruise missile and some off-the-shelf kit will be able to sink a carrier using their iPhone: see this blog for details. The MoD has lied and bluffed about all this for 20 years, this Government will continue the trend, and the appalling BAE will continue to scam billions from taxpayers unbothered by MPs.

5. Strategy, Sun Tzu and Bismarck: Great Powers and ‘the passions of sheep stealers’

China is the home of Sun Tzu. His most famous advice was that ‘winning without fighting is the highest form of warfare’ — advice often quoted but rarely internalised by those responsible for vital decisions in conflicts. This requires what Sun Tzu called ‘Cheng/Ch’i’ operations. You pull the opponent off balance with a series of disorienting moves, feints, bluffs, carrots, and sticks (e.g. ‘where strong, appear weak’). You disorient them with speed so they make blunders that undermine their own moral credibility with potential allies. You try to make the opponent look like an unreasonable aggressor. You isolate them; you break their alliances and morale. Where possible you collapse their strategy and will to fight instead of wasting resources on an actual battle. And so on…

Looking at the US-China relationship through the lens of ‘winning without fighting’ and nuclear risk suggests that the way for America to ‘win’ this Thucydidean struggle is: ‘don’t try to win in a conventional sense, but instead redefine winning’. Given the unlimited downside of nuclear war and what we now know about the near-disasters of Cold War brinkmanship, it certainly suggests focus on the goal of avoiding escalating crises involving nuclear weapons, and this goal has vast consequences for America’s whole approach to China.

Allison’s ideas about how the US might change strategy are interesting though I think his ‘academic’ approach is too rigid. Allison suggests distinct strategies as distinct choices. If one looks at the world champion of politics and diplomacy in the modern world, Bismarck, his approach was the opposite of ‘pick a strategy’ in the sense Allison means. Over 27 years he was close to and hostile to all the other Powers at different times, sometimes in such rapid succession that his opponents felt badly disoriented as though they were dealing with ‘the devil himself’, as many said.

Bismarck combined an extremely tyrannical ego with an even more extreme epistemological caution about the unpredictability of a complex world and a demonic practical adaptability. He knew events could suddenly throw his calculations into chaos. He was always ready to ditch his own ideas and commitments if they suddenly seemed shaky. He was interested in winning, not consistency. He had a small number of fundamental goals — such as strengthening the monarchy’s power against Parliament and strengthening Prussia as a serious Great Power — which he pursued with constantly changing tactics. He was always feinting and fluid, pushing one line openly and others privately, pushing and pulling the other Powers in endless different combinations. He was the Grand Master of Cheng/Ch’i operations.

I think that if Bismarck read Allison’s book, he would not ‘pick a strategy’. He would use many of the different elements Allison sketches (and invent others) at the same time while watching China’s evolution and the success of different individuals/factions in the governing elite. For example, he would both suggest a bargain over dropping security guarantees for Taiwan and launch a covert (apparently domestic) cyber campaign to spread details of the Chinese leadership’s wealth and corruption all over the internet inside ‘the Great Firewall’. Carrot and stick, threaten and cajole, pull the opponent off balance.

I think that Bismarck’s advice would be: get what you can from dropping the Taiwanese guarantees and do not create nuclear tripwires in Korea. He was contemptuous of any argument that he ought to care about the Balkans for its own sake and repeatedly stressed that Germany should not fight for Austrian interests in the Balkans despite their alliance. He often repeated variations on his famous line — that the whole of the Balkans was not worth the bones of a single Pomeranian grenadier. Great Powers, he warned, should not let their fates be tied to ‘the passions of sheep stealers’. On another occasion: ‘All Turkey, including the various people who live there, is not worth so much that civilised European peoples should destroy themselves in great wars for its sake.’ At the Congress of Berlin, he made clear his priority: ‘We are not here to consider the happiness of the Bulgarians but to secure the peace of Europe.’ A decade later he warned other Powers not to ‘play Pericles beyond the confines of the area allocated by God’ and said clearly: ‘Bulgaria … is far from being an object of adequate importance … for which to plunge Europe from Moscow to the Pyrenees, and from the North Sea to Palermo, into a war whose issue no man can foresee. At the end of the conflict we should scarcely know why we had fought.’

In order to avoid a Great Power war he stressed the need to stay friendly with Russia, and the importance of being able to play Russia and Austria off against each other and against France and Britain: ‘The security of our relations with the Austro-Hungarian state depends to a great extent on our being able, should Austria make unreasonable demands on us, to come to terms with Russia as well.’ This was the logic behind his infamous secret Reinsurance Treaty in which, unknown to Austria (with which he already had an alliance), Germany and Russia made promises to each other about their conduct should war break out in various scenarios, the heart of which was Bismarck promising to stay out of a Russia-Austria war if Austria was the aggressor. In 1887, when military factions rumbled about a preventive war against Russia to help Austria in the Balkans, he squashed the notion flat: ‘They want to urge me into war and I want peace. It would be frivolous to start a new war; we are not a pirate state which makes war because it suits a few.’ Preventive war, he said, was an egg from which very dangerous chicks would hatch.

His successors ditched his approach, ditched the Reinsurance Treaty, pushed Russia towards France, and made growing commitments to support Austria in the Balkans. This series of errors (combined with Wilhelm II’s appalling combination of vanity, aggression, and indolence which is echoed in a frightening proportion of leading politicians today) exploded in summer 1914.

Would Bismarck tie the probability of nuclear holocaust to the possibilities for extremely fast-moving crises in the South China Sea and ‘the passions of sheep stealers’ in places like North Korea? No chance.

Instead of taking the lead on Korea, I suspect Bismarck’s approach would be to go quiet publicly, other than to suggest that China has a clear responsibility for Kim’s behaviour, while perhaps leaking a ‘secret’ study on the consequences of Japan going nuclear to focus minds in Beijing. Regardless of whose ‘fault’ it is, suppose the situation spirals out of control and ends with North Korea killing millions of Koreans (perhaps because collapsed command and control empowers some mentally ill or drug-addled local commander; America has had plenty of those in charge of nukes) and America destroying North Korea. Who thinks this would be seen as a ‘win’ for America? Trump’s threats are straight out of the Cold War playbook but we know that playbook was dodgy even against the relative ‘rationality’ of people like Brezhnev and Andropov, never mind nutjobs like Kim…

So: avoid nuclear crises. Therefore do not give local security ties to Taiwan and Korea that could trigger disaster. What positive agenda can be pushed?

America should seek cooperation in areas of deep significance and moral force where institutions can be shaped that align orientation over decades. Three obvious areas are: disaster response in Asia (naval cooperation), WMD terrorism (intel cooperation), and space. China already has an aggressive space program. It has demonstrated edge-of-the-art capabilities in developing a satellite-based quantum communication network, a revolutionary goal with even deeper effects than GPS. It will go to the moon. The Cold War got humans onto the moon then perceived superiority ended American politicians’ ambition. Instead of rebooting a Cold War style rivalry, it would be better to try to do things together. One of the most important projects humans can pursue is — as Jeff Bezos has argued and committed billions to — to use the resources of space (which are approximately ALL resources in the solar system) to alleviate earth’s problems, and the logic of energy and population growth is to shift towards heavy manufacturing in space while Earth is ‘zoned residential and light industrial’. Building the infrastructure to allow such ambition for humanity is inherently a project of great moral force that encourages international friendship and provides an invaluable perspective: a tiny blue dot friendly to life surrounded by vast indifferent blackness. People can be proud of their nation’s contributions and proud of a global effort. (As I have said before, contributing to this should be one of the UK’s priorities post-Brexit — how much more real value we could create with this than we have in 50 years with the EU, and developing the basic and applied research for robotics would have crossover applications both with commercial autonomous vehicles and the military sphere.)

Of course, there must be limits to friendly cooperation. What if China takes this as weakness and increasingly exerts more and more power, direct and indirect, over her neighbours? This is obviously possible. But I think the Bismarck/Sun Tzu response would be: if that is how she will behave driven by internal dynamics, then let her behave like that, as that will do more than anything you can do to persuade those neighbours to try to contain China. Trying to contain China now won’t work and would be seen not just in China but elsewhere as classic aggression from an imperial power. China is neither like Hitler’s Germany nor Stalin’s Soviet Union and treating it as such is bound to provoke intense and dangerous resentment among a billion people who suffered appallingly for decades under Mao. But if America backs off and makes clear that she prefers cooperation to containment, and then over time China seeks to threaten and dominate Japan, Australia and others, then that is the time to start building alliances because that is when you will have moral authority with local forces — the vital element.

A Bismarckian approach would also, obviously, involve ensuring that America remains technologically ahead of China, though this is a much more formidable task than it was with Russia, which was itself for a while (after Sputnik) seen as an existential challenge (and famous economists like Paul Samuelson continued to predict, wrongly, that the Soviet economy would overtake America’s). Attempting to escape Thucydides means trying to build institutions and feelings of cooperation, but it also requires ensuring that militaristic factions in China do not come to see America as vulnerable to pre-emptive strikes. As AI, biological engineering, digital fabrication and so on accelerate, there may soon be non-nuclear dangers at least as frightening as nuclear ones.

Finally, there is an interesting question of self-awareness. American leaders have a tendency to talk about American interests as if they are self-evidently humanity’s interests. Others find this amusing or enraging. America’s leaders need a different language for discussing China if they are to avoid Thucydides.

Talented political leaders sometimes show an odd empathy for the psychology of opposing out-groups. Perhaps it’s a product of a sort of ‘complementarity’ ability, an ability to hold contradictory ideas in one’s head simultaneously. It is often a shock for students when they read in Pericles’s speech that he confronted the plague-struck Athenians with the sort of uncomfortable truth that democratic politicians rarely speak:

‘You have an empire to lose, and there is the danger to which the hatred of your imperial rule has exposed you… For by this time your empire has become a tyranny which in the opinion of mankind may have been unjustly gained, but which cannot be safely surrendered… To be hateful and offensive has ever been the fate of those who have aspired to empire.’ Thucydides, 2.63-4, emphasis added.

Bismarck too didn’t fool himself about how others saw him, his political allies, and his country. He much preferred boozing with revolutionary communists to drinking with reactionaries on his own side. When various commercial interests tried to get him to support them in China, he told the English Ambassador crossly:

‘These blackguard Hamburg and Lubeck merchants have no other idea of policy in China but to, what they call ‘shoot down those damned niggers of Chinese’ for six months and then dictate peace to them etc. Now, I believe those Chinese are better Christians than our vile mercantile snobs and wish for peace with us and are not thinking of war, and I’ll see the merchants and their Yankee and French allies damned before I consent to go to war with China to fill their pockets with money.’

There are powerful interests urging Washington to aggression against China. The nexus of commercial and military interests is always dangerous, as Eisenhower famously warned in his Farewell Address. These interests will become more dangerous as jobs continue to shift East, driven by markets and technology regardless of Trump’s promises. The Pentagon will overhype Chinese aggression to justify its budgets, as it did with Russia.

Bismarck was a monster and the world would have been better if one of the assassination attempts had succeeded (see HERE for other branching histories) but he also understood fundamental questions better than others. Those responsible for policy on China should study his advice. They should also study summer 1914 and ponder how those responsible for war and peace still make these decisions in much the same way as then, while the crises are 1,000 times faster and a million times more potentially destructive.

Such problems require embedding lessons from effective institutions into our systematically flawed political institutions. I describe in detail the systems management approach to complex projects, developed in the 1950s and 1960s, which is far more advanced than anything in Whitehall today and is part of the necessary reforms (see HERE, p.26ff for a summary of lessons). I will blog on other ideas. Unless we find a way to build political institutions that produce much more reliable decisions from the raw material of unreliable humans, the law of averages means we are sure to fall off our tightrope, and unlike in 1918 or 1945 we won’t have anything to clamber back on to…

The unrecognised simplicities of effective action #3: lessons on ‘capturing the heavens’ from the ARPA/PARC project that created the internet & PC

Below is a short summary of some basic principles of the ARPA/PARC project that created the internet and the personal computer. I wrote it originally as part of an anniversary blog on the referendum but it is also really part of this series on effective action.

One of the most interesting aspects of this project, like Mueller’s reforms of NASA, is the contrast between 1) its extreme effectiveness, changing the world in a profound way, and 2) the general reaction to its methods: not only a failure to learn but widespread hostility inside established bureaucracies (public and private) to the successful approach. NASA dropped Mueller’s approach when he left and has never been the same, and XEROX closed PARC and fired Bob Taylor. Changing the world in a profound and beneficial way is not enough to put a dent in bureaucracies, which operate on their own dynamics.

Warren Buffett explained decades ago how institutions actively fight against learning and fight to stay in a closed and vicious feedback loop:

‘My most surprising discovery: the overwhelming importance in business of an unseen force that we might call “the institutional imperative”. In business school, I was given no hint of the imperative’s existence and I did not intuitively understand it when I entered the business world. I thought then that decent, intelligent, and experienced managers would automatically make rational business decisions. But I learned the hard way that isn’t so. Instead, rationality frequently wilts when the institutional imperative comes into play.

‘For example, 1) As if governed by Newton’s First Law, any institution will resist any change in its current direction. 2) … Corporate projects will materialise to soak up available funds. 3) Any business craving of the leader, however foolish, will quickly be supported by … his troops. 4) The behaviour of peer companies … will be mindlessly imitated.’

Many of the principles behind ARPA/PARC could be applied to politics and government but they will not be learned ‘naturally’ inside the system. Dramatic improvements will only happen if a group of people force ‘system’ changes on how government works so that it is open to learning.

I have modified the below very slightly and added some references.

*

ARPA/PARC and ‘capturing the heavens’: The best way to predict the future is to invent it

The panic over Sputnik brought many good things such as a huge increase in science funding. America also created the Advanced Research Projects Agency (ARPA, which later added ‘Defense’ and became DARPA). Its job was to fund high risk / high payoff technology development. In the 1960s and 1970s, a combination of unusual people and unusually wise funding from ARPA created a community that in turn invented the internet, or ‘the intergalactic network’ as Licklider originally called it, and the personal computer. One of the elements of this community was PARC, a research centre working for Xerox. As Bill Gates said, he and Steve Jobs essentially broke into PARC, stole their ideas, and created Microsoft and Apple.

The ARPA/PARC project is an example of how if something is set up properly then a tiny number of people can do extraordinary things.

  • PARC had about 25 people and about $12 million per year in today’s money.
  • The breakthroughs from the ARPA/PARC project created over 35 TRILLION DOLLARS of value for society and counting.
  • The internet architecture they built, based on decentralisation and distributed control, has scaled up over ten orders of magnitude (10^10) without ever breaking and without ever being taken down for maintenance since 1969.
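As a rough sanity check on the ‘ten orders of magnitude’ figure, here is a tiny sketch. The host counts are my own illustrative assumptions (roughly four ARPANET nodes in 1969 versus tens of billions of connected devices today), not numbers from the source:

```python
import math

# Illustrative assumptions (mine, not the author's figures):
hosts_1969 = 4        # the first ARPANET nodes: UCLA, SRI, UCSB, Utah
devices_today = 3e10  # rough order-of-magnitude count of connected devices

orders_of_magnitude = math.log10(devices_today / hosts_1969)
print(f"scaling: ~{orders_of_magnitude:.1f} orders of magnitude")
```

On these assumptions the ratio comes out at roughly ten orders of magnitude, consistent with the claim above.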

The whole story is fascinating in many ways. I won’t go into the technological aspects. I just want to say something about the process.

What does a process that produces ideas that change the world look like?

One of the central figures was Alan Kay. One of the most interesting things about the project is that not only has almost nobody tried to repeat this sort of research but the business world has even gone out of its way to spread misinformation about it, because it was seen as so threatening to business-as-usual.

I will sketch a few lessons from one of Kay’s pieces but I urge you to read the whole thing.

‘This is what I call “The power of the context” or “Point of view is worth 80 IQ points”. Science and engineering themselves are famous examples, but there are even more striking processes within these large disciplines. One of the greatest works of art from that fruitful period of ARPA/PARC research in the 60s and 70s was the almost invisible context and community that catalysed so many researchers to be incredibly better dreamers and thinkers. That it was a great work of art is confirmed by the world-changing results that appeared so swiftly, and almost easily. That it was almost invisible, in spite of its tremendous success, is revealed by the disheartening fact today that, as far as I’m aware, no governments and no companies do edge-of-the-art research using these principles.’

‘[W]hen I think of ARPA/PARC, I think first of good will, even before brilliant people… Good will and great interest in graduate students as “world-class researchers who didn’t have PhDs yet” was the general rule across the ARPA community.

‘[I]t is no exaggeration to say that ARPA/PARC had “visions rather than goals” and “funded people, not projects”. The vision was “interactive computing as a complementary intellectual partner for people pervasively networked world-wide”. By not trying to derive specific goals from this at the funding side, ARPA/PARC was able to fund rather different and sometimes opposing points of view.

‘The pursuit of Art always sets off plans and goals, but plans and goals don’t always give rise to Art. If “visions not goals” opens the heavens, it is important to find artistic people to conceive the projects.

‘Thus the “people not projects” principle was the other cornerstone of ARPA/PARC’s success. Because of the normal distribution of talents and drive in the world, a depressingly large percentage of organizational processes have been designed to deal with people of moderate ability, motivation, and trust. We can easily see this in most walks of life today, but also astoundingly in corporate, university, and government research. ARPA/PARC had two main thresholds: self-motivation and ability. They cultivated people who “had to do, paid or not” and “whose doings were likely to be highly interesting and important”. Thus conventional oversight was not only not needed, but was not really possible. “Peer review” wasn’t easily done even with actual peers. The situation was “out of control”, yet extremely productive and not at all anarchic.

‘”Out of control” because artists have to do what they have to do. “Extremely productive” because a great vision acts like a magnetic field from the future that aligns all the little iron particle artists to point to “North” without having to see it. They then make their own paths to the future. Xerox often was shocked at the PARC process and declared it out of control, but they didn’t understand that the context was so powerful and compelling and the good will so abundant, that the artists worked happily at their version of the vision. The results were an enormous collection of breakthroughs.

‘Our game is more like art and sports than accounting, in that high percentages of failure are quite OK as long as enough larger processes succeed… [I]n most processes today — and sadly in most important areas of technology research — the administrators seem to prefer to be completely in control of mediocre processes to being “out of control” with superproductive processes. They are trying to “avoid failure” rather than trying to “capture the heavens”.

‘All of these principles came together a little over 30 years ago to eventually give rise to 1500 Altos, Ethernetworked to: each other, Laserprinters, file servers and the ARPAnet, distributed to many kinds of end-users to be heavily used in real situations. This anticipated the commercial availability of this genre by 10-15 years. The best way to predict the future is to invent it.

‘[W]e should realize that many of the most important ARPA/PARC ideas haven’t yet been adopted by the mainstream. For example, it is amazing to me that most of Doug Engelbart’s big ideas about “augmenting the collective intelligence of groups working together” have still not taken hold in commercial systems. What looked like a real revolution twice for end-users, first with spreadsheets and then with Hypercard, didn’t evolve into what will be commonplace 25 years from now, even though it could have. Most things done by most people today are still “automating paper, records and film” rather than “simulating the future”. More discouraging is that most computing is still aimed at adults in business, and that aimed at nonbusiness and children is mainly for entertainment and apes the worst of television. We see almost no use in education of what is great and unique about computer modeling and computer thinking. These are not technological problems but a lack of perspective. Must we hope that the open-source software movements will put things right?

‘The ARPA/PARC history shows that a combination of vision, a modest amount of funding, with a felicitous context and process can almost magically give rise to new technologies that not only amplify civilization, but also produce tremendous wealth for the society. Isn’t it time to do this again by Reason, even with no Cold War to use as an excuse? How about helping children of the world grow up to think much better than most adults do today? This would truly create “The Power of the Context”.’

Note how this story runs contrary to how free market think tanks and pundits describe technological development. The impetus for most of this development came from government funding, not markets.

Also note that every attempt since the 1950s to copy ARPA and JASON (the semi-classified group that partly gave ARPA its direction) in the UK has been blocked by Whitehall. The latest attempt was in 2014 when the Cabinet Office swatted aside the idea. Hilariously, its argument was ‘DARPA has had a lot of failures’, thus demonstrating extreme ignorance of the basic idea: the whole point is that you must have failures, and if you don’t have lots of failures then you are failing!

People later claimed that while PARC may have changed the world it never made any money for XEROX. This is ‘absolute bullshit’ (Kay). Xerox made billions from the laser printer alone and overall made 250 times what it invested in PARC before it went bust. In 1983 it fired Bob Taylor, the manager of PARC and the man who made it all happen.

‘They hated [Taylor] for the very reason that most companies hate people who are doing something different, because it makes middle and upper management extremely uncomfortable. The last thing they want to do is make trillions, they want to make a few millions in a comfortable way’ (Kay).

Someone finally listened to Kay recently. ‘YC Research’, the research arm of the world’s most successful (by far) technology incubator, is starting to fund people in this way. I am not aware of any similar UK projects though I know that a small network of people are thinking again about how something like this could be done here. If you can help them, take a risk and help them! Someone talk to science minister Jo Johnson but be prepared for the Treasury’s usual ignorant bullshit — ‘what are we buying for our money, and how can we put in place appropriate oversight and compliance?’ they will say!

*

As we ponder the future of the UK-EU relationship shaped amid the farce of modern Whitehall, we should think hard about the ARPA/PARC example: how a small group of people can make a huge breakthrough with little money but the right structure, the right ways of thinking, and the right motives.

Those of us outside the political system thinking ‘we know we can do so much better than this but HOW can we break through the bullshit?’ need to change our perspective and gain 80 IQ points.

This real picture is a metaphor for the political culture: ad hoc solutions that are either bad or don’t scale.

Screenshot 2017-06-14 16.58.14.png

ARPA said ‘Let’s get rid of all the wires’. How do we ‘get rid of all the wires’ and build something different that breaks open the closed and failing political cultures? Winning the referendum was just one step that helps clear away dead wood but we now need to build new things.

The ARPA vision that aligned the artists ‘like little iron filings’ was:

‘Computers are destined to become interactive intellectual amplifiers for everyone in the world universally networked worldwide’ (Licklider).

We need a motivating vision aimed not at tomorrow but at changing the basic wiring of the whole system, a vision that can align ‘the little iron filings’, and then start building for the long-term.

I will go into what I think this vision could be and how to do it another day. I think it is possible to create something new that could scale very fast and enable us to do politics and government extremely differently, as different to today as the internet and PC were to the post-war mainframes. This would enable us to build huge long-term value for humanity in a relatively short time (less than 20 years). To create it we need a process as well suited to the goal as the ARPA/PARC project was and incorporating many of its principles.

We must try to escape the current system with its periodic meltdowns and international crises. These crises move 500-1,000 times faster than those of summer 1914. Our destructive potential is at least a million-fold greater than it was in 1914. Yet we have essentially the same hierarchical command-and-control decision-making systems in place now that could not cope even with the technology and pace of 1914. We have dodged nuclear wars by fluke because individuals made snap judgements in minutes. Nobody who reads the history of these episodes can think that this is viable long-term, and we will soon have another wave of innovation to worry about with autonomous robots and genetic engineering. Technology gives us no option but to try to overcome evolved instincts like destroying out-group competitors.
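The ‘500-1,000 times faster’ comparison can be made concrete with a back-of-the-envelope calculation. The timescales here are my illustrative assumptions, not figures from the text: the July 1914 crisis unfolded over roughly five weeks, while an ICBM flight gives decision-makers around 30 minutes:

```python
# Illustrative timescales (my assumptions, not the author's figures):
july_crisis_minutes = 37 * 24 * 60  # ~37 days from Sarajevo to general war
icbm_warning_minutes = 30           # approximate ICBM flight time

compression = july_crisis_minutes / icbm_warning_minutes
print(f"decision window compressed roughly {compression:.0f}-fold")
```

On these assumptions the compression is on the order of a thousand-fold, the same order of magnitude as the claim in the text.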

Watch Alan Kay explain how to invent the future HERE and HERE.

This link has these seminal papers:

  • Man-Computer Symbiosis, Licklider (1960)
  • The computer as a communications device, Licklider & Taylor (1968)

Part I of this series is HERE.

Part II on the emergence of ‘systems management’, how George Mueller used it to put man on the moon, and a checklist of how successful management of complex projects is systematically different to how Whitehall (and other state bureaucracies) work HERE.


Ps. Kay also points out that the real computer revolution won’t happen until people fulfil the original vision of enabling children to use this powerful way of thinking:

‘The real printing revolution was a qualitative change in thought and argument that lagged the hardware inventions by almost two centuries. The special quality of computers is their ability to rapidly simulate arbitrary descriptions, and the real computer revolution won’t happen until children can learn to read, write, argue and think in this powerful new way. We should all try to make this happen much sooner than 200 or even 20 more years!’

Almost nobody in education policy is aware of the educational context of the ARPA/PARC project, which speaks volumes about the abysmal field of ‘education research/policy’. People rightly say ‘education tech has largely failed’ but very few are aware that many of the original ideas from Licklider, Engelbart et al have never been tried and that the Apple and MS versions are not the original vision.

 

Complexity and Prediction Part V: The crisis of mathematical paradoxes, Gödel, Turing and the basis of computing

Before the referendum I started a series of blogs and notes exploring the themes of complexity and prediction. This was part of a project with two main aims: first, to sketch a new approach to education and training in general but particularly for those who go on to make important decisions in political institutions and, second, to suggest a new approach to political priorities in which progress with education and science becomes a central focus for the British state. The two are entangled: progress with each will hopefully encourage progress with the other.

I was working on this paper when I suddenly got sidetracked by the referendum and have just looked at it again for the first time in about two years.

The paper concerns a fascinating episode in the history of ideas in which the most esoteric and impractical field, mathematical logic, spawned a revolutionary technology, the modern computer. NB a lesson for science funders: it is a great mistake to cut funding for theory and assume that you will get more bang for your buck from ‘applications’.

Apart from its inherent fascination, knowing something of the history is helpful for anybody interested in the state-of-the-art in predicting complex systems which involves the intersection between different fields including: maths, computer science, economics, cognitive science, and artificial intelligence. The books on it are either technical, and therefore inaccessible to ~100% of the population, or non-chronological so it is impossible for someone like me to get a clear picture of how the story unfolded.

Further, few if any deep ideas in maths or science are so misunderstood and abused as Gödel’s results. As Alan Sokal, author of the brilliant hoax exposing post-modernist academics, said, ‘Gödel’s theorem is an inexhaustible source of intellectual abuses.’ I have tried to make some of these clear using the best book available, by Franzen, which explains why almost everything you read about it is wrong. If even Stephen Hawking can cock it up, the rest of us should be particularly careful.
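For readers who want to see what a careful statement looks like, here is the theorem along the lines Franzen insists on. This is my paraphrase, a sketch rather than an authoritative formulation:

```latex
% First incompleteness theorem (Goedel-Rosser form, paraphrased):
% If F is a consistent, effectively axiomatizable formal system that
% includes a modest amount of arithmetic, then there is a sentence G_F
% in the language of F such that F proves neither G_F nor its negation:
\exists\, G_F:\quad F \nvdash G_F \quad\text{and}\quad F \nvdash \lnot G_F
```

Note what the theorem does not say: nothing about systems too weak to express arithmetic, and nothing directly about minds, physics or society, which is where most of the abuses Franzen catalogues begin.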

I sketched these notes as I tried to pull together the story from many different books. I hope they are useful, particularly for some 15-25 year-olds who like chronological accounts of ideas. I tried to put the notes together into the sort of thing I wish I had been able to read at that age. I tried hard to eliminate errors but they are inevitable given how far I am from being competent to write about such things. I wish someone who is competent would do it properly. It would take time I don’t now have to go through and finish it the way I originally intended, so I will just post it as it was two years ago when I got calls saying ‘about this referendum…’

The only change I think I have made since May 2015 is to shove in some notes from a great essay later that year by the man who wrote the textbook on quantum computers, Michael Nielsen, which would be useful to read as an introduction or instead, HERE.

As always on this blog there is not a single original thought and any value comes from the time I have spent condensing the work of others to save you the time. Please leave corrections in comments.

The PDF of the paper is HERE (amended since first publication to correct an error, see Comments).

 

‘Gödel’s achievement in modern logic is singular and monumental – indeed it is more than a monument, it is a landmark which will remain visible far in space and time.’ John von Neumann.

‘Einstein had often told me that in the late years of his life he has continually sought Gödel’s company in order to have discussions with him. Once he said to me that his own work no longer meant much, that he came to the Institute merely in order to have the privilege of walking home with Gödel.’ Oskar Morgenstern (co-author with von Neumann of the first major work on Game Theory).

‘The world is rational’, Kurt Gödel.

Unrecognised simplicities of effective action #2: ‘Systems’ thinking — ideas from the Apollo programme for a ‘systems politics’

This is the second in a series: click this link 201702-effective-action-2-systems-engineering-to-systems-politics. The first is HERE.

This paper concerns a very interesting story combining politics, management, institutions, science and technology. When high technology projects passed a threshold of complexity post-1945 amid the extreme pressure of the early Cold War, new management ideas emerged. These ideas were known as ‘systems engineering’ and ‘systems management’. These ideas were particularly connected to the classified program to build the first Intercontinental Ballistic Missiles (ICBMs) in the 1950s and successful ideas were transplanted into a failing NASA by George Mueller and others from 1963 leading to the successful moon landing in 1969.

These ideas were then applied in other mission-critical teams and could be used to improve government performance. Urgently needed projects to lower the probability of catastrophes for humanity would benefit from considering why Mueller’s approach was 1) so successful and 2) so uninfluential in politics. Could we develop a ‘systems politics’ that applies the unrecognised simplicities of effective action?

For those interested, it also looks briefly at an interesting element of the story – the role of John von Neumann, the brilliant mathematician who was deeply involved in the Manhattan Project, the project to build ICBMs, the first digital computers, and subjects like artificial intelligence, artificial life, possibilities for self-replicating machines made from unreliable components, and the basic problem that technological progress ‘gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we have known them, cannot continue.’

An obvious project with huge inherent advantages for humanity is the development of an international manned lunar base as part of developing space for commerce and science. It is the sort of thing that might change political dynamics on earth and could generate enormous support across international boundaries. After 23 June 2016, the UK has to reorient national policy on many dimensions. Developing basic science is one of the most important (for example, as I have long argued, we urgently need a civilian version of DARPA, similarly operating outside normal government bureaucratic systems including procurement and HR). Supporting such an international project would be a great focus for UK efforts, and far more productive than our largely wasted decades of focus on the dysfunctional bureaucracy in Brussels, dominated by institutions that fail the most important test: the capacity for error-correction, whose importance has been demonstrated over long periods and through many problems by the Anglo-American political system and its common law.

Please leave comments or email dmc2.cummings at gmail.com