‘Two hands are a lot’ — we’re hiring data scientists, project managers, policy experts, assorted weirdos…

‘This is possibly the single largest design flaw contributing to the bad Nash equilibrium in which … many governments are stuck. Every individual high-functioning competent person knows they can’t make much difference by being one more face in that crowd.’ Eliezer Yudkowsky, AI expert, LessWrong etc.

‘[M]uch of our intellectual elite who think they have “the solutions” have actually cut themselves off from understanding the basis for much of the most important human progress.’ Michael Nielsen, physicist and one of the handful of most interesting people I’ve ever talked to.

‘People, ideas, machines — in that order.’ Colonel Boyd.

‘There isn’t one novel thought in all of how Berkshire [Hathaway] is run. It’s all about … exploiting unrecognized simplicities.’ Charlie Munger, Warren Buffett’s partner.

‘Two hands, it isn’t much considering how the world is infinite. Yet, all the same, two hands, they are a lot.’ Alexander Grothendieck, one of the great mathematicians.

*

There are many brilliant people in the civil service and politics. Over the past five months the No10 political team has been lucky to work with some fantastic officials. But there are also some profound problems at the core of how the British state makes decisions. This was seen by pundit-world as a very eccentric view in 2014. It is no longer seen as eccentric. Dealing with these deep problems is supported by many great officials, particularly younger ones, though of course there will naturally be many fears — some reasonable, most unreasonable.

Now there is a confluence of: a) Brexit requires many large changes in policy and in the structure of decision-making, b) some people in government are prepared to take risks to change things a lot, and c) a new government with a significant majority and little need to worry about short-term unpopularity while trying to make rapid progress with long-term problems.

There is a huge amount of low hanging fruit — trillion dollar bills lying on the street — in the intersection of:

  • the selection, education and training of people for high performance
  • the frontiers of the science of prediction
  • data science, AI and cognitive technologies (e.g. Seeing Rooms, ‘authoring tools designed for arguing from evidence’, Tetlock/IARPA prediction tournaments that could easily be extended to consider ‘clusters’ of issues around themes like Brexit to improve policy and project management)
  • communication (e.g. Cialdini)
  • decision-making institutions at the apex of government.
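As an aside on the ‘science of prediction’ bullet: the Tetlock/IARPA tournaments score forecasters with the Brier score — the mean squared error between stated probabilities and what actually happened. A minimal sketch (my illustration, not part of the original job spec):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.

    Lower is better: 0.0 is a perfect forecaster, 0.25 is what you get
    from always hedging at 50%.
    """
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who says 80% on events that happen 4 times out of 5
# beats one who says 50% on everything: ~0.16 vs 0.25.
confident = brier_score([0.8, 0.8, 0.8, 0.8, 0.8], [1, 1, 1, 1, 0])
hedger = brier_score([0.5, 0.5, 0.5, 0.5, 0.5], [1, 1, 1, 1, 0])
```

Extending tournaments to ‘clusters’ of issues would mean scoring linked questions like these together rather than one at a time.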

We want to hire an unusual set of people with different skills and backgrounds to work in Downing Street with the best officials, some as spads and perhaps some as officials. If you are already an official and you read this blog and think you fit one of these categories, get in touch.

The categories are roughly:

  • Data scientists and software developers
  • Economists
  • Policy experts
  • Project managers
  • Communication experts
  • Junior researchers, one of whom will also be my personal assistant
  • Weirdos and misfits with odd skills

We want to improve performance and make me much less important — and within a year largely redundant. At the moment I have to make decisions well outside what Charlie Munger calls my ‘circle of competence’ and we do not have the sort of expertise supporting the PM and ministers that is needed. This must change fast so we can properly serve the public.

A. Unusual mathematicians, physicists, computer scientists, data scientists

You must have exceptional academic qualifications from one of the world’s best universities or have done something that demonstrates equivalent (or greater) talents and skills. You do not need a PhD — as Alan Kay said, we are also interested in graduate students as ‘world-class researchers who don’t have PhDs yet’.

You should have the following:

  • PhD or MSc in maths or physics.
  • Outstanding mathematical skills are essential.
  • Experience of using analytical languages: e.g. Python, SQL, R.
  • Familiarity with data tools and technologies such as Postgres, Scikit Learn, NEO4J.

A few examples of papers that you will be considering:

You should be able to explain to other mathematicians, physicists and computer scientists the ideas in such papers, discuss what could be useful for our projects, synthesise ideas for other data scientists, and apply them to practical problems. You won’t be expert on the maths used in all these papers but you should be confident that you could study it and understand it.

We will be using machine learning and associated tools so it is important that you can program. You do not need software-developer levels of skill but they would be an advantage.
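For a rough sense of what ‘you can program’ means here: you should find it trivial to write something like the following from scratch — a one-variable least-squares fit, about the simplest possible piece of machine learning. (Purely illustrative; this is my example, not an official test.)

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, via the closed-form solution."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept from the means.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Points lying exactly on y = 2x + 1 recover a = 2, b = 1.
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

Libraries like scikit-learn do this (and far more) for you, but you should understand what is happening underneath.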

Those applying must watch Bret Victor’s talks and study Dynamic Land. If this excites you, then apply; if not, then don’t. I and others interviewing will discuss this with anybody who comes for an interview. If you want a sense of the sort of things you’d be working on, then read my previous blog on Seeing Rooms, cognitive technologies etc.

B. Unusual software developers

We are looking for great software developers who would love to work on these ideas, build tools and work with some great people. You should also look at some of Victor’s technical talks on programming languages and the history of computing.

You will be working with data scientists, designers and others.

C. Unusual economists

We are looking to hire some recent graduates in economics. You should a) have an outstanding record at a great university, b) understand conventional economic theories, c) be interested in arguments on the edge of the field — for example, work by physicists on ‘agent-based models’ or by the hedge fund Bridgewater on the failures/limitations of conventional macro theories/prediction, and d) have very strong maths and be interested in working with mathematicians, physicists, and computer scientists.

The ideal candidate might, for example, have a degree in maths and economics, worked at the LHC in one summer, worked with a quant fund another summer, and written software for a YC startup in a third summer!

We’ve found one of these but want at least one more.

The sort of conversation you might have is discussing these two papers in Science (2015): Computational rationality: A converging paradigm for intelligence in brains, minds, and machines, Gershman et al., and Economic reasoning and artificial intelligence, Parkes & Wellman.

You will see in these papers an intersection of:

  • von Neumann’s foundation of game theory and ‘expected utility’,
  • mainstream economic theories,
  • modern theories about auctions,
  • theoretical computer science (including problems like the complexity of probabilistic inference in Bayesian networks, which is NP-hard),
  • ideas on ‘computational rationality’ and meta-reasoning from AI, cognitive science and so on.
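To make the first bullet concrete: von Neumann–Morgenstern ‘expected utility’ says a rational agent ranks risky options by probability-weighted utility. A minimal sketch — my own illustration, not code from the papers above:

```python
def expected_utility(lottery):
    """lottery: list of (probability, utility) pairs; probabilities sum to 1."""
    assert abs(sum(p for p, _ in lottery) - 1.0) < 1e-9
    return sum(p * u for p, u in lottery)

def best_option(options):
    """Return the name of the option with the highest expected utility."""
    return max(options, key=lambda name: expected_utility(options[name]))

options = {
    "safe":   [(1.0, 50)],             # 50 for certain
    "gamble": [(0.5, 120), (0.5, 0)],  # coin flip: 120 or nothing
}
# The gamble's expected utility is 60 > 50, so a risk-neutral agent takes it.
choice = best_option(options)
```

The ‘computational rationality’ literature asks what happens when, unlike in this toy, computing the probabilities or the best option is itself expensive — which is where the NP-hardness of Bayesian inference bites.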

If these sorts of things are interesting, then you will find this project interesting.

It’s a bonus if you can code but it isn’t necessary.

D. Great project managers

If you think you are one of a small group of people in the world who are truly GREAT at project management, then we want to talk to you. Victoria Woodcock ran Vote Leave — she was a truly awesome project manager and without her Cameron would certainly have won. We need people like this who have a 1 in 10,000 or higher level of skill and temperament.

The Oxford Handbook of Megaproject Management points out that it is possible to quantify lessons from the failures of projects like high speed rail because almost all of them fail, so there is a large enough sample to make statistical comparisons, whereas there can be no statistical analysis of successes because they are so rare.

It is extremely interesting that the lessons of Manhattan (1940s), ICBMs (1950s) and Apollo (1960s) remain absolutely cutting edge because it is so hard to apply them and almost nobody has managed to do it. The Pentagon systematically de-programmed itself from more effective approaches to less effective approaches from the mid-1960s, in the name of ‘efficiency’. Is this just another way of saying that people like General Groves and George Mueller are rarer than Fields Medallists?

Anyway — it is obvious that improving government requires vast improvements in project management. The first project will be improving the people and skills already here.

If you want an example of the sort of people we need to find in Britain, look at this on CC Myers — the legendary builders. SPEED. We urgently need people with this sort of skill and attitude. (If you think you are such a company and you could dual carriageway the A1 north of Newcastle in record time, then get in touch!)

E. Junior researchers

In many aspects of government, as in the tech world and investing, brains and temperament smash experience and seniority out of the park.

We want to hire some VERY clever young people either straight out of university or recently out, with extreme curiosity and capacity for hard work.

One of you will be a sort of personal assistant to me for a year — this will involve a mix of very interesting work and lots of uninteresting trivia that makes my life easier which you won’t enjoy. You will not have weekday date nights, you will sacrifice many weekends — frankly it will be hard having a boy/girlfriend at all. It will be exhausting but interesting and if you can cut it you will be involved in things at the age of ~21 that most people never see.

I don’t want confident public school bluffers. I want people who are much brighter than me who can work in an extreme environment. If you play office politics, you will be discovered and immediately binned.

F. Communications

In SW1 communication is generally treated as almost synonymous with ‘talking to the lobby’. This is partly why so much punditry is ‘narrative from noise’.

With no election for years and huge changes in the digital world, there is a chance and a need to do things very differently.

We’re particularly interested in deep experts on TV and digital. We also are interested in people who have worked in movies or on advertising campaigns. There are some very interesting possibilities in the intersection of technology and story telling — if you’ve done something weird, this may be the place for you.

I noticed in the recent campaign that the world of digital advertising has changed very fast since I was last involved in 2016. This is partly why so many journalists wrongly looked at things like Corbyn’s Facebook stats and thought Labour was doing better than us — the ecosystem evolves rapidly while political journalists are still stuck on 2016 tech, which is why so many fell for Carole’s conspiracy theories. The digital people involved in the last campaign really knew what they were doing, which is incredibly rare in this world of charlatans and clients who don’t know what they should be buying. If you are interested in being right at the very edge of this field, join.

We have some extremely able people but we also must upgrade skills across the spad network.

G. Policy experts

One of the problems with the civil service is the way in which people are shuffled such that they either do not acquire expertise or they are moved out of areas they really know to do something else. One Friday, X is in charge of special needs education, the next week X is in charge of budgets.

There are, of course, general skills. Managing a large organisation involves some general skills. Whether it is Coca Cola or Apple, some things are very similar — how to deal with people, how to build great teams and so on. Experience is often over-rated. When Warren Buffett needed someone to turn around his insurance business he did not hire someone with experience in insurance: ‘When Ajit entered Berkshire’s office on a Saturday in 1986, he did not have a day’s experience in the insurance business’ (Buffett).

Shuffling some people who are expected to be general managers is a natural thing but it is clear Whitehall does this too much while also not training general management skills properly. There are not enough people with deep expertise in specific fields.

If you want to work in the policy unit or a department and you really know your subject so that you could confidently argue about it with world-class experts, get in touch.

It’s also the case that wherever you are most of the best people are inevitably somewhere else. This means that governments must be much better at tapping distributed expertise. Of the top 20 people in the world who best understand the science of climate change and could advise us what to do with COP 2020, how many now work as a civil servant/spad or will become one in the next 5 years?

H. Super-talented weirdos

People in SW1 talk a lot about ‘diversity’ but they rarely mean ‘true cognitive diversity’. They are usually babbling about ‘gender identity diversity blah blah’. What SW1 needs is not more drivel about ‘identity’ and ‘diversity’ from Oxbridge humanities graduates but more genuine cognitive diversity.

We need some true wild cards, artists, people who never went to university and fought their way out of an appalling hell hole, weirdos from William Gibson novels like that girl hired by Bigend as a brand ‘diviner’ who feels sick at the sight of Tommy Hilfiger or that Chinese-Cuban free runner from a crime family hired by the KGB. If you want to figure out what characters around Putin might do, or how international criminal gangs might exploit holes in our border security, you don’t want more Oxbridge English graduates who chat about Lacan at dinner parties with TV producers and spread fake news about fake news.

By definition I don’t really know what I’m looking for but I want people around No10 to be on the lookout for such people.

We need to figure out how to use such people better without asking them to conform to the horrors of ‘Human Resources’ (which also obviously need a bonfire).

*

Send a max 1 page letter plus CV to ideasfornumber10@gmail.com and put in the subject line ‘job/’ and add after the / one of: data, developer, econ, comms, projects, research, policy, misfit.

I’ll have to spend time helping you so don’t apply unless you can commit to at least 2 years.

I’ll bin you within weeks if you don’t fit — don’t complain later because I made it clear now. 

I will try to answer as many as possible but last time I publicly asked for job applications in 2015 I was swamped and could not, so I can’t promise an answer. If you think I’ve insanely ignored you, persist for a while.

I will use this blog to throw out ideas. It’s important when dealing with large organisations to dart around at different levels, not be stuck with formal hierarchies. It will seem chaotic and ‘not proper No10 process’ to some. But the point of this government is to do things differently and better and this always looks messy. We do not care about trying to ‘control the narrative’ and all that New Labour junk and this government will not be run by ‘comms grid’.

As Paul Graham and Peter Thiel say, most ideas that seem bad are bad but great ideas also seem at first like bad ideas — otherwise someone would have already done them. Incentives and culture push people in normal government systems away from encouraging ‘ideas that seem bad’. Part of the point of a small, odd No10 team is to find and exploit, without worrying about media noise, what Andy Grove called ‘very high leverage ideas’ and these will almost inevitably seem bad to most.

I will post some random things over the next few weeks and see what bounces back — it is all upside, there’s no downside if you don’t mind a bit of noise and it’s a fast cheap way to find good ideas…

On the referendum #31: Project Maven, procurement, lollapalooza results & nuclear/AGI safety

‘People, ideas, machines — in that order!’ Colonel Boyd

‘[R]ational systems exhibit universal drives towards self-protection, resource acquisition, replication and efficiency. Those drives will lead to anti-social and dangerous behaviour if not explicitly countered. The current computing infrastructure would be very vulnerable to unconstrained systems with these drives.’ Omohundro.

‘For progress there is no cure…’ von Neumann

This blog sketches a few recent developments connecting AI and issues around ‘systems management’ and government procurement.

The biggest problem for governments with new technologies is that the limiting factor on applying them is not the technology but management and operational ideas, which are extremely hard to change fast. This has been proved repeatedly: e.g. the tank in the 1920s-30s or the development of ‘precision strike’ in the 1970s. These problems are directly relevant to the application of AI by militaries and intelligence services. The Pentagon’s recent crash program, Project Maven, discussed below, was an attempt to grapple with these issues.

‘The good news is that Project Maven has delivered a game-changing AI capability… The bad news is that Project Maven’s success is clear proof that existing AI technology is ready to revolutionize many national security missions… The project’s success was enabled by its organizational structure.’

This blog sketches some connections between:

  • Project Maven.
  • The example of ‘precision strike’ in the 1970s, Marshal Ogarkov and Andy Marshall, implications for now — ‘anti-access / area denial’ (A2/AD), ‘Air-Sea Battle’ etc.
  • Development of ‘precision strike’ to lethal autonomous cheap drone swarms hunting humans cowering underground.
  • Adding AI to already broken nuclear systems and doctrines, hacking the NSA etc — mix coke, Milla Jovovich and some alpha engineers and you get…?
  • A few thoughts on ‘systems management’ and procurement, lessons from the Manhattan Project etc.
  • The Chinese attitude to ‘systems management’ and Qian Xuesen, combined with AI, mass surveillance, ‘social credit’ etc.
  • A few recent miscellaneous episodes such as an interesting DARPA demo on ‘self-aware’ robots.
  • Charts on Moore’s Law: what scale would a ‘Manhattan Project for AGI’ be?
  • AGI safety — the alignment problem, the dangers of science as a ‘blind search algorithm’, closed vs open security architectures etc.

A theme of this blog since before the referendum campaign has been that thinking about organisational structure/dynamics can bring what Warren Buffett calls ‘lollapalooza’ results. What seems to be very esoteric and disconnected from ‘practical politics’ (studying things like the management of the Manhattan Project and Apollo) turns out to be extraordinarily practical (gives you models for creating super-productive processes).

Part of the reason lollapalooza results are possible is that almost nobody near the apex of power believes the paragraph above is true and they actively fight to stop people learning from extreme successes, so there is gold lying on the ground waiting to be picked up for trivial costs. Nudging reality down an alternative branch of history in summer 2016 only cost ~£10^6 so the ‘return on investment’, if you think about altered GDP, technology, hundreds of millions of lives over decades and so on, was truly lollapalooza. Politics is not like the stock market, where you need to be an extreme outlier like Buffett/Munger to find such inefficiencies and results consistently. The stock market is an exploitable market where being right means you get rich and you help the overall system error-correct, which makes it harder to be right (the mechanism pushes prices close to random; they’re not quite random but few can exploit the non-randomness). Politics/government is not like this. Billionaires who want to influence politics could get better ‘returns on investment’ than from early-stage Amazon.

This blog is not directly about Brexit at all but if you are thinking — how could we escape this nightmare and turn government institutions from hopeless to high performance and what should we focus on to replace the vision of ‘influencing the EU’ that has been blown up by Brexit? — it will be of interest. Lessons that have been lying around for over half a century could have pushed the Brexit negotiations in a completely different direction and still could do but require an extremely different ‘model of effective action’ to dominant models in Westminster.

*

Project Maven: new organisational approaches for rapid deployment of AI to war / hybrid-war

The quotes below are from a piece in the Bulletin of the Atomic Scientists about a recent AI project by the Pentagon. The most interesting aspect is not the technical details but the management approach and the implications for Pentagon-style bureaucracies.

‘Project Maven is a crash Defense Department program that was designed to deliver AI technologies to an active combat theater within six months from when the project received funding… Technologies developed through Project Maven have already been successfully deployed in the fight against ISIS. Despite their rapid development and deployment, these technologies are getting strong praise from their military intelligence users. For the US national security community, Project Maven’s frankly incredible success foreshadows enormous opportunities ahead — as well as enormous organizational, ethical, and strategic challenges.

‘In late April, Robert Work — then the deputy secretary of the Defense Department — wrote a memo establishing the Algorithmic Warfare Cross-Functional Team, also known as Project Maven. The team had only six members to start with, but its small size belied the significance of its charter… Project Maven is the first time the Defense Department has sought to deploy deep learning and neural networks, at the level of state-of-the-art commercial AI, in department operations in a combat theater…

‘Every day, US spy planes and satellites collect more raw data than the Defense Department could analyze even if its whole workforce spent their entire lives on it. As its AI beachhead, the department chose Project Maven, which focuses on analysis of full-motion video data from tactical aerial drone platforms… These drone platforms and their full-motion video sensors play a major role in the conflict against ISIS across the globe. The tactical and medium-altitude video sensors of the Scan Eagle, MQ-1C, and MQ-9 produce imagery that more or less resembles what you see on Google Earth. A single drone with these sensors produces many terabytes of data every day. Before AI was incorporated into analysis of this data, it took a team of analysts working 24 hours a day to exploit only a fraction of one drone’s sensor data.

‘The Defense Department spent tens of billions of dollars developing and fielding these sensors and platforms, and the capabilities they offer are remarkable. Whenever a roadside bomb detonates in Iraq, the analysts can simply rewind the video feed to watch who planted it there, when they planted it, where they came from, and where they went. Unfortunately, most of the imagery analysis involves tedious work—people look at screens to count cars, individuals, or activities, and then type their counts into a PowerPoint presentation or Excel spreadsheet. Worse, most of the sensor data just disappears — it’s never looked at — even though the department has been hiring analysts as fast as it can for years… Plenty of higher-value analysis work will be available for these service members and contractors once low-level counting activity is fully automated.

‘The six founding members of Project Maven, though they were assigned to run an AI project, were not experts in AI or even computer science. Rather, their first task was building partnerships, both with AI experts in industry and academia and with the Defense Department’s communities of drone sensor analysts… AI experts and organizations who are interested in helping the US national security mission often find that the department’s contracting procedures are so slow, costly, and painful that they just don’t want to bother. Project Maven’s team — with the help of Defense Information Unit Experimental, an organization set up to accelerate the department’s adoption of commercial technologies — managed to attract the support of some of the top talent in the AI field (the vast majority of which lies outside the traditional defense contracting base). Figuring out how to effectively engage the tech sector on a project basis is itself a remarkable achievement…

‘Before Maven, nobody in the department had a clue how to properly buy, field, and implement AI. A traditional defense acquisition process lasts multiple years, with separate organizations defining the functions that acquisitions must perform, or handling technology development, production, or operational deployment. Each of these organizations must complete its activities before results are handed off to the next organization. When it comes to digital technologies, this approach often results in systems that perform poorly and are obsolete even before they are fielded.

‘Project Maven has taken a different approach, one modeled after project management techniques in the commercial tech sector: Product prototypes and underlying infrastructure are developed iteratively, and tested by the user community on an ongoing basis. Developers can tailor their solutions to end-user needs, and end users can prepare their organizations to make rapid and effective use of AI capabilities. Key activities in AI system development — labeling data, developing AI-computational infrastructure, developing and integrating neural net algorithms, and receiving user feedback — are all run iteratively and in parallel…

‘In Maven’s case, humans had to individually label more than 150,000 images in order to establish the first training data sets; the group hopes to have 1 million images in the training data set by the end of January. Such large training data sets are needed for ensuring robust performance across the huge diversity of possible operating conditions, including different altitudes, density of tracked objects, image resolution, view angles, and so on. Throughout the Defense Department, every AI successor to Project Maven will need a strategy for acquiring and labeling a large training data set…

‘From their users, Maven’s developers found out quickly when they were headed down the wrong track — and could correct course. Only this approach could have provided a high-quality, field-ready capability in the six months between the start of the project’s funding and the operational use of its output. In early December, just over six months from the start of the project, Maven’s first algorithms were fielded to defense intelligence analysts to support real drone missions in the fight against ISIS.

‘The good news is that Project Maven has delivered a game-changing AI capability… The bad news is that Project Maven’s success is clear proof that existing AI technology is ready to revolutionize many national security missions

‘The project’s success was enabled by its organizational structure: a small, operationally focused, cross-functional team that was empowered to develop external partnerships, leverage existing infrastructure and platforms, and engage with user communities iteratively during development. AI needs to be woven throughout the fabric of the Defense Department, and many existing department institutions will have to adopt project management structures similar to Maven’s if they are to run effective AI acquisition programs. Moreover, the department must develop concepts of operations to effectively use AI capabilities — and train its military officers and warfighters in effective use of these capabilities…

‘Already the satellite imagery analysis community is working on its own version of Project Maven. Next up will be migrating drone imagery analysis beyond the campaign to defeat ISIS and into other segments of the Defense Department that use drone imagery platforms. After that, Project Maven copycats will likely be established for other types of sensor platforms and intelligence data, including analysis of radar, signals intelligence, and even digital document analysis… In October 2016, Michael Rogers (head of both the agency and US Cyber Command) said “Artificial Intelligence and machine learning … [are] foundational to the future of cybersecurity. … It is not the if, it’s only the when to me.”

‘The US national security community is right to pursue greater utilization of AI capabilities. The global security landscape — in which both Russia and China are racing to adapt AI for espionage and warfare — essentially demands this. Both Robert Work and former Google CEO Eric Schmidt have said that leadership in AI technology is critical to the future of economic and military power and that continued US leadership is far from guaranteed. Still, the Defense Department must explore this new technological landscape with a clear understanding of the risks involved…

‘The stakes are relatively low when AI is merely counting the number of cars filmed by a drone camera, but drone surveillance data can also be used to determine whether an individual is directly engaging in hostilities and is thereby potentially subject to direct attack. As AI systems become more capable and are deployed across more applications, they will engender ever more difficult ethical and legal dilemmas.

‘US military and intelligence agencies will have to develop effective technological and organizational safeguards to ensure that Washington’s military use of AI is consistent with national values. They will have to do so in a way that retains the trust of elected officials, the American people, and Washington’s allies. The arms-race aspect of artificial intelligence certainly doesn’t make this task any easier…

‘The Defense Department must develop and field AI systems that are reliably safe when the stakes are life and death — and when adversaries are constantly seeking to find or create vulnerabilities in these systems.

‘Moreover, the department must develop a national security strategy that focuses on establishing US advantages even though, in the current global security environment, the ability to implement advanced AI algorithms diffuses quickly. When the department and its contractors developed stealth and precision-guided weapons technology in the 1970s, they laid the foundation for a monopoly, nearly four decades long, on technologies that essentially guaranteed victory in any non-nuclear war. By contrast, today’s best AI tech comes from commercial and academic communities that make much of their research freely available online. In any event, these communities are far removed from the Defense Department’s traditional technology circles. For now at least, the best AI research is still emerging from the United States and allied countries, but China’s national AI strategy, released in July, poses a credible challenge to US technology leadership.’

Full article here: https://thebulletin.org/project-maven-brings-ai-fight-against-isis11374

Project Maven shows recurring lessons from history. Speed and adaptability are crucial to success in conflict and can be helped by new technologies. So is the capacity for new operational ideas about using new technologies. These ideas depend on unusual people. Bureaucracies naturally slow things down (for some good but mostly bad reasons), crush new ideas, and exclude unusual people in order to defend established interests. The limiting factor for the Pentagon in deploying advanced technology to conflict in a useful time period was not new technical ideas — overcoming its own bureaucracy was harder than overcoming enemy action. This is absolutely normal in conflict (e.g. it was true of the 2016 referendum, where dealing with internal problems was at least an order of magnitude harder and more costly than dealing with Cameron).

As Colonel Boyd used to shout to military audiences, ‘People, ideas, machines — in that order!’

*

DARPA, ‘precision strike’, the ‘Revolution in Military Affairs’ and bureaucracies

The Project Maven experience is similar to the famous example of the tank. Everybody could see tanks were possible from the end of World War I, but for over 20 years Britain and France were hampered by their own bureaucracies in thinking through the operational implications and how to use tanks most effectively. Some in Britain and France did point out the possibilities, but the possibilities were not absorbed into official planning. Powerful bureaucratic interests reinforced the normal blindness to new possibilities. Innovative thinking flourished, relatively, in Germany, where people like Guderian and von Manstein could see how a very big increase in speed could turn into a huge nonlinear advantage — possibilities applied in the ‘von Manstein plan’ that shocked the world in 1940. This was partly because the destruction of German forces after 1918 meant everything had to be built from scratch, which connects to another lesson about successful innovation: in the military, as in business, it is more likely if a new entity is given the job, as with the Manhattan Project to develop nuclear weapons. The consequences were devastating for the world in 1940 but, luckily for us, the nature of the Nazi regime meant that it made very similar errors itself, e.g. regarding the importance of air power in general and long-range bombers in particular. (This history is obviously very complex but this crude summary is roughly right about the main point.)

There was a similar story with the technological developments mainly sparked by DARPA in the 1970s, including stealth (developed in a classified program by the legendary ‘Skunk Works’ and tested at ‘Area 51’), the Global Positioning System (GPS), ‘precision strike’ long-range conventional weapons, drones, advanced wide-area sensors, computerised command and control (C2), and new intelligence, surveillance and reconnaissance (ISR) capabilities. The hope was that together these capabilities could automate the location and destruction of long-range targets and simultaneously improve the precision, destructiveness, and speed of operations.

The approach became known in America as ‘deep-strike architectures’ (DSA) and in the Soviet Union as ‘reconnaissance-strike complexes’ (RUK). The Soviet Marshal Ogarkov realised that these developments, based on America’s superior ability to develop micro-electronics and computers, constituted what he called a ‘Military-Technical Revolution’ (MTR) and posed an existential threat to the Soviet Union. He wrote about them from the late 1970s. (The KGB successfully stole much of the technology but the Soviet system still could not compete.) His writings were analysed in America, particularly by Andy Marshall at the Pentagon’s Office of Net Assessment (ONA). ONA’s analyses of what they started calling the Revolution in Military Affairs (RMA) in turn affected Pentagon decisions. In 1991 the Gulf War demonstrated some of these technologies just as the Soviet Union was imploding. In 1992 the ONA wrote a very influential report (The Military-Technical Revolution) which, unusually, it made public (almost all ONA documents remain classified).

The ~1978 Assault Breaker concept


Soviet depiction of Assault Breaker (Sergeyev, ‘Reconnaissance-Strike Complexes,’ Red Star, 1985)


In many ways Marshal Ogarkov thought more deeply about how to develop the Pentagon’s own technologies than the Pentagon did, which was hampered by the usual problem: operationalising new ideas threatened established bureaucratic interests, including the Pentagon’s procurement system. These problems have continued. It is hard to overstate the scale of waste and corruption in the Pentagon’s horrific procurement system (see below).

China has studied this episode intensely and has integrated its lessons into its ‘anti-access / area denial’ (A2/AD) efforts to limit American power projection in East Asia. America’s response to A2/AD is the ‘Air-Sea Battle’ concept. As Marshal Ogarkov predicted in the 1970s, the ‘revolution’ has evolved into opposing ‘reconnaissance-strike complexes’ facing each other, each side striving to deploy near-nuclear force using extremely precise conventional weapons from far away, all increasingly complicated by possibilities for cyberwar to destroy the infrastructure on which this depends and information operations to alter the enemy population’s perception (very Sun Tzu!).

Graphic: Operational risks of conventional US approach vs A2/AD (CSBA, 2016)


The penetration of the CIA by the KGB, the failure of the CIA to provide good predictions, the general American failure to understand the Soviet economy, doctrine and so on despite many billions spent over decades, the attempts by the Office of Net Assessment to correct institutional failings, the bureaucratic rivalries and so on — all this is a fascinating subject and one can see why China studies it so closely.

*

From experimental drones in the 1970s to drone swarms deployed via iPhone 

The next step for reconnaissance-strike is the application of advanced robotics and artificial intelligence which could bring further order(s) of magnitude performance improvements, cost reductions, and increases in tempo. This is central to the US-China military contest. It will also affect everyone else as much of the technology becomes available to Third World states and small terrorist groups.

I wrote in 2004 about the farce of the UK aircraft carrier procurement story (and many others have warned similarly). Regardless of elections, the farce has continued to squander billions of pounds, enriching some of the worst corporate looters and corrupting public life via the revolving door of officials/lobbyists. Scrutiny by our MPs has been contemptible. They have built platforms that already cannot be sent to a serious war against a serious enemy. A teenager will be able to deploy a drone from their smartphone to sink one of these multi-billion pound platforms. Such a teenager could already take out the stage of a Downing Street photo op with a little imagination and initiative, as I wrote about years ago.

The drone industry is no longer dependent on its DARPA roots and is no longer tied to the economics of the Pentagon’s research budgets and procurement timetables. It is driven by the economics of the extremely rapidly developing smartphone market including Moore’s Law, plummeting costs for sensors and so on. Further, there are great advantages of autonomy including avoiding jamming counter-measures. Kalashnikov has just unveiled its drone version of the AK-47: a cheap anonymous suicide drone that flies to the target and blows itself up — it’s so cheap you don’t care. So you have a combination of exponentially increasing capabilities, exponentially falling costs, greater reliability, greater lethality, greater autonomy, and anonymity (if you’re careful and buy them through cut-outs etc). Then with a bit of added sophistication you add AI face recognition etc. Then you add an increasing capacity to organise many of these units at scale in a swarm, all running off your iPhone — and consider how effective swarming tactics were for people like Alexander the Great.

This is why one of the world’s leading AI researchers, Stuart Russell (professor of computer science at Berkeley) has made this warning:

‘The capabilities of autonomous weapons will be limited more by the laws of physics — for example, by constraints on range, speed and payload — than by any deficiencies in the AI systems that control them. For instance, as flying robots become smaller, their manoeuvrability increases and their ability to be targeted decreases… Despite the limits imposed by physics, one can expect platforms deployed in the millions, the agility and lethality of which will leave humans utterly defenceless.

‘A very, very small quadcopter, one inch in diameter can carry a one- or two-gram shaped charge. You can order them from a drone manufacturer in China. You can program the code to say: “Here are thousands of photographs of the kinds of things I want to target.” A one-gram shaped charge can punch a hole in nine millimeters of steel, so presumably you can also punch a hole in someone’s head. You can fit about three million of those in a semi-tractor-trailer. You can drive up I-95 with three trucks and have 10 million weapons attacking New York City. They don’t have to be very effective, only 5 or 10% of them have to find the target.

‘There will be manufacturers producing millions of these weapons that people will be able to buy just like you can buy guns now, except millions of guns don’t matter unless you have a million soldiers. You need only three guys to write the program and launch them. So you can just imagine that in many parts of the world humans will be hunted. They will be cowering underground in shelters and devising techniques so that they don’t get detected. This is the ever-present cloud of lethal autonomous weapons… There are really no technological breakthroughs that are required. Every one of the component technologies is available in some form commercially… It’s really a matter of just how much resources are invested in it.’

There is some talk in London of ‘what if there is an AI arms race’ but there is already an AI/automation arms race between companies and between countries — it’s just that Europe is barely relevant to the cutting edge of it. Europe wants to be a world player but it has totally failed to generate anything approaching what is happening in coastal America and China. Brussels spends its time on posturing, publishing documents about ‘AI and trust’, whining, spreading fake news about fake news (while ignoring experts like Duncan Watts), trying to damage Silicon Valley companies rather than considering how to nourish European entities with real capabilities, and imposing bad regulation like GDPR (that ironically was intended to harm Google/Facebook but actually helped them in some ways because Brussels doesn’t understand them).

Britain had a valuable asset, DeepMind, and let Google buy it for trivial money without the powers-that-be in Whitehall understanding its significance — it remains a great asset but it is not under British control. Britain has other valuable assets — for example, it is a potential strategic asset to have the AI centre, financial centre, and political centre all in London, IF politicians cared and wanted to nourish AI research and companies. Very obviously, right now we have an MP/official class that is unfit to do this even if they had the vaguest idea what to do, which almost none do (there is a flash of hope on genomics/AI).

Unlike during the Cold War when the Soviet Union could not compete in critical industries such as semi-conductors and consumer electronics, China can compete, is competing, and in some areas is already ahead.

The automation arms race is already hitting all sorts of low-skilled jobs from baristas to factory cleaning, some of which will be largely eliminated much more quickly than economists and politicians expect. Many agricultural jobs are being rapidly eliminated, as are jobs in fields like mining and drilling. Look at a modern mine and you will see driverless trucks on the ground and drones overhead. The implications for the millions who make a living from driving are now well known. (This also has obvious implications for the wisdom of allowing in millions of unskilled immigrants, and one of the oddities of Silicon Valley is that people there simultaneously argue a) politicians are clueless about the impact of automation on unskilled people and b) politicians should allow millions more unskilled immigrants into the country — an example of how technical people are not always as rational about politics as they think they are.)

This automation arms race will affect different countries at different speeds depending on their exposure to fields that are ripe for disruption sooner or later. If countries cannot tax those companies that lead in AI, they will have narrower options. They may even be forced into a sort of colony status. Those who think this is an exaggeration should look at China’s recent deals in Africa where countries are handing over vast amounts of data to China on extremely unfavourable terms. Huge server farms in China are processing facial recognition data on millions of Africans who have no idea their personal data has been handed over. The western media focuses on Facebook with almost no coverage of these issues.

In the extreme case, a significant lead in AI for country X could lead to a self-reinforcing cycle in which it increasingly dominates economically, scientifically, and militarily and perhaps cannot be caught, as Ian Hogarth has argued and to which Putin recently alluded.

China’s investment in AI — more data = better product = more users = more revenue = better talent = more data in a beautiful flywheel…

China has about three times as many internet users as America, but the gap in internet and mobile usage is much larger. ‘In China, people use their mobile phones to pay for goods 50 times more often than Americans. Food delivery volume in China is 10 times more than that of the United States. And shared bicycle usage is 300 times that of the US. This proliferation of data — with more people generating far more information than any other country — is the fuel for improving China’s AI’ (report).
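The flywheel logic above can be sketched as a toy simulation. Everything in it is an illustrative assumption (the square-root returns on data, the 10% growth coefficient, the starting values), not an estimate of anything real; the point is only that a reinforcing loop turns an initial data lead into a widening gap:

```python
# Toy model of a reinforcing data flywheel (all coefficients illustrative):
# each period, product quality rises with accumulated data, users grow with
# quality, and more users generate more data.

def flywheel(initial_data: float, periods: int = 10) -> float:
    """Simulate the loop and return accumulated data."""
    data = initial_data
    users = 1.0
    for _ in range(periods):
        quality = data ** 0.5           # diminishing returns on raw data
        users *= 1.0 + 0.1 * quality    # a better product attracts users
        data += users                   # more users generate more data
    return data

# A head start in data compounds: the absolute gap keeps widening.
leader = flywheel(initial_data=4.0)
laggard = flywheel(initial_data=1.0)
assert leader - laggard > 4.0 - 1.0
```

Change any coefficient and the sizes change, but the reinforcing structure does not: whoever starts with more data pulls further ahead in absolute terms every period.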


China’s AI policy priority is clear. The ‘Next Generation Artificial Intelligence Development Plan’, announced in July 2017, states that China should catch America by 2020 and be the global leader by 2030. Xi Jinping emphasises this repeatedly.


*

Some implications for entangling AI with WMD — take a Milla Jovovich lookalike then add some alpha engineers…

It is important to consider nuclear safety when thinking about AI safety.

The missile silos for US nuclear weapons have repeatedly been shown to be terrifyingly insecure. Sometimes incidents are just bog-standard unchecked incompetence: e.g. nuclear weapons accidentally loaded onto a plane which is then left unattended on an insecure airfield. Coke, great unconventional hookers and a bit of imagination get you into nuclear facilities, just as they get you into pretty much anywhere.

Cyber security is also awful. For example, in a major 2013 study the Pentagon’s Defense Science Board concluded that the military’s systems were vulnerable to cyberattacks, the government was ‘not prepared to defend against this threat’, and a successful cyberattack could cause military commanders to lose ‘trust in the information and ability to control U.S. systems and forces [including nuclear]’ (cf. this report). Since then, the NSA itself has had its deepest secrets hacked by an unidentified actor (possibly/probably AI-enabled) in a breach much more serious but infinitely less famous than Snowden (one that resembles a chapter in the best recent techno-thriller, Daemon).

This matches research just published in the Bulletin of the Atomic Scientists on the most secure (Level 3/enhanced and Level 4) bio-labs. It is now clear that laboratories conducting research on viruses that could cause a global pandemic are extremely dangerous. I am not aware of any mainstream media in Britain reporting this (story here).

Further, the systems for coping with nuclear crises have failed repeatedly. They are extremely vulnerable to false alarms, malicious attacks, or even freak events like, famously, a bear (yes, a bear) triggering alerts. We have repeatedly escaped accidental nuclear war because of flukes, such as odd individuals not passing on ‘launch’ warnings or simply refusing to act. The US National Security Adviser has sat at the end of his bed looking at his sleeping wife, ‘knowing’ she won’t wake up, while pondering his advice to the President on a counterattack that would destroy half the world, only to be told minutes later that the launch warning was the product of a catastrophic error. These problems have not been dealt with. We don’t know how bad the problem is: many details are classified and many incidents go totally unreported.

Further, the end of the Cold War gave many politicians and policy people in the West the completely false idea that established ideas about deterrence had been vindicated, but they have not been (cf. Payne’s Fallacies of Cold War Deterrence and The Great American Gamble). Senior decision-makers are confident that their very dangerous ideas are ‘rational’.

US and Russian nukes remain on ‘launch on warning’ — i.e. a hair trigger — so the vulnerabilities could recur at any time. Threats to use them are explicitly contemplated in crises such as Taiwan and Kashmir. Nuclear weapons have proliferated and are very likely to proliferate further. There are now thousands of people, including North Korean and Pakistani scientists, who understand the technology. And there is a large network of scientists involved in the classified Soviet bio-weapon programme, which was largely unknown to western intelligence services before the end of the Cold War and has since dispersed across the world.

These are all dangers already known to experts. But now we are throwing at these wobbling systems and flawed/overconfident thinking the development of AI/ML capabilities. This will exacerbate all these problems and make crises even faster, more confusing and more dangerous.

Yes, you’re right to ask ‘why don’t I read about this stuff in the mainstream media?’. There is very little media coverage of reports on things like nuclear safety and pretty much nobody with real power pays any attention to all this. If those at the apex of power don’t take nuclear safety seriously, why would you think they are on top of anything? Markets and science have done wondrous things but they cannot by themselves fix such crazy incentive problems with government institutions.

*

Government procurement — ‘the horror, the horror’

The problem of ‘rational procurement’ is incredibly hard to solve, and even during existential conflicts problems with incentives recur. If state agencies, out of fear of what opponents might be doing, create organisations that escape most normal bureaucratic constraints, then AI will escalate in importance to the military and intelligence services even more rapidly than it already is. It is possible that China will build organisations to deploy AI to war/pseudo-war/hybrid-war faster and better than America.

In January 2017 I wrote about systems engineering and systems management — an approach for delivering extremely complex and technically challenging projects. (It was already clear the Brexit negotiations were botched, that Heywood, Hammond et al had effectively destroyed any sort of serious negotiating position, and I suggested Westminster/Whitehall had to learn from the successful management of complex projects to avert what would otherwise be a debacle.) These ideas were born with the Manhattan Project to build the first nuclear bomb, the ICBM project in the 1950s, and the Apollo program in the 1960s which put man on the moon. These projects combined a) some of the most astonishing intellects the world has seen, a subset of whom were also brilliant at navigating government (e.g. von Neumann), and b) phenomenally successful practical managers: e.g. General Groves on the Manhattan Project, Bernard Schriever on ICBMs and George Mueller on Apollo.

The story we are told about the Manhattan Project focuses almost exclusively on the extraordinary collection of physicists and mathematicians at Los Alamos, but they were a relatively small part of the whole story, which involved an engineer building an unprecedented operation at multiple sites across America, in secret and with extraordinary speed, while many doubted the project was possible — then coordinating multiple projects, integrating distributed expertise and delivering a functioning bomb.

If you read Groves’ fascinating book, Now It Can Be Told, and a recent biography of him, you will acquire what is in many important ways effectively cutting-edge knowledge about making huge endeavours work — ‘cutting-edge’ because almost nobody has learned from this (see below). If you are one of the many MPs aspiring to be not just Prime Minister but a Prime Minister who gets important things done, there are very few books that would repay careful study as much as Groves’. If you do, you could avoid joining the list of Major, Blair, Brown, Cameron and May, who bungle around for a few years before being spat out to write very similar accounts of how they struggled to ‘find the levers of power’, couldn’t get officials to do what they wanted, and never understood how to get things done.


Systems management is generally relevant to the question: how best to manage very big complex projects? It was relevant to the referendum (Victoria Woodcock was Vote Leave’s George Mueller). It is relevant to the Brexit negotiations and the appalling management process between May/Hammond/Heywood/Robbins et al, which has been a case study in how not to manage a complex project (Parliament also deserves much blame for never scrutinising this process). It is relevant to China’s internal development and the US-China geopolitical struggle. It is relevant to questions like ‘how to avoid nuclear war’ and ‘how would you build a Manhattan Project for safe AGI?’. It is relevant to how you could develop a high performance team in Downing Street that could end the current farce. The same issues and lessons crop up in every account of a Presidency and the role of the Chief of Staff. If you want to change Whitehall from 1) ‘failure is normal’ to 2) ‘align incentives with predictive accuracy, operational excellence and high performance’, then systems management provides an extremely valuable anti-checklist for Whitehall.

Vital principles proved to deliver things much faster and more effectively than usual were established more than half a century ago, so it would be natural to assume that these lessons became integrated into training and practice both in the world of management and in politics/government. This did not happen. In fact, these lessons have been ‘unlearned’.

General Groves was pushed out of the Pentagon (‘too difficult’). The ICBM project, conducted in extreme panic post-Sputnik, had to re-create an organisation outside the Pentagon and re-learn Groves’ lessons a decade later. NASA was a mess until Mueller took over and imported the lessons from Manhattan and ICBMs. After Apollo’s success in 1969, Mueller left and NASA reverted to being a ‘normal’ organisation and forgot his successful approach. (The plans Mueller left for developing a manned lunar base, space commercialisation, and man on Mars by the end of the 1980s were also tragically abandoned.)

While Mueller was putting man on the moon, McNamara’s ‘Whiz Kids’ in the Pentagon, who took America into the Vietnam War, were dismantling the successful approach to systems management, claiming that it was ‘wasteful’ and they could do it ‘more efficiently’. Their approach was a disaster, and not just regarding Vietnam. The combination of certain definitions of ‘efficiency’ and new legal processes ensured that procurement was routinely over-budget, over-schedule, over-promising, and generated more and more scandals. Regardless of failure, the McNamara approach metastasised across the Pentagon. Incentives are so disastrously misaligned that almost every attempt at reform makes these problems worse while lawyers and lobbyists get richer. Of course, if lawmakers knew how the Manhattan Project and Apollo were done — the lack of ‘legal process’, things happening with a mere handshake instead of years of reviews enriching lawyers! — they would be stunned.

Successes since the 1960s have often been freaks (e.g. the F-16, Boyd’s brainchild) or ‘black’ projects (e.g. stealth), often conducted in Skunk Works-style operations outside normal laws. It is striking that US classified special forces, JSOC (equivalent to the SAS/SBS etc), routinely use a special process to procure technologies outside the normal law to avoid the delays. This connects to George Mueller saying late in life that Apollo would be impossible within the current legal/procurement system and could only be done as a ‘black’ program.

The lessons of success have been so widely ‘unlearned’ throughout the government system that when Obama tried to roll out ObamaCare, it blew up. When they investigated, the answer was: we didn’t use systems management so the parts didn’t connect and we never tested this properly. Remember: Obama had the support of the vast majority of Silicon Valley expertise but this did not avert disaster. All anyone had to do was read Groves’ book and call Sam Altman or Patrick Collison and they could have provided the expertise to do it properly but none of Obama’s staff or responsible officials did.

The UK is the same. MPs constantly repeat the absurd SW1 mantra that ‘there’s no money’ while handing out a quarter of a TRILLION pounds every year on procurement and contracting. I engaged with this many times in the Department for Education, 2010–14. The Whitehall procurement system is embedded in the dominant framework of EU law (the EU law is bad but UK officials have made it worse). It is complex, slow and wasteful. It hugely favours large established companies with powerful political connections — true corporate looters. The likes of Carillion and lawyers love it because they gain from the complexity, delays, and waste. It is horrific for SMEs to navigate and few can afford even to try to participate. The officials in charge of multi-billion pound processes are mostly mediocre, often appalling. In the MoD, corruption adds to the problems.

Because of mangled incentives and reinforcing culture, the senior civil service does not care about this and does not try to improve. Total failure is totally irrelevant to the senior civil service and is absolutely no reason to change behaviour even if it means thousands of people killed and many billions wasted. Occasionally incidents like Carillion blow up and the same stories are written and the same quotes given — ‘unbelievable’, ‘scandal’, ‘incompetence’, ‘heads will roll’. Nothing changes. The closed and dysfunctional Whitehall system fights to stay closed and dysfunctional. The media caravan soon rolls on. ‘Reform’ in response to botches and scandals almost inevitably makes things even slower and more expensive — even more focus on process rather than outcomes, with the real focus being ‘we can claim to have acted properly because of our Potemkin process’. Nobody is incentivised to care about high performance and error-correction. The MPs ignore it all. Select Committees issue press releases about ‘incompetence’ but never expose the likes of Heywood to persistent investigation to figure out what has really happened and why. Nobody cares.

This culture has been encouraged by the most senior leaders. The recent Cabinet Secretary Jeremy Heywood assured us all that the civil service could easily cope with Brexit and that ‘definitely on digital, project management we’ve got nothing to learn from the private sector’. His predecessor, O’Donnell, made similarly asinine comments. The fact that Heywood could make such a laughable claim after years of presiding over expensive debacle after expensive debacle, and be universally praised by Insiders, tells you all you need to know about ‘the blind leading the blind’ in Westminster. Heywood was a brilliant courtier-fixer but he didn’t care about management and operational excellence. Whitehall now incentivises the promotion of courtier-fixers, not great managers like Groves and Mueller. Management, like science, is regarded contemptuously as something for the lower orders to think about, not the ‘strategists’ at the top.

Long-term leadership from the likes of O’Donnell and Heywood is why officials know that practically nobody is ever held accountable regardless of the scale of failure. Being in charge of massive screwups is no barrier to promotion. Operational excellence is no requirement for promotion. You will often see the official in charge of some debacle walking to the tube at 4pm (‘compressed hours’ old boy) while the debacle is live on TV (I know because I saw this regularly in the DfE). The senior civil service now operates like a protected caste to preserve its power and privileges regardless of who the ignorant plebs vote for.

You can see how crazy the incentives are when you consider elections. In recent British elections the difference in spending plans between the two sides has been a tiny fraction of the £250 billion p/a procurement and contracting budget — yet nobody ever really talks about this budget; it is the great unmentionable subject in Westminster! There’s the odd slogan about ‘let’s cut waste’ but the public rightly ignores this and assumes both sides will do nothing about it out of a mix of ignorance, incompetence and flawed incentives, so big powerful companies continue to loot the taxpayer. Look at both parties now just letting the HS2 debacle grow and grow, with the budget out of control, the schedule out of control, officials briefing ludicrously that the ‘high speed’ rail will be SLOWED DOWN to reduce costs, and so on, all while an army of privileged looters, lobbyists, and lawyers hoovers up taxpayer cash.

And now, when Brexit means the entire legal basis for procurement is changing, do these MPs, ministers and officials finally examine it and see how they could improve? No, of course not! The top priority for Heywood et al on Brexit and procurement has been to get hapless ministers to lock Britain into the same nightmare system even after we leave the EU — nothing must disrupt the gravy train! There has been a lot of talk about £350 million per week for the NHS since the referendum. I could find this in days, and in ways that would have strong public support. But nobody is even trying to do this, and if some minister took a serious interest, they would soon find all sorts of things going wrong for them until the PermSec has a quiet word and the natural order is restored…

To put the failures of politicians and officials in context, it is fascinating that most of the commercial world also ignores the crucial lessons from Groves et al! Most commercial megaprojects are over-schedule, over-budget, and over-promised, and the data show little improvement over decades (cf. What You Should Know About Megaprojects, and Why, Flyvbjerg). And look at this 2019 article in Harvard Business Review which, remarkably, argues that managers in modern MBA programmes are taught NOT TO VALUE OPERATIONAL EXCELLENCE! ‘Operational effectiveness — doing the same thing as other companies but doing it exceptionally well — is not a path to sustainable advantage in the competitive universe’, elite managers are taught. The authors looked at company data and concluded that, shock horror, operational excellence turns out to be vital after all. They conclude:

‘[T]he management community may have badly underestimated the benefits of core management practices [and] it’s unwise to teach future leaders that strategic decision making and basic management processes are unrelated.’ [!]

The study of management, like politics, is not a field with genuine expertise. Like other social sciences it is full of ‘cargo cult science’, fads and charlatans drowning out core lessons. This makes it easier to understand the failure of politicians: when elite business schools teach students NOT to value operational excellence, and when supposed management gurus like McNamara actually push things in a worse direction, it is less surprising that people like Cameron and Heywood don’t know which way to turn. Imagine the normal politician or senior official in Washington or London. They have almost no exposure to genuinely brilliant managers or very well run organisations. Their exposure is overwhelmingly to ‘normal’ CEOs of public companies and normal bureaucracies. As the most successful investors in world history, Buffett and Munger, have pointed out for over 50 years, many of these corporate CEOs, the supposedly ‘serious people’, don’t know what they are doing and have terrible incentives.

But surely if someone recently created something unarguably and massively world-changing, like inventing the internet and personal computing, then everyone would pay attention, right? WRONG! I wrote this (2018) about the extraordinary ARPA-PARC episode, which created much of the ecosystem for interactive personal computing and the internet and provided a model for how to conduct high-risk-high-payoff technology research.

There is almost no research funded on ARPA-PARC principles anywhere in the world. ARPA was deliberately made less like what it was when it created the internet. The man most responsible for PARC’s success, Robert Taylor, was fired and the most effective team in the history of computing research was disbanded. Xerox notoriously could not overcome its internal incentive problems and let Steve Jobs and Bill Gates develop the ideas. Although politicians love giving speeches about ‘innovation’ and launching projects for PR, governments subsequently almost completely ignored the lessons of how to create superproductive processes, and there are almost zero examples of the ARPA-PARC approach in the world today (an interesting partial exception is Janelia). Whitehall, as a subset of its general vandalism towards science, has successfully resisted all attempts to learn from ARPA for decades. This has been helped by the attitude of leading scientists themselves, whose incentives push them toward supporting objectively bad funding models. In science as well as politics, incentives can be destructive and stop learning. As Alan Kay, one of the crucial PARC researchers, wrote:

‘The most interesting thing has been the contrast between appreciation/exploitation of the inventions/contributions versus the almost complete lack of curiosity and interest in the processes that produced them… [I]n most processes today — and sadly in most important areas of technology research — the administrators seem to prefer to be completely in control of mediocre processes to being “out of control” with superproductive processes. They are trying to “avoid failure” rather than trying to “capture the heavens”.’

Or as George Mueller said later in life about the institutional imperative and project failures:

‘Fascinating that the same problems recur time after time, in almost every program, and that the management of the program, whether it happened to be government or industry, continues to avoid reality.’

So, on one hand, radical improvements in non-military spheres would be a wonderful free lunch. We simply apply old lessons, scale them up with technology and there are massive savings for free.

But wouldn’t it be ironic if we don’t do this — if instead we keep our dysfunctional systems for non-military spheres, carry on with the waste, failure and corruption, but channel the Cold War and, in the atmosphere of an arms race, America and China apply the lessons of Groves, Schriever and Mueller to military AI procurement?!

Not everybody has unlearned the lessons from Groves and Mueller…

*

China: a culture of learning from systems management

‘All stable processes we shall predict. All unstable processes we shall control.’ von Neumann.

In Science there was an interesting article on Qian Xuesen, the godfather of China’s nuclear and space programs, which also had a profound effect on ideas about government. Qian studied in California at Caltech, where he worked with the Hungarian mathematician Theodore von Kármán, who co-founded the Jet Propulsion Laboratory (JPL), which worked on rockets after 1945.

‘In the West, systems engineering’s heyday has long passed. But in China, the discipline is deeply integrated into national planning. The city of Wuhan is preparing to host in August the International Conference on Control Science and Systems Engineering, which focuses on topics such as autonomous transportation and the “control analysis of social and human systems.” Systems engineers have had a hand in projects as diverse as hydropower dam construction and China’s social credit system, a vast effort aimed at using big data to track citizens’ behavior. Systems theory “doesn’t just solve natural sciences problems, social science problems, and engineering technology problems,” explains Xue Huifeng, director of the China Aerospace Laboratory of Social System Engineering (CALSSE) and president of the China Academy of Aerospace Systems Science and Engineering in Beijing. “It also solves governance problems.”

‘The field has resonated with Chinese President Xi Jinping, who in 2013 said that “comprehensively deepening reform is a complex systems engineering problem.” So important is the discipline to the Chinese Communist Party that cadres in its Central Party School in Beijing are required to study it. By applying systems engineering to challenges such as maintaining social stability, the Chinese government aims to “not just understand reality or predict reality, but to control reality,” says Rogier Creemers, a scholar of Chinese law at the Leiden University Institute for Area Studies in the Netherlands…

‘In a building flanked by military guards, systems scientists from CALSSE sit around a large conference table, explaining to Science the complex diagrams behind their studies on controlling systems. The researchers have helped model resource management and other processes in smart cities powered by artificial intelligence. Xue, who oversees a project named for Qian at CALSSE, traces his work back to the U.S.-educated scientist. “You should not forget your original starting point,” he says…

‘The Chinese government claims to have wired hundreds of cities with sensors that collect data on topics including city service usage and crime. At the opening ceremony of China’s 19th Party Congress last fall, Xi said smart cities were part of a “deep integration of the internet, big data, and artificial intelligence with the real economy.”… Xue and colleagues, for example, are working on how smart cities can manage water resources. In Guangdong province, the researchers are evaluating how to develop a standardized approach for monitoring water use that might be extended to other smart cities.

‘But Xue says that smart cities are as much about preserving societal stability as streamlining transportation flows and mitigating air pollution. Samantha Hoffman, a consultant with the International Institute for Strategic Studies in London, says the program is tied to long-standing efforts to build a digital surveillance infrastructure and is “specifically there for social control reasons” (Science, 9 February, p. 628). The smart cities initiative builds on 1990s systems engineering projects — the “golden” projects — aimed at dividing cities into geographic grids for monitoring, she adds.

‘Layered onto the smart cities project is another systems engineering effort: China’s social credit system. In 2014, the country’s State Council outlined a plan to compile data on individuals, government officials, and companies into a nationwide tracking system by 2020. The goal is to shape behavior by using a mixture of carrots and sticks. In some citywide and commercial pilot projects already underway, individuals can be dinged for transgressions such as spreading rumors online. People who receive poor marks in the national system may eventually be barred from travel and denied access to social services, according to government documents…

‘Government documents refer to the social credit system as a “social systems engineering project.” Details about which systems engineers consulted on the project are scant. But one theory that may have proved useful is Qian’s “open complex giant system,” Zhu says. A quarter-century ago, Qian proposed that society is a system comprising millions of subsystems: individual persons, in human parlance. Maintaining control in such a system is challenging because people have diverse backgrounds, hold a broad spectrum of opinions, and communicate using a variety of media, he wrote in 1993 in the Journal of Systems Engineering and Electronics. His answer sounds like an early road map for the social credit system: to use then-embryonic tools such as artificial intelligence to collect and synthesize reams of data. According to published papers, China’s hard systems scientists also use approaches derived from Qian’s work to monitor public opinion and gauge crowd behavior.

‘Hard systems engineering worked well for rocket science, but not for more complex social problems, Gu says: “We realized we needed to change our approach.” He felt strongly that any methods used in China had to be grounded in Chinese culture.

‘The duo came up with what it called the WSR approach: It integrated wuli, an investigation of facts and future scenarios; shili, the mathematical and conceptual models used to organize systems; and renli. Though influenced by U.K. systems thinking, the approach was decidedly eastern, its precepts inspired by the emphasis on social relationships in Chinese culture. Instead of shunning mathematical approaches, WSR tried to integrate them with softer inquiries, such as taking stock of what groups a project would benefit or harm. WSR has since been used to calculate wait times for large events in China and to determine how China’s universities perform, among other projects…

‘Zhu … recently wrote that systems science in China is “under a rationalistic grip, with the ‘scientific’ leg long and the democratic leg short.” Zhu says he has no doubt that systems scientists can make projects such as the social credit system more effective. However, he cautions, “Systems approaches should not be just a convenient tool in the expert’s hands for realizing the party’s wills. They should be a powerful weapon in people’s hands for building a fair, just, prosperous society.”’

In Open Complex Giant System (1993), Qian Xuesen compares the study of physics, where large complex systems can be studied using the phenomenally successful tools of statistical mechanics, with the study of society, which has no such methods. He describes an overall approach in which fields spanning the physical sciences, the study of the mind, medicine, geoscience and so on must be integrated in a sort of uber-field he calls ‘social systems engineering’.

‘Studies and practices have clearly proved that the only feasible and effective way to treat an open complex giant system is a metasynthesis from the qualitative to the quantitative, i.e. the meta-synthetic engineering method. This method has been extracted, generalized and abstracted from practical studies…’

This involves integrating scientific theories, data, quantitative models, and qualitative practical expert experience into ‘models built from empirical data and reference material, with hundreds and thousands of parameters’, which are then simulated.

‘This is quantitative knowledge arising from qualitative understanding. Thus metasynthesis from qualitative to quantitative approach is to unite organically the expert group, data, all sorts of information, and the computer technology, and to unite scientific theory of various disciplines and human experience and knowledge.’

He gives some examples and gives this diagram as a high level summary:

Screenshot 2019-02-22 17.31.33

So, China is combining:

  • A massive ~$150 billion data science/AI investment program with the goal of global scientific/technological leadership and economic dominance.
  • A massive investment program in associated science/technology such as quantum information/computing.
  • A massive domestic surveillance program combining AI, facial recognition, genetic identification, the ‘social credit system’ and so on.
  • A massive anti-access/area denial military program aimed at America/Taiwan.
  • A massive technology espionage program that, for example, successfully stole the software codes for the F-35.
  • A massive innovation ecosystem that rivals Silicon Valley and may eclipse it (cf. this fascinating documentary on Shenzhen).
  • The use of proven systems management techniques for integrating principles of effective action to predict and manage complex systems at large scale.

America led the development of AI technologies and has the huge assets of its universities, a tradition (weakening) of welcoming scientists (since they opened Princeton to Einstein, von Neumann and Gödel in the 1930s), and the ecosystem of places like Silicon Valley.

It is plausible that within 15 years China could find some nonlinear asymmetries that provide an edge while, channelling Marshal Ogarkov, it outthinks the Pentagon in management and operations.

*

A few interesting recent straws in the AI/robotics wind

I blogged recently about Judea Pearl. He is one of the most important scholars in the field of causal reasoning. He wrote a short paper about the limits of state-of-the-art AI systems using ‘deep learning’ neural networks — such as the AlphaGo system which recently conquered the game of GO — and how these systems could be improved. Humans can interrogate stored representations of their environment with counter-factual questions: how to instantiate this in machines? (Note also, for economists, Pearl’s statement that ‘I can hardly name a handful (<6) of economists who can answer even one causal question posed in ucla.in/2mhxKdO’.)

In an interview he said this about self-aware robots:

‘If a machine does not have a model of reality, you cannot expect the machine to behave intelligently in that reality. The first step, one that will take place in maybe 10 years, is that conceptual models of reality will be programmed by humans. The next step will be that machines will postulate such models on their own and will verify and refine them based on empirical evidence. That is what happened to science; we started with a geocentric model, with circles and epicycles, and ended up with a heliocentric model with its ellipses.

‘We’re going to have robots with free will, absolutely. We have to understand how to program them and what we gain out of it. For some reason, evolution has found this sensation of free will to be computationally desirable… Evidently, it serves some computational function.

‘I think the first evidence will be if robots start communicating with each other counterfactually, like “You should have done better.” If a team of robots playing soccer starts to communicate in this language, then we’ll know that they have a sensation of free will. “You should have passed me the ball — I was waiting for you and you didn’t!” “You should have” means you could have controlled whatever urges made you do what you did, and you didn’t.

[When will robots be evil?] When it appears that the robot follows the advice of some software components and not others, when the robot ignores the advice of other components that are maintaining norms of behavior that have been programmed into them or are expected to be there on the basis of past learning. And the robot stops following them.’

A DARPA project recently published this on self-aware robots.

‘A robot that learns what it is, from scratch, with zero prior knowledge of physics, geometry, or motor dynamics. Initially the robot does not know if it is a spider, a snake, an arm — it has no clue what its shape is. After a brief period of “babbling,” and within about a day of intensive computing, their robot creates a self-simulation. The robot can then use that self-simulator internally to contemplate and adapt to different situations, handling new tasks as well as detecting and repairing damage in its own body.

‘Initially, the robot moved randomly and collected approximately one thousand trajectories, each comprising one hundred points. The robot then used deep learning, a modern machine learning technique, to create a self-model. The first self-models were quite inaccurate, and the robot did not know what it was, or how its joints were connected. But after less than 35 hours of training, the self-model became consistent with the physical robot to within about four centimeters…

‘Lipson … notes that self-imaging is key to enabling robots to move away from the confinements of so-called “narrow-AI” towards more general abilities. “This is perhaps what a newborn child does in its crib, as it learns what it is,” he says. “We conjecture that this advantage may have also been the evolutionary origin of self-awareness in humans. While our robot’s ability to imagine itself is still crude compared to humans, we believe that this ability is on the path to machine self-awareness.”

‘Lipson believes that robotics and AI may offer a fresh window into the age-old puzzle of consciousness. “Philosophers, psychologists, and cognitive scientists have been pondering the nature of self-awareness for millennia, but have made relatively little progress,” he observes. “We still cloak our lack of understanding with subjective terms like ‘canvas of reality,’ but robots now force us to translate these vague notions into concrete algorithms and mechanisms.”

‘Lipson and Kwiatkowski are aware of the ethical implications. “Self-awareness will lead to more resilient and adaptive systems, but also implies some loss of control,” they warn. “It’s a powerful technology, but it should be handled with care.”’

Robot paper HERE.

Press release HERE.

Recently, OpenAI, one of the world leaders in AI founded by Sam Altman and Elon Musk, announced:

‘… a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization — all without task-specific training… The model is chameleon-like — it adapts to the style and content of the conditioning text. This allows the user to generate realistic and coherent continuations about a topic of their choosing… Our model is capable of generating samples from a variety of prompts that feel close to human quality and show coherence over a page or more of text… These samples have substantial policy implications: large language models are becoming increasingly easy to steer towards scalable, customized, coherent text generation, which in turn could be used in a number of beneficial as well as malicious ways.’ (bold added).

Screenshot 2019-02-15 11.48.37

OpenAI has not released the full model yet because they take safety issues seriously. Cf. this for a discussion of some safety issues and links. As the author says re some of the complaints about OpenAI not releasing the full model, when you find normal cyber security flaws you do not publish the problem immediately — that is a ‘zero day attack’ and we should not ‘promote a norm that zero-day threats are OK in AI.’ Quite. It’s also interesting that it would probably only take ~$100,000 for a resourceful individual to re-create the full model quite quickly.

A few weeks ago, Deep Mind showed that their approach to beating human champions at GO can also beat the world’s best players at StarCraft, a game of imperfect information which is much closer to real life human competitions than perfect information games like chess and GO. OpenAI has shown something similar with a similar game, DOTA.


*

Moore’s Law: what if a country spends 1-10% GDP pushing such curves?

The march of Moore’s Law is entangled in many predictions. It is true that in some ways Moore’s Law has flattened out recently…

Screenshot 2018-03-12 11.55.21

… BUT specialised chips developed for machine learning and other adaptations have actually kept it going. This chart shows how it actually started long before Moore and has been remarkably steady for ~120 years (NVIDIA in the top right is specialised for deep learning)…

Screenshot 2018-03-12 11.56.15

NB. This is a logarithmic scale so makes progress seem much less dramatic than the ~20 orders of magnitude it represents.

  • Since von Neumann and Turing led the development of the modern computer in the 1940s, the price of computation has got ~x10 cheaper every five years (so x100 per decade), so over ~75 years that’s a factor of about a thousand trillion (10^15).
  • The industry seems confident the graph above will continue roughly as it has for at least another decade, though not because of continued transistor doubling: transistors have reached such a tiny nanometre scale that quantum effects will soon interfere with engineering. This means ~100-fold improvement before 2030, and combined with the ecosystem of entrepreneurs/VC/science investment etc this will bring many major disruptions even without significant progress with general intelligence.
  • Dominant companies like Apple, Amazon, Google, Baidu, Alibaba etc (NB. no big EU players) have extremely strong incentives to keep this trend going given the impact of mobile computing / the cloud etc on their revenues.
  • Computers will be ~10,000 times more powerful than today for the same price if this chart holds for another 20 years, and ~1 million times more powerful for the same price if it holds for another 30 years. Today’s multi-billion dollar supercomputer performance would then be available for ~$1,000, just as the supercomputer power of a few decades ago is now available in your smartphone.
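The arithmetic in these bullets can be sanity-checked in a few lines of Python (the function name is mine; the only assumption is the x10-per-five-years rate stated above):

```python
# Rough check of the cost-of-computation arithmetic above.
# Assumption (from the bullets): the price of computation falls ~10x every five years.

def improvement(years, tenfold_every=5):
    """Multiplicative fall in price after `years`, at 10x per `tenfold_every` years."""
    return 10 ** (years / tenfold_every)

print(improvement(75))   # ~10^15 — a thousand trillion — since the 1940s
print(improvement(20))   # ~10,000x over another 20 years
print(improvement(30))   # ~1,000,000x over another 30 years
```

The same one-liner reproduces the x100-per-decade figure: `improvement(10)` is 100.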

But there is another dimension to this trend. Look at this graph below. It shows the total amount of compute, in petaflop/s-days, that was used to train some selected AI projects using neural networks / deep learning.

‘Since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.5-month doubling time (by comparison, Moore’s Law had an 18-month doubling period). Since 2012, this metric has grown by more than 300,000x (an 18-month doubling period would yield only a 12x increase)… The chart shows the total amount of compute, in petaflop/s-days, that was used to train selected results that are relatively well known, used a lot of compute for their time, and gave enough information to estimate the compute used. A petaflop/s-day (pfs-day) consists of performing 10^15 neural net operations per second for one day, or a total of about 10^20 operations.’ (Cf. OpenAI blog.)

Screenshot 2018-05-19 17.04.04
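The doubling arithmetic in the OpenAI quote above can be reproduced in a few lines (a sketch, using only the figures from the quote):

```python
import math

# Check the doubling arithmetic in the OpenAI quote above.
growth = 300_000                       # growth in the largest training runs since 2012
doublings = math.log2(growth)          # ~18.2 doublings
years = doublings * 3.5 / 12           # at a 3.5-month doubling time: ~5.3 years
print(f"{doublings:.1f} doublings over ~{years:.1f} years")

# The same period at a Moore's-Law-style 18-month doubling time:
print(f"~{2 ** (doublings * 3.5 / 18):.0f}x")   # ~12x, as the quote says

# And the quote's unit: a petaflop/s-day is 10^15 ops/s sustained for one day,
pfs_day = 1e15 * 86_400                # ~8.6e19, i.e. roughly 10^20 operations
```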

The AlphaZero project in the top right is the recent Deep Mind project in which an AI system (a successor to the original AlphaGo that first beat human GO champions) zoomed by centuries of human knowledge on GO and chess in about one day of training.

Many dramatic breakthroughs in machine learning, particularly using neural networks (NNs), are open source. They are scaling up very fast. They will be networked together into ‘networks of networks’ and will become x10, x100, x1,000 more powerful. These NNs will keep demonstrating better than human performance in relatively narrowly defined tasks (like winning games) but these narrow definitions will widen unpredictably.

OpenAI’s blog showing the above graph concludes:

‘Overall, given the data above, the precedent for exponential trends in computing, work on ML specific hardware, and the economic incentives at play, we think it’d be a mistake to be confident this trend won’t continue in the short term. Past trends are not sufficient to predict how long the trend will continue into the future, or what will happen while it continues. But even the reasonable potential for rapid increases in capabilities means it is critical to start addressing both safety and malicious use of AI today. Foresight is essential to responsible policymaking and responsible technological development, and we must get out ahead of these trends rather than belatedly reacting to them.’ (Bold added)

This recent analysis of the extremely rapid growth of deep learning systems tries to estimate how long this rapid growth can continue and what interesting milestones may fall. It considers 1) the rate of growth of cost, 2) the cost of current experiments, and 3) the maximum amount that can be spent on an experiment in the future. Its rough answers are:

  1. ‘The cost of the largest experiments is increasing by an order of magnitude every 1.1 – 1.4 years.
  2. ‘The largest current experiment, AlphaGo Zero, probably cost about $10M.’
  3. On the basis of the Manhattan Project costing ~1% of GDP, that gives ~$200 billion for one AI experiment. Given the growth rate, we could expect a $200B experiment in 5-6 years.
  4. ‘There is a range of estimates for how many floating point operations per second are required to simulate a human brain for one second. Those collected by AI Impacts have a median of 10^18 FLOPS (corresponding roughly to a whole-brain simulation using Hodgkin-Huxley neurons)’. [NB. many experts think 10^18 is off by orders of magnitude and it could easily be x1,000 or more higher.]
  5. ‘So for the shortest estimates … we have already reached enough compute to pass the human-childhood milestone. For the median estimate, and the Hodgkin-Huxley estimates, we will have reached the milestone within 3.5 years.’
  6. We will not reach the bigger estimates (~10^25 FLOPS) within the 10-year window.
  7. ‘The AI-Compute trend is an extraordinarily fast trend that economic forces (absent large increases in GDP) cannot sustain beyond 3.5-10 more years. Yet the trend is also fast enough that if it is sustained for even a few years from now, it will sweep past some compute milestones that could plausibly correspond to the requirements for AGI, including the amount of compute required to simulate a human brain thinking for eighteen years, using Hodgkin-Huxley neurons.’
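The projection in points 1-3 follows directly from the stated figures (~$10M today, one order of magnitude every 1.1-1.4 years, a ~$200B ceiling), as a quick sketch shows:

```python
import math

# Sketch of the projection above: how long until the largest experiment
# costs ~$200B, if costs grow 10x every 1.1-1.4 years from ~$10M today?
start_cost = 10e6        # AlphaGo Zero, ~$10M (point 2)
ceiling = 200e9          # ~1% of US GDP, the Manhattan Project comparison (point 3)
ooms = math.log10(ceiling / start_cost)   # ~4.3 orders of magnitude to go

for years_per_oom in (1.1, 1.4):
    print(f"one OOM per {years_per_oom} years -> ~{ooms * years_per_oom:.1f} years")
# roughly 4.7 to 6.0 years, matching the '5-6 years' estimate in point 3
```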

I can’t comment on the technical aspects of this but one political/historical point. I think this analysis is wrong about the Manhattan Project (MP). The argument is that the MP represents a reasonable upper bound for what America might spend. But the MP was not constrained by money — it was mainly constrained by theoretical and engineering challenges, constraints on non-financial resources and so on. General Groves, who ran the MP, does not say in his book that money was a problem — in fact, one of the extraordinary aspects of the story is the extreme (to today’s eyes) measures he took to ensure money was not a problem. If more than 1% of GDP had been needed, he’d have got it (until the intelligence came in from Europe that the Nazi programme was not threatening).

This is an important analogy. America and China are investing very heavily in AI but nobody knows — are there places at the edge of ‘breakthroughs with relatively narrow applications’ where suddenly you push ‘a bit’ and you get lollapalooza results with general intelligence? What if someone thinks — if I ‘only’ need to add some hardware and I can muster, say, 100 billion dollars to buy it, maybe I could take over the world? What if they’re right?

I think it is therefore more plausible to use the US defence budget at the height of the Cold War as a ‘reasonable estimate’ for what America might spend if they feel they are in an existential struggle. Washington knows that China is putting vast resources into AI research. If it starts taking over from Deep Mind and OpenAI as the place where the edge-of-the-art is discovered, then it WILL soon be seen as an existential struggle and there would rapidly be political pressures for a 1950s/1960s style ‘extreme’ response. So a reasonable upper bound might be at least 5-8 times bigger than 1% of GDP.

Further, unlike the nuclear race, an AGI race carries implications of not just ‘destroy global civilisation and most people’ but ‘potentially destroys ABSOLUTELY EVERYTHING not just on earth but, given time and the speed of light, everywhere’ — i.e. potentially all molecules re-assembled in the pursuit of some malign energy-information optimisation process. Once people realise just how bad AGI could go if the alignment problem is not solved (see below), would it not be reasonable to assume that even more money than ~8% of GDP will be found if/when this becomes a near-term fear of politicians?

Some in Silicon Valley who already have many billions at their disposal are already calculating numbers for these budgets. Surely people in Chinese intelligence are doodling the same as they listen to the week’s audio of Larry talking to Demis…?

*

General intelligence and safety

‘[R]ational systems exhibit universal drives towards self-protection, resource acquisition, replication and efficiency. Those drives will lead to anti-social and dangerous behaviour if not explicitly countered. The current computing infrastructure would be very vulnerable to unconstrained systems with these drives.’ Omohundro.

Shane Legg, co-founder and chief scientist of Deep Mind, said publicly a few years ago that there is a 50% probability that we will achieve human level AI by 2028, a 90% probability by 2050, and ‘I think human extinction will probably occur’. Given Deep Mind’s progress since he said this, it is surely unlikely he now thinks the odds are lower than 50% by 2028. Some at the leading edge of the field agree.

‘I think that within a few years we’ll be able to build an NN-based [neural network] AI (an NNAI) that incrementally learns to become at least as smart as a little animal, curiously and creatively learning to plan, reason and decompose a wide variety of problems into quickly solvable sub-problems. Once animal-level AI has been achieved, the move towards human-level AI may be small: it took billions of years to evolve smart animals, but only a few millions of years on top of that to evolve humans. Technological evolution is much faster than biological evolution, because dead ends are weeded out much more quickly. Once we have animal-level AI, a few years or decades later we may have human-level AI, with truly limitless applications. Every business will change and all of civilisation will change…

‘In 2050 there will be trillions of self-replicating robot factories on the asteroid belt. A few million years later, AI will colonise the galaxy. Humans are not going to play a big role there, but that’s ok. We should be proud of being part of a grand process that transcends humankind.’ Schmidhuber, one of the pioneers of ML, 2016.

Others have said they believe that estimates of AGI within 15-30 years are unlikely to be right. Two of the smartest people I’ve ever spoken to are physicists who understand the technical details and know the key researchers. They think that dozens of Nobel Prize scale ideas will probably be needed before AGI happens and that the current wave of enthusiasm with machine learning/neural networks is more likely to repeat previous cycles in science (e.g. with quantum computing 20 years ago): great enthusiasm, the feeling that all barriers are quickly falling, then an increasingly obvious plateau, spreading disillusion, a search for new ideas, then a revival of hope and so on. They would bet more on a 50-80 year than a 20 year scale.

Among the top people I have spoken to and/or whose predictions I have followed, there is a clear consensus that mainstream economic analysis (which is the foundation of politicians’ and media discussion) seriously underestimates the scale and speed of social/economic/military/political disruption that narrow AI/automation will soon cause. But predictions on AGI are, unsurprisingly, all over the place.

Chart: predictions on AGI timelines (When Will AI Exceed Human Performance? Evidence from AI Experts)

Screenshot 2019-02-28 10.00.31

Screenshot 2019-02-28 10.22.40

Many argue that even if Moore’s Law continues for 30 years (a millionfold performance improvement) this may mean nothing significant for general intelligence, even if narrow AI transforms the world in many ways. Some experts think that estimates of the human brain’s computational capacity widely believed in the computer science world are actually orders of magnitude wrong. We still don’t know much about basics of the brain, such as how long-term memories are formed. Maybe the brain’s processes will be much more resistant to understanding than ‘optimists’ assume.

But maybe relatively few big new ideas are needed to create world-changing capabilities. ‘Just’ applying great engineering and more resources to existing ideas allowed Deep Mind to blow past human performance metrics. I obviously cannot judge competing expert views, but from a political perspective we know for sure that there is inherent uncertainty in how we discover new knowledge, which means we are bound to be surprised in all sorts of ways. We know that even brilliant researchers working right at the edge of progress are often clueless about what will happen quite soon and cannot reliably judge questions like ‘is it less than 1% or more like 20% probability?’. For example:

‘In 1901, two years before helping build the first heavier-than-air flyer, Wilbur Wright told his brother that powered flight was fifty years away. In 1939, three years before he personally oversaw the first critical chain reaction in a pile of uranium bricks, Enrico Fermi voiced 90% confidence that it was impossible to use uranium to sustain a fission chain reaction.’ (Yudkowsky)

Fermi’s experience suggests we should be extremely careful and put more resources into thinking very hard about how to minimise risks viz both narrow and general AI.

Those right at the edge of genetic engineering, such as George Church and Kevin Esvelt, are pushing for their field to be forcibly opened up to make it safer. As they argue, the current scientific approach and incentive system is essentially a ‘blind search algorithm’ in which small teams work in secret, unable to predict the consequences of their work and unable to be warned by those who do understand them. A blind search algorithm is a bad approach for things like bioweapons that can destroy billions of lives, yet it is what we now have. The same argument applies to AGI.

We also know that political people and governments are slow to cope with major technological disruptions. Just look at TV. It has been dominating politics since the 1950s. It is roughly 70 years old. Many politicians still do not understand it well. The UK state and political parties are in many ways much less sophisticated in their use of TV than groups like Hezbollah. This is even more true of social media. Also look at how unfounded conspiracy theories about fake news and social media viz the referendum and Trump have gripped much of the ‘educated’ class that thinks it sees through the fake news that fools the uneducated! Journalists are awarded THE ORWELL AWARD(!) for spreading fake news about fake news (and it’s not ‘lies’, they actually believe what they say)! (My experience is that it’s much easier to fool people about politics if they have a degree than if they don’t, because those with a degree tend to spend so much more energy fooling themselves.) This is not encouraging, particularly if one considers that politicians are directly incentivised to understand technologies like TV and internet polling for their own short-term interests, yet most don’t.

From cars to planes, it has taken time for us to work out how to adapt to new things that can kill us. Given that 1) conventional research is ‘a blind search algorithm’, 2) our politicians are behind the curve on 70 year-old technologies and 3) there is little prospect of this changing without huge changes to conventional models of politics, we must ask another question about secrecy vs openness and centralised vs decentralised architectures.

One of the leaders of the 3D printing / FabLab revolution wrote this comparing the closed v open models of security:

‘The history of the Internet has shown that security through obscurity doesn’t work. Systems that have kept their inner workings a secret in the name of security have consistently proved more vulnerable than those that have allowed themselves to be examined — and challenged — by outsiders. The open protocols and programs used to protect Internet communications are the result of ongoing development and testing by a large expert community. Another historical lesson is that people, not technology, are the most common weakness when it comes to security. No matter how secure a system is, someone who has access to it can always be corrupted, wittingly or otherwise. Centralized control introduces a point of vulnerability that is not present in a distributed system.’ (Bold added)

As we saw above, the centralised approach has been a disaster for nuclear weapons and we survived by fluke. Overall the history of nuclear security is surely a very relevant and bad signal for AI safety. I would bet a lot that Deep Mind et al are all hacked and spied on by China and Russia (at least) so I think it’s safest to plan on the assumption that dangerous breakthroughs will leak almost instantly and could be applied by the sort of people who spy for intel agencies. So it is natural to ask, should we take an open/decentralised approach towards possible AGI?

(Tangential thought experiment: if you were in charge of an organisation like the KGB, why would you not hack hedge funds like Renaissance Technologies and use the information for your own ‘black’ hedge fund, dodging the need for arguments over funding? A ‘virtuous’ circle of espionage, free money, and resources for more effective R&D and espionage, which also minimises the need for irritating interactions with politicians. How hard would it be to detect such activity IF done with intelligent modesty? Given that someone can hack the NSA without their identity being revealed, why would they not be hacking Renaissance and Deep Mind, with a bit of help from a Milla Jovovich lookalike who’s reading a book on n-dimensional string theory at the bar when that exhausted physics PhD with the access codes staggers in to relax?)

This seems to collide with another big problem — the alignment problem.

Stuart Russell, one of the world’s leading researchers, is one of those who has been very forceful about the fundamental importance of this: how do we GUARANTEE that entities more intelligent than us are aligned with humanity’s interests?

‘One [view] is: It’ll never happen, which is like saying we are driving towards the cliff but we’re bound to run out of gas before we get there. And that doesn’t seem like a good way to manage the affairs of the human race. And the other [view] is: Not to worry — we will just build robots that collaborate with us and we’ll be in human-robot teams. Which begs the question: If your robot doesn’t agree with your objectives, how do you form a team with it?’

Eliezer Yudkowsky, one of the few working on the alignment problem, described the difficulty:

‘How do you encode the goal functions of an A.I. such that it has an Off switch and it wants there to be an Off switch and it won’t try to eliminate the Off switch and it will let you press the Off switch, but it won’t jump ahead and press the Off switch itself? And if it self-modifies, will it self-modify in such a way as to keep the Off switch? We’re trying to work on that. It’s not easy… When you’re building something smarter than you, you have to get it right on the first try.’

So, we know centralised systems are very vulnerable and decentralised systems have advantages, but with AGI we also have to fear that we have no room for the trial-and-error of decentralised internet style security architectures — ‘you have to get it right on the first try’. Are we snookered?! And of course there is no guarantee it is even possible to solve the alignment problem. When you hear people in this field describing ideas about ‘abstracting human ethics and encoding them’ one wonders if solving the alignment problem might prove even harder than AGI — maybe only an AGI could solve it…

Given the media debate is dominated by endless pictures of the Terminator and politicians are what they are, researchers are, understandably, extremely worried about what might happen if the political-media system makes a sudden transition from complacency to panic. After all, consider the global reaction if reputable scientists suddenly announced they have discovered plausible signals that super-intelligent aliens will arrive on earth within 30 years: even when softened by caveats, such a warning would obviously transform our culture (in many ways positively!). As Peter Thiel has said, creating true AGI is a close equivalent to the ‘super-intelligent aliens arriving on earth’ scenario and the most important questions are not economic but political, and in particular: are they friendly and can we stop them eliminating us by design, bad luck, or indifference?

Further, in my experience extremely smart technical people are often naive about politics. They greatly over-estimate the abilities of prime ministers and presidents. They greatly under-estimate the incentive problems and the degree of focus that is required to get ANYTHING done in politics. They greatly exaggerate the potential for ‘rational argument’ to change minds and wrongly assume that somewhere at the top of power ‘there must be’ a group of really smart people working on very dangerous problems who have real clout. Further, everybody thinks they understand ‘communication’ but almost nobody does. We can see from recent events that even the very best engineering companies like Facebook and Google can not only make huge mistakes in the political/communication world but also fail to learn from them (Facebook hiring Clegg was a sign of deep ignorance inside Facebook about their true problems). So it’s hard to be optimistic about the technical people educating the political people, even assuming the technical people make progress with safety.

Hypothesis: 1) minimising nuclear/bio/AI risks and the potential for disastrous climate change requires a few very big things to change roughly simultaneously (‘normal’ political action will not be enough) and 2) this will require a weird alliance between a) technical people, b) political ‘renegades’, c) the public to ‘surround’ political Insiders locked into existing incentives:

  1. Different ‘models for effective action’ among powerful people, which will only happen if either (A) some freak individual/group pops up, probably in a crisis environment or (B) somehow incentives are hacked. (A) can’t be relied on and (B) is very hard.
  2. A new institution with global reach that can win global trust and support is needed. The UN is worse than useless for these purposes.
  3. Public opinion will have to be mobilised to overcome the resistance of political Insiders, for example, regarding the potential for technology to bring very large gains ‘to me’ and simultaneously avert extreme dangers. This connects to the very widespread view that a) the existing economic model is extremely unfair and b) this model is sustained by a loose alliance of political elites and corporate looters who get richer by screwing the rest of us.

I have an idea about a specific project, mixing engineering/economics/psychology/politics, that might do this and will blog on it separately.

I suspect almost any idea that could do 1-3 will seem at least weird but without big changes, we are simply waiting for the law of averages to do its thing. We may have decades for AGI and climate change but we could collide with the WMD law of averages tomorrow so, impractical as this sounds, it seems to me people have to try new things and risk failure and ridicule.

Please leave comments/corrections below…

Further reading

An excellent essay by Ian Hogarth, AI nationalism, which covers some of the same ground but is written by someone with deep connections to the field whereas I am extremely non-expert but interested.

AI safety is one of those subjects that is taken extremely seriously by a tiny number of people who have almost zero overlap with the policy/government world. If interested, then follow @ESYudkowsky. Cf. Intelligence Explosion Microeconomics, Yudkowsky.

Drones go to work, Chris Anderson (one of the pioneers of commercial drones). This explains the economics and how drones are transforming industries.

Meditations on Moloch, Scott Alexander. This is an extremely good essay in general about deep problems with our institutions but it touches on AI too.

Autonomous technology and the greater human good. Omohundro. ‘Military and economic pressures are driving the rapid development of autonomous systems. We show that these systems are likely to behave in anti-social and harmful ways unless they are very carefully designed. Designers will be motivated to create systems that act approximately rationally and rational systems exhibit universal drives towards self-protection, resource acquisition, replication and efficiency. Those drives will lead to anti-social and dangerous behaviour if not explicitly countered. The current computing infrastructure would be very vulnerable to unconstrained systems with these drives. We describe the use of formal methods to create provably safe but limited autonomous systems. We then discuss harmful systems and how to stop them. We conclude with a description of the ‘Safe-AI Scaffolding Strategy’ for creating powerful safe systems with a high confidence of safety at each stage of development.’ I strongly recommend reading this paper if interested in this blog.

Can intelligence explode? Hutter.

Read this 1955 essay by von Neumann, ‘Can we survive technology?’. VN was involved in the Manhattan Project, inventing computer science, game theory and much more. This essay explored the essential problem that the scale and speed of technological change have suddenly blown past political institutions. ‘For progress there is no cure…’

The recent Science piece on Qian Xuesen and systems management is HERE.

Qian Xuesen – Open Complex Giant System, 1993.

I wrote this (2018) about the extraordinary ARPA-PARC episode, which created much of the ecosystem for interactive personal computing and the internet and provided a model for how to conduct high-risk-high-payoff technology research.

I wrote this Jan 2017 on systems management, von Neumann, Apollo, Mueller etc. It provides a checklist for how to improve Whitehall systematically and deliver complex projects like Brexit.

The Hollow Men (2014) that summarised the main problems of Westminster and Whitehall.

For some pre-history on computers, cf. The birth of computational thinking (some of the history of computing devices before the Gödel/Turing/von Neumann revolution) and for the next phase in the story — some of the history of ideas about mathematical foundations and logic such as the papers by Gödel and Turing in the 1930s — cf. The crisis of mathematical paradoxes, Gödel, Turing and the basis of computing.

My review of Allison’s book on the US-China contest and some thoughts on how Bismarck would see it.

On ‘Expertise’ from fighting and physics to economics, politics and government.

I blogged a few links to AI papers HERE.

On the referendum #30: Genetics, genomics, predictions & ‘the Gretzky game’ — a chance for Britain to help the world

Britain could contribute huge value to the world by leveraging existing assets, including scientific talent and how the NHS is structured, to push the frontiers of a rapidly evolving scientific field — genomic prediction — that is revolutionising healthcare in ways that give Britain some natural advantages over Europe and America. We should plan for free universal ‘SNP’ genetic sequencing as part of a shift to genuinely preventive medicine — a shift that will lessen suffering, save money, help British advanced technology companies in genomics and data science/AI, make Britain more attractive for scientists and global investment, and extend human knowledge in a crucial field to the benefit of the whole world.

‘SNP’ sequencing means, crudely, looking at the million or so most informative markers or genetic variants without sequencing every base pair in the genome. SNP sequencing costs ~$50 per person (less at scale), whole genome sequencing costs ~$1,000 per person (less at scale). The former captures most of the predictive power now possible at 1/20th of the cost of the latter.

*

Background: what seemed ‘sci fi’ ~2010-13 is now reality

In my 2013 essay on education and politics, I summarised the view of expert scientists on genetics (HERE between pages 49-51, 72-74, 194-203). Although this was only a small part of the essay most of the media coverage focused on this, particularly controversies about IQ.

Regardless of political affiliation, most of the policy/media world, as a subset of ‘the educated classes’ in general, tended to hold a broadly ‘blank slate’ view of the world, mostly uninformed by decades of scientific progress. Technical terms like ‘heritability’, which refers to the proportion of variance in a trait across a population that is attributable to genetic differences (not to any individual), caused a lot of confusion.

When my essay hit the media, fortunately for me the world’s leading expert, Robert Plomin, told hacks that I had summarised the state of the science accurately. (I never tried to ‘give my views on the science’ as I don’t have ‘views’ — all people like me can try to do with science is summarise the state of knowledge in good faith.) Quite a lot of hacks then spent some time talking to Plomin and some even wrote about how they came to realise that their assumptions about the science had been wrong (e.g Gaby Hinsliff).

Many findings are counterintuitive, to say the least. Almost everybody naturally thinks that ‘the shared environment’, in the form of parental influence, ‘obviously’ has a big impact on things like cognitive development. The science says this intuition is false. The shared environment is much less important than we assume and has very little measurable effect on cognitive development: e.g. an adopted child who does an IQ test in middle age will show on average almost no correlation with the parents who brought them up (genes become more influential as you age). People in the political world assumed a story of causation in which, crudely, wealthy people buy better education and this translates into better exam and IQ scores. The science says this story is false. Environmental effects on things like cognitive ability and educational achievement come almost entirely from what is known as the ‘non-shared environment’, which has proved very hard to pin down (environmental effects that differ between children, like random exposure to chemicals in utero). Further, ‘The case for substantial genetic influence on g [g = general intelligence ≈ IQ] is stronger than for any other human characteristic’ (Plomin), and g/IQ has far more predictive power for future education than class does. All this has been known for years, sometimes decades, by expert scientists but is so contrary to what well-educated people want to believe that it was hardly known at all in ‘educated’ circles that make and report on policy.
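The point that heritability refers to variance in populations can be made concrete with the classic twin design: identical twins share all their genes, fraternal twins on average half, and comparing the two correlations separates genetic variance from shared-environment variance (Falconer’s formula). This is a toy simulation of that logic, not how modern DNA-based estimates are done:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000                 # twin pairs of each type
h2, c2 = 0.6, 0.1         # true genetic and shared-environment variance shares
e2 = 1 - h2 - c2          # non-shared environment

def twin_correlation(genetic_overlap):
    """Simulate n twin pairs sharing a fraction of their genetic variance."""
    shared_g = rng.normal(size=n) * np.sqrt(genetic_overlap * h2)
    own_g1 = rng.normal(size=n) * np.sqrt((1 - genetic_overlap) * h2)
    own_g2 = rng.normal(size=n) * np.sqrt((1 - genetic_overlap) * h2)
    shared_c = rng.normal(size=n) * np.sqrt(c2)
    t1 = shared_g + own_g1 + shared_c + rng.normal(size=n) * np.sqrt(e2)
    t2 = shared_g + own_g2 + shared_c + rng.normal(size=n) * np.sqrt(e2)
    return np.corrcoef(t1, t2)[0, 1]

r_mz = twin_correlation(1.0)   # identical twins share all genes
r_dz = twin_correlation(0.5)   # fraternal twins share half on average
h2_est = 2 * (r_mz - r_dz)     # Falconer's formula for heritability
c2_est = 2 * r_dz - r_mz       # implied shared-environment share
print(f"estimated heritability ~ {h2_est:.2f}, shared environment ~ {c2_est:.2f}")
```

The shared-environment share recovered here is small by construction (0.1); the counterintuitive empirical finding described above is that real estimates for adult cognitive ability come out close to zero.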

Another big problem is that widespread ignorance about genetics extends to social scientists/economists, who are much more influential in politics/government than physical scientists. A useful heuristic is to throw ~100% of what you read from social scientists about ‘social mobility’ in the bin. Report after report repeats the same clichés, repeats factual errors about genetics, and is turned into talking points for MPs as justification for pet projects. ‘Kids who can read well come from homes with lots of books so let’s give families with kids struggling to read more books’ is the sort of argument you read in such reports without any mention of the truth: children and parents share genes that make them good at and enjoy reading, so causation is operating completely differently to the assumptions. It is hard to overstate the extent of this problem. (There are things we can do about ‘social mobility’, my point is Insider debate is awful.)

A related issue is that really understanding the science requires serious understanding of statistics and, now, AI/machine learning (ML). Many social scientists do not have this training. This problem will get worse as data science/AI invades the field. 

A good example is ‘early years’ and James Heckman. The political world is obsessed with ‘early years’ programmes such as Sure Start (UK) and Head Start (US). Politicians latch onto any ‘studies’ that seem to justify them, and few have any idea about the shocking state of the studies usually quoted to justify spending decisions. Heckman has published many papers on early years and they are understandably widely quoted by politicians and the media. Heckman is a ‘Nobel Prize’ winner in economics. One of the world’s leading applied statisticians, Professor Andrew Gelman, has explained how Heckman has repeatedly made statistical errors in his papers but does not correct them: cf. How does a Nobel-prize-winning economist become a victim of bog-standard selection bias? This really shows the scale of the problem: if a Nobel-winning economist makes ‘bog standard’ statistical errors that confuse him about studies on pre-school, what chance do the rest of us in the political/media world have?

Consider further that genomics now sometimes applies very advanced mathematical ideas such as ‘compressed sensing’. Inevitably few social scientists can judge such papers, but they are overwhelmingly responsible for interpreting such things for ministers and senior officials. This is compounded by the dominance of social scientists in Whitehall units responsible for data and evidence. Many of these units are unable to provide proper scientific advice to ministers (I have had personal experience of this in the Department for Education). Two excellent articles by Duncan Watts recently explained fundamental problems with social science and what could be done (e.g. a much greater focus on successful prediction) but as far as I can tell they have had no impact on economists and sociologists, who do not want to face their lack of credibility and whose incentives in many ways push them towards continued failure (Nature paper HERE, Science paper HERE — NB. the Department for Education did not even subscribe to the world’s leading science journals until I insisted in 2011).

1) The fact that the evidence for early years is not what ministers and officials think it is does not mean funding should stop, but I won’t go into this now. 2) This problem is incontrovertible evidence, I think, of the value of an alpha data science unit in Downing Street, able to plug into the best researchers around the world, and to ensure that policy decisions are taken on the basis of rational thinking and good science or, just as important, that everybody is aware they have to make decisions in the absence of this. This unit would pay for itself in weeks by identifying flawed reasoning and stopping bad projects, gimmicks etc. Of course, this idea has no chance with those now at the top of Government, and the Cabinet Office would crush such a unit as it would threaten the traditional hierarchy.

One of the arguments I made in my essay was that we should try to discover useful and reliable benchmarks for what children of different abilities are really capable of learning, and build on things like the landmark Study of Mathematically Precocious Youth (SMPY). This obvious idea is anathema to the education policy world, where there is almost no interest in things like SMPY and almost everybody supports the terrible idea that ‘all children must do the same exams’ (guaranteeing misery for some and boredom/time-wasting for others). NB. Most rigorous large-scale educational RCTs are uninformative. Education research, like psychology, produces a lot of what Feynman called ‘cargo cult science’.

Since 2013, genomics has moved fast and understanding in the UK media has changed probably faster in five years than over the previous 35 years. As with the complexities of Brexit, journalists have caught up with reality much better than MPs. It’s still true that almost everything written by MPs about ‘social mobility’ is junk but you could see from the reviews of Plomin’s recent book, Blueprint, that many journalists have a much better sense of the science than they did in 2013. Rare good news, though much more progress is needed…

*

What’s happening now?


In 2013 it was already the case that the numbers on heritability derived from twin and adoption studies were being confirmed by direct inspection of DNA — therefore many of the arguments about twin/adoption studies were redundant — but this fact was hardly known.

I pointed out that the field would change fast. Both Plomin and another expert, Steve Hsu, made many predictions around 2010-13 some of which I referred to in my 2013 essay. Hsu is a physics professor who is also one of the world’s leading researchers on genomics. 

Hsu predicted that very large samples of DNA would allow scientists over the next few years to start identifying the actual genes responsible for complex traits, such as diseases and intelligence, and make meaningful predictions about the fate of individuals. Hsu gave estimates of the sample sizes that would be needed. His 2011 talk contains some of these predictions and also provides a physicist’s explanation of ‘what is IQ measuring’. As he said at Google in 2011, the technology is ‘right on the cusp of being able to answer fundamental questions’ and ‘if in ten years we all meet again in this room there’s a very good chance that some of the key questions we’ll know the answers to’. His 2014 paper explains the science in detail. If you spend a little time looking at this, you will know more than 99% of high status economists gabbling on TV about ‘social mobility’ saying things like ‘doing well on IQ tests just proves you can do IQ tests’.

In 2013, the world of Westminster thought this all sounded like science fiction and many MPs said I sounded like ‘a mad scientist’. Hsu’s predictions have come true and just five years later this is no longer ‘science fiction’. (Also NB. Hsu’s blog was one of the very few places where you would have seen discussion of CDOs and the 2008 financial crash long BEFORE it happened. I have followed his blog since ~2004 and this post from 2005, two years before the crash started, was the first time I read about things like ‘synthetic CDOs’: ‘we have yet another ill-understood casino running, with trillions of dollars in play’. The quant-physics network had much better insight into the dynamics behind the 2008 Crash than high status mainstream economists, like Larry Summers, who were responsible for regulation.)

His group and others have applied machine learning to very large genetic samples and built predictors of complex traits. Complex traits like general intelligence and most diseases are ‘polygenic’ — they depend on many genes each of which contributes a little (unlike diseases caused by a single gene). 
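To make ‘polygenic’ prediction concrete, here is a minimal sketch with simulated data. It is an illustration of the additive logic only, not the actual pipeline used by Hsu’s group and others, which applies far more sophisticated methods (e.g. penalised regression) to hundreds of thousands of real genomes:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, p, n_causal = 4000, 1000, 200, 20

# Simulated genotypes: 0/1/2 copies of a variant at each of p SNPs, standardised.
X = rng.binomial(2, 0.5, size=(n_train + n_test, p)).astype(float)
X = (X - X.mean(axis=0)) / X.std(axis=0)

# A polygenic trait: a handful of causal variants, each contributing a little.
beta = np.zeros(p)
beta[rng.choice(p, n_causal, replace=False)] = rng.normal(size=n_causal)
g = X @ beta
h2 = 0.5  # half the trait variance is genetic in this toy model
y = g + rng.normal(size=len(g)) * np.sqrt(g.var() * (1 - h2) / h2)

X_train, y_train = X[:n_train], y[:n_train]
X_test, y_test = X[n_train:], y[n_train:]

# GWAS-style per-SNP effect estimates, summed into a score for unseen people.
beta_hat = X_train.T @ (y_train - y_train.mean()) / n_train
score = X_test @ beta_hat
r = np.corrcoef(score, y_test)[0, 1]
print(f"out-of-sample correlation between score and trait: {r:.2f}")
```

The out-of-sample correlation is the point: the predictor is validated on people it has never seen, which is what the real papers report.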

‘There are now ~20 disease conditions for which we can identify, e.g, the top 1% outliers with 5-10x normal risk for the disease. The papers reporting these results have almost all appeared within the last year or so.’


For example, the height predictor ‘captures nearly all of the predicted SNP heritability for this trait — actual heights of most individuals in validation tests are within a few cm of predicted heights.’ Height is similar to IQ — polygenic and similar heritability estimates.


These predictors have been validated with out-of-sample tests. They will get better and better as more and more data is gathered about more and more traits. 

This enables us to take DNA from unborn embryos, do SNP genetic sequencing costing ~$50, and make useful predictions about the odds of the embryo being an outlier for diseases like atrial fibrillation, diabetes, breast cancer, or prostate cancer. NB. It is important that we do not need to sequence the whole genome to do this (see below). We will also be able to make predictions about outliers in cognitive abilities (the high and low ends). (My impression is that predicting Alzheimers is still hampered by a lack of data but this will improve as the data improves.)
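The ‘top 1% outliers with 5-10x normal risk’ arithmetic can be sketched with the standard liability threshold model: everyone has a latent ‘liability’, the score predicts part of it, and disease strikes those above a cutoff. The 20% of liability variance explained and the 5% prevalence below are my illustrative assumptions, not figures from the papers:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
r2 = 0.20          # liability variance explained by the score (assumption)
prevalence = 0.05  # disease affects 5% of the population (assumption)

# Latent liability = predicted part (the polygenic score) + unexplained part.
score = rng.normal(size=n) * np.sqrt(r2)
liability = score + rng.normal(size=n) * np.sqrt(1 - r2)
cutoff = np.quantile(liability, 1 - prevalence)
disease = liability > cutoff

# Relative risk for people in the top 1% of the score distribution.
top_1pct = score > np.quantile(score, 0.99)
risk_ratio = disease[top_1pct].mean() / prevalence
print(f"top-1% score outliers have ~{risk_ratio:.1f}x the average risk")
```

With these assumptions the top 1% carry roughly 6x the average risk; stronger or weaker predictors and different prevalences move the multiple around, consistent with the 5-10x range quoted above.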

There are many big implications. This will obviously revolutionise IVF. ~1 million IVF embryos per year are screened worldwide using less sophisticated tests. Instead of picking embryos at random, parents will start avoiding outliers for disease risks and cognitive problems. Rich people will fly to jurisdictions offering the best services.

Forensics is being revolutionised. First, DNA samples can be used to give useful physical descriptions of suspects because you can identify ethnic group, height, hair colour etc. Second, ‘cold cases’ are now routinely being solved because if a DNA sample exists, then the police can search for cousins of the perpetrator in public DNA databases, then use the cousins to identify suspects. Every month or so now in America a cold case murder is solved and many serial killers are being found using this approach — just this morning I saw another example announced, the murder of an 11 year-old in 1973. (Some companies are resisting this development but they will, I am very confident, be smashed in court and have their reputations trashed unless they change policy fast. The public will have no sympathy for those who stand in the way.)

Hsu recently attended a conference in the UK where he presented some of these ideas to UK policy makers. He wrote this blog about the great advantages the NHS has in developing this science. 

‘The UK could become the world leader in genomic research by combining population-level genotyping with NHS health records… The US private health insurance system produces the wrong incentives for this kind of innovation: payers are reluctant to fund prevention or early treatment because it is unclear who will capture the ROI [return on investment]… The NHS has the right incentives, the necessary scale, and access to a deep pool of scientific talent. The UK can lead the world into a new era of precision genomic medicine.

‘NHS has already announced an out-of-pocket genotyping service which allows individuals to pay for their own genotyping and to contribute their health + DNA data to scientific research. In recent years NHS has built an impressive infrastructure for whole genome sequencing (cost ~$1k per individual) that is used to treat cancer and diagnose rare genetic diseases. The NHS subsidiary Genomics England recently announced they had reached the milestone of 100k whole genomes…

‘At the meeting, I emphasized the following:

1. NHS should offer both inexpensive (~$50) genotyping (sufficient for risk prediction of common diseases) along with the more expensive $1k whole genome sequencing. This will alleviate some of the negative reaction concerning a “two-tier” NHS, as many more people can afford the former.

2. An in-depth analysis of cost-benefit for population wide inexpensive genotyping would likely show a large net cost savings: the risk predictors are good enough already to guide early interventions that save lives and money. Recognition of this net benefit would allow NHS to replace the $50 out-of-pocket cost with free standard of care.’ (Emphasis added)

NB. In terms of the short-term practicalities it is important that whole genome sequencing costs ~$1,000 (and falling) but is not necessary: a version at 1/20th of the cost, looking just at the most informative genetic variants, captures most of the predictive benefits. Some, such as companies like Illumina trying to sell expensive machines for whole genome sequencing, have incentives to obscure this and to distort policy — let’s hope officials are watching carefully. These costs will, obviously, keep falling.

This connects to an interesting question… Why was the likely trend in genomics clear ~2010 to Plomin, Hsu and others but invisible to most? Obviously this involves lots of elements of expertise and feel for the field but also they identified FAVOURABLE EXPONENTIALS. Here is the fall in the cost of sequencing a genome compared to Moore’s Law, another famous exponential. The drop over ~18 years has been a factor of ~100,000. Hsu and Plomin could extrapolate that over a decade and figure out what would be possible when combined with other trends they could see. Researchers are already exploring what will be possible as this trend continues.
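The scale of that exponential is worth spelling out: a ~100,000x fall over ~18 years means the cost halved roughly every 13 months, i.e. faster than Moore’s Law’s commonly quoted ~18 months, which is why the sequencing-cost curve dives below it:

```python
import math

total_fall = 1e5   # ~100,000x drop in the cost of sequencing a genome
years = 18

halvings = math.log2(total_fall)        # number of halvings in the total fall
months_per_halving = years * 12 / halvings
print(f"{halvings:.1f} halvings -> cost halves every ~{months_per_halving:.0f} months")
```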

Figure: the fall in the cost of sequencing a genome vs Moore’s Law

Identifying favourable exponentials is extremely powerful. Back in the early 1970s, the greatest team of computer science researchers ever assembled (PARC) looked out into the future and tried to imagine what could be possible if they brought that future back to the present and built it. They were trying to ‘compute in the future’. They created personal computing. (Chart by Alan Kay, one of the key researchers — he called it ‘the Gretzky game’ because of Gretzky’s famous line ‘I skate to where the puck is going to be, not where it has been.’ The computer is the Alto, the first personal computer that stunned Steve Jobs when he saw a demo. The sketch on the right is of children using a tablet device that Kay drew decades before the iPad was launched.)

Figure: Alan Kay’s ‘Gretzky game’ chart, the Alto, and his sketch of children using a tablet

Hopefully the NHS and the Department of Health will play ‘the Gretzky game’, take expert advice from the likes of Plomin and Hsu, and take this opportunity to make the UK a world leader in one of the most important frontiers in science.

  • We can imagine everybody in the UK being given valuable information about their health for free, truly preventive medicine where we target resources at those most at risk, and early (even in utero) identification of risks.
  • This would help bootstrap British science into a stronger position with greater resources to study things like CRISPR and the next phase of this revolution — editing genes to fix problems, where clinical trials are already showing success.
  • It would also give a boost to British AI/data science companies — the laws, rules on data etc should be carefully shaped to ensure that British companies (not Silicon Valley or China) capture most of the financial value (though everybody will gain from the basic science).
  • These gains would have positive feedback effects on each other, just as investment in basic AI/ML research will have positive feedback effects in many industries.
  • I have argued many times for the creation of a civilian UK ‘ARPA’ — a centre for high-risk-high-payoff research — but it has been consistently blocked in Whitehall (see HERE for an account of how ARPA-PARC created the internet and personal computing). This fits naturally with Britain seeking to lead in genomics/AI. Thinking about this is part of a desperately needed overall investigation into the productivity of the British economy and the ecosystem of universities, basic science, venture capital, startups, regulation (data, intellectual property etc) and so on.

There will also be many controversies and problems. The ability to edit genomes — and even edit the germline with ‘gene drives’ so all descendants have the same copy of the gene — is a Promethean power implying extreme responsibilities. On a mundane level, embracing new technology is clearly hard for the NHS given its data infrastructure. Almost everyone I speak to who uses the NHS has had problems similar to mine: nightmares with GPs, hospitals, consultants et al being unable to share data and records, things going missing, etc. The NHS will be crippled if it can’t fix this, which is another reason to integrate data science as a core ‘utility’ for the NHS.

On a political note…

Few scientists and even fewer in the tech world are aware of the EU’s legal framework for regulating technology and the implications of the recent Charter of Fundamental Rights (the EU’s Charter, NOT the ECHR), which gives the Commission/ECJ the power to regulate any advanced technology; this will accelerate the EU’s irrelevance and incentivise investors to invest outside the EU. In many areas, the EU regulates to help the worst sort of giant corporate looters defend their positions against entrepreneurs. Post-Brexit Britain will be outside this jurisdiction and able to make faster and better decisions about regulating technology like genomics, AI and robotics. Prediction: just as Insiders now talk of how we ‘dodged a bullet’ in staying out of the euro, within ~10 years Insiders will talk in similar terms about being outside the Charter/ECJ and the EU’s regulation of data/AI (assuming Brexit happens and UK politicians even try to do something other than copy the EU’s rules).

China is pushing very hard on genomics/AI and regards such fields as crucial strategic ground in its struggle for supremacy with America. America faces political and regulatory barriers on genomics that are much weaker here. Britain cannot stop the development of such science. It can choose to be a backwater, to ignore such things and listen to MPs telling fairy stories while the Chinese plough ahead, or it can try to lead. But there is no hiding from the truth and ‘for progress there is no cure’ (von Neumann). We will never be the most important manufacturing nation again but we could lead in crucial sub-fields of advanced technology. As ARPA-PARC showed, tiny investments can create entire new industries and trillions of dollars of value.

Sadly most politicians of Left and Right have little interest in science funding, despite its tremendous implications for future growth, or in the broader question of productivity and the ecosystem of science, entrepreneurs, universities, funding, regulation etc. We desperately need institutions that incentivise politicians and senior officials to ‘play the Gretzky game’. The next few months will be dominated by Brexit and, hopefully, the replacement of the May/Hammond government. Those thinking about the post-May landscape and trying to figure out how to navigate in uncharted and turbulent waters should focus on one of the great lessons of politics that is weirdly hard for many MPs to internalise: the public rewards sustained focus on their priorities!

One of the lessons of the 2016 referendum (that many Conservative MPs remain desperate not to face) is the political significance of the NHS. The concept described above is one of those rare concepts in politics that maximises positive futures for the force that adopts it, because it draws on multiple sources of strength. It combines, inter alia, the political benefits of focus on the NHS, help for domestic technology companies, incentives for global investment, and a demonstration to the world that Britain is (contra the May/Hammond outlook) open to science and high-skilled immigrants. It is based on intrinsic advantages that Europe and America will find hard to overcome over a decade. It supplies (NB. MPs/spads) a never-ending string of heart-wrenching good news stories. And, very rare in SW1, those pushing it would be seen as leading something of global importance. It will, therefore, obviously be rejected by a section of Conservative MPs who much prefer to live in a parallel world, who hate anything to do with science and who are ignorant about how new industries and wealth are really created. But for anybody trying to orient themselves to reality, connect themselves to sources of power, and work out ‘how on earth could we clamber out of this horror show’, it is an obvious home run…

NB. It ought to go without saying that turning this idea into a political/government success requires focus on A) the NHS, health, science, NOT getting sidetracked into B) arguments about things like IQ and social mobility. Over time, the educated classes will continue to be dragged to more realistic views on (B) but this will be a complex process entangled with many hysterical episodes. (A) requires ruthless focus…

Please leave comments, fix errors below. I have not shown this blog in draft to Plomin or Hsu who obviously are not responsible for my errors.

Further reading

Plomin’s excellent new book, Blueprint. I would encourage journalists who want to understand this subject to speak to Plomin who works in London and is able to explain complex technical subjects to very confused arts graduates like me.

On the genetic architecture of intelligence and other quantitative traits, Hsu 2014.

Cf. this thread by researcher Paul Pharaoh on breast cancer.

Hsu blogs on genomics.

Some recent developments with AI/ML, links to papers.

On how ARPA-PARC created the modern computer industry and lessons for high-risk-high-payoff science research.

My 2013 essay.

#29 On the referendum & #4c on Expertise: On the ARPA/PARC ‘Dream Machine’, science funding, high performance, and UK national strategy

Post-Brexit Britain should be considering the intersection of 1) ARPA/PARC-style science research and ‘systems management’ for managing complex projects with 2) the reform of government institutions so that high performance teams — with different education/training (‘Tetlock processes’) and tools (including data science and visualisations of interactive models of complex systems) — can make ‘better decisions in a complex world’.  

This paper examines the ARPA/PARC vision for computing and the nature of the two organisations. In the 1960s visionaries such as Joseph Licklider, Robert Taylor and Doug Engelbart developed a vision of networked interactive computing that provided the foundation not just for new technologies but for whole new industries. Licklider, Sutherland, Taylor et al provided a model (ARPA) for how science funding can work. Taylor provided a model (PARC) of how to manage a team of extremely talented people who turned a profound vision into reality. The original motivation for the vision of networked interactive computing was to help humans make good decisions in a complex world.

This story suggests ideas about how to make big improvements in the world with very few resources if they are structured right. From a British perspective it also suggests ideas about what post-Brexit Britain should do to help itself and the world and how it might be possible to force some sort of ‘phase transition’ on the rotten Westminster/Whitehall system.

For the PDF of the paper click HERE. Please correct errors with page numbers below. I will update it after feedback.

Further Reading

The Dream Machine.

Dealers of Lightning.

‘Sketchpad: A man-machine graphical communication system’, Ivan Sutherland 1963.

Oral history interview with Sutherland, head of ARPA’s IPTO division 1963-5.

This link has these seminal papers:

  • Man-Computer Symbiosis, Licklider (1960)
  • The computer as a communications device, Licklider & Taylor (1968)

Watch Alan Kay explain how to invent the future to YCombinator classes HERE and HERE.  

HERE for Kay quotes from emails with Bret Victor.

HERE for Kay’s paper on PARC, The Power of the Context.

Kay’s Early History of Smalltalk.

HERE for a conversation between Kay and Engelbart.

Alan Kay’s tribute to Ted Nelson at “Intertwingled” Fest (an Alto using Smalltalk).

Personal Distributed Computing: The Alto and Ethernet Software, Butler Lampson.

You and Your Research, Richard Hamming.

AI nationalism, essay by Ian Hogarth. This concerns implications of AI for geopolitics.

Drones go to work, Chris Anderson (one of the pioneers of commercial drones). This explains the economics of the drone industry.

Meditations on Moloch, Scott Alexander. This is an extremely good essay in general about deep problems with our institutions.

Intelligence Explosion Microeconomics, Yudkowsky.

Autonomous technology and the greater human good. Omohundro.

Can intelligence explode? Hutter.

For the issue of IQ, genetics and the distribution of talent (and much much more), cf. Steve Hsu’s brilliant blog.

Bret Victor.

Michael Nielsen.

For some pre-history on computers, cf. The birth of computational thinking (some of the history of computing devices before the Turing/von Neumann revolution) and The crisis of mathematical paradoxes, Gödel, Turing and the basis of computing (some of the history of ideas about mathematical foundations and logic such as the famous papers by Gödel and Turing in the 1930s)

Part I of this series of blogs is HERE.

Part II on the emergence of ‘systems management’, how George Mueller used it to put man on the moon, and a checklist of how successful management of complex projects is systematically different to how Whitehall works is HERE.

On the referendum #28: Some interesting stuff on AI/ML with, hopefully, implications for post-May/Hammond decisions

Here are a few interesting recent papers I’ve read over the past few months.

Bear in mind that Shane Legg, co-founder and chief scientist of Deep Mind, said publicly a few years ago that there’s a 50% probability that we will achieve human level AI by 2028 and a 90% probability by 2050. Given all that has happened since, including at Deep Mind, it’s surely unlikely he now thinks this forecast is too optimistic. Also bear in mind that the US-China AI arms race is already underway, the UK lost its main asset before almost any MPs even knew its name, and the EU in general (outside London) is decreasingly relevant as progress at the edge of the field is driven by coastal America and coastal China, spurred by commercial and national security dynamics. This will get worse as the EU Commission and the ECJ use the Charter of Fundamental Rights to grab the power to regulate all high technology fields from AI to genomics — a legal/power dynamic still greatly under-appreciated in London’s technology world. If you think GDPR is a mess, wait for the ECJ to spend three years deciding crucial cases on autonomous drones and genetic engineering before upending research in the field…

Vote Leave argued during the referendum that a Leave victory should deliver the huge changes that the public wanted and the UK should make science and technology the focus of a profound process of national renewal. On this as on everything else, from Article 50 to how to conduct the negotiations to budget priorities to immigration policy, SW1 in general and the Conservative Party in particular did the opposite of what Vote Leave said. They have driven the country into the ditch and the only upside is they have exposed the rottenness of Westminster and Whitehall and forced many who wanted to keep the duvet over their eyes to face reality — the first step in improvement.

After the abysmal May/Hammond interlude is over, hopefully some time between October 2018 and July 2019, its replacement will need to change course on almost every front, from the NHS to how SW1 pours billions into the greedy paws of corporate looters via its appallingly managed >£200 BILLION annual contracting/procurement budget — ‘there’s no money’ bleats most of SW1 as it unthinkingly shovels it at the demimonde of Carillion/BaE-like companies that prop up its MPs with donations.

May’s replacement could decide to take seriously the economic and technological forces changing the world. The UK could, with a very different vision of the future to anything now proposed in Whitehall, improve its own security and prosperity and help the world but this will require 1) substantially changing the wiring of power in Whitehall so decisions are better (new people, training, ideas, tools, and institutions), and 2) making scientific research and technology projects important at the apex of power. We could build real assets with much greater real influence than the chimerical ‘influence’ in Brussels meeting rooms that SW1 has used as an excuse to give away power to Brussels where thinking is much closer to the 1970s than to today’s coastal China or Silicon Valley. Brushing aside Corbyn would be child’s play for a government that could focus on important questions and took project management — an undiscussable subject in SW1 — seriously.

The whole country — the whole world — can see our rotten parties have failed us. The parties ally with the civil service to keep new ideas and people excluded. SW1 has tried to resist the revolutionary implications of the referendum but this resistance has to crack: one way or the other the old ways are doomed. The country voted for profound change in 2016. The Tories didn’t understand this hence, partly, the worst campaign in modern history. This dire Cabinet, doomed to merciless judgement in the history books, is visibly falling: let’s ‘push what is falling’…

For specific proposals on improving the appalling science funding system, see below.

*

OpenAI, the non-profit co-founded by Sam Altman, made major progress with its Dota-playing AI last week: follow @gdb for updates. Deep Mind is working on Starcraft in a similar vein. Shifting from perfect-information games like GO to imperfect-information strategic games like Dota and Starcraft is a major advance. If AIs shortly beat the best humans at full versions of such games, it means they can outperform at least parts of human reasoning in ways that had been assumed to be many years away. As OpenAI says, it is a major step ‘towards advanced AI systems which can handle the complexity and uncertainty of the real world.’

https://blog.openai.com/openai-five-benchmark-results/

RAND paper on how AI affects the chances of nuclear catastrophe:

https://www.rand.org/content/dam/rand/pubs/perspectives/PE200/PE296/RAND_PE296.pdf

The Malicious Use of Artificial Intelligence:

https://img1.wsimg.com/blobby/go/3d82daa4-97fe-4096-9c6b-376b92c619de/downloads/1c6q2kc4v_50335.pdf

Defense Science Board: ‘Summer Study on Autonomy’ (2016):

http://www.acq.osd.mil/dsb/reports/2010s/DSBSS15.pdf

JASON: ‘Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD’ (2017)

https://fas.org/irp/agency/dod/jason/ai-dod.pdf

Artificial Intelligence and National Security, Greg Allen and Taniel Chan (for IARPA):

Artificial Intelligence and National Security – The Belfer Center for …

Some predictions on driverless cars and other automation milestones: http://rodneybrooks.com/my-dated-predictions/

Project Maven (very relevant to politicians/procurement): https://thebulletin.org/project-maven-brings-ai-fight-against-isis11374

Chris Anderson on drones changing business sectors:

https://hbr.org/cover-story/2017/05/drones-go-to-work

On the trend in AI compute and economic sustainability (NB. I think the author is wrong that the Manhattan Project is a good upper bound for what a country will spend in an arms race; the share of US GDP spent on DoD at the height of the Cold War would be a better metric): https://aiimpacts.org/interpreting-ai-compute-trends/

Read this excellent essay on ‘AI Nationalism’ by Ian Hogarth, directly relevant to arms race arguments and UK policy.

Read ‘Intelligence Explosion Microeconomics’ by Yudkowsky.

Read ‘Autonomous technology and the greater human good’ by Omohundro — one of the best things about the dangers of AGI and ideas about safety I’ve seen by one of the most respected academics working in this field.

Existential Risk: Diplomacy and Governance (Future of Humanity Institute, 2017).

If you haven’t, you should also read this 1955 essay by von Neumann, ‘Can we survive technology?’. It is relevant beyond any specific technology. VN was regarded by the likes of Einstein and Dirac as the smartest person they’d ever met. He was involved in the Manhattan Project and in the invention of computer science, game theory and much more. This essay explored the essential problem that the scale and speed of technological change suddenly blew up assumptions about political institutions’ ability to cope. Much of it reads as if it were written yesterday. ‘For progress there is no cure…’

I blogged on a paper by Judea Pearl a few months ago HERE. He is the leading scholar of causation. He argues that current ML approaches are inherently limited and advance requires giving machines causal reasoning:

‘If we want machines to reason about interventions (“What if we ban cigarettes?”) and introspection (“What if I had finished high school?”), we must invoke causal models. Associations are not enough — and this is a mathematical fact, not opinion.’

I also wrote this recently on science funding which links to a great piece by two young neuroscientists about how post-Brexit Britain should improve science and is also relevant to how the UK could set up an ARPA-like entity to fund AI/ML and other fields:

https://dominiccummings.com/2018/06/08/on-the-referendum-25-how-to-change-science-funding-post-brexit/

 

State-of-the-art in AI #1: causality, hypotheticals, and robots with free will & capacity for evil (UPDATED)

Judea Pearl is one of the most important scholars in the field of causal reasoning. His book Causality is the leading textbook in the field.

This blog has two short parts — a paper he wrote a few months ago and an interview he gave a few days ago.

*

He recently wrote a very interesting (to the very limited extent I understand it) short paper about the limits of state-of-the-art AI systems using ‘deep learning’ neural networks — such as the AlphaGo system which recently conquered the game of GO and AlphaZero which blew past centuries of human knowledge of chess in 24 hours — and how these systems could be improved.

The human ability to interrogate stored representations of their environment with counterfactual questions is fundamental and, for now, absent in machines. (All bold added by me.)

‘If we examine the information that drives machine learning today, we find that it is almost entirely statistical. In other words, learning machines improve their performance by optimizing parameters over a stream of sensory inputs received from the environment. It is a slow process, analogous in many respects to the evolutionary survival-of-the-fittest process that explains how species like eagles and snakes have developed superb vision systems over millions of years. It cannot explain however the super-evolutionary process that enabled humans to build eyeglasses and telescopes over barely one thousand years. What humans possessed that other species lacked was a mental representation, a blue-print of their environment which they could manipulate at will to imagine alternative hypothetical environments for planning and learning…

‘[T]he decisive ingredient that gave our homo sapiens ancestors the ability to achieve global dominion, about 40,000 years ago, was their ability to sketch and store a representation of their environment, interrogate that representation, distort it by mental acts of imagination and finally answer “What if?” kind of questions. Examples are interventional questions: “What if I act?” and retrospective or explanatory questions: “What if I had acted differently?” No learning machine in operation today can answer such questions about actions not taken before. Moreover, most learning machines today do not utilize a representation from which such questions can be answered.

‘We postulate that the major impediment to achieving accelerated learning speeds as well as human level performance can be overcome by removing these barriers and equipping learning machines with causal reasoning tools. This postulate would have been speculative twenty years ago, prior to the mathematization of counterfactuals. Not so today. Advances in graphical and structural models have made counterfactuals computationally manageable and thus rendered meta-statistical learning worthy of serious exploration

Figure: the ladder of causation


‘An extremely useful insight unveiled by the logic of causal reasoning is the existence of a sharp classification of causal information, in terms of the kind of questions that each class is capable of answering. The classification forms a 3-level hierarchy in the sense that questions at level i (i = 1, 2, 3) can only be answered if information from level j (j ≥ i) is available. [See figure]… Counterfactuals are placed at the top of the hierarchy because they subsume interventional and associational questions. If we have a model that can answer counterfactual queries, we can also answer questions about interventions and observations… The translation does not work in the opposite direction… No counterfactual question involving retrospection can be answered from purely interventional information, such as that acquired from controlled experiments; we cannot re-run an experiment on subjects who were treated with a drug and see how they behave had they not been given the drug. The hierarchy is therefore directional, with the top level being the most powerful one. Counterfactuals are the building blocks of scientific thinking as well as legal and moral reasoning…

‘This hierarchy, and the formal restrictions it entails, explains why statistics-based machine learning systems are prevented from reasoning about actions, experiments and explanations. It also suggests what external information need to be provided to, or assumed by, a learning system, and in what format, in order to circumvent those restrictions

[He describes his approach to giving machines the ability to reason in more advanced ways (‘intent-specific optimization’) than standard approaches and the success of some experiments on real problems.]

‘[T]he value of intent-based optimization … contains … the key by which counterfactual information can be extracted out of experiments. The key is to have agents who pause, deliberate, and then act, possibly contrary to their original intent. The ability to record the discrepancy between outcomes resulting from enacting one’s intent and those resulting from acting after a deliberative pause, provides the information that renders counterfactuals estimable. It is this information that enables us to cross the barrier between layer 2 and layer 3 of the causal hierarchy… Every child undergoes experiences where he/she pauses and thinks: Can I do better? If mental records are kept of those experiences, we have experimental semantics for counterfactual thinking in the form of regret sentences “I could have done better.” The practical implications of this new semantics are worth exploring.’

The paper is here: http://web.cs.ucla.edu/~kaoru/theoretical-impediments.pdf.
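The gap between level 1 (association) and level 2 (intervention) in Pearl’s hierarchy is easy to demonstrate with a toy structural causal model. This is my illustration, not code from the paper: a confounder Z drives both X and Y, so the observed P(Y|X=1) overstates what actually setting X=1 achieves.

```python
import random

random.seed(0)

def sample(do_x=None):
    # Toy structural causal model (invented for illustration):
    # confounder Z influences both X and Y; X also influences Y.
    z = random.random() < 0.5
    x = do_x if do_x is not None else (random.random() < (0.9 if z else 0.1))
    y = random.random() < 0.4 + 0.3 * x + 0.3 * z
    return x, y

N = 100_000

# Level 1 (association): estimate P(Y=1 | X=1) from passive observation.
obs = [sample() for _ in range(N)]
p_obs = sum(y for x, y in obs if x) / sum(x for x, _ in obs)

# Level 2 (intervention): estimate P(Y=1 | do(X=1)) by forcing X=1.
p_do = sum(y for _, y in (sample(do_x=True) for _ in range(N))) / N

print(f"P(Y=1 | X=1)     ~ {p_obs:.2f}")  # inflated by the confounder
print(f"P(Y=1 | do(X=1)) ~ {p_do:.2f}")   # the causal effect of setting X
```

With these made-up parameters the observational estimate comes out near 0.97 while the interventional one is near 0.85: conditioning and intervening answer different questions, and no amount of level-1 data closes the gap without a causal model.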

*

By chance this evening I came across this interview with Pearl in which he discusses some of the ideas above less formally, HERE.

‘The problems that emerged in the early 1980s were of a predictive or diagnostic nature. A doctor looks at a bunch of symptoms from a patient and wants to come up with the probability that the patient has malaria or some other disease. We wanted automatic systems, expert systems, to be able to replace the professional — whether a doctor, or an explorer for minerals, or some other kind of paid expert. So at that point I came up with the idea of doing it probabilistically.

‘Unfortunately, standard probability calculations required exponential space and exponential time. I came up with a scheme called Bayesian networks that required polynomial time and was also quite transparent.

‘[A]s soon as we developed tools that enabled machines to reason with uncertainty, I left the arena to pursue a more challenging task: reasoning with cause and effect.

‘All the machine-learning work that we see today is conducted in diagnostic mode — say, labeling objects as “cat” or “tiger.” They don’t care about intervention; they just want to recognize an object and to predict how it’s going to evolve in time.

‘I felt an apostate when I developed powerful tools for prediction and diagnosis knowing already that this is merely the tip of human intelligence. If we want machines to reason about interventions (“What if we ban cigarettes?”) and introspection (“What if I had finished high school?”), we must invoke causal models. Associations are not enough — and this is a mathematical fact, not opinion.

‘As much as I look into what’s being done with deep learning, I see they’re all stuck there on the level of associations. Curve fitting. That sounds like sacrilege, to say that all the impressive achievements of deep learning amount to just fitting a curve to data. From the point of view of the mathematical hierarchy, no matter how skillfully you manipulate the data and what you read into the data when you manipulate it, it’s still a curve-fitting exercise, albeit complex and nontrivial.

‘I’m very impressed, because we did not expect that so many problems could be solved by pure curve fitting. It turns out they can. But I’m asking about the future — what next? Can you have a robot scientist that would plan an experiment and find new answers to pending scientific questions? That’s the next step. We also want to conduct some communication with a machine that is meaningful, and meaningful means matching our intuition.

‘If a machine does not have a model of reality, you cannot expect the machine to behave intelligently in that reality. The first step, one that will take place in maybe 10 years, is that conceptual models of reality will be programmed by humans. The next step will be that machines will postulate such models on their own and will verify and refine them based on empirical evidence. That is what happened to science; we started with a geocentric model, with circles and epicycles, and ended up with a heliocentric model with its ellipses.

‘We’re going to have robots with free will, absolutely. We have to understand how to program them and what we gain out of it. For some reason, evolution has found this sensation of free will to be computationally desirable… Evidently, it serves some computational function.

‘I think the first evidence will be if robots start communicating with each other counterfactually, like “You should have done better.” If a team of robots playing soccer starts to communicate in this language, then we’ll know that they have a sensation of free will. “You should have passed me the ball — I was waiting for you and you didn’t!” “You should have” means you could have controlled whatever urges made you do what you did, and you didn’t.

[When will robots be evil?] When it appears that the robot follows the advice of some software components and not others, when the robot ignores the advice of other components that are maintaining norms of behavior that have been programmed into them or are expected to be there on the basis of past learning. And the robot stops following them.’
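Pearl’s remark above, that Bayesian networks replaced exponential-time probability calculations with polynomial-time ones, comes from exploiting conditional independence. A toy sketch (mine, not Pearl’s): on a chain-structured network, a marginal that naively requires summing over all 2^n joint assignments can be computed by a linear forward pass.

```python
from itertools import product

# Chain-structured Bayesian network X1 -> X2 -> ... -> Xn over binary
# variables. The full joint has 2^n entries, but the chain factorisation
# needs only O(n) parameters and O(n) work for a marginal. Parameters
# below are invented for illustration.
n = 12
p1 = 0.6      # P(X1 = 1)
stay = 0.8    # P(X_{k+1} = x | X_k = x)

def marginal_forward():
    # Polynomial-time: propagate P(X_k = 1) along the chain.
    p = p1
    for _ in range(n - 1):
        p = p * stay + (1 - p) * (1 - stay)
    return p

def marginal_brute_force():
    # Exponential-time: sum the factorised joint over all 2^n assignments.
    total = 0.0
    for xs in product([0, 1], repeat=n):
        prob = p1 if xs[0] == 1 else 1 - p1
        for a, b in zip(xs, xs[1:]):
            prob *= stay if a == b else 1 - stay
        if xs[-1] == 1:
            total += prob
    return total

print(marginal_forward(), marginal_brute_force())  # agree to float precision
```

Both routes give the same P(Xn=1); the difference is 11 multiplications versus 4,096 terms, and the gap widens exponentially with n.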

Please leave links to significant critiques of this paper or work that has developed the ideas in it.

If interested in the pre-history of the computer age and internet, this paper explores it.

Review of Allison’s book on US/China & nuclear destruction, and some connected thoughts on technology, the EU, and space

‘The combination of physics and politics could render the surface of the earth uninhabitable… [Technological progress] gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we have known them, cannot continue.’ John von Neumann, one of the 20th Century’s most important mathematicians, one of the two most responsible for developing digital computers, central to Manhattan Project etc.

‘Politics is always like visiting a country one does not know with people whom one does not know and whose reactions one cannot predict. When one person puts a hand in his pocket, the other person is already drawing his gun, and when he pulls the trigger the first one fires and it is too late then to ask whether the requirements of common law with regard to self-defence apply, and since common law is not effective in politics people are very, very quick to adopt an aggressive defence.’ Bismarck, 1879.

*

Below is a review of Graham Allison’s book, Destined for War: Can America and China Escape Thucydides’s Trap? Allison’s book is particularly interesting given what is happening with North Korea and Trump. It is partly about the most urgent question: whether and how humanity can survive the collision between science and politics.

Beneath the review are a few other thoughts on the book and its themes. I will also post some notes on stuff connecting ideas about advanced technology and strategy (conventional and nuclear) including notes from the single best book on nuclear strategy, Payne’s The Great American Gamble: deterrence theory and practice from the Cold War to the twenty-first century. If you want to devote your life to a cause with maximum impact, then studying this book is a good start and it also connects to debates on other potential existential threats such as biological engineering and AI.

Payne’s book connects directly to Allison’s. Allison focuses a lot on the circumstances in which crises could spin out of control and end in US-China war. Payne’s book is the definitive account of nuclear strategy and its intellectual and practical problems. Payne’s book in a nutshell: 1) politicians and most senior officials operate with the belief that there is a dependable ‘rational’ basis for successful deterrence in which ‘rational’ US opponents will respond prudently and cautiously to US nuclear deterrence threats; 2) the re-evaluation of nuclear strategy in expert circles since the Cold War exposes the deep flaws of Cold War thinking in general and the concept of ‘rational’ deterrence in particular (partly because strategy was dangerously influenced by ideas about rationality from economics). Expert debate has not permeated to most of those responsible or the media. Trump’s language over North Korea and the media debate about it are stuck in the language of Cold War deterrence.

I would bet that no UK Defence Secretary has read Payne’s book. (Have the MoD PermSecs? The era of Michael Quinlan has long gone, as the Iraq inquiries revealed.) What emerges from UK Ministers suggests they are operating with Cold War illusions. If you think I’m probably too pessimistic, then ponder this comment by Professor Allison, who has spent half a century in these circles: ‘Over the past decade, I have yet to meet a senior member of the US national security team who had so much as read the official national security strategies’ (emphasis added). NB. he is referring to reading the official strategies, not the explanations of why they are partly flawed!

This of course relates to the theme of much I have written: the dangers created by the collision of science and markets with dysfunctional individuals and political institutions, and the way the political-media system actively suppresses thinking about, and focus on, what’s important.

Priorities are fundamental to politics because of inevitable information bottlenecks: these bottlenecks can be transformed by rare good organisation but they cannot be eradicated. People are always asking ‘how could the politicians let X happen with Y?’ where Y is something important. People find it hard to believe that Y is not the focus of serious attention and therefore things like X are bound to happen all the time. People like Osborne and Clegg are focused on some magazine profile, not Y. The subject of nuclear command and control ought to make people realise that their mental models for politics are deeply wrong. It is beyond doubt that politicians do not even take the question of accidental nuclear war seriously, so a fortiori there is no reason to have confidence in their general approach to priorities.

If you think of politics as ‘serious people focusing seriously on the most important questions’, which is the default mode of most educated people and the media (but not the less-educated public, which has better instincts), then your model of reality is badly wrong. A more accurate model is: politics is a system that 1) selects against the skills needed for rigorous thinking and selects for qualities such as groupthink and confirmation bias, 2) incentivises a badly selected set of people to consider their career, not the public interest, 3) drops them into dysfunctional institutions with no relevant training and poor tools, 4) centralises vast amounts of power in the hands of these people and institutions in ways we know are bound to cause huge errors, and 5) provides very weak (and often damaging) feedback, so facing reality is rare, learning is practically impossible, and system reform is seen as a hostile act by political parties and civil services worldwide.

I meant to publish this a few days ago on ‘Petrov day’, the anniversary of 26 September 1983, when Petrov saw US nuclear missiles heading for Russia on his screen but, in a snap decision without consultation, reported it up the chain as a false alarm, guessing it was some sort of technical error and not wanting to risk catastrophic escalation. (Petrov died a few weeks ago.) I forgot to post, but my point is: we will not keep getting lucky like that, and our odds worsen with every week that the political system works as it does. The cumulative probability of disaster grows alarmingly even if you assume only a small chance of disaster per year. For example, a 1% chance of wipeout per year means the probability of wipeout is about 20% within 20 years, about 50% within 70 years, and about two-thirds within a century. Given what we now know, it is reasonable to plan on the basis that the chance of a nuclear accident of some sort leading to mass destruction is at least 1% per year. A 1:30 chance per year means a ~97% chance of wipeout within a century…
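The arithmetic behind these numbers is simple compound probability: if each year carries an independent probability p of catastrophe, the chance of at least one catastrophe within n years is 1 − (1 − p)ⁿ. A minimal sketch (the function name is my own, not from any source):

```python
def cumulative_risk(p_per_year: float, years: int) -> float:
    """Probability of at least one catastrophe within `years` years,
    assuming an independent probability `p_per_year` each year."""
    return 1 - (1 - p_per_year) ** years

# The cases quoted above: ~18% ('about 20%'), ~50%, ~63% ('about two-thirds'),
# and ~97% for a 1:30 annual chance over a century.
for p, n in [(0.01, 20), (0.01, 70), (0.01, 100), (1 / 30, 100)]:
    print(f"{p:.3f}/year over {n} years -> {cumulative_risk(p, n):.0%}")
```

Note how insensitive the century-scale figure is to optimism about the annual rate: even cutting the annual risk to 0.5% still leaves roughly a 40% chance of wipeout within a century.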

*

Review of Destined for War: Can America and China Escape Thucydides’s Trap?, by Graham Allison

Every day on his way to work at Harvard, Professor Allison wondered how the reconstruction of the bridge over Boston’s Charles River could take years while in China bigger bridges are replaced in days. His book tells the extraordinary story of China’s transformation since Deng abandoned Mao’s catastrophic Stalinism, and considers whether the story will end in war between China and America.

China erects skyscrapers in weeks while Parliament delays Heathrow expansion for over a decade. The EU discusses dumb rules made 60 years ago while China produces a Greece-sized economy every 16 weeks. China’s economy doubles roughly every seven years; it is already the size of America’s and will likely dwarf it within 20 years. More serious than Europe, China invests this growth in education and in technologies from genetic engineering to artificial intelligence.

Allison analyses the formidable President Xi, who has known real suffering and is very different to western leaders obsessed with the frivolous spin cycles of domestic politics. Xi’s goal is to ensure that China’s renaissance returns it to its position as the richest, strongest and most advanced culture on earth. Allison asks: will the US-China relationship repeat the dynamics between Athens and Sparta that led to war in 431 BC, or might it resemble the story of the British-American alliance in the 20th century?

In Thucydides’ history the dynamic growth of Athens caused such fear that, amid confusing signals in an escalating crisis, Sparta gambled on preventive war. Similarly, after Bismarck unified Germany in 1870-71, Europe’s balance of power was upended. In summer 1914, the leaderships of all Great Powers were overwhelmed by confusing signals amid a rapidly escalating crisis. The prime minister doodled love letters to his girlfriend as the cabinet discussed Ireland, and European civilisation tottered over the brink.

Allison discusses how America, China and Taiwan [or Korea] might play the roles of Britain, Germany and Belgium. China has invested in weapons with powerful asymmetric advantages: cheap missiles can sink an aircraft carrier costing billions, and cyber weapons could negate America’s powerful space and communication infrastructure. American war-games often involve bombing Chinese coastal installations. How far might it escalate?

Nuclear weapons increase destructive power a million-fold and give a leader just minutes to decide whether a (possibly false) warning justifies firing weapons that would destroy civilisation, while relying on the same sort of hierarchical decision-making processes that failed in the much slower 1914 crisis.

Terrifying near misses have already happened, and we have been saved by individuals’ snap judgments. They have occurred, luckily, during episodes of relative calm. Similar incidents during an intense crisis could spark catastrophe. The Pentagon hoped that technology would bring ‘information dominance’: instead, technology accelerates crises and overwhelms decisions. Real and virtual robots will fight battles and influence minds faster than traditional institutions can follow.

Allison hopes Washington will rediscover its 1940s seriousness, when it built a strategy and institutions to contain Stalin. He suggests abandoning ‘containment’, which is unlikely to work against capitalist China as it did against Soviet Russia. Washington could drop security guarantees to Taiwan to lower escalation risks. It could promote new institutions to tackle destructive technology and terrorism. Since China will upend post-1945 institutions anyway, why not try to shape what comes next together? Perhaps, channelling Sun Tzu, the West could avoid defeat by not trying to ‘win’.

It is hard to see how the necessary leadership might emerge.

We need government teams capable of the rare high performance we see in George Mueller’s Nasa, which put man on the moon, or in Silicon Valley entrepreneurs such as Sam Altman and Patrick Collison. This means senior politicians and officials of singular ability and with different education, training and experience. It means extremely adaptive institutions and state-of-the-art tools, not the cabinet processes that failed in 1914. It means breaking the power of self-absorbed parties and bureaucracies that evolved before nuclear physics and the internet.

New leaders must build institutions for global cooperation that can transcend Thucydides’ dynamics. For example, the plan of Jeff Bezos, Amazon’s CEO, to build a permanent moon base in which countries work together to harness the resources of the solar system is the sort of project that could create an alternative focus to nationalist antagonism.

The scale of change seems impossible, yet technology gives us no choice — we must try to escape our evolutionary origins, since we cannot survive repeated roulette with advanced technology. Churchill wrote how in 1914 governments drifted into ‘fathomless catastrophe’ in ‘a kind of dull cataleptic trance’. Western leaders are in another such trance. Unless new forces evolve outside closed political systems and force change we will suffer greater catastrophe; it’s just a matter of when.

I hope people like Jeff Bezos read this timely book and resolve to build the political forces we need.

(Originally appeared in The Spectator.)

*

A few other thoughts

I’ve got some quibbles, such as interpretations of Thucydides, but I won’t go into those.

There are many issues in it I did not have time to mention in a short review…

1. Nuclear crises / accidents

In the context of US-China crises, it is very instructive to consider some of the most dangerous episodes of the Cold War that remained secret at the time.

Here are some of the near misses that have been declassified (see this timeline from the Future of Life Institute).

  • 24 January 1961. A US bomber broke up and dropped two hydrogen bombs on North Carolina. Five of six safety devices failed. ‘By the slightest margin of chance, literally the failure of two wires to cross, a nuclear explosion was averted’ (Defense Secretary Robert McNamara).
  • 25 October 1962, during the Cuban Missile Crisis. A sabotage alarm was triggered at a US base. Faulty wiring meant that the alarm triggered the take-off of nuclear-armed US planes. Fortunately they made contact with the ground and were stood down. The alarm had been triggered by a bear — yes, a bear, like in a Simpsons episode — pottering around outside the base. This was one of many incidents during the crisis, including one at a base where missiles and codes were mishandled such that a single person could have launched.
  • 27 October 1962, during the Cuban Missile Crisis. A Soviet submarine armed with a nuclear torpedo was cornered by US ships dropping depth charges. It had had no contact with Moscow for days and did not know whether war had already broken out. Malfunctioning systems caused carbon dioxide poisoning and crew were fainting. In panic the captain ordered the nuclear torpedo fired. Orders said that three officers had to agree; only two did. Vasili Arkhipov said No. It was not known until after the collapse of the Soviet Union that tactical nuclear missiles had also been deployed to Cuba and placed, for the only time, under the direct authority of field commanders who could fire without further authorisation from Moscow, so if the US had attacked Cuba, as many urged JFK to do, there is a reasonable chance that local commanders would have begun a nuclear exchange. Castro wanted these missiles, unknown to America, transferred to Cuban control. Fortunately Mikoyan, the Soviet official in charge on the ground, witnessed Castro’s unstable character and decided not to transfer them; the missiles were secretly returned to Russia shortly after.
  • 1 August 1974. A warning of the danger of allowing one person to give a catastrophic order: Nixon was depressed, drinking heavily, and unstable so Defense Secretary Schlesinger told the Joint Chiefs to come to him in the event of any order to fire nuclear weapons.
  • 9 November 1979. NORAD thought there was a large-scale Soviet nuclear attack. Planes were scrambled and ICBM crews put on highest alert. The National Security Adviser was called at home. He looked at his wife asleep, decided not to wake her since they would shortly both be dead, and turned his mind to calling President Carter about plans for massive retaliation before he died. After six minutes no satellite data had confirmed launches and decisions were delayed. It turned out that a technician had accidentally input a training program which played through the computer system as if it were a real attack. (There were other similar incidents.)
  • 26 September 1983. A month after the Soviet Union shot down a Korean passenger jet, at a time of high international tension, a Soviet satellite showed America had launched five nuclear missiles. The data suggested the satellite was working properly, but the officer on duty, Stanislav Petrov, decided to report it to his superiors as a false alarm without knowing whether it was true. It turned out that sunlight glinting off clouds had fooled the system.
  • 2-11 November 1983. NATO ran a large wargame with a simulation of DEFCON 1 and a coordinated attack on the Soviet Union. The wargame was so realistic that Soviet intelligence thought it was a cover for a real attack and Soviet missiles were placed on high alert. On 11 November the Soviets intercepted a message saying US missiles had launched. Fortunately, incidents such as that of 26 September 1983 did not happen to occur during those 10 days.
  • 25 January 1995. The Russian system detected the launch of a missile off the coast of Norway that was thought to be a US submarine launch. The warning went to Yeltsin, who activated his ‘nuclear football’ and retrieved launch codes. There was no corroboration from satellites. Norway had actually launched a scientific research rocket, and the advance notification had somehow not been passed on properly within Russia.
  • 29-30 August 2007. Six US nuclear weapons were accidentally loaded onto a B52, which was left unguarded overnight, flown to another base, and left unguarded for another nine hours before ground crew realised what they were looking at. For 36 hours nobody realised the weapons were missing.
  • 23 October 2010. The US command and control system responsible for detecting and stopping unauthorised launches lost all control of 50 ICBMs for an hour because of communication failure caused by a dodgy component.
  • A 2013 monitoring exercise found the US nuclear command and control system generally shambolic. Staff were found to be on drugs and otherwise unsuitable, the system was deemed unfit to cope with a major hack, and the commander of the ICBM force was compromised by a classic KGB ‘honey trap’ (when I lived in Moscow I met some of the women who worked on such operations and I’d bet >90% of male UK Ministers/PermSecs would throw themselves at them faster than you can say ‘honey trap’).

This is just a sample. The full list still understates the scale of luck we have had in at least two ways. First, the data is mostly from America because America is a more open society. The most sensible assumption is that there have been more incidents in Russia than we know about. Second, there is a selection bias towards older incidents that have been declassified.

Right now there are hundreds of missiles on ‘hair-trigger’ alert for launch within minutes. Decisions about how reliable a warning is and whether to fire must all be taken within minutes. This makes the whole world vulnerable to accidents, unauthorised use, unhinged leaders, and false alarms. The situation could get worse. China’s missiles are not on hair-trigger alert but the Chinese military is pushing to change this. Adding a third country operating like this would make the system even more unstable. It also seems very likely that proliferation will continue. The West preaches non-proliferation at non-nuclear countries but, unsurprisingly, this is not persuasive.

2. China’s weaknesses, including the tension between informational openness needed for growth and its political dangers

During the Cold War, many people from different political perspectives were agreed on one thing: that the Soviet Union was much stronger than it later turned out to have been. This view was so powerful that people like Andy Marshall, the founder and multi-decade head of the Office of Net Assessment, struggled to find support for his argument that the CIA and Pentagon were systematically overstating the strength of the Soviet economy and understating the burden of defence spending. They had, of course, strong bureaucratic reasons to do so: a more dangerous enemy was the best argument for more funding. It is important to keep this potential error in mind vis-à-vis China.

1929 and 2008 each had profound effects on US politics. China, interestingly, was not as badly hit by 2008 as the West. What is the probability that it will continue to avoid an economic crisis, anywhere between a serious recession and a 1929/2008-style event, over the next say 20 years? If it does experience such a shock, how effective will its political institutions be in coping, relative to those of America and Britain, over the long term? Might debt and bad financial institutions create a political crisis serious enough to threaten the legitimacy of the regime? Might other problems such as secession movements (perhaps combined with terrorism) cause an equivalently serious political crisis? After all, historically the country has fallen apart repeatedly and this is the great fear of its leaders.

China also has serious resource vulnerabilities. It has to import most of its energy. It has serious water shortages. It has serious ecological crises. It has serious corruption problems. It has a rapidly ageing population. Although it, unlike the EU, has built brilliant private companies to rival Google et al, its state-owned enterprises (with special phones on CEO desks for Communist Party instructions) control gigantic resources and are not run as well as Google or Alibaba. There has been significant emigration of educated Chinese particularly to America where they buy houses and educate their children (Xi himself quietly sent a daughter to Harvard). Many of these tensions result in occasional public outcries that the regime carefully appeases. These problems are not trivial to solve even for very competent people who don’t have to worry about elections.

In terms of the risks of war and escalation over flashpoints like Korea or Taiwan, major internal crises like a financial crash might easily make it more likely that an external crisis escalates out of control. When regimes face crises of legitimacy they often, for obvious evolutionary reasons, resort to picking fights with out-groups to divert people. Much of Germany’s military-industrial elite saw nationalist diversions as crucial to escape the terrifying spread of socialism before 1914.

I’m ignorant about all these dynamics in China but if forced to bet I would bet that Allison underplays these weaknesses, and I would bet against another 20 years of straight-line growth. In the spirit of Tetlock, I’ll put a number on it and say an 80% probability of a bad recession or some other internal crisis within 20 years that is bad enough to be considered ‘the worst domestic crisis for the leadership since Tiananmen and a prelude to major political change’ and which results in either a Tiananmen-style clampdown or big political change. (I have not formulated this well, suggestions from Superforecasters welcome in comments.)

Part of my reason for thinking China will not be able to avoid such crises is a fundamental dynamic that Fukuyama discussed in his much-misunderstood ‘The End of History’: economic development requires openness and the protection of individual rights in various dimensions, and this creates an inescapable tension between an elite desire for economic dynamism and technological progress vis-à-vis competitor Powers, and an elite fear of openness and what it brings politically/culturally.

The KGB and Soviet military realised this in the late 1970s as they watched the microelectronics revolution in America but they could never develop a response that worked: they were very successful at stealing stuff but they could not develop domestic companies because of the political constraints, as Marshal Ogarkov admitted (off the record!) to the New York Times in 1983. China watched the Soviet Union implode and chose a different path: economic liberalisation combined with greater economic and information rights, but no Gorbachev-style political opening up. This caution has worked so far but does not solve the problem.

Singapore and China could not develop economically as they have without also allowing much greater individual freedom in some domains than Soviet Russia. Developing hi-tech businesses cannot be done without a degree of openness to the rest of the world that is politically risky for China. If there is too much arbitrary seizure of property, as in the KGB-mafia state of Russia, then people will focus on theft and moving assets offshore rather than building long-term value. Chinese entrepreneurs have to be able to download software, read scientific and technical papers, and access most of the internet if they are not to be seriously disadvantaged. China knows that its path to greatness must include continued growth and greater productivity. If it does not, then like other oligarchies it will rapidly lose legitimacy and risks collapse. This is inconsistent with all-out repression. It will therefore have to tread a fine line of allowing social unhappiness to be expressed and adapting to it without letting it spin out of control. Given social movements are inherently complex and nonlinear, plus social media already seethes with unhappiness in China, there will be a constant danger that this dynamic tension breaks free of centralised control.

This is, obviously, one of the many reasons why the leadership is so interested in advanced technology and particularly AI. Such tools may help the leadership tread this tightrope without tumbling off, though maintaining a culture at the edge-of-the-art in technologies like AI simultaneously exacerbates the very turbulence that the AI needs to monitor — there are many tricky feedback loops to navigate and many reasons to suspect that eventually the centralised leadership will blunder, be overwhelmed, collapse internally and so on. Can China’s leaders maintain this dynamic tension for another 20 years? As Andy Grove always said, only the paranoid survive…

3. Contrast between the EU and China

High-tech breakthroughs are increasingly concentrated in north-east America (around Harvard), West Coast America (around Stanford), and coastal China (e.g. Shenzhen). When the UK leaves the EU, the EU will have zero universities in the global top 20. EU politicians are much more interested in vindictive legal action against Silicon Valley giants than in asking themselves why Europe cannot match America or China. On issues such as CRISPR and genetic engineering the EU is regulating itself out of the competition. Many businesspeople are unaware that this will get much worse once the ECJ starts using the Charter of Fundamental Rights to seize control of such regulation for itself: that will mean not just more anti-science regulation but also damaging uncertainty, as scientists and companies face the ECJ suddenly pulling a human rights ‘top trump’ out of the deck whenever it fancies (one of the many arguments Vote Leave made during the referendum that we could not get the media to report, partly because of persistent confusion between the Charter and the ECHR). Organisations like Y Combinator provide a welcoming environment in California for talented and dynamic young Europeans, while the EU’s regulatory structure is dominated by massive incumbent multinationals like Goldman Sachs that use the Single Market to crush startup competitors.

If you watch this documentary on Shenzhen, you will see parts of China with the same or even greater dynamism than Silicon Valley and far, far beyond the EU. The contrast between the reality of Shenzhen and the rhetoric of blowhards like Macron is one of the reasons why many massive institutional investors do not share CBI-style conventional wisdom on Brexit. The young vote with their feet. If they want to be involved in world-leading projects, they head to coastal China or coastal America; few go to Paris, Rome, or Berlin. The Commission publishes figures on this but never faces the logic.

Chart: notice how irrelevant the EU is


We are escaping the Single Market / ECJ / Charter of Fundamental Rights quagmire that will deepen the EU’s stagnation (despite Whitehall’s best efforts to scupper the referendum). The UK should now be thinking about how to provide the most dynamic environment in Europe for scientists and entrepreneurs. After 50 years of wasting time in dusty meeting rooms failing to ‘influence’ the EU to ditch its Monnet-Delors plan, we could start building things of real value and thereby acquire real influence, rather than the Foreign Office’s chimerical kind. Let Macron et al continue with the same antiquated rhetoric: we know what will happen, because we have seen it ever since all the pro-euro forces in the UK babbled about the ‘Lisbon Agenda’ in 2000. Rhetoric about ‘reform’ always turns into just more centralisation in Brussels institutions; it does not produce dynamic forces that create breakthroughs and real value. Economic, technological, and political power will continue to shift away from an EU that cannot and will not adapt to the forces changing the world: its legal model of Single Market plus ECJ makes fast adaptation impossible. We will soon be out of Monnet’s house, and Whitehall’s comfortable delusions (‘special relationship’, ‘punching above our weight’) will fade. Contra the EU’s logic, in a world increasingly defined by information and computation the winning asset is not size: it is institutional adaptability.

Those on the pro-EU side who disagree with this analysis have to face a fact: people like Mandelson, Adair Turner, the FT, and the Economist have been repeatedly wrong in their predictions for 20 years about ‘EU reform’, and people like me who have made the same arguments for 20 years, and called bullshit on ‘EU reform’, have been repeatedly vindicated by actual EU Treaties, growth rates, unemployment trends, euro crises and so on. (The Commission itself doesn’t even produce fake reports showing big gains from the Single Market, the gains it claims are relatively trivial even if you believe them.) What is happening in the EU now to suggest to reasonable observers that this will change over the next 20 years? Every sign from Juncker to Macron is that yet again Brussels will double down on Monnet’s mid-20th Century vision and the entire institutional weight of the Commission and legal system exerts an inescapable gravitational pull that way.

4. ‘Anti-access / area denial’ (A2/AD)

One aspect of China’s huge conventional buildup is what is known as A2/AD: i.e. building forces to prevent America intervening near China, using missiles, submarines, cyber, anti-space and other weapons. The US response is known as ‘AirSea Battle’.

I won’t go into this here but it is an interesting topic that is also relevant to UK defence debates. The transformation of US forces goes back to a mid-1970s DARPA project known as Assault Breaker, which began a series of breakthroughs in ‘precision strike’: computerised command and control combined with sensors, radar, GPS and so on to provide the capability for precise conventional strikes. The first public demonstration of all this was the famous footage from the first Gulf War of bombs dropping down chimneys. This development was central to the last phase of the Cold War and the intolerable pressure put on Soviet defence expenditure. The Soviets led the thinking but could not build the technology.

One consequence of these developments is that aircraft carriers are no longer safe from cheap missiles. I started making these arguments in 2004, when it was already clear that the UK Ministry of Defence carrier project was a disaster. Since then it has been a multi-billion pound case study in Whitehall incompetence, the MoD’s appalling ‘planning’ system and corrupt procurement, and Westminster’s systemic inability to think about complex long-term issues. Someone at the MoD told me last year that in NATO wargames the UK carriers immediately bug out for the edge of the game to avoid being sunk. Of course they do. Carriers cannot be deployed against top-tier forces because of the vast and increasing asymmetry between their cost and their vulnerability to cheap sinking. Soon they will not be deployable even against Third World forces, because of the combination of cheap cruise missiles and the exponential price collapse and performance improvement of guidance systems (piggybacking on the commercial drone industry). Soon an intelligent terrorist with a cruise missile and some off-the-shelf kit will be able to sink a carrier using their iPhone: see this blog for details. The MoD has lied and bluffed about all this for 20 years, this Government will continue the trend, and the appalling BAE will continue to scam billions from taxpayers, unbothered by MPs.

5. Strategy, Sun Tzu and Bismarck: Great Powers and ‘the passions of sheep stealers’

China is the home of Sun Tzu. His most famous advice was that ‘winning without fighting is the highest form of warfare’ — advice often quoted but rarely internalised by those responsible for vital decisions in conflicts. This requires what Sun Tzu called ‘Cheng/Ch’i’ operations. You pull the opponent off balance with a series of disorienting moves, feints, bluffs, carrots, and sticks (e.g. ‘where strong, appear weak’). You disorient them with speed so they make blunders that undermine their own moral credibility with potential allies. You try to make the opponent look like an unreasonable aggressor. You isolate them, you break their alliances and morale. Where possible you collapse their strategy and will to fight instead of wasting resources on an actual battle. And so on…

Looking at the US-China relationship through the lens of ‘winning without fighting’ and nuclear risk suggests that the way for America to ‘win’ this Thucydidean struggle is: ‘don’t try to win in a conventional sense, but instead redefine winning’. Given the unlimited downside of nuclear war and what we now know about the near-disasters of Cold War brinkmanship, it certainly suggests focus on the goal of avoiding escalating crises involving nuclear weapons, and this goal has vast consequences for America’s whole approach to China.

Allison’s ideas about how the US might change strategy are interesting, though I think his ‘academic’ approach is too rigid: he presents distinct strategies as distinct choices. If one looks at the champion of politics and diplomacy in the modern era, Bismarck, his approach was the opposite of ‘pick a strategy’ in the sense Allison means. Over 27 years he was close to and hostile to all the other Powers at different times, sometimes in such rapid succession that his opponents felt badly disoriented, as though they were dealing with ‘the devil himself’, as many said.

Bismarck combined an extremely tyrannical ego with an even more extreme epistemological caution about the unpredictability of a complex world and a demonic practical adaptability. He knew events could suddenly throw his calculations into chaos. He was always ready to ditch his own ideas and commitments that suddenly seemed shaky. He was interested in winning, not consistency. He had a small number of fundamental goals — such as strengthening the monarchy’s power against Parliament and strengthening Prussia as a serious Great Power — which he pursued with constantly changing tactics. He was always feinting and fluid, pushing one line openly and others privately, pushing and pulling the other Powers in endless different combinations. He was the Grand Master of Cheng/Ch’i operations.

I think that if Bismarck read Allison’s book, he would not ‘pick a strategy’. He would use many of the different elements Allison sketches (and invent others) at the same time while watching China’s evolution and the success of different individuals/factions in the governing elite. For example, he would both suggest a bargain over dropping security guarantees for Taiwan and launch a covert (apparently domestic) cyber campaign to spread details of the Chinese leadership’s wealth and corruption all over the internet inside ‘the Great Firewall’. Carrot and stick, threaten and cajole, pull the opponent off balance.

I think that Bismarck’s advice would be: get what you can from dropping the Taiwanese guarantees and do not create nuclear tripwires in Korea. He was contemptuous of any argument that he ought to care about the Balkans for its own sake and repeatedly stressed that Germany should not fight for Austrian interests in the Balkans despite their alliance. He often repeated variations on his famous line — that the whole of the Balkans was not worth the bones of a single Pomeranian grenadier. Great Powers, he warned, should not let their fates be tied to ‘the passions of sheep stealers’. On another occasion: ‘All Turkey, including the various people who live there, is not worth so much that civilised European peoples should destroy themselves in great wars for its sake.’ At the Congress of Berlin, he made clear his priority: ‘We are not here to consider the happiness of the Bulgarians but to secure the peace of Europe.’ A decade later he warned other Powers not to ‘play Pericles beyond the confines of the area allocated by God’ and said clearly: ‘Bulgaria … is far from being an object of adequate importance … for which to plunge Europe from Moscow to the Pyrenees, and from the North Sea to Palermo, into a war whose issue no man can foresee. At the end of the conflict we should scarcely know why we had fought.’

In order to avoid a Great Power war he stressed the need to stay friendly with Russia, and the importance of being able to play Russia and Austria off against each other, France, and Britain: ‘The security of our relations with the Austro-Hungarian state depends to a great extent on our being able, should Austria make unreasonable demands on us, to come to terms with Russia as well.’ This was the logic behind his infamous secret Reinsurance Treaty in which, unknown to Austria with which he already had an alliance, Germany and Russia made promises to each other about their conduct in the event of war breaking out in different scenarios, the heart of which was Bismarck promising to stay out of a Russia-Austria war if Austria was the aggressor. In 1887 when military factions rumbled about a preventive war against Russia to help Austria in the Balkans he squashed the notion flat: ‘They want to urge me into war and I want peace. It would be frivolous to start a new war; we are not a pirate state which makes war because it suits a few.’ Preventive war, he said, was an egg from which very dangerous chicks would hatch.

His successors ditched his approach, ditched the Reinsurance Treaty, pushed Russia towards France, and made growing commitments to support Austria in the Balkans. This series of errors (combined with Wilhelm II’s appalling combination of vanity, aggression, and indolence which is echoed in a frightening proportion of leading politicians today) exploded in summer 1914.

Would Bismarck tie the probability of nuclear holocaust to the possibilities for extremely fast-moving crises in the South China Seas and ‘the passions of sheep stealers’ in places like North Korea? No chance.

Instead of taking the lead on Korea, I suspect Bismarck’s approach would be to go quiet publicly other than to suggest that China has a clear responsibility for Kim’s behaviour, while perhaps leaking a ‘secret’ study on the consequences of Japan going nuclear, to focus minds in Beijing. Regardless of whose ‘fault’ it is, if the situation spirals out of control and ends with North Korea killing millions of Koreans (perhaps because collapsed command and control empowers some mentally ill or drug-addled local commander; America has had plenty of those in charge of nukes) and America destroying North Korea, who thinks this would be seen as a ‘win’ for America? Trump’s threats are straight out of the Cold War playbook but we know that playbook was dodgy even against the relative ‘rationality’ of people like Brezhnev and Andropov, never mind nutjobs like Kim…

So: avoid nuclear crises. Therefore do not give local security ties to Taiwan and Korea that could trigger disaster. What positive agenda can be pushed?

America should seek cooperation in areas of deep significance and moral force where institutions can be shaped that align orientation over decades. Three obvious areas are: disaster response in Asia (naval cooperation), WMD terrorism (intelligence cooperation), and space. China already has an aggressive space program. It has demonstrated edge-of-the-art capabilities in developing a satellite-based quantum communication network, a revolutionary goal with even deeper effects than GPS. It will go to the moon.

The Cold War got humans onto the moon; then perceived superiority ended American politicians’ ambition. Instead of rebooting a Cold War style rivalry, it would be better to try to do things together. One of the most important projects humans can pursue is — as Jeff Bezos has argued and committed billions to — to use the resources of space (which are approximately ALL resources in the solar system) to alleviate earth’s problems, and the logic of energy and population growth is to shift towards heavy manufacturing in space while Earth is ‘zoned residential and light industrial’. Building the infrastructure to allow such ambition for humanity is inherently a project of great moral force that encourages international friendship and provides an invaluable perspective: a tiny blue dot friendly to life surrounded by vast indifferent blackness. People can be proud of their nation’s contributions and proud of a global effort. (As I have said before, contributing to this should be one of the UK’s priorities post-Brexit — how much more real value we could create with this than we have in 50 years with the EU, and developing the basic and applied research for robotics would have crossover applications with both commercial autonomous vehicles and the military sphere.)

Of course, there must be limits to friendly cooperation. What if China takes this as weakness and increasingly exerts more and more power, direct and indirect, over her neighbours? This is obviously possible. But I think the Bismarck/Sun Tzu response would be: if that is how she will behave driven by internal dynamics, then let her behave like that, as that will do more than anything you can do to persuade those neighbours to try to contain China. Trying to contain China now won’t work and would be seen not just in China but elsewhere as classic aggression from an imperial power. China is neither like Hitler’s Germany nor Stalin’s Soviet Union and treating it as such is bound to provoke intense and dangerous resentment among a billion people who suffered appallingly for decades under Mao. But if America backs off and makes clear that she prefers cooperation to containment, and then over time China seeks to threaten and dominate Japan, Australia and others, then that is the time to start building alliances because that is when you will have moral authority with local forces — the vital element.

A Bismarckian approach would also, obviously, involve ensuring that America remains technologically ahead of China, though this is a much more formidable task than it was with Russia, whose challenge was itself seen for a while (after Sputnik) as existential (and famous economists like Paul Samuelson continued to predict, wrongly, that the Soviet economy would overtake America’s). Attempting to escape Thucydides means trying to build institutions and feelings of cooperation but it also requires that militaristic factions in China do not come to see America as vulnerable to pre-emptive strikes. As AI, biological engineering, digital fabrication and so on accelerate, there may soon be non-nuclear dangers at least as frightening as nuclear dangers.

Finally, there is an interesting question of self-awareness. American leaders have a tendency to talk about American interests as if they are self-evidently humanity’s interests. Others find this amusing or enraging. America’s leaders need a different language for discussing China if they are to avoid Thucydides.

Talented political leaders sometimes show an odd empathy for the psychology of opposing out-groups. Perhaps it’s a product of a sort of ‘complementarity’ ability, an ability to hold contradictory ideas in one’s head simultaneously. It is often a shock for students when they read in Pericles’s speech that he confronted the plague-struck Athenians with the sort of uncomfortable truth that democratic politicians rarely speak:

‘You have an empire to lose, and there is the danger to which the hatred of your imperial rule has exposed you… For by this time your empire has become a tyranny which in the opinion of mankind may have been unjustly gained, but which cannot be safely surrendered… To be hateful and offensive has ever been the fate of those who have aspired to empire.’ Thucydides, 2.63-4, emphasis added.

Bismarck too didn’t fool himself about how others saw him, his political allies, and his country. He much preferred boozing with revolutionary communists to reactionaries on his own side. When various commercial interests tried to get him to support them in China, he told the English Ambassador crossly:

‘These blackguard Hamburg and Lubeck merchants have no other idea of policy in China but to, what they call ‘shoot down those damned niggers of Chinese’ for six months and then dictate peace to them etc. Now, I believe those Chinese are better Christians than our vile mercantile snobs and wish for peace with us and are not thinking of war, and I’ll see the merchants and their Yankee and French allies damned before I consent to go to war with China to fill their pockets with money.’

There are powerful interests urging Washington to aggression against China. The nexus of commercial and military interests is always dangerous, as Eisenhower famously warned in his Farewell Address. These interests will become more dangerous as jobs continue to shift East, driven by markets and technology regardless of Trump’s promises. The Pentagon will overhype Chinese aggression to justify its budgets, as it did with Russia.

Bismarck was a monster and the world would have been better if one of the assassination attempts had succeeded (see HERE for other branching histories) but he also understood fundamental questions better than others. Those responsible for policy on China should study his advice. They should also study summer 1914 and ponder how those responsible for war and peace still make these decisions in much the same way as then, while the crises are 1,000 times faster and a million times more potentially destructive.

Such problems require embedding lessons from effective institutions into our systematically flawed political institutions. I describe in detail the systems management approach to complex projects developed in the 1950s and 1960s that is far more advanced than anything in Whitehall today and which is part of necessary reforms (see HERE, p.26ff for summary of lessons). I will blog on other ideas. Unless we find a way to build political institutions that produce much more reliable decisions from the raw material of unreliable humans the law of averages means we are sure to fall off our tightrope, and unlike in 1918 or 1945 we won’t have anything to clamber back on to…

The unrecognised simplicities of effective action #3: lessons on ‘capturing the heavens’ from the ARPA/PARC project that created the internet & PC

Below is a short summary of some basic principles of the ARPA/PARC project that created the internet and the personal computer. I wrote it originally as part of an anniversary blog on the referendum but it is also really part of this series on effective action.

One of the most interesting aspects of this project, like Mueller’s reforms of NASA, is the contrast between 1) its extreme effectiveness, changing the world in a profound way, and 2) the general reaction to the methods, which was not only a failure to learn but widespread hostility inside established bureaucracies (public and private) to the successful approach: NASA dropped Mueller’s approach when he left and has never been the same, and Xerox abandoned the PARC approach and fired Bob Taylor. Changing the world in a profound and beneficial way is not enough to put a dint in bureaucracies which operate on their own dynamics.

Warren Buffett explained decades ago how institutions actively fight against learning and fight to stay in a closed and vicious feedback loop:

‘My most surprising discovery: the overwhelming importance in business of an unseen force that we might call “the institutional imperative”. In business school, I was given no hint of the imperative’s existence and I did not intuitively understand it when I entered the business world. I thought then that decent, intelligent, and experienced managers would automatically make rational business decisions. But I learned the hard way that isn’t so. Instead, rationality frequently wilts when the institutional imperative comes into play.

‘For example, 1) As if governed by Newton’s First Law, any institution will resist any change in its current direction. 2) … Corporate projects will materialise to soak up available funds. 3) Any business craving of the leader, however foolish, will quickly be supported by … his troops. 4) The behaviour of peer companies … will be mindlessly imitated.’

Many of the principles behind ARPA/PARC could be applied to politics and government but they will not be learned ‘naturally’ from inside the system. Dramatic improvements will only happen if a group of people force ‘system’ changes on how government works so that it is open to learning.

I have modified the below very slightly and added some references.

*

ARPA/PARC and ‘capturing the heavens’: The best way to predict the future is to invent it

The panic over Sputnik brought many good things such as a huge increase in science funding. America also created the Advanced Research Projects Agency (ARPA, which later added ‘Defense’ and became DARPA). Its job was to fund high risk / high payoff technology development. In the 1960s and 1970s, a combination of unusual people and unusually wise funding from ARPA created a community that in turn invented the internet, or ‘the intergalactic network’ as Licklider originally called it, and the personal computer. One of the elements of this community was PARC, a research centre working for Xerox. As Bill Gates said, he and Steve Jobs essentially broke into PARC, stole their ideas, and created Microsoft and Apple.

The ARPA/PARC project is an example of how if something is set up properly then a tiny number of people can do extraordinary things.

  • PARC had about 25 people and about $12 million per year in today’s money.
  • The breakthroughs from the ARPA/PARC project created over 35 TRILLION DOLLARS of value for society and counting.
  • The internet architecture they built, based on decentralisation and distributed control, has scaled up over ten orders of magnitude (10^10) without ever breaking and without ever being taken down for maintenance since 1969.
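The scaling claim in the last bullet is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, where the host and device counts are my own illustrative assumptions (the ARPANET’s four original hosts in 1969, and on the order of 10^10 connected devices today), not figures from the text:

```python
import math

# Illustrative assumptions: the 1969 ARPANET linked 4 hosts;
# today's internet connects on the order of 10**10 devices.
hosts_1969 = 4
devices_now = 10**10

# Orders of magnitude of growth = log10 of the ratio.
growth = math.log10(devices_now / hosts_1969)
print(round(growth, 1))  # 9.4 — roughly ten orders of magnitude
```

The exact device count doesn’t matter much: because the scale is logarithmic, being wrong by a factor of two or three shifts the answer by less than half an order of magnitude.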

The whole story is fascinating in many ways. I won’t go into the technological aspects. I just want to say something about the process.

What does a process that produces ideas that change the world look like?

One of the central figures was Alan Kay. One of the most interesting things about the project is that not only has almost nobody tried to repeat this sort of research, but the business world has even gone out of its way to spread misinformation about it because it was seen as so threatening to business-as-usual.

I will sketch a few lessons from one of Kay’s pieces but I urge you to read the whole thing.

‘This is what I call “The power of the context” or “Point of view is worth 80 IQ points”. Science and engineering themselves are famous examples, but there are even more striking processes within these large disciplines. One of the greatest works of art from that fruitful period of ARPA/PARC research in the 60s and 70s was the almost invisible context and community that catalysed so many researchers to be incredibly better dreamers and thinkers. That it was a great work of art is confirmed by the world-changing results that appeared so swiftly, and almost easily. That it was almost invisible, in spite of its tremendous success, is revealed by the disheartening fact today that, as far as I’m aware, no governments and no companies do edge-of-the-art research using these principles.’

‘[W]hen I think of ARPA/PARC, I think first of good will, even before brilliant people… Good will and great interest in graduate students as “world-class researchers who didn’t have PhDs yet” was the general rule across the ARPA community.

‘[I]t is no exaggeration to say that ARPA/PARC had “visions rather than goals” and “funded people, not projects”. The vision was “interactive computing as a complementary intellectual partner for people pervasively networked world-wide”. By not trying to derive specific goals from this at the funding side, ARPA/PARC was able to fund rather different and sometimes opposing points of view.

‘The pursuit of Art always sets off plans and goals, but plans and goals don’t always give rise to Art. If “visions not goals” opens the heavens, it is important to find artistic people to conceive the projects.

‘Thus the “people not projects” principle was the other cornerstone of ARPA/PARC’s success. Because of the normal distribution of talents and drive in the world, a depressingly large percentage of organizational processes have been designed to deal with people of moderate ability, motivation, and trust. We can easily see this in most walks of life today, but also astoundingly in corporate, university, and government research. ARPA/PARC had two main thresholds: self-motivation and ability. They cultivated people who “had to do, paid or not” and “whose doings were likely to be highly interesting and important”. Thus conventional oversight was not only not needed, but was not really possible. “Peer review” wasn’t easily done even with actual peers. The situation was “out of control”, yet extremely productive and not at all anarchic.

‘”Out of control” because artists have to do what they have to do. “Extremely productive” because a great vision acts like a magnetic field from the future that aligns all the little iron particle artists to point to “North” without having to see it. They then make their own paths to the future. Xerox often was shocked at the PARC process and declared it out of control, but they didn’t understand that the context was so powerful and compelling and the good will so abundant, that the artists worked happily at their version of the vision. The results were an enormous collection of breakthroughs.

‘Our game is more like art and sports than accounting, in that high percentages of failure are quite OK as long as enough larger processes succeed… [I]n most processes today — and sadly in most important areas of technology research — the administrators seem to prefer to be completely in control of mediocre processes to being “out of control” with superproductive processes. They are trying to “avoid failure” rather than trying to “capture the heavens”.

‘All of these principles came together a little over 30 years ago to eventually give rise to 1500 Altos, Ethernetworked to: each other, Laserprinters, file servers and the ARPAnet, distributed to many kinds of end-users to be heavily used in real situations. This anticipated the commercial availability of this genre by 10-15 years. The best way to predict the future is to invent it.

‘[W]e should realize that many of the most important ARPA/PARC ideas haven’t yet been adopted by the mainstream. For example, it is amazing to me that most of Doug Engelbart’s big ideas about “augmenting the collective intelligence of groups working together” have still not taken hold in commercial systems. What looked like a real revolution twice for end-users, first with spreadsheets and then with Hypercard, didn’t evolve into what will be commonplace 25 years from now, even though it could have. Most things done by most people today are still “automating paper, records and film” rather than “simulating the future”. More discouraging is that most computing is still aimed at adults in business, and that aimed at nonbusiness and children is mainly for entertainment and apes the worst of television. We see almost no use in education of what is great and unique about computer modeling and computer thinking. These are not technological problems but a lack of perspective. Must we hope that the open-source software movements will put things right?

‘The ARPA/PARC history shows that a combination of vision, a modest amount of funding, with a felicitous context and process can almost magically give rise to new technologies that not only amplify civilization, but also produce tremendous wealth for the society. Isn’t it time to do this again by Reason, even with no Cold War to use as an excuse? How about helping children of the world grow up to think much better than most adults do today? This would truly create “The Power of the Context”.’

Note how this story runs contrary to how free market think tanks and pundits describe technological development. The impetus for most of this development came from government funding, not markets.

Also note that every attempt since the 1950s to copy ARPA and JASON (the semi-classified group that partly gave ARPA its direction) in the UK has been blocked by Whitehall. The latest attempt was in 2014 when the Cabinet Office swatted aside the idea. Hilariously, its argument was ‘DARPA has had a lot of failures’, thus demonstrating extreme ignorance about the basic idea — the whole point is that you must have failures, and if you don’t have lots of failures then you are failing!

People later claimed that while PARC may have changed the world it never made any money for Xerox. This is ‘absolute bullshit’ (Kay). Xerox made billions from the laser printer alone and overall made 250 times what it invested in PARC before it went bust. In 1983 it fired Bob Taylor, the manager of PARC and the guy who made it all happen.

‘They hated [Taylor] for the very reason that most companies hate people who are doing something different, because it makes middle and upper management extremely uncomfortable. The last thing they want to do is make trillions, they want to make a few millions in a comfortable way’ (Kay).

Someone finally listened to Kay recently. ‘YC Research’, the research arm of the world’s most successful (by far) technology incubator, is starting to fund people in this way. I am not aware of any similar UK projects though I know that a small network of people are thinking again about how something like this could be done here. If you can help them, take a risk and help them! Someone talk to science minister Jo Johnson but be prepared for the Treasury’s usual ignorant bullshit — ‘what are we buying for our money, and how can we put in place appropriate oversight and compliance?’ they will say!

*

As we ponder the future of the UK-EU relationship shaped amid the farce of modern Whitehall, we should think hard about the ARPA/PARC example: how a small group of people can make a huge breakthrough with little money but the right structure, the right ways of thinking, and the right motives.

Those of us outside the political system thinking ‘we know we can do so much better than this but HOW can we break through the bullshit?’ need to change our perspective and gain 80 IQ points.

This real picture is a metaphor for the political culture: ad hoc solutions that are either bad or don’t scale.


ARPA said ‘Let’s get rid of all the wires’. How do we ‘get rid of all the wires’ and build something different that breaks open the closed and failing political cultures? Winning the referendum was just one step that helps clear away dead wood but we now need to build new things.

The ARPA vision that aligned the artists ‘like little iron filings’ was:

‘Computers are destined to become interactive intellectual amplifiers for everyone in the world universally networked worldwide’ (Licklider).

We need a motivating vision aimed not at tomorrow but at changing the basic wiring of the whole system, a vision that can align ‘the little iron filings’, and then start building for the long-term.

I will go into what I think this vision could be and how to do it another day. I think it is possible to create something new that could scale very fast and enable us to do politics and government extremely differently, as different to today as the internet and PC were to the post-war mainframes. This would enable us to build huge long-term value for humanity in a relatively short time (less than 20 years). To create it we need a process as well suited to the goal as the ARPA/PARC project was and incorporating many of its principles.

We must try to escape the current system with its periodic meltdowns and international crises. These crises move 500-1,000 times faster than those of summer 1914. Our destructive potential is at least a million-fold greater than it was in 1914. Yet we have essentially the same hierarchical command-and-control decision-making systems in place now that could not even cope with 1914 technology and pace. We have dodged nuclear wars by fluke because individuals made snap judgements in minutes. Nobody who reads the history of these episodes can think that this is viable long-term, and we will soon have another wave of innovation to worry about with autonomous robots and genetic engineering. Technology gives us no option but to try to overcome evolved instincts like destroying out-group competitors.

Watch Alan Kay explain how to invent the future HERE and HERE.

This link has these seminal papers:

  • Man-Computer Symbiosis, Licklider (1960)
  • The computer as a communications device, Licklider & Taylor (1968)

Part I of this series is HERE.

Part II on the emergence of ‘systems management’, how George Mueller used it to put man on the moon, and a checklist of how successful management of complex projects is systematically different to how Whitehall (and other state bureaucracies) work HERE.


Ps. Kay also points out that the real computer revolution won’t happen until people fulfil the original vision of enabling children to use this powerful way of thinking:

‘The real printing revolution was a qualitative change in thought and argument that lagged the hardware inventions by almost two centuries. The special quality of computers is their ability to rapidly simulate arbitrary descriptions, and the real computer revolution won’t happen until children can learn to read, write, argue and think in this powerful new way. We should all try to make this happen much sooner than 200 or even 20 more years!’

Almost nobody in education policy is aware of the educational context for the ARPA/PARC project which also speaks volumes about the abysmal field of ‘education research/policy’. People rightly say ‘education tech has largely failed’ but very few are aware that many of the original ideas from Licklider, Engelbart et al have never been tried and the Apple and MS versions are not the original vision.

 

Complexity and Prediction Part V: The crisis of mathematical paradoxes, Gödel, Turing and the basis of computing

Before the referendum I started a series of blogs and notes exploring the themes of complexity and prediction. This was part of a project with two main aims: first, to sketch a new approach to education and training in general but particularly for those who go on to make important decisions in political institutions and, second, to suggest a new approach to political priorities in which progress with education and science becomes a central focus for the British state. The two are entangled: progress with each will hopefully encourage progress with the other.

I was working on this paper when I suddenly got sidetracked by the referendum and have just looked at it again for the first time in about two years.

The paper concerns a fascinating episode in the history of ideas that saw the most esoteric and unpractical field, mathematical logic, spawn a revolutionary technology, the modern computer. NB. a lesson for science funders: it is a great mistake to cut funding for theory and assume that you’ll get more bang for your buck from ‘applications’.

Apart from its inherent fascination, knowing something of the history is helpful for anybody interested in the state of the art in predicting complex systems, which involves the intersection of different fields including maths, computer science, economics, cognitive science, and artificial intelligence. The books on it are either technical, and therefore inaccessible to ~100% of the population, or non-chronological, so it is impossible for someone like me to get a clear picture of how the story unfolded.

Further, there are few if any very deep ideas in maths or science that are so misunderstood and abused as Gödel’s results. As Alan Sokal, author of the brilliant hoax exposing post-modernist academics, said, ‘Gödel’s theorem is an inexhaustible source of intellectual abuses.’ I have tried to make clear some of these using the best book available by Franzen, which explains why almost everything you read about it is wrong. If even Stephen Hawking can cock it up, the rest of us should be particularly careful.

I sketched these notes as I tried to pull together the story from many different books. I hope they are useful particularly for some 15-25 year-olds who like chronological accounts about ideas. I tried to put the notes together in the way that I wish I had been able to read at that age. I tried hard to eliminate errors but they are inevitable given how far I am from being competent to write about such things. I wish someone who is competent would do it properly. It would take time I don’t now have to go through and finish it the way I originally intended, so I will just post it as it was 2 years ago when I got calls saying ‘about this referendum…’

The only change I think I have made since May 2015 is to shove in some notes from a great essay later that year by the man who wrote the textbook on quantum computers, Michael Nielsen, which would be useful to read as an introduction or instead, HERE.

As always on this blog there is not a single original thought and any value comes from the time I have spent condensing the work of others to save you the time. Please leave corrections in comments.

The PDF of the paper is HERE (amended since first publication to correct an error, see Comments).

 

‘Gödel’s achievement in modern logic is singular and monumental – indeed it is more than a monument, it is a landmark which will remain visible far in space and time.’ John von Neumann.

‘Einstein had often told me that in the late years of his life he has continually sought Gödel’s company in order to have discussions with him. Once he said to me that his own work no longer meant much, that he came to the Institute merely in order to have the privilege of walking home with Gödel.’ Oskar Morgenstern (co-author with von Neumann of the first major work on Game Theory).

‘The world is rational’, Kurt Gödel.

Unrecognised simplicities of effective action #2: ‘Systems’ thinking — ideas from the Apollo programme for a ‘systems politics’

This is the second in a series: click this link 201702-effective-action-2-systems-engineering-to-systems-politics. The first is HERE.

This paper concerns a very interesting story combining politics, management, institutions, science and technology. When high-technology projects passed a threshold of complexity after 1945, amid the extreme pressure of the early Cold War, new management ideas emerged, known as ‘systems engineering’ and ‘systems management’. They were particularly connected to the classified programme to build the first Intercontinental Ballistic Missiles (ICBMs) in the 1950s; the successful ideas were transplanted into a failing NASA by George Mueller and others from 1963, leading to the moon landing in 1969.

These ideas were then applied in other mission-critical teams and could be used to improve government performance. Urgently needed projects to lower the probability of catastrophes for humanity would benefit from considering why Mueller’s approach was 1) so successful and 2) so uninfluential in politics. Could we develop a ‘systems politics’ that applies the unrecognised simplicities of effective action?

For those interested, it also looks briefly at one element of the story – the role of John von Neumann, the brilliant mathematician who was deeply involved in the Manhattan Project, the project to build ICBMs, and the first digital computers, and in subjects like artificial intelligence, artificial life, the possibility of self-replicating machines made from unreliable components, and the basic problem that technological progress ‘gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we have known them, cannot continue.’

An obvious project with huge inherent advantages for humanity is the development of an international manned lunar base as part of developing space for commerce and science. It is the sort of thing that might change political dynamics on earth and could generate enormous support across international boundaries. After 23 June 2016, the UK has to reorient national policy on many dimensions. Developing basic science is one of the most important (for example, as I have long argued, we urgently need a civilian version of DARPA, similarly operating outside normal government bureaucratic systems, including procurement and HR). Supporting such an international project would be a great focus for UK efforts, and far more productive than our largely wasted decades of focus on the dysfunctional bureaucracy in Brussels, dominated by institutions that fail the most important test – the capacity for error-correction, whose importance has been demonstrated over long periods and through many problems by the Anglo-American political system and its common law.

Please leave comments or email dmc2.cummings at gmail.com