On the referendum #31: Project Maven, procurement, lollapalooza results & nuclear/AGI safety

‘People, ideas, machines — in that order!’ Colonel Boyd

‘[R]ational systems exhibit universal drives towards self-protection, resource acquisition, replication and efficiency. Those drives will lead to anti-social and dangerous behaviour if not explicitly countered. The current computing infrastructure would be very vulnerable to unconstrained systems with these drives.’ Omohundro.

‘For progress there is no cure…’ von Neumann

This blog sketches a few recent developments connecting AI and issues around ‘systems management’ and government procurement.

The biggest problem for governments with new technologies is that the limiting factor on applying them is not the technology itself but management and operational ideas, which are extremely hard to change fast. This has been proved repeatedly: e.g. the tank in the 1920s-30s or the development of ‘precision strike’ in the 1970s. These problems are directly relevant to the application of AI by militaries and intelligence services. The Pentagon’s recent crash program, Project Maven, discussed below, was an attempt to grapple with these issues.

‘The good news is that Project Maven has delivered a game-changing AI capability… The bad news is that Project Maven’s success is clear proof that existing AI technology is ready to revolutionize many national security missions… The project’s success was enabled by its organizational structure.’

This blog sketches some connections between:

  • Project Maven.
  • The example of ‘precision strike’ in the 1970s, Marshal Ogarkov and Andy Marshall, implications for now — ‘anti-access / area denial’ (A2/AD), ‘Air-Sea Battle’ etc.
  • Development of ‘precision strike’ to lethal autonomous cheap drone swarms hunting humans cowering underground.
  • Adding AI to already broken nuclear systems and doctrines, hacking the NSA etc — mix coke, Milla Jovovich and some alpha engineers and you get…?
  • A few thoughts on ‘systems management’ and procurement, lessons from the Manhattan Project etc.
  • The Chinese attitude to ‘systems management’ and Qian Xuesen, combined with AI, mass surveillance, ‘social credit’ etc.
  • A few recent miscellaneous episodes such as an interesting DARPA demo on ‘self-aware’ robots.
  • Charts on Moore’s Law: what scale would a ‘Manhattan Project for AGI’ be?
  • AGI safety — the alignment problem, the dangers of science as a ‘blind search algorithm’, closed vs open security architectures etc.

A theme of this blog since before the referendum campaign has been that thinking about organisational structure/dynamics can bring what Warren Buffett calls ‘lollapalooza’ results. What seems to be very esoteric and disconnected from ‘practical politics’ (studying things like the management of the Manhattan Project and Apollo) turns out to be extraordinarily practical (gives you models for creating super-productive processes).

Part of the reason lollapalooza results are possible is that almost nobody near the apex of power believes the paragraph above is true and they actively fight to stop people learning from extreme successes, so there is gold lying on the ground waiting to be picked up for trivial costs. Nudging reality down an alternative branch of history in summer 2016 only cost ~£10⁶, so the ‘return on investment’ if you think about altered GDP, technology, hundreds of millions of lives over decades and so on was truly lollapalooza. Politics is not like the stock market, where you need to be an extreme outlier like Buffett/Munger to find such inefficiencies and results consistently. The stock market is an exploitable market where being right means you get rich and you help the overall system error-correct, which makes it harder to be right (the mechanism pushes prices close to random; they’re not quite random but few can exploit the non-randomness). Politics/government is not like this. Billionaires who want to influence politics could get better ‘returns on investment’ than from early stage Amazon.
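
To get a feel for the claimed asymmetry, here is a toy calculation. Every number is illustrative, not a measurement: the cost figure is the ~£10⁶ above, and the benefit supposes the outcome is worth just 0.1% of roughly £2 trillion UK GDP for a single year.

```python
# Toy 'return on investment' arithmetic. All numbers are illustrative.
cost = 1e6             # ~£10^6 spent nudging the outcome
effect = 0.001 * 2e12  # 0.1% of ~£2tn UK GDP, for one year only
print(f"return multiple: {effect / cost:,.0f}x")  # -> 2,000x
```

Even on those deliberately conservative assumptions the multiple is ~2,000x, which is the sense in which political ‘returns on investment’ can dwarf anything available in efficient markets.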

This blog is not directly about Brexit at all but if you are thinking — how could we escape this nightmare and turn government institutions from hopeless to high performance, and what should we focus on to replace the vision of ‘influencing the EU’ that has been blown up by Brexit? — it will be of interest. Lessons that have been lying around for over half a century could have pushed the Brexit negotiations in a completely different direction and still could, but they require an extremely different ‘model of effective action’ to the dominant models in Westminster.

*

Project Maven: new organisational approaches for rapid deployment of AI to war / hybrid-war

The quotes below are from a piece in the Bulletin of the Atomic Scientists about a recent AI project by the Pentagon. The most interesting aspect is not the technical details but the management approach and the implications for Pentagon-style bureaucracies.

‘Project Maven is a crash Defense Department program that was designed to deliver AI technologies to an active combat theater within six months from when the project received funding… Technologies developed through Project Maven have already been successfully deployed in the fight against ISIS. Despite their rapid development and deployment, these technologies are getting strong praise from their military intelligence users. For the US national security community, Project Maven’s frankly incredible success foreshadows enormous opportunities ahead — as well as enormous organizational, ethical, and strategic challenges.

‘In late April, Robert Work — then the deputy secretary of the Defense Department — wrote a memo establishing the Algorithmic Warfare Cross-Functional Team, also known as Project Maven. The team had only six members to start with, but its small size belied the significance of its charter… Project Maven is the first time the Defense Department has sought to deploy deep learning and neural networks, at the level of state-of-the-art commercial AI, in department operations in a combat theater…

‘Every day, US spy planes and satellites collect more raw data than the Defense Department could analyze even if its whole workforce spent their entire lives on it. As its AI beachhead, the department chose Project Maven, which focuses on analysis of full-motion video data from tactical aerial drone platforms… These drone platforms and their full-motion video sensors play a major role in the conflict against ISIS across the globe. The tactical and medium-altitude video sensors of the Scan Eagle, MQ-1C, and MQ-9 produce imagery that more or less resembles what you see on Google Earth. A single drone with these sensors produces many terabytes of data every day. Before AI was incorporated into analysis of this data, it took a team of analysts working 24 hours a day to exploit only a fraction of one drone’s sensor data.

‘The Defense Department spent tens of billions of dollars developing and fielding these sensors and platforms, and the capabilities they offer are remarkable. Whenever a roadside bomb detonates in Iraq, the analysts can simply rewind the video feed to watch who planted it there, when they planted it, where they came from, and where they went. Unfortunately, most of the imagery analysis involves tedious work—people look at screens to count cars, individuals, or activities, and then type their counts into a PowerPoint presentation or Excel spreadsheet. Worse, most of the sensor data just disappears — it’s never looked at — even though the department has been hiring analysts as fast as it can for years… Plenty of higher-value analysis work will be available for these service members and contractors once low-level counting activity is fully automated.

‘The six founding members of Project Maven, though they were assigned to run an AI project, were not experts in AI or even computer science. Rather, their first task was building partnerships, both with AI experts in industry and academia and with the Defense Department’s communities of drone sensor analysts… AI experts and organizations who are interested in helping the US national security mission often find that the department’s contracting procedures are so slow, costly, and painful that they just don’t want to bother. Project Maven’s team — with the help of Defense Innovation Unit Experimental, an organization set up to accelerate the department’s adoption of commercial technologies — managed to attract the support of some of the top talent in the AI field (the vast majority of which lies outside the traditional defense contracting base). Figuring out how to effectively engage the tech sector on a project basis is itself a remarkable achievement…

‘Before Maven, nobody in the department had a clue how to properly buy, field, and implement AI. A traditional defense acquisition process lasts multiple years, with separate organizations defining the functions that acquisitions must perform, or handling technology development, production, or operational deployment. Each of these organizations must complete its activities before results are handed off to the next organization. When it comes to digital technologies, this approach often results in systems that perform poorly and are obsolete even before they are fielded.

‘Project Maven has taken a different approach, one modeled after project management techniques in the commercial tech sector: Product prototypes and underlying infrastructure are developed iteratively, and tested by the user community on an ongoing basis. Developers can tailor their solutions to end-user needs, and end users can prepare their organizations to make rapid and effective use of AI capabilities. Key activities in AI system development — labeling data, developing AI-computational infrastructure, developing and integrating neural net algorithms, and receiving user feedback — are all run iteratively and in parallel…

‘In Maven’s case, humans had to individually label more than 150,000 images in order to establish the first training data sets; the group hopes to have 1 million images in the training data set by the end of January. Such large training data sets are needed for ensuring robust performance across the huge diversity of possible operating conditions, including different altitudes, density of tracked objects, image resolution, view angles, and so on. Throughout the Defense Department, every AI successor to Project Maven will need a strategy for acquiring and labeling a large training data set…

‘From their users, Maven’s developers found out quickly when they were headed down the wrong track — and could correct course. Only this approach could have provided a high-quality, field-ready capability in the six months between the start of the project’s funding and the operational use of its output. In early December, just over six months from the start of the project, Maven’s first algorithms were fielded to defense intelligence analysts to support real drone missions in the fight against ISIS.

‘The good news is that Project Maven has delivered a game-changing AI capability… The bad news is that Project Maven’s success is clear proof that existing AI technology is ready to revolutionize many national security missions…

‘The project’s success was enabled by its organizational structure: a small, operationally focused, cross-functional team that was empowered to develop external partnerships, leverage existing infrastructure and platforms, and engage with user communities iteratively during development. AI needs to be woven throughout the fabric of the Defense Department, and many existing department institutions will have to adopt project management structures similar to Maven’s if they are to run effective AI acquisition programs. Moreover, the department must develop concepts of operations to effectively use AI capabilities — and train its military officers and warfighters in effective use of these capabilities…

‘Already the satellite imagery analysis community is working on its own version of Project Maven. Next up will be migrating drone imagery analysis beyond the campaign to defeat ISIS and into other segments of the Defense Department that use drone imagery platforms. After that, Project Maven copycats will likely be established for other types of sensor platforms and intelligence data, including analysis of radar, signals intelligence, and even digital document analysis… In October 2016, Michael Rogers (head of both the agency and US Cyber Command) said “Artificial Intelligence and machine learning … [are] foundational to the future of cybersecurity. … It is not the if, it’s only the when to me.”

‘The US national security community is right to pursue greater utilization of AI capabilities. The global security landscape — in which both Russia and China are racing to adapt AI for espionage and warfare — essentially demands this. Both Robert Work and former Google CEO Eric Schmidt have said that leadership in AI technology is critical to the future of economic and military power and that continued US leadership is far from guaranteed. Still, the Defense Department must explore this new technological landscape with a clear understanding of the risks involved…

‘The stakes are relatively low when AI is merely counting the number of cars filmed by a drone camera, but drone surveillance data can also be used to determine whether an individual is directly engaging in hostilities and is thereby potentially subject to direct attack. As AI systems become more capable and are deployed across more applications, they will engender ever more difficult ethical and legal dilemmas.

‘US military and intelligence agencies will have to develop effective technological and organizational safeguards to ensure that Washington’s military use of AI is consistent with national values. They will have to do so in a way that retains the trust of elected officials, the American people, and Washington’s allies. The arms-race aspect of artificial intelligence certainly doesn’t make this task any easier…

‘The Defense Department must develop and field AI systems that are reliably safe when the stakes are life and death — and when adversaries are constantly seeking to find or create vulnerabilities in these systems.

‘Moreover, the department must develop a national security strategy that focuses on establishing US advantages even though, in the current global security environment, the ability to implement advanced AI algorithms diffuses quickly. When the department and its contractors developed stealth and precision-guided weapons technology in the 1970s, they laid the foundation for a monopoly, nearly four decades long, on technologies that essentially guaranteed victory in any non-nuclear war. By contrast, today’s best AI tech comes from commercial and academic communities that make much of their research freely available online. In any event, these communities are far removed from the Defense Department’s traditional technology circles. For now at least, the best AI research is still emerging from the United States and allied countries, but China’s national AI strategy, released in July, poses a credible challenge to US technology leadership.’

Full article here: https://thebulletin.org/project-maven-brings-ai-fight-against-isis11374
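
To make the ‘low-level counting activity’ concrete, here is a minimal sketch of that kind of pipeline. Nothing about Maven’s actual code is public: `detect_objects` is a stand-in for whatever trained neural-network detector the analysts’ tools wrap, and every name here is hypothetical.

```python
# Sketch of the counting task Maven automates: run a detector over
# each video frame and tally what it sees. detect_objects() is a
# placeholder for a real trained detector; it returns canned labels
# so this sketch runs as-is.
import csv
from collections import Counter

def detect_objects(frame):
    """Placeholder for a trained detector; returns class labels per frame."""
    return ["car", "car", "person"]  # canned output for illustration

def count_video(frames):
    totals = Counter()
    for frame in frames:
        totals.update(detect_objects(frame))
    return totals

if __name__ == "__main__":
    fake_video = range(1000)  # stand-in for decoded drone video frames
    counts = count_video(fake_video)
    with open("counts.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["class", "count"])
        for label, n in sorted(counts.items()):
            writer.writerow([label, n])
```

The point of the sketch is the shape of the work: once a detector exists, the tedious ‘count cars and type them into PowerPoint’ step collapses into a loop, which is why the analysts’ time shifts to higher-value analysis.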

Project Maven shows recurring lessons from history. Speed and adaptability are crucial to success in conflict and can be helped by new technologies. So is the capacity for new operational ideas about using new technologies. These ideas depend on unusual people. Bureaucracies naturally slow things down (for some good but mostly bad reasons), crush new ideas, and exclude unusual people in order to defend established interests. The limiting factor for the Pentagon in deploying advanced technology to conflict in a useful time period was not new technical ideas — overcoming its own bureaucracy was harder than overcoming enemy action. This is absolutely normal in conflict (e.g. it was true of the 2016 referendum, where dealing with internal problems was at least an order of magnitude harder and more costly than dealing with Cameron).

As Colonel Boyd used to shout to military audiences, ‘People, ideas, machines — in that order!’

*

DARPA, ‘precision strike’, the ‘Revolution in Military Affairs’ and bureaucracies

The Project Maven experience is similar to the famous example of the tank. Everybody could see tanks were possible from the end of World War I but for over 20 years Britain and France were hampered by their own bureaucracies in thinking about the operational implications and how to use them most effectively. Some in Britain and France did point out the possibilities but the possibilities were not absorbed into official planning. Powerful bureaucratic interests reinforced the normal sort of blindness to new possibilities. Innovative thinking flourished, relatively, in Germany where people like Guderian and von Manstein could see the possibilities for a very big increase in speed turning into a huge nonlinear advantage — possibilities applied in the ‘von Manstein plan’ that shocked the world in 1940. This was partly because the destruction of German forces after 1918 meant everything had to be built from scratch, and this connects to another lesson about successful innovation: in the military, as in business, it is more likely if a new entity is given the job, as with the Manhattan Project to develop nuclear weapons. The consequences were devastating for the world in 1940 but, luckily for us, the nature of the Nazi regime meant that it made very similar errors itself, e.g. regarding the importance of air power in general and long-range bombers in particular. (This history is obviously very complex but this crude summary is roughly right about the main point.)

There was a similar story with the technological developments mainly sparked by DARPA in the 1970s including stealth (developed in a classified program by the legendary ‘Skunk Works’, tested at ‘Area 51’), the global positioning system (GPS), ‘precision strike’ long-range conventional weapons, drones, advanced wide-area sensors, computerised command and control (C2), and new intelligence, surveillance and reconnaissance (ISR) capabilities. The hope was that together these capabilities could automate the location and destruction of long-range targets and simultaneously bring great improvements in the precision, destructiveness, and speed of operations.

The approach became known in America as ‘deep-strike architectures’ (DSA) and in the Soviet Union as ‘reconnaissance-strike complexes’ (RUK). The Soviet Marshal Ogarkov realised that these developments, based on America’s superior ability to develop micro-electronics and computers, constituted what he called a ‘Military-Technical Revolution’ (MTR) and posed an existential threat to the Soviet Union. He wrote about them from the late 1970s. (The KGB successfully stole much of the technology but the Soviet system still could not compete.) His writings were analysed in America, particularly by Andy Marshall and others at the Pentagon’s Office of Net Assessment (ONA). ONA’s analyses of what they started calling the Revolution in Military Affairs (RMA) in turn affected Pentagon decisions. In 1991 the Gulf War demonstrated some of these technologies just as the Soviet Union was imploding. In 1992 the ONA wrote a very influential report (The Military-Technical Revolution) which, unusually, they made public (almost all ONA documents remain classified).

The ~1978 Assault Breaker concept

Soviet depiction of Assault Breaker (Sergeyev, ‘Reconnaissance-Strike Complexes,’ Red Star, 1985)

In many ways Marshal Ogarkov thought more deeply about how to develop the Pentagon’s own technologies than the Pentagon did, which was hampered by the normal problem that operationalising new ideas threatened established bureaucratic interests, including the Pentagon’s procurement system. These problems have continued. It is hard to overstate the scale of waste and corruption in the Pentagon’s horrific procurement system (see below).

China has studied this episode intensely and has integrated its lessons into the ‘anti-access / area denial’ (A2/AD) efforts intended to limit American power projection in East Asia. America’s response to A2/AD is the ‘Air-Sea Battle’ concept. As Marshal Ogarkov predicted in the 1970s, the ‘revolution’ has evolved into opposing ‘reconnaissance-strike complexes’ facing each other, with each side striving to deploy near-nuclear force using extremely precise conventional weapons from far away, all increasingly complicated by possibilities for cyberwar to destroy the infrastructure on which all this depends and information operations to alter the enemy population’s perception (very Sun Tzu!).

Graphic: Operational risks of conventional US approach vs A2/AD (CSBA, 2016)

The penetration of the CIA by the KGB, the failure of the CIA to provide good predictions, the general American failure to understand the Soviet economy, doctrine and so on despite many billions spent over decades, the attempts by the Office of Net Assessment to correct institutional failings, the bureaucratic rivalries and so on — all this is a fascinating subject and one can see why China studies it so closely.

*

From experimental drones in the 1970s to drone swarms deployed via iPhone 

The next step for reconnaissance-strike is the application of advanced robotics and artificial intelligence which could bring further order(s) of magnitude performance improvements, cost reductions, and increases in tempo. This is central to the US-China military contest. It will also affect everyone else as much of the technology becomes available to Third World states and small terrorist groups.

I wrote in 2004 about the farce of the UK aircraft carrier procurement story (and many others have warned similarly). Regardless of elections, the farce has continued to squander billions of pounds, enriching some of the worst corporate looters and corrupting public life via the revolving door of officials/lobbyists. Scrutiny by our MPs has been contemptible. They have built platforms that already cannot be sent to a serious war against a serious enemy. A teenager will be able to deploy a drone from their smartphone to sink one of these multi-billion pound platforms. Such a teenager could already take out the stage of a Downing Street photo op with a little imagination and initiative, as I wrote about years ago.

The drone industry is no longer dependent on its DARPA roots and is no longer tied to the economics of the Pentagon’s research budgets and procurement timetables. It is driven by the economics of the extremely rapidly developing smartphone market including Moore’s Law, plummeting costs for sensors and so on. Further, there are great advantages of autonomy including avoiding jamming counter-measures. Kalashnikov has just unveiled its drone version of the AK-47: a cheap anonymous suicide drone that flies to the target and blows itself up — it’s so cheap you don’t care. So you have a combination of exponentially increasing capabilities, exponentially falling costs, greater reliability, greater lethality, greater autonomy, and anonymity (if you’re careful and buy them through cut-outs etc). Then with a bit of added sophistication you add AI face recognition etc. Then you add an increasing capacity to organise many of these units at scale in a swarm, all running off your iPhone — and consider how effective swarming tactics were for people like Alexander the Great.

This is why one of the world’s leading AI researchers, Stuart Russell (professor of computer science at Berkeley), has made this warning:

‘The capabilities of autonomous weapons will be limited more by the laws of physics — for example, by constraints on range, speed and payload — than by any deficiencies in the AI systems that control them. For instance, as flying robots become smaller, their manoeuvrability increases and their ability to be targeted decreases… Despite the limits imposed by physics, one can expect platforms deployed in the millions, the agility and lethality of which will leave humans utterly defenceless…

‘A very, very small quadcopter, one inch in diameter can carry a one- or two-gram shaped charge. You can order them from a drone manufacturer in China. You can program the code to say: “Here are thousands of photographs of the kinds of things I want to target.” A one-gram shaped charge can punch a hole in nine millimeters of steel, so presumably you can also punch a hole in someone’s head. You can fit about three million of those in a semi-tractor-trailer. You can drive up I-95 with three trucks and have 10 million weapons attacking New York City. They don’t have to be very effective, only 5 or 10% of them have to find the target.

‘There will be manufacturers producing millions of these weapons that people will be able to buy just like you can buy guns now, except millions of guns don’t matter unless you have a million soldiers. You need only three guys to write the program and launch them. So you can just imagine that in many parts of the world humans will be hunted. They will be cowering underground in shelters and devising techniques so that they don’t get detected. This is the ever-present cloud of lethal autonomous weapons… There are really no technological breakthroughs that are required. Every one of the component technologies is available in some form commercially… It’s really a matter of just how much resources are invested in it.’

There is some talk in London of ‘what if there is an AI arms race’ but there is already an AI/automation arms race between companies and between countries — it’s just that Europe is barely relevant to the cutting edge of it. Europe wants to be a world player but it has totally failed to generate anything approaching what is happening in coastal America and China. Brussels spends its time on posturing, publishing documents about ‘AI and trust’, whining, spreading fake news about fake news (while ignoring experts like Duncan Watts), trying to damage Silicon Valley companies rather than considering how to nourish European entities with real capabilities, and imposing bad regulation like GDPR (which ironically was intended to harm Google/Facebook but actually helped them in some ways because Brussels doesn’t understand them).

Britain had a valuable asset, DeepMind, and let Google buy it for trivial money without the powers-that-be in Whitehall understanding its significance — it is still relevant but it is not under British control. Britain has other valuable assets — for example, it is a potential strategic asset to have the AI centre, financial centre, and political centre all in London, IF politicians cared and wanted to nourish AI research and companies. Very obviously, right now we have an MP/official class that is unfit to do this even if they had the vaguest idea what to do, which almost none do (there is a flash of hope on genomics/AI).

Unlike the Soviet Union during the Cold War, which could not compete in critical industries such as semiconductors and consumer electronics, China can compete, is competing, and in some areas is already ahead.

The automation arms race is already hitting all sorts of low-skilled jobs from baristas to factory cleaning, some of which will be largely eliminated much more quickly than economists and politicians expect. Many agricultural jobs are being rapidly eliminated, as are jobs in fields like mining and drilling. Look at a modern mine and you will see driverless trucks on the ground and drones overhead. The implications for the millions who make a living from driving are now well known. (This also has obvious implications for the wisdom of allowing in millions of unskilled immigrants, and one of the oddities of Silicon Valley is that people there simultaneously argue a) politicians are clueless about the impact of automation on unskilled people and b) politicians should allow millions more unskilled immigrants into the country — an example of how technical people are not always as rational about politics as they think they are.)

This automation arms race will affect different countries at different speeds depending on their exposure to fields that are ripe for disruption sooner or later. If countries cannot tax those companies that lead in AI, they will have narrower options. They may even be forced into a sort of colony status. Those who think this is an exaggeration should look at China’s recent deals in Africa where countries are handing over vast amounts of data to China on extremely unfavourable terms. Huge server farms in China are processing facial recognition data on millions of Africans who have no idea their personal data has been handed over. The western media focuses on Facebook with almost no coverage of these issues.

In the extreme case, a significant lead in AI for country X could lead to a self-reinforcing cycle in which it increasingly dominates economically, scientifically, and militarily and perhaps cannot be caught — as Ian Hogarth has argued and as Putin recently alluded to.

China’s investment in AI — more data = better product = more users = more revenue = better talent + more data in a beautiful flywheel…

China has roughly three times as many internet users as America, and the gap in internet and mobile usage is much larger. ‘In China, people use their mobile phones to pay for goods 50 times more often than Americans. Food delivery volume in China is 10 times more than that of the United States. And shared bicycle usage is 300 times that of the US. This proliferation of data — with more people generating far more information than any other country — is the fuel for improving China’s AI’ (report).
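
A toy simulation of that flywheel, with invented parameters (it illustrates the compounding dynamic Hogarth describes; it is not a model of either economy):

```python
# Toy data flywheel: users generate data, data improves the product,
# a better product attracts more users. All parameters are invented.
def flywheel(data_per_user, steps=20):
    users, data = 1.0, 0.0
    for _ in range(steps):
        data += users * data_per_user   # users generate data
        quality = 0.1 * data ** 0.5     # diminishing returns on raw data
        users *= 1 + 0.01 * quality     # a better product attracts users
    return users

# Identical players except one's users generate 10x the data
# (cf. the payments/delivery/bike-sharing usage gaps quoted above):
rich, poor = flywheel(10.0), flywheel(1.0)
print(f"user ratio after 20 steps: {rich / poor:.2f}x")
```

The gap is modest after 20 steps but it only grows, and every extra user makes the next step worse for the laggard, which is the point of the flywheel metaphor.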

China’s AI policy priority is clear. The ‘Next Generation Artificial Intelligence Development Plan’ announced in July 2017 states that China should catch America by 2020 and be the global leader by 2030. Xi Jinping emphasises this repeatedly.

*

Some implications for entangling AI with WMD — take a Milla Jovovich lookalike then add some alpha engineers…

It is important to consider nuclear safety when thinking about AI safety.

The missile silos for US nuclear weapons have repeatedly been shown to be terrifyingly insecure. Sometimes incidents are just bog-standard unchecked incompetence: e.g. nuclear weapons accidentally loaded onto a plane which is then left unattended on an insecure airfield. Coke, great unconventional hookers and a bit of imagination get you into nuclear facilities, just as they get you into pretty much anywhere.

Cyber security is also awful. For example, in a major 2013 study the Pentagon’s Defense Science Board concluded that the military’s systems were vulnerable to cyberattacks, that the government was ‘not prepared to defend against this threat’, and that a successful cyberattack could cause military commanders to lose ‘trust in the information and ability to control U.S. systems and forces [including nuclear]’ (cf. this report). Since then, the NSA itself has had its deepest secrets hacked by an unidentified actor (possibly/probably AI-enabled) in a breach much more serious but infinitely less famous than Snowden’s (and one resembling a chapter in the best recent techno-thriller, Daemon).

This matches research just published in the Bulletin of the Atomic Scientists on the most secure (Level 3/enhanced and Level 4) bio-labs. It is now clear that laboratories conducting research on viruses that could cause a global pandemic are extremely dangerous. I am not aware of any mainstream media in Britain reporting this (story here).

Further, the systems for coping with nuclear crises have failed repeatedly. They are extremely vulnerable to false alarms, malicious attacks, or even freak events: famously, a bear (yes, a bear) triggered a false alarm. We have repeatedly escaped accidental nuclear war because of flukes such as odd individuals not passing on ‘launch’ warnings or simply refusing to act. The US National Security Adviser has sat at the end of his bed looking at his sleeping wife, ‘knowing’ she won’t wake up, while pondering his advice to the President on a counterattack that will destroy half the world, only to be told minutes later that the launch warning was the product of a catastrophic error. These problems have not been dealt with. We don’t know how bad this problem is: many details are classified and many incidents are totally unreported.

Further, the end of the Cold War gave many politicians and policy people in the West the completely false idea that established ideas about deterrence had been vindicated; they have not been (cf. Payne’s Fallacies of Cold War deterrence and The Great American Gamble). Senior decision-makers are confident that their very dangerous ideas are ‘rational’.

US and Russian nukes remain on ‘launch on warning’ — i.e. a hair trigger — so the vulnerabilities could recur at any time. Threats to use them are explicitly contemplated over crises such as Taiwan and Kashmir. Nuclear weapons have proliferated and are very likely to proliferate further. There are now thousands of people, including North Korean and Pakistani scientists, who understand the technology. And there is a large network of scientists involved in the classified Soviet bio-weapon programme that was largely unknown to western intelligence services before the end of the Cold War and has since dispersed across the world.

These are all dangers already known to experts. But now we are throwing the development of AI/ML capabilities at these wobbling systems and this flawed/overconfident thinking. This will exacerbate all these problems and make crises even faster, more confusing and more dangerous.

Yes, you’re right to ask ‘why don’t I read about this stuff in the mainstream media?’. There is very little media coverage of reports on things like nuclear safety and pretty much nobody with real power pays any attention to all this. If those at the apex of power don’t take nuclear safety seriously, why would you think they are on top of anything? Markets and science have done wondrous things but they cannot by themselves fix such crazy incentive problems with government institutions.

*

Government procurement — ‘the horror, the horror’

The problem of ‘rational procurement’ is incredibly hard to solve and even during existential conflicts problems with incentives recur. If fear of what opponents might be doing pushes state agencies to create organisations that escape most normal bureaucratic constraints, then AI will escalate in importance to the military and intelligence services even more rapidly than it already is. It is possible that China will build organisations to deploy AI to war/pseudo-war/hybrid-war faster and better than America.

In January 2017 I wrote about systems engineering and systems management — an approach for delivering extremely complex and technically challenging projects. (It was already clear the Brexit negotiations were botched, that Heywood, Hammond et al had effectively destroyed any sort of serious negotiating position, and I suggested Westminster/Whitehall had to learn from successful management of complex projects to avert what would otherwise be a debacle.) These ideas were born with the Manhattan Project to build the first nuclear bomb, the ICBM project in the 1950s, and the Apollo program in the 1960s which put man on the moon. These projects combined a) some of the most astonishing intellects the world has seen, of which a subset were also brilliant at navigating government (e.g. von Neumann), and b) phenomenally successful practical managers: e.g. General Groves on the Manhattan Project, Bernard Schriever on ICBMs and George Mueller on Apollo.

The story we are told about the Manhattan Project focuses almost exclusively on the extraordinary collection of physicists and mathematicians at Los Alamos, but they were a relatively small part of the whole story, which involved an engineer (Groves) building an unprecedented operation at multiple sites across America in secret and with extraordinary speed while many doubted the project was possible — then coordinating multiple projects, integrating distributed expertise and delivering a functioning bomb.

If you read Groves’ fascinating book, Now It Can Be Told, and a recent biography of him, in many important ways you will acquire what is effectively cutting-edge knowledge today about making huge endeavours work — ‘cutting-edge’ because almost nobody has learned from this (see below). If you are one of the many MPs aspiring to be not just Prime Minister but a Prime Minister who gets important things done, there are very few books that would repay careful study as much as Groves’. If you do then you could avoid joining the list of Major, Blair, Brown, Cameron and May who bungle around for a few years before being spat out to write very similar accounts about how they struggled to ‘find the levers of power’, couldn’t get officials to do what they wanted, and never understood how to get things done.

Systems management is generally relevant to the question: how best to manage very big complex projects? It was relevant to the referendum (Victoria Woodcock was Vote Leave’s George Mueller). It is relevant to the Brexit negotiations and the appalling management process between May/Hammond/Heywood/Robbins et al, which has been a case study in how not to manage a complex project (Parliament also deserves much blame for never scrutinising this process). It is relevant to China’s internal development and the US-China geopolitical struggle. It is relevant to questions like ‘how to avoid nuclear war’ and ‘how would you build a Manhattan Project for safe AGI?’. It is relevant to how you could develop a high performance team in Downing Street that could end the current farce. The same issues and lessons crop up in every account of a Presidency and the role of the Chief of Staff. If you want to change Whitehall from 1) ‘failure is normal’ to 2) ‘align incentives with predictive accuracy, operational excellence and high performance’, then systems management provides an extremely valuable anti-checklist for Whitehall.

Given that vital principles, proved to deliver things much faster and more effectively than usual, were established more than half a century ago, it would be natural to assume that these lessons became integrated into training and practice both in the worlds of management and politics/government. This did not happen. In fact, these lessons have been ‘unlearned’.

General Groves was pushed out of the Pentagon (‘too difficult’). The ICBM project, conducted in extreme panic post-Sputnik, had to re-create an organisation outside the Pentagon and re-learn Groves’ lessons a decade later. NASA was a mess until Mueller took over and imported the lessons from Manhattan and ICBMs. After Apollo’s success in 1969, Mueller left and NASA reverted to being a ‘normal’ organisation and forgot his successful approach. (The plans Mueller left for developing a manned lunar base, space commercialisation, and man on Mars by the end of the 1980s were also tragically abandoned.)

While Mueller was putting man on the moon, McNamara’s ‘Whiz Kids’ in the Pentagon, who took America into the Vietnam War, were dismantling the successful approach to systems management, claiming that it was ‘wasteful’ and they could do it ‘more efficiently’. Their approach was a disaster, and not just regarding Vietnam. The combination of certain definitions of ‘efficiency’ and new legal processes ensured that procurement was routinely over-budget, over-schedule, over-promising, and generated more and more scandals. Regardless of failure, the McNamara approach metastasised across the Pentagon. Incentives are so disastrously misaligned that almost every attempt at reform makes these problems worse while lawyers and lobbyists get richer. Of course, if lawmakers knew how the Manhattan Project and Apollo were done — the lack of ‘legal process’, things happening with a mere handshake instead of years of reviews enriching lawyers! — they would be stunned.

Successes since the 1960s have often been freaks (e.g. the F-16, Boyd’s brainchild) or ‘black’ projects (e.g. stealth), often conducted in Skunk Works-style operations outside normal laws. It is striking that US classified special forces, JSOC (equivalent to SAS/SBS etc), routinely use a special process to procure technologies outside the normal law to avoid the delays. This connects to George Mueller saying late in life that Apollo would be impossible with the current legal/procurement system and could only be done as a ‘black’ program.

The lessons of success have been so widely ‘unlearned’ throughout the government system that when Obama tried to roll out ObamaCare, it blew up. When they investigated, the answer was: we didn’t use systems management so the parts didn’t connect and we never tested this properly. Remember: Obama had the support of the vast majority of Silicon Valley expertise but this did not avert disaster. All anyone had to do was read Groves’ book and call Sam Altman or Patrick Collison and they could have provided the expertise to do it properly but none of Obama’s staff or responsible officials did.

The UK is the same. MPs constantly repeat the absurd SW1 mantra that ‘there’s no money’ while handing out a quarter of a TRILLION pounds every year on procurement and contracting. I engaged with this many times in the Department for Education 2010-14. The Whitehall procurement system is embedded in the dominant framework of EU law (the EU law is bad but UK officials have made it worse). It is complex, slow and wasteful. It hugely favours large established companies with powerful political connections — true corporate looters. The likes of Carillion and lawyers love it because they gain from the complexity, delays, and waste. It is horrific for SMEs to navigate and few can afford even to try to participate. The officials in charge of multi-billion processes are mostly mediocre, often appalling. In the MoD corruption adds to the problems.

Because of mangled incentives and reinforcing culture, the senior civil service does not care about this and does not try to improve. Total failure is totally irrelevant to the senior civil service and is absolutely no reason to change behaviour even if it means thousands of people killed and many billions wasted. Occasionally incidents like Carillion blow up and the same stories are written and the same quotes given — ‘unbelievable’, ‘scandal’, ‘incompetence’, ‘heads will roll’. Nothing changes. The closed and dysfunctional Whitehall system fights to stay closed and dysfunctional. The media caravan soon rolls on. ‘Reform’ in response to botches and scandals almost inevitably makes things even slower and more expensive — even more focus on process rather than outcomes, with the real focus being ‘we can claim to have acted properly because of our Potemkin process’. Nobody is incentivised to care about high performance and error-correction. The MPs ignore it all. Select Committees issue press releases about ‘incompetence’ but never expose the likes of Heywood to persistent investigation to figure out what has really happened and why. Nobody cares.

This culture has been encouraged by the most senior leaders. The recent Cabinet Secretary Jeremy Heywood assured us all that the civil service would handle Brexit fine and that ‘definitely on digital, project management we’ve got nothing to learn from the private sector’. His predecessor, O’Donnell, made similar asinine comments. The fact that Heywood could make such a laughable claim after years of presiding over expensive debacle after expensive debacle, and be universally praised by Insiders, tells you all you need to know about ‘the blind leading the blind’ in Westminster. Heywood was a brilliant courtier-fixer but he didn’t care about management and operational excellence. Whitehall now incentivises the promotion of courtier-fixers, not great managers like Groves and Mueller. Management, like science, is regarded contemptuously as something for the lower orders to think about, not the ‘strategists’ at the top.

Long-term leadership from the likes of O’Donnell and Heywood is why officials know that practically nobody is ever held accountable regardless of the scale of failure. Being in charge of massive screwups is no barrier to promotion. Operational excellence is no requirement for promotion. You will often see the official in charge of some debacle walking to the tube at 4pm (‘compressed hours’ old boy) while the debacle is live on TV (I know because I saw this regularly in the DfE). The senior civil service now operates like a protected caste to preserve its power and privileges regardless of who the ignorant plebs vote for.

You can see how crazy the incentives are when you consider elections. If you look back at recent British elections, the difference in the spending plans between the two sides has been a tiny fraction of the £250 billion p/a procurement and contracting budget — yet nobody ever really talks about this budget; it is the great unmentionable subject in Westminster! There’s the odd slogan about ‘let’s cut waste’ but the public rightly ignores this and assumes both sides will do nothing about it, out of a mix of ignorance, incompetence and flawed incentives, so big powerful companies continue to loot the taxpayer. Look at both parties now just letting the HS2 debacle grow and grow, with the budget out of control, the schedule out of control, officials briefing ludicrously that the ‘high speed’ rail will be SLOWED DOWN to reduce costs and so on, all while an army of privileged looters, lobbyists, and lawyers hoover up taxpayer cash.

And now, when Brexit means the entire legal basis for procurement is changing, do these MPs, ministers and officials finally examine it and see how they could improve? No, of course not! The top priority for Heywood et al regarding Brexit and procurement has been to get hapless ministers to lock Britain into the same nightmare system even after we leave the EU — nothing must disrupt the gravy train! There’s been a lot of talk about £350 million per week for the NHS since the referendum. I could find this in days and in ways that would have strong public support. But nobody is even trying to do this, and if some minister took a serious interest they would soon find all sorts of things going wrong for them until the PermSec has a quiet word and the natural order is restored…

To put the failures of politicians and officials in context, it is fascinating that most of the commercial world also ignores the crucial lessons from Groves et al! Most commercial megaprojects are over-schedule, over-budget, and over-promised. The data shows that there has been little improvement over decades. (Cf. What You Should Know About Megaprojects, and Why, Flyvbjerg.) And look at this 2019 article in Harvard Business Review which, remarkably, argues that managers in modern MBA programmes are taught NOT TO VALUE OPERATIONAL EXCELLENCE! ‘Operational effectiveness — doing the same thing as other companies but doing it exceptionally well — is not a path to sustainable advantage in the competitive universe’, elite managers are taught. The authors looked at company data and concluded that, shock horror, operational excellence turns out to be vital after all! They conclude:

‘[T]he management community may have badly underestimated the benefits of core management practices [and] it’s unwise to teach future leaders that strategic decision making and basic management processes are unrelated.’ [!]

The study of management, like politics, is not a field with genuine expertise. Like other social sciences there is widespread ‘cargo cult science’, with fads and charlatans drowning out core lessons. This makes it easier to understand the failure of politicians: when elite business schools now teach students NOT to value operational excellence, and when supposed management gurus like McNamara actually push things in a worse direction, it is less surprising that people like Cameron and Heywood don’t know which way to turn. Imagine the normal politician or senior official in Washington or London. They have almost no exposure to genuinely brilliant managers or very well run organisations. Their exposure is overwhelmingly to ‘normal’ CEOs of public companies and normal bureaucracies. As the most successful investors in world history, Buffett and Munger, have pointed out for over 50 years, many of these corporate CEOs, the supposedly ‘serious people’, don’t know what they are doing and have terrible incentives.

But surely if someone recently created something unarguably massively world-changing, like inventing the internet and personal computing, then everyone would pay attention, right? WRONG! I wrote this (2018) about the extraordinary ARPA-PARC episode, which created much of the ecosystem for interactive personal computing and the internet and provided a model for how to conduct high-risk-high-payoff technology research.

There is almost no research funded on ARPA-PARC principles anywhere in the world. ARPA was deliberately made less like the organisation it was when it created the internet. The man most responsible for PARC’s success, Robert Taylor, was fired and the most effective team in the history of computing research was disbanded. Xerox notoriously could not overcome its internal incentive problems and let Steve Jobs and Bill Gates develop the ideas. Although politicians love giving speeches about ‘innovation’ and launching projects for PR, governments subsequently almost completely ignored the lessons of how to create superproductive processes and there are almost zero examples of the ARPA-PARC approach in the world today (an interesting partial exception is Janelia). Whitehall, as a subset of its general vandalism towards science, has successfully resisted all attempts at learning from ARPA for decades, helped by the attitude of leading scientists themselves, whose incentives push them toward supporting objectively bad funding models. In science as well as politics, incentives can be destructive and stop learning. As Alan Kay, one of the crucial PARC researchers, wrote:

‘The most interesting thing has been the contrast between appreciation/exploitation of the inventions/contributions versus the almost complete lack of curiosity and interest in the processes that produced them… [I]n most processes today — and sadly in most important areas of technology research — the administrators seem to prefer to be completely in control of mediocre processes to being “out of control” with superproductive processes. They are trying to “avoid failure” rather than trying to “capture the heavens”.’

Or as George Mueller said later in life about the institutional imperative and project failures:

‘Fascinating that the same problems recur time after time, in almost every program, and that the management of the program, whether it happened to be government or industry, continues to avoid reality.’

So, on the one hand, radical improvements in non-military spheres would be a wonderful free lunch. We simply apply old lessons, scale them up with technology, and there are massive savings for free.

But wouldn’t it be ironic if we don’t do this — if instead we keep our dysfunctional systems for non-military spheres and carry on with the waste, failure and corruption, but channel the Cold War and, in the atmosphere of an arms race, America and China apply the lessons of Groves, Schriever and Mueller to military AI procurement?!

Not everybody has unlearned the lessons from Groves and Mueller…

*

China: a culture of learning from systems management

‘All stable processes we shall predict. All unstable processes we shall control.’ von Neumann.

In Science there was an interesting article on Qian Xuesen, the godfather of China’s nuclear and space programs, who also had a profound effect on Chinese ideas about government. Qian studied in California at Caltech, where he worked with the Hungarian mathematician Theodore von Kármán, who co-founded the Jet Propulsion Laboratory (JPL), which worked on rockets after 1945.

‘In the West, systems engineering’s heyday has long passed. But in China, the discipline is deeply integrated into national planning. The city of Wuhan is preparing to host in August the International Conference on Control Science and Systems Engineering, which focuses on topics such as autonomous transportation and the “control analysis of social and human systems.” Systems engineers have had a hand in projects as diverse as hydropower dam construction and China’s social credit system, a vast effort aimed at using big data to track citizens’ behavior. Systems theory “doesn’t just solve natural sciences problems, social science problems, and engineering technology problems,” explains Xue Huifeng, director of the China Aerospace Laboratory of Social System Engineering (CALSSE) and president of the China Academy of Aerospace Systems Science and Engineering in Beijing. “It also solves governance problems.”

‘The field has resonated with Chinese President Xi Jinping, who in 2013 said that “comprehensively deepening reform is a complex systems engineering problem.” So important is the discipline to the Chinese Communist Party that cadres in its Central Party School in Beijing are required to study it. By applying systems engineering to challenges such as maintaining social stability, the Chinese government aims to “not just understand reality or predict reality, but to control reality,” says Rogier Creemers, a scholar of Chinese law at the Leiden University Institute for Area Studies in the Netherlands…

‘In a building flanked by military guards, systems scientists from CALSSE sit around a large conference table, explaining to Science the complex diagrams behind their studies on controlling systems. The researchers have helped model resource management and other processes in smart cities powered by artificial intelligence. Xue, who oversees a project named for Qian at CALSSE, traces his work back to the U.S.-educated scientist. “You should not forget your original starting point,” he says…

‘The Chinese government claims to have wired hundreds of cities with sensors that collect data on topics including city service usage and crime. At the opening ceremony of China’s 19th Party Congress last fall, Xi said smart cities were part of a “deep integration of the internet, big data, and artificial intelligence with the real economy.”… Xue and colleagues, for example, are working on how smart cities can manage water resources. In Guangdong province, the researchers are evaluating how to develop a standardized approach for monitoring water use that might be extended to other smart cities.

‘But Xue says that smart cities are as much about preserving societal stability as streamlining transportation flows and mitigating air pollution. Samantha Hoffman, a consultant with the International Institute for Strategic Studies in London, says the program is tied to long-standing efforts to build a digital surveillance infrastructure and is “specifically there for social control reasons” (Science, 9 February, p. 628). The smart cities initiative builds on 1990s systems engineering projects — the “golden” projects — aimed at dividing cities into geographic grids for monitoring, she adds.

‘Layered onto the smart cities project is another systems engineering effort: China’s social credit system. In 2014, the country’s State Council outlined a plan to compile data on individuals, government officials, and companies into a nationwide tracking system by 2020. The goal is to shape behavior by using a mixture of carrots and sticks. In some citywide and commercial pilot projects already underway, individuals can be dinged for transgressions such as spreading rumors online. People who receive poor marks in the national system may eventually be barred from travel and denied access to social services, according to government documents…

‘Government documents refer to the social credit system as a “social systems engineering project.” Details about which systems engineers consulted on the project are scant. But one theory that may have proved useful is Qian’s “open complex giant system,” Zhu says. A quarter-century ago, Qian proposed that society is a system comprising millions of subsystems: individual persons, in human parlance. Maintaining control in such a system is challenging because people have diverse backgrounds, hold a broad spectrum of opinions, and communicate using a variety of media, he wrote in 1993 in the Journal of Systems Engineering and Electronics. His answer sounds like an early road map for the social credit system: to use then-embryonic tools such as artificial intelligence to collect and synthesize reams of data. According to published papers, China’s hard systems scientists also use approaches derived from Qian’s work to monitor public opinion and gauge crowd behavior…

‘Hard systems engineering worked well for rocket science, but not for more complex social problems, Gu says: “We realized we needed to change our approach.” He felt strongly that any methods used in China had to be grounded in Chinese culture.

‘The duo came up with what it called the WSR approach: It integrated wuli, an investigation of facts and future scenarios; shili, the mathematical and conceptual models used to organize systems; and renli [roughly, the management of human relationships]. Though influenced by U.K. systems thinking, the approach was decidedly eastern, its precepts inspired by the emphasis on social relationships in Chinese culture. Instead of shunning mathematical approaches, WSR tried to integrate them with softer inquiries, such as taking stock of what groups a project would benefit or harm. WSR has since been used to calculate wait times for large events in China and to determine how China’s universities perform, among other projects…

‘Zhu … recently wrote that systems science in China is “under a rationalistic grip, with the ‘scientific’ leg long and the democratic leg short.” Zhu says he has no doubt that systems scientists can make projects such as the social credit system more effective. However, he cautions, “Systems approaches should not be just a convenient tool in the expert’s hands for realizing the party’s wills. They should be a powerful weapon in people’s hands for building a fair, just, prosperous society.”’

In Open Complex Giant System (1993), Qian Xuesen contrasts the study of physics, where large complex systems can be studied with the phenomenally successful tools of statistical mechanics, with the study of society, which has no such methods. He describes an overall approach in which fields spanning the physical sciences, the study of the mind, medicine, geoscience and so on must be integrated into a sort of uber-field he calls ‘social systems engineering’.

‘Studies and practices have clearly proved that the only feasible and effective way to treat an open complex giant system is a metasynthesis from the qualitative to the quantitative, i.e. the meta-synthetic engineering method. This method has been extracted, generalized and abstracted from practical studies…’

This involves integrating scientific theories, data, quantitative models and qualitative practical expert experience into ‘models built from empirical data and reference material, with hundreds and thousands of parameters’, which are then simulated.

‘This is quantitative knowledge arising from qualitative understanding. Thus metasynthesis from qualitative to quantitative approach is to unite organically the expert group, data, all sorts of information, and the computer technology, and to unite scientific theory of various disciplines and human experience and knowledge.’

He gives some examples and gives this diagram as a high level summary:

[Screenshot: Qian’s high-level diagram summarising the metasynthesis approach]

So, China is combining:

  • A massive ~$150 billion data science/AI investment program with the goal of global scientific/technological leadership and economic dominance.
  • A massive investment program in associated science/technology such as quantum information/computing.
  • A massive domestic surveillance program combining AI, facial recognition, genetic identification, the ‘social credit system’ and so on.
  • A massive anti-access/area denial military program aimed at America/Taiwan.
  • A massive technology espionage program that, for example, successfully stole the software codes for the F-35.
  • A massive innovation ecosystem that rivals Silicon Valley and may eclipse it (cf. this fascinating documentary on Shenzhen).
  • The use of proven systems management techniques for integrating principles of effective action to predict and manage complex systems at large scale.

America led the development of AI technologies and has the huge assets of its universities, a tradition (weakening) of welcoming scientists (since they opened Princeton to Einstein, von Neumann and Gödel in the 1930s), and the ecosystem of places like Silicon Valley.

It is plausible that within 15 years China could find nonlinear asymmetries that provide an edge while, channelling Marshal Ogarkov, it outthinks the Pentagon in management and operations.

*

A few interesting recent straws in the AI/robotics wind

I blogged recently about Judea Pearl, one of the most important scholars in the field of causal reasoning. He wrote a short paper about the limits of state-of-the-art AI systems using ‘deep learning’ neural networks — such as the AlphaGo system which recently conquered the game of GO — and how these systems could be improved. Humans can interrogate stored representations of their environment with counterfactual questions: how do we instantiate this in machines? (This applies to economists too, NB Pearl’s statement that ‘I can hardly name a handful (<6) of economists who can answer even one causal question posed in ucla.in/2mhxKdO’.)
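To make the observational/interventional distinction concrete, here is a minimal sketch in Python using an invented confounded system (not Pearl’s own example): conditioning on X=1 and intervening to set X=1 give different answers whenever a hidden common cause is at work.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
u = rng.random(n) < 0.5                              # hidden confounder
x = rng.random(n) < np.where(u, 0.8, 0.2)            # 'treatment' more likely when u holds
y = rng.random(n) < np.where(u, 0.7, 0.3) + 0.1 * x  # outcome driven mostly by u

print("observational P(Y=1 | X=1):", round(y[x].mean(), 2))      # ~0.72, inflated by u
y_do = rng.random(n) < np.where(u, 0.7, 0.3) + 0.1               # intervene: set X=1 for all
print("interventional P(Y=1 | do(X=1)):", round(y_do.mean(), 2)) # ~0.60, the causal effect
```

Deep learning systems trained purely on observational data answer the first kind of question; Pearl’s point is that intelligence requires the second and third (counterfactual) kinds.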

In an interview he said this about self-aware robots:

‘If a machine does not have a model of reality, you cannot expect the machine to behave intelligently in that reality. The first step, one that will take place in maybe 10 years, is that conceptual models of reality will be programmed by humans. The next step will be that machines will postulate such models on their own and will verify and refine them based on empirical evidence. That is what happened to science; we started with a geocentric model, with circles and epicycles, and ended up with a heliocentric model with its ellipses.

‘We’re going to have robots with free will, absolutely. We have to understand how to program them and what we gain out of it. For some reason, evolution has found this sensation of free will to be computationally desirable… Evidently, it serves some computational function.

‘I think the first evidence will be if robots start communicating with each other counterfactually, like “You should have done better.” If a team of robots playing soccer starts to communicate in this language, then we’ll know that they have a sensation of free will. “You should have passed me the ball — I was waiting for you and you didn’t!” “You should have” means you could have controlled whatever urges made you do what you did, and you didn’t.

‘[When will robots be evil?] When it appears that the robot follows the advice of some software components and not others, when the robot ignores the advice of other components that are maintaining norms of behavior that have been programmed into them or are expected to be there on the basis of past learning. And the robot stops following them.’

A DARPA project recently published this on self-aware robots.

‘A robot that learns what it is, from scratch, with zero prior knowledge of physics, geometry, or motor dynamics. Initially the robot does not know if it is a spider, a snake, an arm—it has no clue what its shape is. After a brief period of “babbling,” and within about a day of intensive computing, their robot creates a self-simulation. The robot can then use that self-simulator internally to contemplate and adapt to different situations, handling new tasks as well as detecting and repairing damage in its own body…

‘Initially, the robot moved randomly and collected approximately one thousand trajectories, each comprising one hundred points. The robot then used deep learning, a modern machine learning technique, to create a self-model. The first self-models were quite inaccurate, and the robot did not know what it was, or how its joints were connected. But after less than 35 hours of training, the self-model became consistent with the physical robot to within about four centimeters…

‘Lipson … notes that self-imaging is key to enabling robots to move away from the confinements of so-called “narrow-AI” towards more general abilities. “This is perhaps what a newborn child does in its crib, as it learns what it is,” he says. “We conjecture that this advantage may have also been the evolutionary origin of self-awareness in humans. While our robot’s ability to imagine itself is still crude compared to humans, we believe that this ability is on the path to machine self-awareness.”

‘Lipson believes that robotics and AI may offer a fresh window into the age-old puzzle of consciousness. “Philosophers, psychologists, and cognitive scientists have been pondering the nature of self-awareness for millennia, but have made relatively little progress,” he observes. “We still cloak our lack of understanding with subjective terms like ‘canvas of reality,’ but robots now force us to translate these vague notions into concrete algorithms and mechanisms.”

‘Lipson and Kwiatkowski are aware of the ethical implications. “Self-awareness will lead to more resilient and adaptive systems, but also implies some loss of control,” they warn. “It’s a powerful technology, but it should be handled with care.”’

Robot paper HERE.

Press release HERE.
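The recipe the press release describes (random ‘babbling’, fitting a predictive self-model, then planning against that model instead of the real body) is simple enough to sketch. Below is a toy Python version under invented assumptions: a two-joint arm and a linear least-squares model standing in for the paper’s deep network.

```python
import numpy as np

rng = np.random.default_rng(1)

def true_dynamics(state, action):
    """The real robot, unknown to the learner: a toy 2-joint arm."""
    return np.clip(state + action, -np.pi, np.pi)

# 1. 'babbling': move randomly and record (state, action, next_state)
states, actions, nexts = [], [], []
s = np.zeros(2)
for _ in range(5000):
    a = rng.uniform(-0.1, 0.1, size=2)
    s2 = true_dynamics(s, a)
    states.append(s); actions.append(a); nexts.append(s2)
    s = s2

# 2. fit a self-model: predict the next state from (state, action)
X = np.hstack([np.array(states), np.array(actions)])
Y = np.array(nexts)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# 3. use the self-model internally to plan: pick the candidate action whose
#    predicted next state is closest to a goal pose
goal = np.array([0.5, -0.3])
candidates = rng.uniform(-0.1, 0.1, size=(256, 2))
preds = np.hstack([np.tile(s, (256, 1)), candidates]) @ W
best = candidates[np.linalg.norm(preds - goal, axis=1).argmin()]
print("chosen action:", best)
```

The real project does this with a deep network over a physical arm, and re-learns the model when the body is damaged; the structure of the loop is the same.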

Recently, OpenAI, one of the world’s leading AI labs, founded by Sam Altman and Elon Musk, announced:

‘… a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization — all without task-specific training… The model is chameleon-like — it adapts to the style and content of the conditioning text. This allows the user to generate realistic and coherent continuations about a topic of their choosing… Our model is capable of generating samples from a variety of prompts that feel close to human quality and show coherence over a page or more of text… These samples have substantial policy implications: large language models are becoming increasingly easy to steer towards scalable, customized, coherent text generation, which in turn could be used in a number of beneficial as well as malicious ways.’ (bold added).

[Screenshot: sample text generated by the model]
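Mechanically, generation from such a model is just repeated next-token sampling conditioned on everything produced so far. The toy sketch below uses a character bigram model in place of GPT-2’s large Transformer (an enormous simplification, but the conditioning-and-sampling loop, including the ‘temperature’ knob used to steer output, has the same shape):

```python
import numpy as np
from collections import Counter, defaultdict

# 'train' a toy model by counting character bigrams in a tiny corpus
corpus = "the model adapts to the style and content of the conditioning text"
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

rng = np.random.default_rng(0)

def generate(prompt, n=60, temperature=1.0):
    out = list(prompt)
    for _ in range(n):
        c = counts.get(out[-1]) or counts[" "]       # back off if context unseen
        chars = list(c)
        logits = np.log([c[ch] for ch in chars], dtype=float)
        p = np.exp(logits / temperature)             # lower temperature -> safer, blander text
        p /= p.sum()
        out.append(str(rng.choice(chars, p=p)))
    return "".join(out)

print(generate("the "))   # a continuation conditioned on the prompt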

OpenAI has not released the full model yet because they take safety issues seriously. Cf. this for a discussion of some safety issues and links. As the author says re some of the complaints about OpenAI not releasing the full model, when you find normal cyber security flaws you do not publish the problem immediately — that is a ‘zero day attack’ and we should not ‘promote a norm that zero-day threats are OK in AI.’ Quite. It’s also interesting that it would probably only take ~$100,000 for a resourceful individual to re-create the full model quite quickly.

A few weeks ago, Deep Mind showed that their approach to beating human champions at GO can also beat the world’s best players at StarCraft, a game of imperfect information which is much closer to real-life human competition than perfect information games like chess and GO. OpenAI has shown something similar with a similar game, DOTA.

 

*

Moore’s Law: what if a country spends 1-10% GDP pushing such curves?

The march of Moore’s Law is entangled in many predictions. It is true that in some ways Moore’s Law has flattened out recently…

[Screenshot: chart showing the recent flattening of Moore’s Law]

… BUT specialised chips developed for machine learning and other adaptations have actually kept it going. This chart shows how it actually started long before Moore and has been remarkably steady for ~120 years (NVIDIA in the top right is specialised for deep learning)…

[Screenshot: chart of ~120 years of price-performance in computation, with NVIDIA’s deep-learning chips in the top right]

NB. This is a logarithmic scale so makes progress seem much less dramatic than the ~20 orders of magnitude it represents.

  • Since Von Neumann and Turing led the development of the modern computer in the 1940s, the price of computation has got ~x10 cheaper every five years (so x100 per decade), so over ~75 years that’s a factor of about a thousand trillion (10^15).
  • The industry seems confident the graph above will continue roughly as it has for at least another decade, though not because of continued transistor doubling rates, which have reached such a tiny nanometer scale that quantum effects will soon interfere with engineering. This means ~100-fold improvement before 2030 and, combined with the ecosystem of entrepreneurs/VC/science investment etc, this will bring many major disruptions even without significant progress with general intelligence.
  • Dominant companies like Apple, Amazon, Google, Baidu, Alibaba etc (NB. no big EU players) have extremely strong incentives to keep this trend going given the impact of mobile computing / the cloud etc on their revenues.
  • Computers will be ~10,000 times more powerful than today for the same price if this chart holds for another 20 years, and ~1 million times more powerful for the same price if it holds for another 30 years. Today’s multi-billion dollar supercomputer performance would be available for ~$1,000, just as the supercomputer power of a few decades ago is now available in your smartphone. (A quick sketch of this compounding arithmetic follows below.)
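Concretely, assuming the historical ~x10 in price-performance every five years simply continues:

```python
# price-performance multiplier if the ~x10-per-5-years trend holds
for years in (10, 20, 30, 75):
    print(f"{years} years -> x{10 ** (years / 5):,.0f}")
# 10 years -> x100; 20 -> x10,000; 30 -> x1,000,000; 75 -> x1,000,000,000,000,000
```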

But there is another dimension to this trend. Look at this graph below. It shows the total amount of compute, in petaflop/s-days, that was used to train some selected AI projects using neural networks / deep learning.

‘Since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.5-month doubling time (by comparison, Moore’s Law had an 18-month doubling period). Since 2012, this metric has grown by more than 300,000x (an 18-month doubling period would yield only a 12x increase)… The chart shows the total amount of compute, in petaflop/s-days, that was used to train selected results that are relatively well known, used a lot of compute for their time, and gave enough information to estimate the compute used. A petaflop/s-day (pfs-day) consists of performing 10^15 neural net operations per second for one day, or a total of about 10^20 operations.’ (Cf. OpenAI blog.)
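As a check on that definition: 10^15 operations/second × 86,400 seconds/day ≈ 8.6 × 10^19 ≈ 10^20 operations. And roughly six years of 3.5-month doublings is about 20 doublings, i.e. 2^20 ≈ 10^6, the same order as the 300,000x figure.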

[Screenshot: OpenAI chart of total compute used to train selected AI systems, with AlphaZero in the top right]

The AlphaZero project in the top right is the recent Deep Mind project in which an AI system (a successor to the original AlphaGo that first beat human GO champions) zoomed by centuries of human knowledge on GO and chess in about one day of training.

Many dramatic breakthroughs in machine learning, particularly using neural networks (NNs), are open source. They are scaling up very fast. They will be networked together into ‘networks of networks’ and will become x10, x100, x1,000 more powerful. These NNs will keep demonstrating better than human performance in relatively narrowly defined tasks (like winning games) but these narrow definitions will widen unpredictably.

OpenAI’s blog showing the above graph concludes:

‘Overall, given the data above, the precedent for exponential trends in computing, work on ML specific hardware, and the economic incentives at play, we think it’d be a mistake to be confident this trend won’t continue in the short term. Past trends are not sufficient to predict how long the trend will continue into the future, or what will happen while it continues. But even the reasonable potential for rapid increases in capabilities means it is critical to start addressing both safety and malicious use of AI today. Foresight is essential to responsible policymaking and responsible technological development, and we must get out ahead of these trends rather than belatedly reacting to them.’ (Bold added)

This recent analysis of the extremely rapid growth of deep learning systems tries to estimate how long this rapid growth can continue and what interesting milestones may fall. It considers 1) the rate of growth of cost, 2) the cost of current experiments, and 3) the maximum amount that can be spent on an experiment in the future. Its rough answers are:

  1. ‘The cost of the largest experiments is increasing by an order of magnitude every 1.1 – 1.4 years.
  2. ‘The largest current experiment, AlphaGo Zero, probably cost about $10M.’
  3. On the basis of the Manhattan Project costing ~1% of GDP, that gives ~$200 billion for one AI experiment. Given the growth rate, we could expect a $200B experiment in 5-6 years (a quick check of this arithmetic is sketched after this list).
  4. ‘There is a range of estimates for how many floating point operations per second are required to simulate a human brain for one second. Those collected by AI Impacts have a median of 10^18 FLOPS (corresponding roughly to a whole-brain simulation using Hodgkin-Huxley neurons)’. [NB. many experts think 10^18 is off by orders of magnitude and it could easily be x1,000 or more higher.]
  5. ‘So for the shortest estimates … we have already reached enough compute to pass the human-childhood milestone. For the median estimate, and the Hodgkin-Huxley estimates, we will have reached the milestone within 3.5 years.’
  6. We will not reach the bigger estimates (~10^25 FLOPS) within the 10 year window.
  7. ‘The AI-Compute trend is an extraordinarily fast trend that economic forces (absent large increases in GDP) cannot sustain beyond 3.5-10 more years. Yet the trend is also fast enough that if it is sustained for even a few years from now, it will sweep past some compute milestones that could plausibly correspond to the requirements for AGI, including the amount of compute required to simulate a human brain thinking for eighteen years, using Hodgkin-Huxley neurons.’
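The headline numbers are easy to reproduce from the quoted assumptions (~$10M today, a ~$200B ceiling, an order of magnitude of cost growth every 1.1-1.4 years); a quick sketch:

```python
import math

cost_now = 1e7        # ~$10M: AlphaGo Zero, the largest current experiment
ceiling = 2e11        # ~$200B: ~1% of US GDP, the Manhattan Project share
orders = math.log10(ceiling / cost_now)   # ~4.3 orders of magnitude to go
for years_per_order in (1.1, 1.4):
    print(f"ceiling reached in ~{orders * years_per_order:.1f} years")
# -> ~4.7 and ~6.0 years, matching the '5-6 years' claim above
```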

I can’t comment on the technical aspects of this but one political/historical point: I think this analysis is wrong about the Manhattan Project (MP). His argument is that the MP represents a reasonable upper bound for what America might spend. But the MP was not constrained by money — it was mainly constrained by theoretical and engineering challenges, constraints of non-financial resources and so on. General Groves, who ran the MP, does not say in his book that money was a problem — in fact, one of the extraordinary aspects of the story is the extreme (to today’s eyes) measures he took to ensure money was not a problem. If more than 1% of GDP had been needed, he’d have got it (until the intelligence came in from Europe that the Nazi programme was not threatening).

This is an important analogy. America and China are investing very heavily in AI but nobody knows — are there places at the edge of ‘breakthroughs with relatively narrow applications’ where suddenly you push ‘a bit’ and you get lollapalooza results with general intelligence? What if someone thinks — if I ‘only’ need to add some hardware and I can muster, say, 100 billion dollars to buy it, maybe I could take over the world? What if they’re right?

I think it is therefore more plausible to use the US defence budget at the height of the Cold War as a ‘reasonable estimate’ for what America might spend if they feel they are in an existential struggle. Washington knows that China is putting vast resources into AI research. If it starts taking over from Deep Mind and OpenAI as the place where the edge-of-the-art is discovered, then it WILL soon be seen as an existential struggle and there would rapidly be political pressures for a 1950s/1960s style ‘extreme’ response. So a reasonable upper bound might be at least 5-8 times bigger than 1% of GDP.

Further, unlike the nuclear race, an AGI race carries implications not just of ‘destroy global civilisation and most people’ but of ‘potentially destroy ABSOLUTELY EVERYTHING, not just on earth but, given time and the speed of light, everywhere’ — i.e. potentially all molecules re-assembled in the pursuit of some malign energy-information optimisation process. Once people realise just how badly AGI could go if the alignment problem is not solved (see below), would it not be reasonable to assume that even more money than ~8% of GDP will be found if/when this becomes a near-term fear of politicians?

Some in Silicon Valley who already have many billions at their disposal are already calculating numbers for these budgets. Surely people in Chinese intelligence are doodling the same as they listen to the week’s audio of Larry talking to Demis…?

*

General intelligence and safety

‘[R]ational systems exhibit universal drives towards self-protection, resource acquisition, replication and efficiency. Those drives will lead to anti-social and dangerous behaviour if not explicitly countered. The current computing infrastructure would be very vulnerable to unconstrained systems with these drives.’ Omohundro.

Shane Legg, co-founder and chief scientist of Deep Mind, said publicly a few years ago that there is a 50% probability that we will achieve human level AI by 2028, a 90% probability by 2050, and ‘I think human extinction will probably occur’. Given Deep Mind’s progress since he said this it is surely unlikely he thinks the odds now are lower than 50% by 2028. Some at the leading edge of the field agree.

‘I think that within a few years we’ll be able to build an NN-based [neural network] AI (an NNAI) that incrementally learns to become at least as smart as a little animal, curiously and creatively learning to plan, reason and decompose a wide variety of problems into quickly solvable sub-problems. Once animal-level AI has been achieved, the move towards human-level AI may be small: it took billions of years to evolve smart animals, but only a few millions of years on top of that to evolve humans. Technological evolution is much faster than biological evolution, because dead ends are weeded out much more quickly. Once we have animal-level AI, a few years or decades later we may have human-level AI, with truly limitless applications. Every business will change and all of civilisation will change…

‘In 2050 there will be trillions of self-replicating robot factories on the asteroid belt. A few million years later, AI will colonise the galaxy. Humans are not going to play a big role there, but that’s ok. We should be proud of being part of a grand process that transcends humankind.’ Schmidhuber, one of the pioneers of ML, 2016.

Others have said they believe that estimates of AGI within 15-30 years are unlikely to be right. Two of the smartest people I’ve ever spoken to are physicists who understand the technical details and know the key researchers; they think that dozens of Nobel Prize scale ideas will probably be needed before AGI happens, and that it is more likely that the current wave of enthusiasm with machine learning/neural networks will repeat previous cycles in science (e.g. with quantum computing 20 years ago) — great enthusiasm, the feeling that all barriers are quickly falling, then an increasingly obvious plateau, spreading disillusion, a search for new ideas, then a revival of hope and so on. They would bet more on a 50-80 year than a 20 year scale.

Among the top people I have spoken to and/or whose predictions I have followed, there is a clear consensus that mainstream economic analysis (which is the foundation of politicians’ and media discussion) seriously underestimates the scale and speed of social/economic/military/political disruption that narrow AI/automation will soon cause. But predictions on AGI are, unsurprisingly, all over the place.

Chart: predictions on AGI timelines (When Will AI Exceed Human Performance? Evidence from AI Experts)

[Screenshots: survey charts of expert predictions on AGI timelines]

Many argue that even if Moore’s Law continues for 30 years (a millionfold performance improvement) this may mean nothing significant for general intelligence, even if narrow AI transforms the world in many ways. Some experts think that estimates of the human brain’s computational capacity widely believed in the computer science world are actually orders of magnitude wrong. We still don’t know much about basics of the brain such as how long-term memories are formed. Maybe the brain’s processes will be much more resistant to understanding than ‘optimists’ assume.

But maybe relatively few big new ideas are needed to create world-changing capabilities. ‘Just’ applying great engineering and more resources to existing ideas allowed Deep Mind to blow past human performance metrics. I obviously cannot judge competing expert views but from a political perspective we know for sure that there is inherent uncertainty about how we discover new knowledge and this means we are bound to be surprised in all sorts of ways. We know that even brilliant researchers working right at the edge of progress are often clueless about what will happen quite soon and cannot reliably judge ‘is it less than 1% or more like 20% probability?’ questions. For example:

‘In 1901, two years before helping build the first heavier-than-air flyer, Wilbur Wright told his brother that powered flight was fifty years away. In 1939, three years before he personally oversaw the first critical chain reaction in a pile of uranium bricks, Enrico Fermi voiced 90% confidence that it was impossible to use uranium to sustain a fission chain reaction.’ (Yudkowsky)

Fermi’s experience suggests we should be extremely careful and put more resources into thinking very hard about how to minimise risks viz both narrow and general AI.

Those right at the edge of genetic engineering, such as George Church and Kevin Esvelt, are pushing for their field to be forcibly opened up to make it safer. As they argue, the current scientific approach and incentive system is essentially a ‘blind search algorithm’ in which small teams work in secret without being able to predict the consequences of their work and cannot be warned by those who do understand. A blind search algorithm is a bad approach for things like bioweapons that can destroy billions of lives and it is what we now have. The same argument applies to AGI.

We also know that political people and governments are slow to cope with major technological disruptions. Just look at TV. It’s been dominating politics since the 1950s. It is roughly 70 years old. Many politicians still do not understand it well. The UK state and political parties are in many ways much less sophisticated in their use of TV than groups like Hezbollah. This is even more true of social media. Also look at how unfounded conspiracy theories about fake news and social media viz the referendum and Trump have gripped much of the ‘educated’ class that thinks they see through fake news that fools the uneducated! Journalists are awarded THE ORWELL AWARD(!) for spreading fake news about fake news (and it’s not ‘lies’, they actually believe what they say)! (My experience is it’s much easier to fool people about politics if they have a degree than if they don’t because those with a degree tend to spend so much more energy fooling themselves.) This is not encouraging, particularly if one considers that politicians are directly incentivised to understand technologies like TV and internet polling for their own short-term interests yet most don’t.

From cars to planes it has taken time for us to work out how to adapt to new things that can kill us. Given that 1) conventional research is ‘a blind search algorithm’, 2) our politicians are behind the curve on 70 year-old technologies and 3) there is little prospect of this changing without huge changes to conventional models of politics, we must ask another question about secrecy vs openness and centralised vs decentralised architectures.

One of the leaders of the 3D printing / FabLab revolution wrote this comparing the closed vs open models of security:

‘The history of the Internet has shown that security through obscurity doesn’t work. Systems that have kept their inner workings a secret in the name of security have consistently proved more vulnerable than those that have allowed themselves to be examined — and challenged — by outsiders. The open protocols and programs used to protect Internet communications are the result of ongoing development and testing by a large expert community. Another historical lesson is that people, not technology, are the most common weakness when it comes to security. No matter how secure a system is, someone who has access to it can always be corrupted, wittingly or otherwise. Centralized control introduces a point of vulnerability that is not present in a distributed system.’ (Bold added)

As we saw above, the centralised approach has been a disaster for nuclear weapons and we survived by fluke. Overall the history of nuclear security is surely a very relevant and bad signal for AI safety. I would bet a lot that Deep Mind et al are all hacked and spied on by China and Russia (at least) so I think it’s safest to plan on the assumption that dangerous breakthroughs will leak almost instantly and could be applied by the sort of people who spy for intel agencies. So it is natural to ask, should we take an open/decentralised approach towards possible AGI?

(Tangential thought experiment: if you were in charge of an organisation like the KGB, why would you not hack hedge funds like Renaissance Technologies and use the information for your own ‘black’ hedge fund, dodging the need for arguments over funding — a ‘virtuous’ circle of espionage, free money and resources for more effective R&D and espionage, which also minimises the need for irritating interactions with politicians? How hard would it be to detect such activity IF done with intelligent modesty? Given someone can hack the NSA without their identity being revealed, why would they not be hacking Renaissance and Deep Mind, with a bit of help from a Milla Jovovich lookalike who’s reading a book on n-dimensional string theory at the bar when that exhausted physics PhD with the access codes staggers in to relax?)

This seems to collide with another big problem — the alignment problem.

Stuart Russell, one of the world’s leading researchers, is one of those who has been very forceful about the fundamental importance of this: how do we GUARANTEE that entities more intelligent than us are aligned with humanity’s interests?

‘One [view] is: It’ll never happen, which is like saying we are driving towards the cliff but we’re bound to run out of gas before we get there. And that doesn’t seem like a good way to manage the affairs of the human race. And the other [view] is: Not to worry — we will just build robots that collaborate with us and we’ll be in human-robot teams. Which begs the question: If your robot doesn’t agree with your objectives, how do you form a team with it?’

Eliezer Yudkowsky, one of the few working on the alignment problem, described the difficulty:

‘How do you encode the goal functions of an A.I. such that it has an Off switch and it wants there to be an Off switch and it won’t try to eliminate the Off switch and it will let you press the Off switch, but it won’t jump ahead and press the Off switch itself? And if it self-modifies, will it self-modify in such a way as to keep the Off switch? We’re trying to work on that. It’s not easy… When you’re building something smarter than you, you have to get it right on the first try.’
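One way to see why this is subtle: whether an agent wants to keep its off switch depends on how uncertain it is about its own objective. A toy numerical illustration, loosely in the spirit of Hadfield-Menell et al.’s ‘off-switch game’ (the distribution below is invented for illustration):

```python
import numpy as np

# the robot is unsure of the true utility u of its plan; a rational human,
# who can see u, will press the off switch exactly when u < 0
rng = np.random.default_rng(0)
u = rng.normal(loc=0.5, scale=1.0, size=1_000_000)   # robot's belief over u

act_regardless = u.mean()                 # disable the switch and just act
defer_to_human = np.maximum(u, 0).mean()  # keep the switch; bad plans get stopped

print(f"act regardless: {act_regardless:.2f}")   # ~0.50
print(f"defer to human: {defer_to_human:.2f}")   # ~0.70
# deferring wins only because the robot is uncertain about u; an agent that is
# (wrongly) certain its plan is good has no incentive to leave the switch alone
```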

So, we know centralised systems are very vulnerable and decentralised systems have advantages, but with AGI we also have to fear that we have no room for the trial-and-error of decentralised internet style security architectures — ‘you have to get it right on the first try’. Are we snookered?! And of course there is no guarantee it is even possible to solve the alignment problem. When you hear people in this field describing ideas about ‘abstracting human ethics and encoding them’ one wonders if solving the alignment problem might prove even harder than AGI — maybe only an AGI could solve it…

Given the media debate is dominated by endless pictures of the Terminator and politicians are what they are, researchers are, understandably, extremely worried about what might happen if the political-media system makes a sudden transition from complacency to panic. After all, consider the global reaction if reputable scientists suddenly announced they have discovered plausible signals that super-intelligent aliens will arrive on earth within 30 years: even when softened by caveats, such a warning would obviously transform our culture (in many ways positively!). As Peter Thiel has said, creating true AGI is a close equivalent to the ‘super-intelligent aliens arriving on earth’ scenario and the most important questions are not economic but political, and in particular: are they friendly and can we stop them eliminating us by design, bad luck, or indifference?

Further, in my experience extremely smart technical people are often naive about politics. They greatly over-estimate the abilities of prime ministers and presidents. They greatly under-estimate the incentive problems and the degree of focus that is required to get ANYTHING done in politics. They greatly exaggerate the potential for ‘rational argument’ to change minds and wrongly assume that somewhere at the top of power ‘there must be’ a group of really smart people working on very dangerous problems who have real clout. Further, everybody thinks they understand ‘communication’ but almost nobody does. We can see from recent events that even the very best engineering companies like Facebook and Google can not only make huge mistakes in the political/communication world but also fail to learn from them (Facebook hiring Clegg was a sign of deep ignorance inside Facebook about their true problems). So it’s hard to be optimistic about the technical people educating the political people, even assuming the technical people make progress with safety.

Hypothesis: 1) minimising nuclear/bio/AI risks and the potential for disastrous climate change requires a few very big things to change roughly simultaneously (‘normal’ political action will not be enough) and 2) this will require a weird alliance between a) technical people, b) political ‘renegades’, c) the public to ‘surround’ political Insiders locked into existing incentives:

  1. Different ‘models for effective action’ among powerful people, which will only happen if either (A) some freak individual/group pops up, probably in a crisis environment or (B) somehow incentives are hacked. (A) can’t be relied on and (B) is very hard.
  2. A new institution with global reach that can win global trust and support is needed. The UN is worse than useless for these purposes.
  3. Public opinion will have to be mobilised to overcome the resistance of political Insiders, for example, regarding the potential for technology to bring very large gains ‘to me’ and simultaneously avert extreme dangers. This connects to the very widespread view that a) the existing economic model is extremely unfair and b) this model is sustained by a loose alliance of political elites and corporate looters who get richer by screwing the rest of us.

I have an idea about a specific project, mixing engineering/economics/psychology/politics, that might do this and will blog on it separately.

I suspect almost any idea that could do 1-3 will seem at least weird but without big changes, we are simply waiting for the law of averages to do its thing. We may have decades for AGI and climate change but we could collide with the WMD law of averages tomorrow so, impractical as this sounds, it seems to me people have to try new things and risk failure and ridicule.

Please leave comments/corrections below…

Further reading

An excellent essay by Ian Hogarth, AI nationalism, which covers some of the same ground but is written by someone with deep connections to the field whereas I am extremely non-expert but interested.

AI safety is one of those subjects that is taken extremely seriously by a tiny number of people that has almost zero overlap with the policy/government world. If interested, then follow @ESYudkowsky. Cf. Intelligence Explosion Microeconomics, Yudkowsky.

Drones go to work, Chris Anderson (one of the pioneers of commercial drones). This explains the economics and how drones are transforming industries.

Meditations on Moloch, Scott Alexander. This is an extremely good essay in general about deep problems with our institutions but it touches on AI too.

Autonomous technology and the greater human good. Omohundro. ‘Military and economic pressures are driving the rapid development of autonomous systems. We show that these systems are likely to behave in anti-social and harmful ways unless they are very carefully designed. Designers will be motivated to create systems that act approximately rationally and rational systems exhibit universal drives towards self-protection, resource acquisition, replication and efficiency. Those drives will lead to anti-social and dangerous behaviour if not explicitly countered. The current computing infrastructure would be very vulnerable to unconstrained systems with these drives. We describe the use of formal methods to create provably safe but limited autonomous systems. We then discuss harmful systems and how to stop them. We conclude with a description of the ‘Safe-AI Scaffolding Strategy’ for creating powerful safe systems with a high confidence of safety at each stage of development.’ I strongly recommend reading this paper if interested in this blog.

Can intelligence explode? Hutter.

Read this 1955 essay by von Neumann, ‘Can we survive technology?’. VN was involved in the Manhattan Project, inventing computer science, game theory and much more. This essay explored the essential problem that the scale and speed of technological change have suddenly blown past political institutions. ‘For progress there is no cure…’

The recent Science piece on Qian Xuesen and systems management is HERE.

Qian Xuesen – Open Complex Giant System, 1993.

I wrote this (2018) about the extraordinary ARPA-PARC episode, which created much of the ecosystem for interactive personal computing and the internet and provided a model for how to conduct high-risk-high-payoff technology research.

I wrote this Jan 2017 on systems management, von Neumann, Apollo, Mueller etc. It provides a checklist for how to improve Whitehall systematically and deliver complex projects like Brexit.

The Hollow Men (2014) that summarised the main problems of Westminster and Whitehall.

For some pre-history on computers, cf. The birth of computational thinking (some of the history of computing devices before the Gödel/Turing/von Neumann revolution) and for the next phase in the story — some of the history of ideas about mathematical foundations and logic such as the papers by Gödel and Turing in the 1930s — cf. The crisis of mathematical paradoxes, Gödel, Turing and the basis of computing.

My review of Allison’s book on the US-China contest and some thoughts on how Bismarck would see it.

On ‘Expertise’ from fighting and physics to economics, politics and government.

I blogged a few links to AI papers HERE.

Unrecognised simplicities of effective action #1: expertise and a quadrillion dollar business

‘The combination of physics and politics could render the surface of the earth uninhabitable.’ John von Neumann.

Introduction

This series of blogs considers:

  • the difference between fields with genuine expertise, such as fighting and physics, and fields dominated by bogus expertise, such as politics and economic forecasting;
  • the big big problem we face – the world is ‘undersized and underorganised’ because of a collision between four forces: 1) our technological civilisation is inherently fragile and vulnerable to shocks, 2) the knowledge it generates is inherently dangerous, 3) our evolved instincts predispose us to aggression and misunderstanding, and 4) there is a profound mismatch between the scale and speed of destruction our knowledge can cause and the quality of individual and institutional decision-making in ‘mission critical’ institutions – our institutions are similar to those that failed so spectacularly in summer 1914 yet they face crises moving at least ~10^3 times faster and involving ~10^6 times more destructive power able to kill ~10^10 people;
  • what classic texts and case studies suggest about the unrecognised simplicities of effective action to improve the selection, education, training, and management of vital decision-makers to improve dramatically, reliably, and quantifiably the quality of individual and institutional decisions (particularly a) the ability to make accurate predictions and b) the quality of feedback);
  • how we can change incentives to aim a much bigger fraction of the most able people at the most important problems;
  • what tools and technologies can help decision-makers cope with complexity.

[I’ve tweaked a couple of things in response to this blog by physicist Steve Hsu.]

*

Summary of the big big problem

The investor Peter Thiel (founder of PayPal and Palantir, early investor in Facebook) asks people in job interviews: what billion (10^9) dollar business is nobody building? The most successful investor in world history, Warren Buffett, illustrated what a quadrillion (10^15) dollar business might look like in his 50th anniversary letter to Berkshire Hathaway investors.

‘There is, however, one clear, present and enduring danger to Berkshire against which Charlie and I are powerless. That threat to Berkshire is also the major threat our citizenry faces: a “successful” … cyber, biological, nuclear or chemical attack on the United States… The probability of such mass destruction in any given year is likely very small… Nevertheless, what’s a small probability in a short period approaches certainty in the longer run. (If there is only one chance in thirty of an event occurring in a given year, the likelihood of it occurring at least once in a century is 96.6%.) The added bad news is that there will forever be people and organizations and perhaps even nations that would like to inflict maximum damage on our country. Their means of doing so have increased exponentially during my lifetime. “Innovation” has its dark side.

‘There is no way for American corporations or their investors to shed this risk. If an event occurs in the U.S. that leads to mass devastation, the value of all equity investments will almost certainly be decimated.

‘No one knows what “the day after” will look like. I think, however, that Einstein’s 1949 appraisal remains apt: “I know not with what weapons World War III will be fought, but World War IV will be fought with sticks and stones.”’
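Buffett’s compounding-risk arithmetic is easy to verify; a two-line sketch:

```python
p_year = 1 / 30                       # 'one chance in thirty' in a given year
p_century = 1 - (1 - p_year) ** 100   # at least one occurrence in 100 years
print(f"{p_century:.1%}")             # -> 96.6%
```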

Politics is profoundly nonlinear. (I have written a series of blogs about complexity and prediction HERE which are useful background for those interested.) Changing the course of European history via the referendum only involved about 10 crucial people controlling ~£10^7 while its effects over ten years could be on the scale of ~10^8 – 10^9 people and ~£10^12: like many episodes in history the resources put into it are extremely nonlinear in relation to the potential branching histories it creates. Errors dealing with Germany in 1914 and 1939 were costly on the scale of ~100,000,000 (10^8) lives. If we carry on with normal human history – that is, international relations defined as out-groups competing violently – and combine this with modern technology then it is extremely likely that we will have a disaster on the scale of billions (10^9) or even all humans (~10^10). The ultimate disaster would kill about 100 times more people than our failure with Germany. Our destructive power is already much more than 100 times greater than it was then: nuclear weapons increased destructiveness by roughly a factor of a million.

Even if we dodge this particular bullet there are many others lurking. New genetic engineering techniques such as CRISPR allow radical possibilities for re-engineering organisms including humans in ways thought of as science fiction only a decade ago. We will soon be able to remake human nature itself. CRISPR-enabled ‘gene drives’ enable us to make changes to the germ-line of organisms permanent such that changes spread through the entire wild population, including making species extinct on demand. Unlike nuclear weapons such technologies are not complex, expensive, and able to be kept secret for a long time. The world’s leading experts predict that people will be making them cheaply at home soon – perhaps they already are. These developments have been driven by exponential progress much faster than Moore’s Law, reducing the cost of DNA sequencing per genome from ~$10^8 to ~$10^3 in roughly 15 years.

[Screenshot: chart of the falling cost of DNA sequencing per genome]

It is already practically possible to deploy a cheap, autonomous, and anonymous drone with facial-recognition software and a one gram shaped-charge to identify a relevant face and blow it up. Military logic is driving autonomy. For example, 1) the explosion in the volume of drone surveillance video (from 71 hours in 2004 to 300,000 hours in 2011 to millions of hours now) requires automated analysis, and 2) jamming and spoofing of drones strongly incentivise a push for autonomy. It is unlikely that promises to ‘keep humans in the loop’ will be kept. It is likely that state and non-state actors will deploy low-cost drone swarms using machine learning to automate the ‘find-fix-finish’ cycle now controlled by humans. (See HERE for a video just released for one such program and imagine the capability when they carry their own communication and logistics network with them.)

In the medium-term, many billions are being spent on finding the secrets of general intelligence. We know this secret is encoded somewhere in the roughly 125 million ‘bits’ of information that is the rough difference between the genome that produces the human brain and the genome that produces the chimp brain. This search space is remarkably small – the equivalent of just 25 million English words or 30 copies of the King James Bible. There is no fundamental barrier to decoding this information and it is possible that the ultimate secret could be described relatively simply (cf. this great essay by physicist Michael Nielsen). One of the world’s leading experts has told me they think a large proportion of this problem could be solved in about a decade with a few tens of billions and something like an Apollo programme level of determination.

Not only is our destructive and disruptive power still getting bigger quickly – it is also getting cheaper and faster every year. The change in speed adds another dimension to the problem. In the period between the Archduke’s murder and the outbreak of World War I a month later it is striking how general failures of individuals and institutions were compounded by the way in which events moved much faster than the ‘mission critical’ institutions could cope with such that soon everyone was behind the pace, telegrams were read in the wrong order and so on. The crisis leading to World War I was about 30 days from the assassination to the start of general war – about 700 hours. The timescale for deciding what to do between receiving a warning of nuclear missile launch and deciding to launch yourself is less than half an hour and the President’s decision time is less than this, maybe just minutes. This is a speedup factor of at least 10^3.

Economic crises already occur far faster than human brains can cope with. The financial system has made a transition from people shouting at each other to a system dominated by high frequency ‘algorithmic trading’ (HFT), i.e. machine intelligence applied to robot trading with vast volumes traded on a global spatial scale and a microsecond (10^-6) temporal scale far beyond the monitoring, understanding, or control of regulators and politicians. There is even competition for computer trading bases in specific locations based on calculations of Special Relativity as the speed of light becomes a factor in minimising trade delays (cf. Relativistic statistical arbitrage, Wissner-Gross). ‘The Flash Crash’ of 6 May 2010 saw the Dow lose hundreds of points in minutes. Mini ‘flash crashes’ now blow up and die out faster than humans can notice. Given our institutions cannot cope with economic decisions made at ‘human speed’, a fortiori they cannot cope with decisions made at ‘robot speed’. There is scope for worse disasters than 2008 which would further damage the moral credibility of decentralised markets and provide huge chances for extremist political entrepreneurs to exploit. (* See endnote.)

What about the individuals and institutions that are supposed to cope with all this?

Our brains have not evolved much in thousands of years and are subject to all sorts of constraints including evolved heuristics that lead to misunderstanding, delusion, and violence particularly under pressure. There is a terrible mismatch between the sort of people that routinely dominate mission critical political institutions and the sort of people we need: high-ish IQ (we need more people >145 (+3SD) while almost everybody important is between 115-130 (+1 or 2SD)), a robust toolkit for not fooling yourself including quantitative problem-solving (almost totally absent at the apex of relevant institutions), determination, management skills, relevant experience, and ethics. While our ancestor chiefs at least had some intuitive feel for important variables like agriculture and cavalry our contemporary chiefs (and those in the media responsible for scrutiny of decisions) generally do not understand their equivalents, and are often less experienced in managing complex organisations than their predecessors.

The national institutions we have to deal with such crises are pretty similar to those that failed so spectacularly in summer 1914 yet they face crises moving at least ~10^3 times faster and involving ~10^6 times more destructive power able to kill ~10^10 people. The international institutions developed post-1945 (UN, EU etc) contribute little to solving the biggest problems and in many ways make them worse. These institutions fail constantly and do not – cannot – learn much.

If we keep having crises like we have experienced over the past century then this combination of problems pushes the probability of catastrophe towards ‘overwhelmingly likely’.

*

What Is To be Done? There’s plenty of room at the top

‘In a knowledge-rich world, progress does not lie in the direction of reading information faster, writing it faster, and storing more of it. Progress lies in the direction of extracting and exploiting the patterns of the world… And that progress will depend on … our ability to devise better and more powerful thinking programs for man and machine.’ Herbert Simon, Designing Organizations for an Information-rich World, 1969.

‘Fascinating that the same problems recur time after time, in almost every program, and that the management of the program, whether it happened to be government or industry, continues to avoid reality.’ George Mueller, pioneer of ‘systems engineering’ and ‘systems management’ and the man most responsible for the success of the 1969 moon landing.

Somehow the world has to make a series of extremely traumatic and dangerous transitions over the next 20 years. The main transition needed is:

Embed reliably the unrecognised simplicities of high performance teams (HPTs), including personnel selection and training, in ‘mission critical’ institutions while simultaneously developing a focused project that radically improves the prospects for international cooperation and new forms of political organisation beyond competing nation states.

Big progress on this problem would automatically and for free bring big progress on other big problems. It could improve (even save) billions of lives and save a quadrillion dollars (~$10^15). If we avoid disasters then the error-correcting institutions of markets and science will, patchily, spread peace, prosperity, and learning. We will make big improvements with public services and other aspects of ‘normal’ government. We will have a healthier political culture in which representative institutions, markets serving the public (not looters), and international cooperation are stronger.

Can a big jump in performance – ‘better and more powerful thinking programs for man and machine’ – somehow be systematised?

Feynman once gave a talk titled ‘There’s plenty of room at the bottom’ about the huge performance improvements possible if we could learn to do engineering at the atomic scale – what is now called nanotechnology. There is also ‘plenty of room at the top’ of political structures for huge improvements in performance. As I explained recently, the victory of the Leave campaign owed more to the fundamental dysfunction of the British Establishment than it did to any brilliance from Vote Leave. Despite having the support of practically every force with power and money in the world (including the main broadcasters) and controlling the timing and legal regulation of the referendum, they blew it. This was good if you support Leave but just how easily the whole system could be taken down should be frightening for everybody.

Creating high performance teams is obviously hard but in what ways is it really hard? It is not hard in the same sense that some things are hard like discovering profound new mathematical knowledge. HPTs do not require profound new knowledge. We have been able to read the basic lessons in classics for over two thousand years. We can see relevant examples all around us of individuals and teams showing huge gains in effectiveness.

The real obstacle is not financial. The financial resources needed are remarkably low and the return on small investments could be incalculably vast. We could significantly improve the decisions of the most powerful 100 people in the UK or the world for less than a million dollars (~£10^6) and a decade-long project on a scale of just ~£10^7 could have dramatic effects.

The real obstacle is not a huge task of public persuasion – quite the opposite. A government that tried in a disciplined way to do this would attract huge public support. (I’ve polled some ideas and am confident about this.) Political parties are locked in a game that in trying to win in conventional ways leads to the public despising them. Ironically if a party (established or new) forgets this game and makes the public the target of extreme intelligent focus then it would not only make the world better but would trounce its opponents.

The real obstacle is not a need for breakthrough technologies though technology could help. As Colonel Boyd used to shout, ‘People, ideas, machines – in that order!’

The real obstacle is that although we can all learn and study HPTs it is extremely hard to put this learning to practical use and sustain it against all the forces of entropy that constantly operate to degrade high performance once the original people have gone. HPTs are episodic. They seem to come out of nowhere, shock people, then vanish with the rare individuals. People write about them and many talk about learning from them but in fact almost nobody ever learns from them – apart, perhaps, from those very rare people who did not need to learn – and nobody has found a method to embed this learning reliably and systematically in institutions that can maintain it. The Prussian General Staff remained operationally brilliant but in other ways went badly wrong after the death of the elder Moltke. When George Mueller left NASA it reverted to what it had been before he arrived – management chaos. All the best companies quickly go downhill after the departure of people like Bill Gates – even when such very able people have tried very very hard to avoid exactly this problem.

Charlie Munger, half of the most successful investment team in world history, has a great phrase he uses to explain their success that gets to the heart of this problem:

‘There isn’t one novel thought in all of how Berkshire [Hathaway] is run. It’s all about … exploiting unrecognized simplicities… It’s a community of like-minded people, and that makes most decisions into no-brainers. Warren [Buffett] and I aren’t prodigies. We can’t play chess blindfolded or be concert pianists. But the results are prodigious, because we have a temperamental advantage that more than compensates for a lack of IQ points.’

The simplicities that bring high performance in general, not just in investing, are largely unrecognised because they conflict with many evolved instincts and are therefore psychologically very hard to implement. The principles of the Buffett-Munger success are clear – they have even gone to great pains to explain them and what the rest of us should do – and the results are clear yet still almost nobody really listens to them and above average intelligence people instead constantly put their money into active fund management that is proved to destroy wealth every year!

Most people think they are already implementing these lessons and usually strongly reject the idea that they are not. This means that just explaining things is very unlikely to work:

‘I’d say the history that Charlie [Munger] and I have had of persuading decent, intelligent people who we thought were doing unintelligent things to change their course of action has been poor.’ Buffett.

Even more worrying, it is extremely hard to take over organisations that are not run right and make them excellent.

‘We really don’t believe in buying into organisations to change them.’ Buffett.

If people won’t listen to the most successful investor in history on his own subject, and even he finds it too hard to take over failing businesses and turn them around, how likely is it that politicians and officials incentivised to keep things as they are will listen to ideas about how to do things better? How likely is it that a team can take over broken government institutions and make them dramatically better in a way that outlasts the people who do it? Bureaucracies are extraordinarily resistant to learning. Even after the debacles of 9/11 and the Iraq War, costing many lives and trillions of dollars, and even after the 2008 Crash, the security and financial bureaucracies in America and Europe are essentially the same and operate on the same principles.

Buffett’s success is partly due to his discipline in sticking within what he and Munger call their ‘circle of competence’. Within this circle they have proved the wisdom of avoiding trying to persuade people to change their minds and avoiding trying to fix broken institutions.

This option is not available in politics. The Enlightenment and the scientific revolution give us no choice but to try to persuade people and try to fix or replace broken institutions. In general ‘it is better to undertake revolution than undergo it’. How might we go about it? What can people who do not have any significant power inside the system do? What international projects are most likely to spark the sort of big changes in attitude we urgently need?

This is the first of a series. I will keep it separate from the series on the EU referendum though it is connected in the sense that I spent a year on the referendum in the belief that winning it was a necessary though not sufficient condition for Britain to play a part in improving the quality of government dramatically and improving the probability of avoiding the disasters that will happen if politics follows a normal path. I intended to implement some of these ideas in Downing Street if the Boris-Gove team had not blown up. The more I study this issue the more confident I am that dramatic improvements are possible and the more pessimistic I am that they will happen soon enough.

Please leave comments and corrections…

* A new transatlantic cable recently opened for financial trading. Its cost? £300 million. Its advantage? It shaves 2.6 milliseconds off the latency of financial trades. Innovative groups are discussing the application of military laser technology, unmanned drones circling the earth acting as routers, and even the use of neutrino communication (because neutrinos can go straight through the earth just as zillions pass through your body every second without colliding with its atoms) – cf. this recent survey in Nature.
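To see why such exotic routes are worth the money, here is a back-of-the-envelope sketch. The distance and fibre-speed figures below are standard textbook values I am assuming, not numbers from the cable’s operators:

```python
# Rough one-way latency London-New York: light in optical fibre travels
# at ~2/3 of c, and real fibre routes are longer than the great circle.
# Figures below are assumed textbook values, not the cable's specs.

C_KM_S = 299_792.458          # speed of light in vacuum, km/s
DISTANCE_KM = 5_600           # approx. London-New York great-circle distance
FIBRE_SPEED = C_KM_S * 2 / 3  # light slows to ~200,000 km/s in glass

t_fibre = DISTANCE_KM / FIBRE_SPEED * 1000   # ms, ideal straight fibre
t_vacuum = DISTANCE_KM / C_KM_S * 1000       # ms, laser/neutrino straight line

print(f"fibre (ideal route): {t_fibre:.1f} ms one-way")    # ~28 ms
print(f"straight line at c:  {t_vacuum:.1f} ms one-way")   # ~19 ms
print(f"headroom for lasers/neutrinos: {t_fibre - t_vacuum:.1f} ms")
```

Against a headroom of several milliseconds even on an ideal fibre route, paying £300 million to claw back 2.6ms looks less mad.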

‘Standin’ by the window, where the light is strong’: de-extinction, machine intelligence, the search for extra-solar life, autonomous drone swarms bombing Parliament, genetics & IQ, science & politics, and much more @ SciFoo 2014

‘SciFoo’ 8-10 August 2014, the Googleplex, Silicon Valley, California.

On Friday 8 August, I woke up in Big Sur (the coast of Northern California), looked out over the waves breaking on the wild empty coastline, munched a delicious Mexican breakfast at Deetjen’s, then drove north on Highway 1 towards Palo Alto where a few hours later I found myself looking through the windows of Google’s HQ at a glittering sunset in Silicon Valley.

I was going to ‘SciFoo’. SciFoo is a weekend science conference. It is hosted by Larry Page at Google’s HQ in Silicon Valley and organised by various people including the brilliant Timo Hannay from Digital Science.

I was invited because of my essay that became public last year (cf. HERE). Of the 200+ people, I was probably the only one who made zero positive contribution to the fascinating weekend and therefore wasted a place, so although it was a fantastic experience for me the organisers should not invite me back and I feel guilty about the person who could not go because I was there. At least I can let others know about some of the things discussed… (Although it was theoretically ‘on the record unless stated otherwise’, I could tell that many scientists were not thinking about this and so I have left out some things that I think they would not want attributed. Given they were not experienced politicians being interviewed but scientists at a scientific conference, I’m erring on the side of caution, particularly given the subjects discussed.)

It was very interesting to see many of the people whose work I mentioned in my essay and watch them interacting with each other – intellectually and psychologically / physically.

I will describe some of the things that struck me though, because there are about 7-10 sessions going on simultaneously, this is only a small snapshot.

In my essay, I discuss some of the background to many of these subjects so I will put references [in square brackets] so people can refer to it if they want.

Please note that below I am reporting what I think others were saying – unless it is clear, I am not giving my own views. On technical issues, I do not have my ‘own’ views – I do not have relevant skills. All I can do is judge where consensus lies and how strong it is. Many important issues involve asking at least a) is there a strong scientific consensus on X among physical scientists with hard quantitative data to support their ideas (uber-example, the Standard Model of particle physics), and b) what are the non-science issues, such as ‘what will it cost, who pays/suffers and why?’ On a), I can only try to judge what technically skilled people think. b) is a different matter.

Whether you were there or not, please leave corrections / additions / questions in the comments box. Apologies for errors…

In a nutshell, a few likely scenarios / ideas, without spelling out caveats… 1) Extinct species are soon going to be brought back to life and the same technology will be used to modify existing species to help prevent them going extinct. 2) CRISPR – a new gene editing technology – will be used to cure diseases and ‘enhance’ human performance but may also enable garage bio-hackers to make other species extinct. 3) With the launch of satellites in 2017/18, we may find signs of life by 2020 among the ~10^11 exoplanets we now know exist just in our own galaxy, though it will probably take 20-30 years, and the search will also soon get crowdsourced in a way schools can join in. 4) There is a reasonable chance we will have found many of the genes for IQ within a decade via BGI’s project, and the rich may use this information for embryo selection. 5) ‘Artificial neural networks’ are already outperforming humans on various pattern-recognition problems and will continue to advance rapidly. 6) Automation will push issues like a negative income tax onto the political agenda as millions lose their jobs. 7) Autonomous drones will be used for assassinations in Europe and America shortly. 8) Read Neil Gershenfeld’s book ‘FAB’ if you haven’t and are interested in science education / 3D printing / computer science (or at least watch his TED talks). 9) Scientists are desperate to influence policy and politics but do not know how.

Biological engineering / computational biology / synthetic biology [Section 4]

George Church (Harvard), a world-leading biologist, spoke at a few sessions and his team’s research interests were much discussed.  (Don’t assume he said any specific thing below.)

The falling cost of DNA sequencing continues to spur all sorts of advances. It has fallen from a billion dollars per genome a decade ago to less than a thousand dollars now (a million-fold improvement), and the Pentagon is planning on it reaching $100 soon. We can also sequence cancer cells to track their evolution in the body.
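For a sense of how fast that fall is, compare it with Moore’s law over the same decade – a minimal sketch, where the 18-month doubling period is my assumed benchmark and the cost figures are those in the paragraph above:

```python
# Sequencing cost fell from ~$1bn to <$1k per genome in ~10 years.
# Compare with Moore's-law improvement (assumed 18-month doubling).

cost_then, cost_now, years = 1e9, 1e3, 10

fold_sequencing = cost_then / cost_now   # 1,000,000x
fold_moore = 2 ** (years / 1.5)          # ~100x over the same decade

print(f"sequencing: {fold_sequencing:,.0f}-fold cheaper")
print(f"Moore's law would predict: ~{fold_moore:,.0f}-fold")
# i.e. sequencing improved ~10,000 times faster than transistor budgets.
```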

CRISPR. CRISPR is a new (2012) and very hot technology that is a sort of ‘cut and paste’ gene editing tool. It allows much more precise and effective engineering of genomes. Labs across America are rushing to apply it to all sorts of problems. In March this year, it was used to correct faulty genes in mice and cure them of a liver condition. It plays a major part in many of the biological issues sketched below.

‘De-extinction’ (bringing extinct species back to life). People are now planning the practical steps for de-extinction to the extent that they are scoping out land in Siberia where woolly mammoths will roam. As well as creating whole organisms, they will also grow organs modified by particular genes to test what specific genes and combinations do. This is no longer sci-fi – it is being planned and is likely to happen. The buffalo population was recently rebuilt from a tiny population to hundreds of thousands (Google serves buffalo burgers in its amazing kitchens), and there seems no reason to think it impossible to build a significant population from scratch.

What does this mean? You take the DNA from an animal, say a woolly mammoth buried in the ground, sequence it, then use the digitised genome to create an embryo and either grow it in a similar animal (e.g. elephant for a mammoth) or in an artificial womb. (I missed the bit explaining the rationale for some of the proposed projects but, apart from the scientific reasons, one rationale for the mammoth was described as a conservation effort to preserve the frozen tundra and prevent massive amounts of greenhouse gases being released from beneath it.)

There are also possibilities of using this technology for conservation. For example, one could re-engineer the Asian elephant so that it could survive in less hospitable climates (e.g. modify the genes that produce haemoglobin so it is viable in colder places).

Now that we have sequenced the genome for Neanderthals (and learned that humans interbred with them, so you have traces of their DNA – unless you’re an indigenous sub-Saharan African), there is no known physical reason why we could not bring a Neanderthal back to life once the technology has been refined on other animals. This obviously raises many ethical issues – e.g. if we did it, they would have to be given the same legal rights as us (one distinguished person said that if there were one in the room with us we would not notice, contra the pictures often used to illustrate them). It is assumed by many that this will happen (nobody questioned the assumption) – just as it seemed to be generally assumed that human cloning will happen – though probably not in a western country but somewhere with fewer legal restrictions, after the basic technologies have been refined. (The Harvard team gets emails from women volunteering to be the Neanderthal’s surrogate mum.)

‘Biohacking’. Biohacking is advancing faster than Moore’s Law. CRISPR editing will allow us to enhance ourselves. E.g. Tibetans have evolved much more efficient systems for coping with high altitude, and some Africans have much stronger bones than the rest of us (see below). Will we reengineer ourselves to obtain these advantages? CRISPR obviously also empowers all sorts of malevolent actors – cf. this very recent paper (by Church et al). It may soon be possible for people in their garages to edit genomes and accidentally or deliberately drive species to extinction as well as attempt to release deadly pathogens. I could not understand why people were not more worried about this – I hope I was missing a lot. (Some had the attitude that ‘nature already does bio-terrorism’ so we should relax. I did not find this comforting, and I’m sure I am in the majority, so for anybody influential reading this: I would strongly advise you not to use this argument in public advocacy or it is likely to accelerate calls for your labs to be shut down.)

‘Junk’. There is more and more analysis of what used to be called ‘junk DNA’. It is now clear that, far from being ‘junk’, much of it has functions we do not understand. This connects to the issue that although we sequenced the human genome over a decade ago, the quality of the ‘reference’ version is not great and (so it sounded from the discussions) needs upgrading.

‘Push button’ cheap DNA sequencers are around the corner. Might such devices become as ubiquitous as desktop printers? Why doesn’t someone create a ‘gene web browser’ that can cope with all the different data formats for genomes?

Privacy. There was a lot of talk about ‘do you want your genome on the web?’. I asked a quick informal pop quiz (someone else’s idea): there was unanimity that ‘I’d much rather my genome was on the web than my browsing history’. [UPDATE: n<10 and perhaps they were tongue in cheek!? One scientist pointed out in a session that when he informed his insurance company, after sequencing his own genome, that he had a very high risk of getting colon cancer, they raised his premiums. There are all sorts of reasons one would want to control genomic information and I was being a bit facetious.]

In many ways, computational biology and synthetic biology have that revolutionary feeling of the PC revolution in the 1970s – huge energy, massive potential for people without big resources to make big contributions, the young crowding in, the feeling of dramatic improvements imminent. Will this all seem ‘too risky’? It’s hard to know how the public will respond to risk. We put up with predictable annual carnage from car accidents but freak out over trivia. We ignore millions of deaths in the Congo but freak out over a handful in Israel/Gaza. My feeling is some of the scientists are too blasé about how the public will react to the risks, but I was wrong about how much fear there would be about the news that scientists recently deliberately engineered a much more dangerous version of an animal flu.

AI / machine learning / neuroscience [Section 5].

Artificial neural networks (NNs), now often referred to as ‘deep learning’, were first created 50 years ago but languished for a while when progress slowed. The field is now hot again. (Last year Google bought some companies leading the field, and a company, Boston Dynamics, that has had a long-term collaboration with DARPA.)

Jürgen Schmidhuber explained progress and how NNs have recently approached or surpassed human performance in various fields. E.g. NNs have recently surpassed human performance in recognising traffic signs (0.56% error rate for the best NN versus 1.16% for humans). Progress in all sorts of pattern-recognition problems is clearly going to continue rapidly. E.g. NNs are now being used to automate a) the analysis of scans for cancer cells and b) the labelling of scans of human brains – so artificial neural networks are now scanning and labelling natural neural networks.

Steve Hsu has blogged about this session here:

http://infoproc.blogspot.co.uk/2014/08/neural-networks-and-deep-learning.html?m=1

Michael Nielsen is publishing an education project online for people to teach themselves the basics of neural networks. It is brilliant and I would strongly advise teachers reading this blog to consider introducing it into their schools and doing the course with the pupils.

http://neuralnetworksanddeeplearning.com
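For a flavour of what these networks actually compute, here is a minimal sketch in the spirit of Nielsen’s course: a single forward pass through a tiny sigmoid network. The layer sizes and weights are placeholders I have invented; a real network learns its weights from data by gradient descent.

```python
import numpy as np

def sigmoid(z):
    """Squash each 'neuron' activation into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

x = rng.random(784)  # e.g. a flattened 28x28 grey-scale image

# Placeholder weights and biases; training would tune these.
W1, b1 = rng.normal(size=(30, 784)), rng.normal(size=30)  # hidden layer
W2, b2 = rng.normal(size=(10, 30)), rng.normal(size=10)   # output layer

hidden = sigmoid(W1 @ x + b1)
output = sigmoid(W2 @ hidden + b2)

print("predicted class:", output.argmax())  # untrained, so arbitrary
```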

Neil Gershenfeld (MIT) gave a couple of presentations. One was on developments in computer science connecting: non-‘von Neumann architecture’, programmable matter, 3D printing, ‘the internet of things’ etc. [Cf. Section 3.] NB. IBM announced this month substantial progress in its quest for a new computer architecture that is ‘non-von Neumann’: cf. this –

http://venturebeat.com/2014/08/07/ibms-synapse-marshals-the-power-of-the-human-brain-in-a-computer/view-all/

Another was on the idea of an ‘interspecies internet’. We now know many species can recognise each other, think, and communicate much better than we realised. He showed bonobos playing music with Peter Gabriel and dolphins communicating. He and others are plugging them into the internet. Some are doing this to help the general goal of figuring out how we might communicate with intelligent aliens – or how they might communicate with us.

(Gershenfeld’s book FAB led me to push 3D printing into the new National Curriculum and I would urge school science teachers to watch his TED talks and read this book. [INSERTED LATER: Some people have asked about this point. I (I thought obviously) did not mean I wrote the NC document. I meant – I pushed the subject into the discussions with the committees/drafters who wrote the NC. Experts in the field agreed it belonged. When it came out, this was not controversial. We also funded pilots with 3D printers so schools could get good advice about how to teach the subject well.] His point about 3D printers restoring the connection between thinking and making – lost post-Renaissance – is of great importance and could help end the foolishly entrenched ‘knowledge’ vs ‘skills’ and academic vs vocational trench wars. Gove actually gave a speech about this not long before he was moved and as far as I could tell it got less coverage than any speech he ever gave, thus proving the cliché about speeches on ‘skills’.)

There were a few presentations about ‘computational neuroscience’. I could not understand much as they were too technical. It was clear that there is deep concern among EU neuroscientists about the EU’s huge funding for Henry Markram’s Human Brain Project. One leading neuroscientist said to me that the whole project is misguided as it does not have clear focused goals and the ‘overhype’ will lead to public anger in a few years. Apparently, the EU is reconsidering the project and its goals. I have no idea about the merits of these arguments. My general prejudice, outside special circumstances, is that it is better to put funding into many pots and see what works, as DARPA does.

There are all sorts of crossovers between: AI / neuroscience / big data / NNs / algorithmic pattern recognition in other fields.

Peter Norvig, a leader in machine intelligence, said that he is more worried about the imminent social implications of continued advances making millions unemployed than he is about a sudden ‘Terminator / SKYNET’ scenario of a general purpose AI bootstrapping itself to greater than human intelligence and exterminating us all. Let’s hope so. It is obvious that this field is going to keep pushing boundaries – in open, commercial, and classified projects – so we are essentially going to be hoping for the best as we make more and more advances in AI. The idea of a ‘negative income tax’ – or some other form of essentially paying people X just to live – seems bound to return to the agenda. I think it could be a way around all sorts of welfare arguments. The main obstacle, it seems to me, is that people won’t accept paying for it if they think uncontrolled immigration will continue as it is now.

Space [Section 2]

There was great interest in various space projects and some senior people from NASA. There is much sadness at how NASA, despite many great people, has become a normal government institution – ie. caught in DC politics, very bureaucratic, and dysfunctional in various ways. On the other hand, many private ventures are now growing. E.g. Elon Musk is lowering the $/kg of getting material into orbit and planning a non-government Mars mission. As I said in my essay, really opening up space requires a space economy – not just pure science and research (such as putting telescopes on the far side of the moon, which we obviously should do). Columbus opened up America – not the Vikings.

There is another obvious motive. As Carl Sagan said, if the dinosaurs had had a space programme, they’d still be here. In the long-term we either develop tools for dealing with asteroids or we will be destroyed. We know this for sure. I think I heard that NASA is planning to park a small asteroid close to the moon around 2020 but I may have misheard / misunderstood.

Mario Livio led a great session on the search for life on exoplanets. The galaxy has ~10^11 stars and there is ~1 planet on average per star. There are ~10^11 galaxies, so a Fermi estimate is there are ~10^22 planets – 10 billion trillion planets – in the observable universe (this number is roughly 1,000 times bigger than the number you get in the fable of putting a grain of rice on the first square of a chessboard and doubling on each subsequent square). Many of them are in the ‘habitable zone’ around stars.
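Spelling the estimate out (the numbers are those in the paragraph above):

```python
# Fermi estimate from the session, spelled out.
stars_per_galaxy = 1e11
planets_per_star = 1
galaxies = 1e11

planets = stars_per_galaxy * planets_per_star * galaxies  # ~1e22

# Chessboard fable: one grain of rice, doubled on each of 64 squares.
rice_grains = 2**64 - 1  # ~1.8e19

print(f"planets in the observable universe: ~{planets:.0e}")
print(f"ratio to the chessboard's rice: ~{planets / rice_grains:.0f}")
# ~540x, so 'roughly 1,000 times bigger' is right as an order of magnitude.
```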

In 2017/18, there are two satellites launching that will be able to do spectroscopy on exoplanets – i.e. examine their atmospheres and detect things like oxygen and water. ‘If we get lucky’, these satellites will find ‘bio-signatures’ of life. If they find life having looked at only a few planets, then it would mean that life is very common. ‘More likely’, it will take 20-30 years and a new generation of space-based telescopes to find life. If planets are found with likely biosignatures, then it would make sense to turn SETI’s instruments towards them to see if they find anything. (However, we are already phasing out the use of radio waves for various communications – perhaps the use of radio waves occupies only a short window in the lifetime of a civilisation.) There are complex Bayesian arguments about what we might infer about our own likely future given various discoveries but I won’t go into those now. (E.g. if we find life is common but no traces of intelligent life, does this mean a) the evolution of complex life is not a common development from simple life; b) intelligent life is also common but it destroys itself; c) they’re hiding, etc.)

A very impressive (and, to the ignorant like me, very helpful) young scientist working on exoplanets called Olivier Guyon demonstrated a fascinating project to crowdsource the search for exoplanets by building a global network of automated cameras – PANOPTES (www.projectpanoptes.org). His team has built a simple system that can find exoplanets using normal digital cameras costing less than $1,000. They sit in a box connected to a 12V power supply, automatically take pictures of the night sky every few seconds, then email the data to the cloud. There, the data is aggregated and algorithms search for exoplanets. These units are cheap (can’t remember what he said but I think <$5,000). Everything is open-source, open-hardware. They will start shipping later this year and will make a brilliant school science project. Guyon has designed the project with schools in mind so that assembling and operating the units will not require professional-level skills. They are also exploring the next step of connecting smartphone cameras.
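The detection idea itself is simple: a transiting planet periodically dims its star by a fraction of a percent, and software hunts for those dips in the star’s ‘light curve’. A toy sketch of that core step – all numbers invented for illustration; the real pipeline has to cope with clouds, noise, and camera systematics:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fake light curve: normalised brightness of one star over 1,000 frames,
# with a 1% dip every 200 frames standing in for a transiting planet.
n = 1000
flux = 1.0 + rng.normal(0, 0.001, n)
flux[np.arange(n) % 200 < 10] -= 0.01

# Crude detector: flag frames more than 3 sigma below the median.
baseline = np.median(flux)
in_transit = flux < baseline - 3 * flux.std()

print(f"flagged {in_transit.sum()} of {n} frames as possible transit points")
```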

Building the >15m diameter space telescopes we need to search for life seems to me an obvious priority for scientific budgets – it bears on one of the handful of most profound questions facing us.

There was an interesting cross-over discussion about ‘space and genetics’ in which people discussed various ways in which space exploration would encourage / require genetic modification. E.g. 1: some sort of rocket fuel has recently been discovered to exist in large quantities on Mars. This is very handy but the substance is toxic. It might therefore make sense to modify humans going to live on Mars to be resistant. E.g. 2: space travel weakens bones. It has been discovered that mutations in the human population can improve bone strength by 8 standard deviations. This is a massive improvement – for comparison, 8 SDs in IQ covers people from severely mentally disabled to Nobel-winners. This was discovered by a team of scientists in Africa who noticed that people in a local tribe who got hit by cars did not suffer broken bones, so they sequenced the locals’ genomes. (Someone said there have already been successful clinical trials testing this discovery in a real drug to deal with osteoporosis.) E.g. 3: engineering E. coli shows that just four mutations can improve resistance to radiation by ?1,000 times (can’t read my note).

Craig Venter and others are thinking about long-term projects to send ‘von Neumann-bots’ (self-replicating space drones) across the universe containing machines that could create biological life once they arrive somewhere interesting, thus avoiding the difficult problems of keeping humans alive for thousands of years on spaceships. (Nobel-winning physicist Gerard ’t Hooft explains the basic principles of this in his book Playing with Planets.)

This paper (August 2014) summarises issues in the search for life:

http://www.pnas.org/content/early/2014/08/01/1304213111.full.pdf

Finding the genes for IQ and engineering possibilities [Section 5].

When my essay came out last year, there was a lot of mistaken reporting that encouraged many in the education world to grab the wrong end of the stick about IQ, though the BBC documentary about the controversy (cf. below) was excellent and a big step forward. It remains the case that very few people realise that in the last couple of years direct examination of DNA has now vindicated the consistent numbers on IQ heritability from decades of twin/adoption studies.

The rough heritability numbers for IQ are no longer in doubt among physical scientists who study this field: it is roughly 50% heritable at age ~18-20 and this number rises towards 70-80% for older adults. This is important because IQ is such a good predictor of the future – it is a better predictor than social class. E.g. the long-term Study of Mathematically Precocious Youth, which follows what has happened to children with 1:10,000 ability, shows, among many things, that a) a simple ‘noisy’ test administered at age 12-13 can make amazingly accurate predictions about their future, and b) achievements such as scientific breakthroughs correlate strongly with IQ. (If people looked at the data from SMPY, then I think some of the heat and noise in the debate would fade, but it is a sad fact that approximately zero senior powerful people in the English education world had even heard of this study before the furore over Plomin last year.)

Further, the environmental effects that are important are not the things that people assume. If you test the IQ of an adopted child in adulthood and the parents who adopted it, you find approximately zero correlation – all those anguished parenting discussions had approximately no measurable impact on IQ. (This does not mean that ‘parenting doesn’t matter’ – parents can transfer narrow skills such as playing the violin.) In the technical language, the environmental effects that are important are ‘non-shared’ environmental effects – i.e. they are things that two identical twins do not experience in the same way. We do not know what they are. It is reasonable to think that they are effectively random tiny events with nonlinear effects that we may never be able to track in detail – cf. this paper for a discussion of this issue in the context of epidemiology: http://ije.oxfordjournals.org/content/40/3/537.full.pdf+html

There remains widespread confusion on this subject among social scientists, education researchers, and the worlds of politics and the media where people were told misleading things in the 1980s and 1990s and do not realise that the debates have been transformed. To be fair, however, it was clear from this weekend that even many biologists do not know about new developments in this field so it is not surprising that political journalists and education researchers do not.

(An example of confusion in the political/media world… In my essay, I used the technical term ‘heritable’, which is a population statistic – not a statement about an individual. I also predicted that media coverage would confuse the subject (e.g. by saying things like ‘70% of your IQ comes from genes’). Sure enough, some journalists claimed I said the opposite of what I actually said, then quoted scientists attacking me for making a mistake that I not only did not make but actually warned about. Possibly the most confused sentence of all those in the media about my essay was the line ‘wealth is more heritable than genes’, which was in Polly Toynbee’s column and accompanying headline in the Guardian. This sentence is nonsense as it completely mangles the meaning of the term ‘heritable’. Much prominent commentary from politicians and sociologists/economists on ‘social mobility’ is gibberish because of mistaken assumptions about genes and environment. The Endnote in my essay has links to work by Plomin, Hsu et al that explains it all properly. This interview with Plomin is excellent: http://www.spectator.co.uk/features/8970941/sorry-but-intelligence-really-is-in-the-genes/. This recent BBC radio programme is excellent and summarises the complex issues well: http://www.bbc.co.uk/programmes/b042q944/episodes/guide)
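Since the confusion turns on this one term, it is worth writing down the standard population-genetics definition (my gloss, not a quotation from anyone at SciFoo):

```latex
% Heritability is a statement about variance in a population,
% not about any individual:
h^2 \;=\; \frac{\operatorname{Var}(G)}{\operatorname{Var}(P)},
\qquad \operatorname{Var}(P) \;=\; \operatorname{Var}(G) + \operatorname{Var}(E).
% So 'IQ is ~70% heritable among older adults' means ~70% of the variation
% *between people* is associated with genetic differences; it does NOT mean
% '70% of your IQ comes from genes'.
```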

I had a fascinating discussion/tutorial at SciFoo with Steve Hsu. Steve Hsu is a professor of theoretical physics (and successful entrepreneur) with a long interest in IQ (he also runs a brilliant blog that will keep you up to speed on all sorts). He now works part time on the BGI project in China to discover the genes responsible for IQ.

IQ is very similar to height from the perspective of behavioural genetics. Height has the advantage that it is obviously easier to measure than IQ but it has roughly the same heritability. Large-scale GWAS (genome-wide association studies) are already identifying some of the genes responsible for height. Hsu recently watched a talk by Fields Medallist Terry Tao and realised that a branch of maths could be used to examine the question – how many genomes do we need to scan to identify a substantial number of the genes for IQ? His answer: ‘roughly 10k moderately rare causal variants of mostly negative effect are responsible for normal population variation’ and finding them will require sequencing roughly a million genomes. The falling cost of sequencing DNA means that this is within reach. ‘At the time of this writing SNP genotyping costs are below $50 USD per individual, meaning that a single super-wealthy benefactor could independently fund a crash program for less than $100 million’ (Hsu).
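Hsu’s cost claim is simple arithmetic once you accept his sample-size estimate:

```latex
% One million genomes at ~$50 per SNP genotype:
10^{6}\ \text{genomes} \times \$50 \;=\; \$5 \times 10^{7} \;=\; \$50\text{m} \;<\; \$100\text{m}.
```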

The BGI project to find these genes has hit some snags recently (e.g. a US lawsuit between the two biggest suppliers of gene sequencing machines). However, it is now expected to start again soon. Hsu thinks that within a decade we could find many of the genes responsible for IQ. He has just put his fascinating paper on this subject on his blog (there is also a Q&A on p.27 that will be very useful for journalists):

http://infoproc.blogspot.co.uk/2014/08/genetic-architecture-of-intelligence.html

Just discovering a substantial fraction of the genes would be momentous in itself but there is more. It is already the case that farmers use genomes to make predictions about cows’ properties and behaviour (‘genotype to phenotype’ predictions). It is already the case that rich people could use in vitro fertilisation to select the embryo which they think will be most advantageous, because they can sequence the genomes of multiple embryos and examine each one to look for problems, then pick the one they prefer. Once we identify a substantial number of IQ genes, there is no obvious reason why rich people will not select the embryo that has the highest prediction for IQ.

This clearly raises many big questions. If the poor cannot do the same, then the rich could quickly embed advantages and society could become not only more unequal but also based on biological classes. One response is that if this sort of thing does become possible, then a national health system should fund everybody to do this. (I.e. It would not mandate such a process but it would give everybody a choice of whether to make use of it.) Once the knowledge exists, it is hard to see what will stop some people making use of it and offering services to – at least – the super-rich.

It is vital to separate two things: a) the basic science of genetics and cognition (which must be allowed to develop), and b) the potential technological applications and their social implications. The latter will rightly make people deeply worried, given our history, and clearly require extremely serious public debate. One of the reasons I wrote my essay was to try to stimulate such debate on the biggest – and potentially most dangerous – scientific issues. By largely ignoring such issues, Westminster, Whitehall, and the political media are squandering the time we have to discuss them, so technological breakthroughs will be unnecessarily shocking when they come.

Hsu’s contribution to this research – and his insight when listening to Tao about how to apply a branch of mathematics to a problem – is also a good example of how the more abstract fields of maths and physics often make contributions to the messier study of biology and society. The famous mathematician von Neumann practically invented some new fields outside maths and made many contributions to others. The physicist-mathematician Freeman Dyson recently made a major contribution to game theory: he realised that a piece of maths could be applied to a game studied for decades to uncover new strategies that had lain unnoticed all that time (Google “Dyson zero determinant strategies” and cf. this good piece: http://www.americanscientist.org/issues/id.16112,y.0,no.,content.true,page.1,css.print/issue.aspx).
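For the curious, the gist of the Press–Dyson result (my one-line summary of their 2012 paper): in the iterated prisoner’s dilemma, a player using a suitable ‘memory-one’ strategy can unilaterally force a fixed linear relation between the two players’ long-run average payoffs, whatever the opponent does.

```latex
% Press-Dyson 'zero-determinant' strategies enforce
%   \alpha s_X + \beta s_Y + \gamma = 0
% between the average payoffs s_X, s_Y. The 'extortionate' special case,
% with P the mutual-defection payoff and extortion factor \chi > 1:
s_X - P \;=\; \chi\,(s_Y - P).
% X's surplus over P is always \chi times Y's, whatever Y plays.
```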

However, this also raises a difficult issue. There is a great deal of Hsu’s paper – and the subject of IQ and heritability generally – that I do not have the mathematical skills to understand. This will be true of a large fraction of education researchers in education departments – I would bet a large majority. This problem is similar for many other vital issues (and applies to MPs and their advisers) and requires general work on translating such research into forms that can be explained by the media.

Kathryn Asbury also did a session on genes and education but I went to a conflicting one with George Church so unfortunately I missed it.

‘Big data’, simulations, and distributed systems [Section 6&7]

The rival to Markram’s Brain Project for mega EU funding was Dirk Helbing (ETH Zurich) and his project for new simulations to aid policy-making. Helbing was also at SciFoo and gave a couple of presentations. I will write separately about this.

Helbing says convincingly: ‘science must become a fifth pillar of democracies, besides legislation, executive, jurisdiction, and the public media’. Many in politics hope that technology will help them control things that now feel out of control. This is unlikely. The amount of data is growing faster than processing power, and the complexity of networked systems grows factorially, so top-down control will become less and less effective.

The alternative? ‘Distributed (self-)control, i.e. bottom-up self-regulation’. E.g. Helbing’s team has invented self-regulating traffic lights driven by traffic flows that can ‘outperform the classical top-down control by a conventional traffic center.’
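To make the flavour of ‘bottom-up self-regulation’ concrete, here is a toy rule of the same family – give green to whichever approach has built up the most queue ‘pressure’. This is my own crude simplification for illustration, not Helbing’s actual algorithm:

```python
import random

# Toy self-regulating junction: no fixed top-down schedule; the light
# simply serves whichever approach has the longest queue ('pressure').
# A crude simplification of the idea, not Helbing's actual algorithm.

random.seed(42)
queues = {"north-south": 0, "east-west": 0}

for step in range(20):
    for road in queues:                        # random arrivals each tick
        queues[road] += random.randint(0, 3)

    green = max(queues, key=queues.get)        # bottom-up rule
    queues[green] = max(0, queues[green] - 5)  # up to 5 vehicles clear

    print(f"t={step:2d}  green={green:11s}  queues={queues}")
```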

‘Can we transfer and extend this principle to socio-economic systems? Indeed, we are now developing mechanisms to overcome coordination and cooperation failures, conflicts, and other age-old problems. This can be done with suitably designed social media and sensor networks for real-time measurements, which will eventually weave a Planetary Nervous System. Hence, we can finally realize the dream of self-regulating systems… [S]uitable institutions such as certain social media – combined with suitable reputation systems – can promote other-regarding decision-making. The quick spreading of social media and reputation systems, in fact, indicates the emergence of a superior organizational principle, which creates collective intelligence by harvesting the value of diversity…’

His project’s website is here:

http://www.futurict.eu

I wish MPs and spads in all parties would look at this project and Helbing’s work. It provides technologically viable and theoretically justifiable mechanisms to avoid the current sterile party debates about delivery of services. We must move from Whitehall control to distributed systems…

Science and politics

Unsurprisingly, there was a lot of grumbling about politicians, regulation, Washington gridlock, bureaucracy and so on.

Much of it is clearly justified. Some working in genetics had stories about how regulations forbid them from telling people about imminently life-threatening medical problems they discover. Others bemoaned the lack of action on asteroid defence and climate change.

Some of these problems are inherently extremely difficult, as I discuss in my essay. On top of this, though, is the problem that many (most?) scientists do not know how to go about changing things.

It was interesting that some very eminent scientists, all much cleverer than ~100% of those in politics [INSERT: better to say ‘all with higher IQ than ~100% of those in politics’], have naive views about how politics works. In group discussions, there was little focused discussion about how they could influence politics better even though it is clearly a subject that they care about very much. (Gershenfeld said that scientists have recently launched a bid to take over various local government functions in Barcelona, which sounds interesting.)

A few times I nearly joined in the discussion but I thought it would disrupt things and distract them. In retrospect, I think this may have been a mistake and I should have spoken up. But I am also not articulate, and I worried that I would not be able to explain their errors and would simply waste their time.

I will blog on this issue separately. A few simple observations…

To get things changed in politics, scientists need mechanisms a) to agree priorities, so that their actions are focused, and b) to produce roadmaps with specifics. Generalised whining never works. The way to influence politicians is to make it easy for them to fall down certain paths without much thought, and this means having a general set of goals but also a detailed roadmap the politicians can apply; otherwise they will drift by default into the daily fog of chaos and moonlight.

Scientists also need to be prepared to put their heads above the parapet and face controversy. Many comments amounted to ‘why don’t politicians do the obviously rational thing without me having to take a risk of being embroiled in media horrors’. Sorry guys but this is not how it works.

Many academics are entirely focused on their research and do not want to lose time to politics. This is entirely reasonable. But if you won’t get involved you can have little influence other than lending your name to the efforts of others.

From working in the Department for Education, I know that in England very few scientists were prepared to face controversy over the issue of A Levels (exams at 18) and university entry / undergraduate standards, even though this problem directly affected their own research areas. Many dozens sought me out between 2007 and 2014 to complain about existing systems. I can count on the fingers of one hand those who rolled the dice and did things in the public domain that could have caused them problems. I have heard many scientists complain about media reports but when I’ve said – ‘write a blog explaining why they’re wrong’ – the answer is almost invariably ‘oh, the VC’s office would go mad’. If they won’t put their heads above the parapet on an issue that directly touches their own subject and career, how much are they likely to achieve in moving political debate in areas outside their own fields?

So long as scientists a) want to avoid controversy and b) remain isolated, they cannot have the leverage they want. The way to minimise controversy is to combine in groups – for the evolutionary biologists reading this, think SHOALS! – so that each individual is less exposed. But you will only join a shoal if you agree a common purpose.

I’m going to do a blog on ‘How scientists can learn from Bismarck and Jean Monnet to influence politics’. Monnet avoided immediate battles for power in favour of ‘preparing the future’ – i.e. having plans in his pocket for when crises hit and politicians were desperate. He created the EEC in this way. Just as people find it extremely hard to operationalise the lessons of Thucydides or Bismarck, they do not operationalise the lessons from Monnet. It would be interesting if scientists did this in a disciplined way; in some ways, it seems to me vital if we are to avoid various disasters. It is also necessary, however, to expose scientists to the non-scientific factors in play.

Anyway, it would be worth exploring this question: can very high IQ people with certain personality traits (like von Neumann, not like Gödel) learn enough in half a day’s exposure to case studies of successful political action to enable them to change something significant in politics, provided someone else can do most of the admin donkey work? I’m willing to bet the answer is YES. Whether they will then take personal risks by ACTING is another question.

A physicist remarked: ‘we’re bitching about politicians but we can’t even sort out our own field of scientific publishing which is a mess’.

NB. for scientists who haven’t read anything I’ve written before: do not make the mistake of thinking I am defending politicians. If you read other stuff I’ve written you will see that I have made all the criticisms that you have. But that doesn’t mean that scientists cannot do much better than they are at influencing policy.

A few general comments

1. It has puzzled me for over a decade that a) one of the few things the UK still has that is world class is Oxbridge, b) we have the example of Silicon Valley and our own history of post-1945 bungling to compare it with (e.g. how the Pentagon treated von Neumann versus how we treated Turing over the development of computer science), yet c) we persistently fail to develop venture-capital-based hubs around Oxbridge on the scale they deserve. As I pottered down University Avenue in Palo Alto looking for a haircut, past venture capital offices that can provide billions in start-up investment, I thought: you’ve made a few half-hearted attempts to persuade people to do more on this; when you get home, try again. So I will…

2. It was interesting to see how physicists have core mathematical skills that allow them to grasp fundamentals of other fields without prior study. Watching them reminded me of Mandelbrot’s comment that:

‘It is an extraordinary feature of science that the most diverse, seemingly unrelated, phenomena can be described with the same mathematical tools. The same quadratic equation with which the ancients drew right angles to build their temples can be used today by a banker to calculate the yield to maturity of a new, two-year bond. The same techniques of calculus developed by Newton and Leibniz two centuries ago to study the orbits of Mars and Mercury can be used today by a civil engineer to calculate the maximum stress on a new bridge… But the variety of natural phenomena is boundless while, despite all appearances to the contrary, the number of really distinct mathematical concepts and tools at our disposal is surprisingly small… When we explore the vast realm of natural and human behavior, we find the most useful tools of measurement and calculation are based on surprisingly few basic ideas.’

3. High status people have more confidence in asking basic / fundamental / possibly stupid questions. One can see people thinking ‘I thought that but didn’t say it in case people thought it was stupid and now the famous guy’s said it and everyone thinks he’s profound’. The famous guys don’t worry about looking stupid and they want to get down to fundamentals in fields outside their own.

4. I do not mean this critically but watching some of the participants I was reminded of Freeman Dyson’s comment:

‘I feel it myself, the glitter of nuclear weapons. It is irresistible if you come to them as a scientist. To feel it’s there in your hands. To release the energy that fuels the stars. To let it do your bidding. And to perform these miracles, to lift a million tons of rock into the sky, it is something that gives people an illusion of illimitable power, and it is in some ways responsible for all our troubles, I would say, this is what you might call ‘technical arrogance’ that overcomes people when they see what they can do with their minds.’ 

People talk about rationales for all sorts of things but, looking in their eyes, the fundamental driver seems to be – am I right, can I do it, do the patterns in my mind reflect something real? People like this are going to do new things if they can, and they are cleverer than the regulators. As a community, I think it is fair to say, they believe that pushing the barriers of knowledge is right and inevitable – outside odd fields like nuclear weapons research (odd because it still requires not only a large collection of highly skilled people but also a lot of money and all sorts of elements that are hard, though not impossible, for a non-state actor to acquire and use without detection). Nearly fifteen years on from the publication by Silicon Valley legend Bill Joy of his famous essay (‘Why the future doesn’t need us’), it is clear that many of the things he feared have advanced and there remains no coherent government approach or serious international discussion. (I am not suggesting that banning things is generally the way forward.)

5. The only field where there was a group of people openly lobbying for something to be made illegal was autonomous lethal drones. (There is a remorseless logic by which countermeasures against non-autonomous drones (e.g. GPS-spoofing) incentivise one to make one’s drones autonomous. They can move about waiting to spot someone’s face, then destroy them without any need for human input.) However, the discussion confirmed my view that even if a ban might be a good idea, it is doomed, in the short-term at least. I wonder what is to stop someone sending a drone swarm across the river and bombing Parliament during PMQs. Given it will be possible to deploy autonomous drones anonymously, it seems there may be a new era of assassinations coming, apart from all the other implications of drones. Given one may need a drone swarm to defend against a drone swarm, I can’t see them being outlawed any time soon. (Cf. Suarez’s Kill Decision for a great techno-thriller on the subject.)

(Also, I thought that this was an area where those involved in cutting edge issues could benefit from talking to historians. E.g. my understanding is that we filmed the use of anthrax on a Scottish island and delivered the footage to the Nazis with the message that we would anthrax Germany if they used chemical weapons – i.e. the lack of chemical warfare in WWII was a case of successful deterrence, not international law.)

6. A common comment is – ‘technology X [e.g. in vitro fertilisation] was denounced at the time but humans adapt to such changes amazingly fast, so technology Y will be just the same’. This is a reasonable argument in some ways, but I cannot help thinking that de-extinction, engineered bio-weapons, or human clones will be perceived as qualitative changes far beyond things like in vitro fertilisation.

7. Daniel Suarez told me what his next techno-thriller is about but if I put it on my blog he will deploy an autonomous drone with face recognition AI to kill me, so I’m keeping quiet. If you haven’t read Daemon, read it – it’s a rare book that makes you laugh out loud about how clever the plot is.

8. Von Neumann was heavily involved not only in the Manhattan Project but also the birth of the modern computer, the creation of the hydrogen bomb, and nuclear strategy. Before his tragic early death, he wrote a brilliant essay about the political problem of dealing with advanced technology which should be compulsory reading for all politicians aspiring to lead. It summarises the main problems that we face – ‘for progress, there is no cure…’

http://features.blogs.fortune.cnn.com/2013/01/13/can-we-survive-technology/

As I said at the top, any participants please tell me where I went wrong, and thanks for such a wonderful weekend.