‘This is possibly the single largest design flaw contributing to the bad Nash equilibrium in which … many governments are stuck. Every individual high-functioning competent person knows they can’t make much difference by being one more face in that crowd.’ Eliezer Yudkowsky, AI expert, LessWrong etc.
‘[M]uch of our intellectual elite who think they have “the solutions” have actually cut themselves off from understanding the basis for much of the most important human progress.’ Michael Nielsen, physicist and one of the handful of most interesting people I’ve ever talked to.
‘People, ideas, machines — in that order.’ Colonel Boyd.
‘There isn’t one novel thought in all of how Berkshire [Hathaway] is run. It’s all about … exploiting unrecognized simplicities.’ Charlie Munger, Warren Buffett’s partner.
‘Two hands, it isn’t much considering how the world is infinite. Yet, all the same, two hands, they are a lot.’ Alexander Grothendieck, one of the great mathematicians.
There are many brilliant people in the civil service and politics. Over the past five months the No10 political team has been lucky to work with some fantastic officials. But there are also some profound problems at the core of how the British state makes decisions. This was seen by pundit-world as a very eccentric view in 2014. It is no longer seen as eccentric. Many great officials, particularly younger ones, support dealing with these deep problems, though of course there will naturally be many fears — some reasonable, most unreasonable.
Now there is a confluence of: a) Brexit requires many large changes in policy and in the structure of decision-making, b) some people in government are prepared to take risks to change things a lot, and c) a new government with a significant majority and little need to worry about short-term unpopularity while trying to make rapid progress with long-term problems.
There is a huge amount of low-hanging fruit — trillion dollar bills lying on the street — in the intersection of:
the selection, education and training of people for high performance
decision-making institutions at the apex of government.
We want to hire an unusual set of people with different skills and backgrounds to work in Downing Street with the best officials, some as spads and perhaps some as officials. If you are already an official and you read this blog and think you fit one of these categories, get in touch.
The categories are roughly:
Data scientists and software developers
Junior researchers, one of whom will also be my personal assistant
Weirdos and misfits with odd skills
We want to improve performance and make me much less important — and within a year largely redundant. At the moment I have to make decisions well outside what Charlie Munger calls my ‘circle of competence’ and we do not have the sort of expertise supporting the PM and ministers that is needed. This must change fast so we can properly serve the public.
A. Unusual mathematicians, physicists, computer scientists, data scientists
You must have exceptional academic qualifications from one of the world’s best universities or have done something that demonstrates equivalent (or greater) talents and skills. You do not need a PhD — as Alan Kay said, we are also interested in graduate students as ‘world-class researchers who don’t have PhDs yet’.
You should have the following:
PhD or MSc in maths or physics.
Outstanding mathematical skills are essential.
Experience of using analytical languages: e.g. Python, SQL, R.
Familiarity with data tools and technologies such as Postgres, Scikit Learn, NEO4J.
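To calibrate expectations, here is a minimal hedged sketch — an invented mini-task in plain Python rather than any of the specific tools above — of the level of everyday numerical scripting the role assumes:

```python
# Illustrative only: the data and task are invented. Fitting a least-squares
# line by hand is the sort of thing you should be able to do without looking
# anything up.
def ols_fit(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]  # roughly y = 2x
a, b = ols_fit(xs, ys)           # expect a near 0, b near 2
```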
A few examples of papers that you will be considering:
Complex Contagions: A Decade in Review, 2017. This looks at a large number of studies on ‘what goes viral and why?’. A lot of studies in this field are dodgy (bad maths, don’t replicate etc), so an important question is which ones are worth examining.
On the frequency and severity of interstate wars, 2019. ‘How can it be possible that the frequency and severity of interstate wars are so consistent with a stationary model, despite the enormous changes and obviously non-stationary dynamics in human population, in the number of recognized states, in commerce, communication, public health, and technology, and even in the modes of war itself? The fact that the absolute number and sizes of wars are plausibly stable in the face of these changes is a profound mystery for which we have no explanation.’ Does this claim stack up?
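As a hedged sketch of how one might start interrogating that claim — with simulated data, since the paper’s dataset isn’t reproduced here — compare the severity distribution of an ‘early’ and a ‘late’ period using a two-sample Kolmogorov–Smirnov statistic:

```python
# Hedged illustration: both 'periods' are drawn from the same heavy-tailed
# (Pareto) distribution, i.e. a stationary world, so the KS statistic should
# be small. Real work would use the actual interstate-war data.
import bisect
import random

random.seed(0)

def pareto_sample(n, alpha=1.7, xmin=1000.0):
    """Draw n heavy-tailed 'battle deaths' values (Pareto distribution)."""
    return [xmin * (1.0 - random.random()) ** (-1.0 / alpha) for _ in range(n)]

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between ECDFs."""
    a, b = sorted(a), sorted(b)
    def ecdf(s, x):
        return bisect.bisect_right(s, x) / len(s)  # fraction of s <= x
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

early = pareto_sample(200)  # stand-in for an 'early period' of wars
late = pareto_sample(200)   # stand-in for a 'late period'
d = ks_statistic(early, late)
# same generating process, so d should sit well below rejection thresholds
```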
You should be able to explain to other mathematicians, physicists and computer scientists the ideas in such papers, discuss what could be useful for our projects, synthesise ideas for other data scientists, and apply them to practical problems. You won’t be an expert on the maths used in all these papers but you should be confident that you could study it and understand it.
We will be using machine learning and associated tools so it is important you can program. You do not need to program at professional software-developer level but it would be an advantage.
B. Unusual software developers
We are looking for great software developers who would love to work on these ideas, build tools and work with some great people. You should also look at some of Victor’s technical talks on programming languages and the history of computing.
You will be working with data scientists, designers and others.
C. Unusual economists
We are looking to hire some recent graduates in economics. You should a) have an outstanding record at a great university, b) understand conventional economic theories, c) be interested in arguments on the edge of the field — for example, work by physicists on ‘agent-based models’ or by the hedge fund Bridgewater on the failures/limitations of conventional macro theories/prediction, and d) have very strong maths and be interested in working with mathematicians, physicists, and computer scientists.
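To make the ‘agent-based models’ point concrete, here is a hedged toy from the econophysics literature — agents start perfectly equal and trade random amounts, yet substantial inequality emerges anyway. All parameters are arbitrary; this is a sketch of the genre, not of any specific paper:

```python
# Hedged illustration of an agent-based model: random pairwise exchange
# starting from perfect equality generates persistent inequality.
import random

random.seed(1)

def run_exchange_model(n_agents=500, n_steps=50000):
    wealth = [100.0] * n_agents
    for _ in range(n_steps):
        i, j = random.randrange(n_agents), random.randrange(n_agents)
        if i == j or wealth[i] == 0:
            continue
        amount = random.random() * wealth[i]  # i gives a random slice to j
        wealth[i] -= amount
        wealth[j] += amount
    return wealth

def gini(ws):
    """Gini coefficient: 0 = perfect equality, 1 = maximal inequality."""
    ws = sorted(ws)
    n = len(ws)
    cum = sum((2 * (k + 1) - n - 1) * w for k, w in enumerate(ws))
    return cum / (n * sum(ws))

wealth = run_exchange_model()
g = gini(wealth)  # starts at 0; random exchange alone pushes it well up
```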
The ideal candidate might, for example, have a degree in maths and economics, worked at the LHC in one summer, worked with a quant fund another summer, and written software for a YC startup in a third summer!
We’ve found one of these but want at least one more. You should be interested in, for example:
von Neumann’s foundation of game theory and ‘expected utility’,
mainstream economic theories,
modern theories about auctions,
theoretical computer science (including problems like the complexity of probabilistic inference in Bayesian networks, which is NP-hard),
ideas on ‘computational rationality’ and meta-reasoning from AI, cognitive science and so on.
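The NP-hardness point above is easy to feel in code: exact inference by brute-force enumeration sums over all 2^n assignments of the hidden variables, which is why real systems fall back on approximations. A hedged toy (the network and numbers are made up):

```python
# Hedged toy Bayesian network: n independent binary causes, each true with
# probability p; an alarm fires if any cause is true. Exact inference here
# enumerates all 2^n assignments -- fine for n=10, hopeless in general.
import itertools

def p_cause_given_alarm(n_causes, query_index=0, p=0.1):
    """P(cause[query_index] = 1 | alarm = 1) by full enumeration."""
    num = den = 0.0
    for assign in itertools.product([0, 1], repeat=n_causes):  # 2^n terms
        prob = 1.0
        for c in assign:
            prob *= p if c else (1 - p)
        if any(assign):  # alarm fired
            den += prob
            if assign[query_index]:
                num += prob
    return num / den

post = p_cause_given_alarm(10)
# analytically: P(c=1 | alarm) = p / (1 - (1-p)^n), so conditioning on the
# alarm raises the posterior above the 0.1 prior
```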
If these sorts of things are interesting to you, then you will find this project interesting.
It’s a bonus if you can code but it isn’t necessary.
D. Great project managers.
If you think you are one of a small group of people in the world who are truly GREAT at project management, then we want to talk to you. Victoria Woodcock ran Vote Leave — she was a truly awesome project manager and without her Cameron would certainly have won. We need people like this who have a 1 in 10,000 or higher level of skill and temperament.
The Oxford Handbook of Megaproject Management points out that it is possible to quantify lessons from the failures of projects like high speed rail because almost all of them fail, so the sample is large enough for statistical comparison, whereas successes are so rare that no statistical analysis of them is possible.
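The remedy that literature proposes, ‘reference class forecasting’, is mechanically simple: ignore the inside-view estimate and budget at a chosen percentile of the distribution of overruns on comparable past projects. A hedged sketch with invented overrun figures:

```python
# Hedged sketch of reference class forecasting. The overrun data below are
# invented for illustration; real work would use an audited reference class.
def required_uplift(overruns, acceptable_risk=0.2):
    """Smallest uplift u such that at most `acceptable_risk` of past
    projects exceeded their estimate by more than u."""
    ranked = sorted(overruns)
    k = min(len(ranked) - 1, int((1 - acceptable_risk) * len(ranked)))
    return ranked[k]

# invented fractional cost overruns from a 'reference class' of rail projects
past_overruns = [0.10, 0.25, 0.30, 0.45, 0.45, 0.60, 0.80, 0.90, 1.10, 1.60]
uplift = required_uplift(past_overruns, acceptable_risk=0.2)
budget = 1_000_000_000 * (1 + uplift)  # apply the uplift to the base estimate
```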
It is extremely interesting that the lessons of Manhattan (1940s), ICBMs (1950s) and Apollo (1960s) remain absolutely cutting edge because it is so hard to apply them and almost nobody has managed to do it. The Pentagon systematically de-programmed itself from more effective approaches to less effective approaches from the mid-1960s, in the name of ‘efficiency’. Is this just another way of saying that people like General Groves and George Mueller are rarer than Fields Medallists?
Anyway — it is obvious that improving government requires vast improvements in project management. The first project will be improving the people and skills already here.
If you want an example of the sort of people we need to find in Britain, look at this on CC Myers — the legendary builders. SPEED. We urgently need people with this sort of skill and attitude. (If you think you are such a company and you could dual carriageway the A1 north of Newcastle in record time, then get in touch!)
E. Junior researchers
In many aspects of government, as in the tech world and investing, brains and temperament smash experience and seniority out of the park.
We want to hire some VERY clever young people, either straight out of university or recently out, with extreme curiosity and capacity for hard work.
One of you will be a sort of personal assistant to me for a year — this will involve a mix of very interesting work and lots of uninteresting trivia that makes my life easier which you won’t enjoy. You will not have weekday date nights, you will sacrifice many weekends — frankly it will be hard having a boyfriend/girlfriend at all. It will be exhausting but interesting and, if you can cut it, you will be involved in things at the age of ~21 that most people never see.
I don’t want confident public school bluffers. I want people who are much brighter than me who can work in an extreme environment. If you play office politics, you will be discovered and immediately binned.
F. Communications
In SW1 communication is generally treated as almost synonymous with ‘talking to the lobby’. This is partly why so much punditry is ‘narrative from noise’.
With no election for years and huge changes in the digital world, there is a chance and a need to do things very differently.
We’re particularly interested in deep experts on TV and digital. We are also interested in people who have worked in movies or on advertising campaigns. There are some very interesting possibilities in the intersection of technology and storytelling — if you’ve done something weird, this may be the place for you.
I noticed in the recent campaign that the world of digital advertising has changed very fast since I was last involved in 2016. This is partly why so many journalists wrongly looked at things like Corbyn’s Facebook stats and thought Labour was doing better than us — the ecosystem evolves rapidly while political journalists are still stuck with the 2016 tech, which is why so many fell for Carole’s conspiracy theories. The digital people involved in the last campaign really knew what they were doing, which is incredibly rare in this world of charlatans and clients who don’t know what they should be buying. If you are interested in being right at the very edge of this field, join.
We have some extremely able people but we also must upgrade skills across the spad network.
G. Policy experts
One of the problems with the civil service is the way in which people are shuffled such that they either do not acquire expertise or they are moved out of areas they really know to do something else. One Friday, X is in charge of special needs education, the next week X is in charge of budgets.
There are, of course, general skills. Managing a large organisation involves some general skills. Whether it is Coca Cola or Apple, some things are very similar — how to deal with people, how to build great teams and so on. Experience is often over-rated. When Warren Buffett needed someone to turn around his insurance business he did not hire someone with experience in insurance: ‘When Ajit entered Berkshire’s office on a Saturday in 1986, he did not have a day’s experience in the insurance business’ (Buffett).
Shuffling some people who are expected to be general managers is a natural thing but it is clear Whitehall does this too much while also not training general management skills properly. There are not enough people with deep expertise in specific fields.
If you want to work in the policy unit or a department and you really know your subject so that you could confidently argue about it with world-class experts, get in touch.
It’s also the case that wherever you are most of the best people are inevitably somewhere else. This means that governments must be much better at tapping distributed expertise. Of the top 20 people in the world who best understand the science of climate change and could advise us what to do with COP 2020, how many now work as a civil servant/spad or will become one in the next 5 years?
H. Super-talented weirdos
People in SW1 talk a lot about ‘diversity’ but they rarely mean ‘true cognitive diversity’. They are usually babbling about ‘gender identity diversity blah blah’. What SW1 needs is not more drivel about ‘identity’ and ‘diversity’ from Oxbridge humanities graduates but more genuine cognitive diversity.
We need some true wild cards, artists, people who never went to university and fought their way out of an appalling hell hole, weirdos from William Gibson novels like that girl hired by Bigend as a brand ‘diviner’ who feels sick at the sight of Tommy Hilfiger or that Chinese-Cuban free runner from a crime family hired by the KGB. If you want to figure out what characters around Putin might do, or how international criminal gangs might exploit holes in our border security, you don’t want more Oxbridge English graduates who chat about Lacan at dinner parties with TV producers and spread fake news about fake news.
By definition I don’t really know what I’m looking for but I want people around No10 to be on the lookout for such people.
We need to figure out how to use such people better without asking them to conform to the horrors of ‘Human Resources’ (which also obviously need a bonfire).
Send a max 1 page letter plus CV to email@example.com and put in the subject line ‘job/’ and add after the / one of: data, developer, econ, comms, projects, research, policy, misfit.
I’ll have to spend time helping you so don’t apply unless you can commit to at least 2 years.
I’ll bin you within weeks if you don’t fit — don’t complain later because I made it clear now.
I will try to answer as many as possible but last time I publicly asked for job applications in 2015 I was swamped and could not, so I can’t promise an answer. If you think I’ve insanely ignored you, persist for a while.
I will use this blog to throw out ideas. It’s important when dealing with large organisations to dart around at different levels, not be stuck with formal hierarchies. It will seem chaotic and ‘not proper No10 process’ to some. But the point of this government is to do things differently and better and this always looks messy. We do not care about trying to ‘control the narrative’ and all that New Labour junk and this government will not be run by ‘comms grid’.
As Paul Graham and Peter Thiel say, most ideas that seem bad are bad but great ideas also seem at first like bad ideas — otherwise someone would have already done them. Incentives and culture push people in normal government systems away from encouraging ‘ideas that seem bad’. Part of the point of a small, odd No10 team is to find and exploit, without worrying about media noise, what Andy Grove called ‘very high leverage ideas’ and these will almost inevitably seem bad to most.
I will post some random things over the next few weeks and see what bounces back — it is all upside, there’s no downside if you don’t mind a bit of noise and it’s a fast cheap way to find good ideas…
‘People, ideas, machines — in that order!’ Colonel Boyd.
‘The main thing that’s needed is simply the recognition of how important seeing is, and the will to do something about it.’ Bret Victor.
‘[T]he transfer of an entirely new and quite different framework for thinking about, designing, and using information systems … is immensely more difficult than transferring technology.’ Robert Taylor, one of the handful of people most responsible for the creation of the internet and personal computing, and an inspiration to Bret Victor.
‘[M]uch of our intellectual elite who think they have “the solutions” have actually cut themselves off from understanding the basis for much of the most important human progress.’ Michael Nielsen, physicist.
This blog looks at an intersection of decision-making, technology, high performance teams and government. It sketches some ideas of physicist Michael Nielsen about cognitive technologies and of computer visionary Bret Victor about the creation of dynamic tools to help understand complex systems and ‘argue with evidence’, such as ‘tools for authoring dynamic documents’, and ‘Seeing Rooms’ for decision-makers — i.e rooms designed to support decisions in complex environments. It compares normal Cabinet rooms, such as that used in summer 1914 or October 1962, with state-of-the-art Seeing Rooms. There is very powerful feedback between: a) creating dynamic tools to see complex systems deeper (to see inside, see across time, and see across possibilities), thus making it easier to work with reliable knowledge and interactive quantitative models, semi-automating error-correction etc, and b) the potential for big improvements in the performance of political and government decision-making.
It is relevant to Brexit and anybody thinking ‘how on earth do we escape this nightmare’ but 1) these ideas are not at all dependent on whether you support or oppose Brexit, about which reasonable people disagree, and 2) they are generally applicable to how to improve decision-making — for example, they are relevant to problems like ‘how to make decisions during a fast moving nuclear crisis’ which I blogged about recently, or if you are a journalist ‘what future media could look like to help improve debate of politics’. One of the tools Nielsen discusses is a tool to make memory a choice by embedding learning in long-term memory rather than, as it is for almost all of us, an accident. I know from my days working on education reform in government that it’s almost impossible to exaggerate how little those who work on education policy think about ‘how to improve learning’.
Fields make huge progress when they move from stories (e.g Icarus) and authority (e.g ‘witch doctor’) to evidence/experiment (e.g physics, wind tunnels) and quantitative models (e.g design of modern aircraft). Political ‘debate’ and the processes of government are largely what they have always been — conflict over stories and authorities where almost nobody even tries to keep track of the facts/arguments/models they’re supposedly arguing about, or tries to learn from evidence, or tries to infer useful principles from examples of extreme success/failure. We can see much better than people could in the past how to shift towards processes of government being ‘partially rational discussion over facts and models and learning from the best examples of organisational success’. But one of the most fundamental and striking aspects of government is that practically nobody involved in it has the faintest interest in or knowledge of how to create high performance teams to make decisions amid uncertainty and complexity. This blindness is connected to another fundamental fact: critical institutions (including the senior civil service and the parties) are programmed to fight to stay dysfunctional, they fight to stay closed and avoid learning about high performance, they fight to exclude the most able people.
I wrote about some reasons for this before the referendum (cf. The Hollow Men). The Westminster and Whitehall response was along the lines of ‘natural party of government’, ‘Rolls Royce civil service’ blah blah. But the fact that Cameron, Heywood (the most powerful civil servant) et al did not understand many basic features of how the world works is why I and a few others gambled on the referendum — we knew that the systemic dysfunction of our institutions and the influence of grotesque incompetents provided an opportunity for extreme leverage.
Since then, after three years in which the parties, No10 and the senior civil service have imploded (after doing the opposite of what Vote Leave said should happen on every aspect of the negotiations) one thing has held steady — Insiders refuse to ask basic questions about the reasons for this implosion, such as: ‘why Heywood didn’t even put together a sane regular weekly meeting schedule and ministers didn’t even notice all the tricks with agendas/minutes etc’, how are decisions really made in No10, why are so many of the people below some cognitive threshold for understanding basic concepts (cf. the current GATT A24 madness), what does it say about Westminster that both the Adonis-Remainers and the Cash-ERGers have become more detached from reality while a large section of the best-educated have effectively run information operations against their own brains to convince themselves of fairy stories about Facebook, Russia and Brexit…
This blog is hopefully useful for some of those thinking about a) improving government around the world and/or b) ‘what comes after the coming collapse and reshaping of the British parties, and how to improve drastically the performance of critical institutions?’
Some old colleagues have said ‘Don’t put this stuff on the internet, we don’t want the second referendum mob looking at it.’ Don’t worry! Ideas like this have to be forced down people’s throats practically at gunpoint. Silicon Valley itself has barely absorbed Bret Victor’s ideas so how likely is it that there will be a rush to adopt them by the world of Blair and Grieve?! These guys can’t tell the difference between courtier-fixers and people with models for truly effective action like General Groves (HERE). Not one in a thousand will read a 10,000 word blog on the intersection of management and technology and the few who do will dismiss it as the babbling of a deluded fool, they won’t learn any more than they learned from the 2004 referendum or from Vote Leave. And if I’m wrong? Great. Things will improve fast and a second referendum based on both sides applying lessons from Bret Victor would be dynamite.
NB. Bret Victor’s project, Dynamic Land, is a non-profit. For an amount of money that a government department like the Department for Education loses weekly without any minister realising it’s lost (in the millions per week in my experience because the quality of financial control is so bad), it could provide crucial funding for Victor and help itself. Of course, any minister who proposed such a thing would be told by officials ‘this is illegal under EU procurement law and remember minister that we must obey EU procurement law forever regardless of Brexit’ — something I know from experience officials say to ministers whether it is legal or not when they don’t like something. And after all, ministers meekly accepted the Kafka-esque order from Heywood to prioritise duties of goodwill to the EU under A50 over preparations to leave A50, so habituated had Cameron’s children become to obeying the real deputy prime minister…
Below are 4 sections:
The value found in intersections of fields
Some ideas of Bret Victor
Some ideas of Michael Nielsen
1. Extreme value is often found in the intersection of fields
The legendary Colonel Boyd (he of the ‘OODA loop’) would shout at audiences ‘People, ideas, machines — in that order.’ Fundamental political problems we face require large improvements in the quality of all three and, harder, systems to integrate all three. Such improvements require looking carefully at the intersection of roughly five entangled areas of study. Extreme value is often found at such intersections.
Explore what we know about the selection, education and training of people for high performance (individual/team/organisation) in different fields. We should be selecting people much deeper in the tails of the ability curve — people who are +3 (~1:1,000) or +4 (~1:30,000) standard deviations above average on intelligence, relentless effort, operational ability and so on (now practically entirely absent from the ’50 most powerful people in Britain’). We should train them in the general art of ‘thinking rationally’ and making decisions amid uncertainty (e.g Munger/Tetlock-style checklists, exercises on SlateStarCodex blog). We should train them in the practical reasons for normal ‘mega-project failure’ and case studies such as the Manhattan Project (General Groves), ICBMs (Bernard Schriever), Apollo (George Mueller), ARPA-PARC (Robert Taylor) that illustrate how the ‘unrecognised simplicities’ of high performance bring extreme success and make them work on such projects before they are responsible for billions rather than putting people like Cameron in charge (after no experience other than bluffing through PPE then PR). NB. China’s leaders have studied these episodes intensely while American and British institutions have actively ‘unlearned’ these lessons.
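The tail figures quoted above can be checked directly under a normal-distribution assumption, since the share of people more than k standard deviations above the mean is P(Z > k) = erfc(k/√2)/2:

```python
# Quick check of the '+3 SD ~ 1:1,000' and '+4 SD ~ 1:30,000' figures.
import math

def one_in_n_above(k):
    """Roughly '1 in N' people are more than k SDs above the mean."""
    p = 0.5 * math.erfc(k / math.sqrt(2))
    return 1 / p

n3 = one_in_n_above(3)  # ~1 in 741, i.e. order 1:1,000
n4 = one_in_n_above(4)  # ~1 in 31,600, i.e. order 1:30,000
```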
Explore the frontiers of the science of prediction across different fields from physics to weather forecasting to finance and epidemiology. For example, ideas from physics about early warning systems in physical systems have application in many fields, including questions like: to what extent is it possible to predict which news will persist over different timescales, or predict wars from news and social media? There is interesting work combining game theory, machine learning, and Red Teams to predict security threats and improve penetration testing (physical and cyber). The Tetlock/IARPA project showed dramatic performance improvements in political forecasting are possible, contra what people such as Kahneman had thought possible. A recent Nature article by Duncan Watts explained fundamental problems with the way normal social science treats prediction and suggested new approaches — which have been almost entirely ignored by mainstream economists/social scientists. There is vast scope for applying ideas and tools from the physical sciences and data science/AI — largely ignored by mainstream social science, political parties, government bureaucracies and media — to social/political/government problems (as Vote Leave showed in the referendum, though this has been almost totally obscured by all the fake news: clue — it was not ‘microtargeting’).
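One concrete piece of the Tetlock/IARPA machinery mentioned above is worth showing: forecasters are ranked by Brier score, the mean squared error between stated probabilities and outcomes (0 is perfect; always saying 50% scores 0.25). The forecasts below are invented:

```python
# Hedged illustration of Brier scoring; all forecasts and outcomes invented.
def brier(probs, outcomes):
    """Mean squared error of probabilistic forecasts on binary outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

outcomes = [1, 0, 1, 1, 0]                 # what actually happened
pundit = [0.5, 0.5, 0.5, 0.5, 0.5]         # hedges everything at 50%
forecaster = [0.8, 0.2, 0.7, 0.9, 0.3]     # calibrated and discriminating

b_pundit = brier(pundit, outcomes)          # exactly 0.25
b_forecaster = brier(forecaster, outcomes)  # lower is better
```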
Explore technology and tools. For example, Bret Victor’s work and Michael Nielsen’s work on cognitive technologies. The edge of performance in politics/government will be defined by teams that can combine the ancient ‘unrecognised simplicities of high performance’ with edge-of-the-art technology. No10 is decades behind the pace in old technologies like TV, doesn’t understand simple tools like checklists, and is nowhere with advanced technologies.
Explore the frontiers of communication (e.g crisis management, applied psychology). Technology enables people to improve communication with unprecedented speed, scale and iterative testing. It also allows people to wreak chaos with high leverage. The technologies are already beyond the ability of traditional centralised government bureaucracies to cope with. They will develop rapidly such that most such centralised bureaucracies lose more and more control while a few high performance governments use the leverage they bring (cf. China’s combination of mass surveillance, AI, genetic identification, cellphone tracking etc as it desperately scrambles to keep control). The better educated think that psychological manipulation is something that happens to ‘the uneducated masses’ but they are extremely deluded — in many ways people like FT pundits are much easier to manipulate, their education actually makes them more susceptible to manipulation, and historically they are the ones who fall for things like Russian fake news (cf. the Guardian and New York Times on Stalin/terror/famine in the 1930s) just as now they fall for fake news about fake news. Despite the centrality of communication to politics it is remarkable how little attention Insiders pay to what works — never mind the question ‘what could work much better?’. The fact that so much of the media believes total rubbish about social media and Brexit shows that the media is incapable of analysing the intersection of politics and technology but, although it is obviously bad that the media disinforms the public, the only rational planning assumption is that this problem will continue and even get worse. The media cannot explain well either the use of TV or traditional polling — both extremely important for over 70 years — and there is no trend towards improvement, so a sound planning assumption is surely that the media will do even worse with new technologies and data science.
This will provide large opportunities for good and evil. A new approach able to adapt to the environment an order of magnitude faster than now would disorient political opponents (desperately scrolling through Twitter) to such a degree — in Boyd’s terms it would ‘collapse their OODA loops’ — that it could create crucial political space for focus on the extremely hard process of rewiring government institutions which now seems impossible for Insiders to focus on given their psychological/operational immersion in the hysteria of 24 hour rolling news and the constant crises generated by dysfunctional bureaucracies.
Explore how to re-program political/government institutions at the apex of decision-making authority so that a) people are more incentivised to optimise things we want them to optimise, like error-correction and predictive accuracy, and less incentivised to optimise bureaucratic process, prestige, and signalling as our institutions now do; b) institutions are incentivised to build high performance teams rather than make this practically illegal at the apex of government; and c) we have ‘immune systems’ based on decentralisation and distributed control to minimise the inevitable failures of even the best people and teams.
Example 1: Red Teams and pre-mortems can combat groupthink and normal cognitive biases but they are practically nowhere in the formal structure of governments. There is huge scope for a Parliament-mandated small and extremely elite Red Team operating next to, and in some senses above, the Cabinet Office to ensure diversity of opinions, fight groupthink and other standard biases, make sure lessons are learned and so on. Cost: a few million that it would recoup within weeks by stopping blunders.
Example 2: prediction tournaments/markets could improve policy and project management, with people able to ‘short’ official delivery timetables — imagine being able to short Grayling’s transport announcements, for example. In many areas new markets could help — e.g markets to allow shorting of house prices to dampen bubbles, as Chris Dillow and others have suggested. The way in which the IARPA/Tetlock work has been ignored in SW1 is proof that MPs and civil servants are not actually interested in — or incentivised to be interested in — who is right, who is actually an ‘expert’, and so on. There are tools available if new people do want to take these things seriously. Cost: a few million at most, possibly thousands, that it would recoup within a year by stopping blunders.
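A hedged sketch of one standard mechanism behind such markets — Hanson’s logarithmic market scoring rule (LMSR), a common automated market maker for prediction markets — here in a toy two-outcome market on whether a project delivers on time. The scenario and numbers are invented:

```python
# Hedged LMSR sketch: a market maker quotes probabilities that move as
# traders buy shares in an outcome. 'Shorting the official timetable' means
# buying 'late' shares, which pushes the implied probability of 'late' up.
import math

class LMSRMarket:
    """Two-outcome LMSR market maker; b controls liquidity."""
    def __init__(self, b=100.0):
        self.b = b
        self.q = [0.0, 0.0]  # shares sold of [on_time, late]

    def cost(self, q):
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, i):
        """Current probability the market assigns to outcome i."""
        e = [math.exp(x / self.b) for x in self.q]
        return e[i] / sum(e)

    def buy(self, i, shares):
        """Buy `shares` of outcome i; returns the cost charged."""
        new_q = list(self.q)
        new_q[i] += shares
        fee = self.cost(new_q) - self.cost(self.q)
        self.q = new_q
        return fee

market = LMSRMarket(b=100.0)
p0 = market.price(0)       # market opens at 0.5 / 0.5
fee = market.buy(1, 80)    # traders 'short' the official timetable
p_late = market.price(1)   # implied probability of 'late' rises
```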
Example 3: we need to consider projects that could bootstrap new international institutions that help solve more general coordination problems such as the risk of accidental nuclear war. The most obvious example of a project like this I can think of is a manned international lunar base which would be useful for a) basic science, b) the practical purposes of building urgently needed near-Earth infrastructure for space industrialisation, and c) to force the creation of new practical international institutions for cooperation between Great Powers. George Mueller’s team that put man on the moon in 1969 developed a plan to do this that would have been built by now if their plans had not been tragically abandoned in the 1970s. Jeff Bezos is explicitly trying to revive the Mueller vision and Britain should be helping him do it much faster. The old institutions like the UN and EU — built on early 20th Century assumptions about the performance of centralised bureaucracies — are incapable of solving global coordination problems. It seems to me more likely that institutions with qualities we need are much more likely to emerge out of solving big problems than out of think tank papers about reforming existing institutions. Cost = 10s/100s of billions, return = trillions, or near infinite if shifting our industrial/psychological frontiers into space drastically reduces the chances of widespread destruction.
A) Some fields have fantastic predictive models and there is a huge amount of high quality research, though there is a lot of low-hanging fruit in bringing methods from one field to another.
B) We know a lot about high performance including ‘systems management’ for complex projects but very few organisations use this knowledge and government institutions overwhelmingly try to ignore and suppress the knowledge we have.
C) Some fields have amazing tools for prediction and visualisation but very few organisations use these tools and almost nobody in government does (where colour photocopying is a major challenge).
D) We know a lot about successful communication but very few organisations use this knowledge and most base action on false ideas. E.g. political parties spend millions on spreading ideas but almost nothing on thinking about whether the messages are psychologically compelling or whether their methods/distribution work, and TV companies spend billions on news but almost nothing on understanding what science says about how to convey complex ideas — hence massively overpaid presenters like Evan Davis babbling metaphors like ‘economic takeoff’ in front of an airport while his crew films a plane ‘taking off’, or ‘the economy down the plughole’ with pictures of — a plughole.
E) Many thousands worldwide are thinking about all sorts of big government issues but very few can bring them together into coherent plans that a government can deliver, and there is almost no application of things like Red Teams and prediction markets. E.g. it is impossible to describe the extent to which politicians in Britain do not even consider ‘the timetable and process for turning announcement X into reality’ as something to think about — for people like Cameron and Blair the announcement IS the only reality and ‘management’ is a dirty word for junior people to think about while they focus on ‘strategy’. As I have pointed out elsewhere, it is fascinating that elite business schools have been collecting billions in fees to teach their students WRONGLY that operational excellence is NOT a source of competitive advantage, so it is no surprise that politicians and bureaucrats get this wrong.
But I can see almost nobody integrating the very best knowledge we have about A+B+C+D with E and I strongly suspect there are trillion dollar bills lying on the ground that could be grabbed for trivial cost — trillion dollar bills that people with power are not thinking about and are incentivised not to think about. I might be wrong but I would remind readers that Vote Leave was itself a bet on this proposition being right and I think its success should make people update their beliefs on the competence of elite political institutions and the possibilities for improvement.
Here I want to explore one set of intersections — the ideas of Bret Victor and Michael Nielsen.
2. Bret Victor: Cognitive technologies, dynamic tools, interactive quantitative models, Seeing Rooms — making it as easy to insert facts, data, and models in political discussion as it is to insert emoji
In the 1960s visionaries such as Joseph Licklider, Robert Taylor and Doug Engelbart developed a vision of networked interactive computing that provided the foundation not just for new technologies (the internet, PC etc) but for whole new industries. Licklider, Sutherland, Taylor et al provided a model (ARPA) for how science funding can work. Taylor provided a model (PARC) of how to manage a team of extremely talented people who turned a profound vision into reality. The original motivation for the vision of networked interactive computing was to help humans make good decisions in a complex world — or, ‘augmenting human intelligence’ and ‘man-machine symbiosis’. This story shows how to make big improvements in the world with very few resources if they are structured right: PARC involved ~25 key people and tens of millions over roughly a decade and generated trillions of dollars in value. If interested in the history and the super-productive processes behind the success of ARPA-PARC read THIS.
It’s fascinating that in many ways the original 1960s Licklider vision has still not been implemented. The Silicon Valley ecosystem developed parts of the vision but not others for complex reasons I don’t understand (cf. The Future of Programming). One of those who is trying to implement parts of the vision that have not been implemented is Bret Victor. Bret Victor is a rare thing: a genuine visionary in the computing world according to some of those ‘present at the creation’ of ARPA-PARC such as Alan Kay. His ideas lie at critical intersections between fields sketched above. Watch talks such as Inventing on Principle and Media for Thinking the Unthinkable and explore his current project, Dynamic Land in Berkeley.
Victor has described, and now demonstrates in Dynamic Land, how existing tools fail and what is possible. His core principle is that creators need an immediate connection to what they are creating. Current programming languages and tools are mostly based on very old ideas before computers even had screens and there was essentially no interactivity — they date from the era of punched cards. They do not allow users to interact dynamically. New dynamic tools enable us to think previously unthinkable thoughts and allow us to see and interact with complex systems: to see inside, see across time, and see across possibilities.
I strongly recommend spending a few days exploring his whole website but I will summarise below his ideas on two things:
His ideas about how to build new dynamic tools for working with data and interactive models.
His ideas about transforming the physical spaces in which teams work so that dynamic tools are embedded in their environment — people work inside a tool.
Applying these ideas would radically improve how people make decisions in government and how the media reports politics/government.
Language and writing were cognitive technologies created thousands of years ago which enabled us to think previously unthinkable thoughts. Mathematical notation did the same over the past 1,000 years. For example, take a mathematics problem described by the 9th Century mathematician al-Khwarizmi (who gave us the word algorithm):
Once modern notation was invented, this could be written instead as:
x² + 10x = 39
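al-Khwarizmi solved this by ‘completing the square’, an argument he had to spell out entirely in words; with modern notation the same argument collapses to a few lines (he kept only the positive root):

```latex
x^2 + 10x = 39
\;\Rightarrow\; x^2 + 10x + 25 = 64
\;\Rightarrow\; (x+5)^2 = 64
\;\Rightarrow\; x + 5 = 8
\;\Rightarrow\; x = 3
```

The notation is doing real cognitive work: steps that once took a paragraph of prose each become a single mechanical transformation.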
Michael Nielsen uses a similar analogy. Descartes and Fermat demonstrated that equations can be represented on a diagram and a diagram can be represented as an equation. This was a new cognitive technology, a new way of seeing and thinking: analytic geometry. Changes to the ‘user interface’ of mathematics were critical to its evolution and allowed us to think unthinkable thoughts (Using Artificial Intelligence to Augment Human Intelligence, see below).
Similarly, the 18th Century saw the creation of data graphics to display trade figures. Before this, people could only pore over huge tables. This is the first data graphic:
The Jedi of data visualisation, Edward Tufte, describes this extraordinary graphic of Napoleon’s invasion of Russia as ‘probably the best statistical graphic ever drawn’. It shows the losses of Napoleon’s army: the thick band shows the size of the army at each position from the Polish-Russian border onwards; the dark lower band shows the path of Napoleon’s winter retreat from Moscow, tied to temperature and time scales (you can see some of the disastrous icy river crossings famously described by Tolstoy). NB. The Cabinet makes life-and-death decisions now with far inferior technology to this from the 19th Century (see below).
If we look at contemporary scientific papers they represent extremely compressed information conveyed through a very old-fashioned medium, the scientific journal. Printed journals are centuries old but the ‘modern’ internet versions are usually similarly static. They do not show the behaviour of systems in a visual interactive way so we can see the connections between changing values in the models and changes in behaviour of the system. There is no immediate connection. Everything is pretty much a paper-and-pencil version of a paper. In Media for Thinking the Unthinkable, Victor shows how dynamic tools can transform normal static representations so systems can be explored with immediate feedback. This dramatically shows how much more richly and deeply ideas can be explored. With Victor’s tools we can interact with the systems described and immediately grasp important ideas that are hidden in normal media.
Picture: the very dense writing of a famous paper (by chance the paper itself is at the intersection of politics/technology and Watts has written excellent stuff on fake news but has been ignored because it does not fit what ‘the educated’ want to believe)
Picture: the same information presented differently. Victor’s tools make the information less compressed so there’s less work for the brain to do ‘decompressing’. They not only provide visualisations but the little ‘sliders’ over the graphics are to drag buttons and interact with the data so you see the connection between changing data and changing model. A dynamic tool transforms a scientific paper from ‘pencil and paper’ technology to modern interactive technology.
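The mechanism behind such sliders is simple and worth seeing explicitly: a parameter object notifies every dependent view the moment it changes, so the model and the graphic never fall out of sync. A toy sketch in Python — my own illustration of the pattern, not Victor’s code, and the growth-rate model is invented:

```python
class DynamicValue:
    """A parameter that notifies dependents whenever it changes -- the
    essence of the 'immediate connection' between a slider and a model."""
    def __init__(self, value):
        self._value = value
        self._observers = []

    def watch(self, fn):
        self._observers.append(fn)
        fn(self._value)  # render immediately, not on some later 'run'

    def set(self, value):
        self._value = value
        for fn in self._observers:
            fn(value)  # every dependent view recomputes at once

# Toy model: size of something growing at rate r, after 10 steps.
rate = DynamicValue(1.1)
view = []  # stands in for the chart the reader sees
rate.watch(lambda r: view.append(round(100 * r ** 10)))

rate.set(1.3)  # 'dragging the slider' re-renders instantly
print(view)    # → [259, 1379]: reader sees both states, no rerun needed
```

In a real dynamic document the callback would redraw a chart rather than append to a list, but the architecture — state change propagating instantly to every view — is the same.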
Victor explains in detail how policy analysis and public debate of climate change could be transformed. Leave aside the subject matter — of course it’s extremely important, anybody interested in this issue will gain from reading the whole thing and it would be great material for a school to use for an integrated science / economics / programming / politics project, but my focus is on his ideas about tools and thinking, not the specific subject matter.
Climate change is a great example to consider because it involves a) a lot of deep scientific knowledge, b) complex computer modelling which is understood in detail by a tiny fraction of 1% (and almost none of the social science trained ‘experts’ who are largely responsible for interpreting such models for politicians/journalists, cf HERE for the science of this), c) many complex political, economic, cultural issues, d) very tricky questions about how policy is discussed in mainstream culture, and e) the problem of how governments try to think about and act on important, complex, and long-term problems. Scientific knowledge is crucial but it cannot by itself answer the question: what to do? The ideas BV describes to transform the debate on climate change apply generally to how we approach all important political issues.
In the section Languages for technical computing, BV describes his overall philosophy (if you look at the original you will see dynamic graphics to help make each point but I can’t make them play on my blog — a good example of the failure of normal tools!):
‘The goal of my own research has been tools where scientists see what they’re doing in realtime, with immediate visual feedback and interactive exploration. I deeply believe that a sea change in invention and discovery is possible, once technologists are working in environments designed around:
ubiquitous visualization and in-context manipulation of the system being studied;
actively exploring system behavior across multiple levels of abstraction in parallel;
visually investigating system behavior by transforming, measuring, searching, abstracting;
seeing the values of all system variables, all at once, in context;
dynamic notations that embed simulation, and show the effects of parameter changes;
visually improvising special-purpose dynamic visualizations as needed.’
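Several of these qualities — seeing the values of all system variables, across time, and under different parameters — can be illustrated in a few lines: a simulation that records its full state trajectory for each parameter setting, rather than printing one final number. The epidemic model below is a standard toy SIR model, chosen by me as an example; it is not from Victor’s essay:

```python
def simulate_sir(beta, gamma=0.1, days=100, n=1000, i0=1):
    """Toy discrete SIR epidemic. Returns the FULL trajectory of every
    state variable, so a viewer can show all of them, in context."""
    s, i, r = n - i0, i0, 0
    history = []
    for _ in range(days):
        new_inf = beta * s * i / n   # new infections this step
        new_rec = gamma * i          # new recoveries this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append({"S": s, "I": i, "R": r})
    return history

# 'Exploring across possibilities': run the same system for several
# parameter values side by side instead of one opaque run.
runs = {beta: simulate_sir(beta) for beta in (0.15, 0.3, 0.45)}
peaks = {beta: round(max(step["I"] for step in run))
         for beta, run in runs.items()}
print(peaks)  # peak infections respond non-linearly to beta
```

A Victor-style environment would render `runs` as linked, manipulable charts; the substantive point is that the whole state space is retained and comparable, not discarded after one run.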
He then describes how the community of programming language developers have failed to create appropriate languages for scientists, which I won’t go into but which is fascinating.
He then describes the problem of how someone can usefully get to grips with a complex policy area involving technological elements.
‘How can an eager technologist find their way to sub-problems within other people’s projects where they might have a relevant idea? How can they be exposed to process problems common across many projects?… She wishes she could simply click on “gas turbines”, and explore the space:
What are open problems in the field?
Who’s working on which projects?
What are the fringe ideas?
What are the process bottlenecks?
What dominates cost? What limits adoption?
Why make improvements here? How would the world benefit?
‘None of this information is at her fingertips. Most isn’t even openly available — companies boast about successes, not roadblocks. For each topic, she would have to spend weeks tracking down and meeting with industry insiders. What she’d like is a tool that lets her skim across entire fields, browsing problems and discovering where she could be most useful…
‘Suppose my friend uncovers an interesting problem in gas turbines, and comes up with an idea for an improvement. Now what?
Is the improvement significant?
Is the solution technically feasible?
How much would the solution cost to produce?
How much would it need to cost to be viable?
Who would use it? What are their needs?
What metrics are even relevant?
‘Again, none of this information is at her fingertips, or even accessible. She’d have to spend weeks doing an analysis, tracking down relevant data, getting price quotes, talking to industry insiders.
‘What she’d like are tools for quickly estimating the answers to these questions, so she can fluidly explore the space of possibilities and identify ideas that have some hope of being important, feasible, and viable.
‘Consider the Plethora on-demand manufacturing service, which shows the mechanical designer an instant price quote, directly inside the CAD software, as they design a part in real-time. In what other ways could inventors be given rapid feedback while exploring ideas?’
Victor then describes a public debate over a public policy. Ideas were put forward. Everybody argued.
‘Who to believe? The real question is — why are readers and decision-makers forced to “believe” anything at all? Many claims made during the debate offered no numbers to back them up. Claims with numbers rarely provided context to interpret those numbers. And never — never! — were readers shown the calculations behind any numbers. Readers had to make up their minds on the basis of hand-waving, rhetoric, bombast.’
And there was no progress because nobody could really learn from the debate or even just be clear about exactly what was being proposed. Sound familiar?!! This is absolutely normal and Victor’s description applies to over 99% of public policy debates.
Victor then describes how you can take the policy argument he had sketched and change its nature. Instead of discussing words and stories, DISCUSS INTERACTIVE MODELS.
‘The reader can explore alternative scenarios, understand the tradeoffs involved, and come to an informed conclusion about whether any such proposal could be a good decision.
‘This is possible because the author is not just publishing words. The author has provided a model — a set of formulas and algorithms that calculate the consequences of a given scenario… Notice how the model’s assumptions are clearly visible, and can even be adjusted by the reader.
‘Readers are thus encouraged to examine and critique the model. If they disagree, they can modify it into a competing model with their own preferred assumptions, and use it to argue for their position. Model-driven material can be used as grounds for an informed debate about assumptions and tradeoffs.
‘Modeling leads naturally from the particular to the general. Instead of seeing an individual proposal as “right or wrong”, “bad or good”, people can see it as one point in a large space of possibilities. By exploring the model, they come to understand the landscape of that space, and are in a position to invent better ideas for all the proposals to come. Model-driven material can serve as a kind of enhanced imagination.‘
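What a ‘model-driven argument’ means in practice can be shown in miniature: the author publishes a model whose assumptions are explicit parameters, and a dissenting reader forks it rather than trading rhetoric. A sketch, with every number invented purely for illustration:

```python
def insulation_policy(homes=1_000_000, cost_per_home=5_000,
                      saving_per_home_per_year=150, years=20):
    """A published policy claim as code: returns (total_cost, total_saving)
    for a hypothetical home-insulation scheme. Every assumption is a
    visible, adjustable parameter."""
    total_cost = homes * cost_per_home
    total_saving = homes * saving_per_home_per_year * years
    return total_cost, total_saving

# The author's scenario:
cost, saving = insulation_policy()
print(saving > cost)   # → False: under these assumptions the scheme loses money

# A reader who disagrees does not argue by hand-waving -- she forks the
# model with her preferred assumption and republishes it:
cost, saving = insulation_policy(saving_per_home_per_year=300)
print(saving > cost)   # → True: the disagreement is now located in one number
```

The debate collapses from rhetoric about whether the scheme is ‘good’ to a checkable question: is the annual saving per home nearer 150 or 300?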
Victor then looks at some standard materials from those encouraging people to take personal action on climate change and concludes:
‘These are lists of proverbs. Little action items, mostly dequantified, entirely decontextualized. How significant is it to “eat wisely” and “trim your waste”? How does it compare to other sources of harm? How does it fit into the big picture? How many people would have to participate in order for there to be appreciable impact? How do you know that these aren’t token actions to assuage guilt?
‘And why trust them? Their rhetoric is catchy, but so is the horrific “denialist” rhetoric from the Cato Institute and similar. When the discussion is at the level of “trust me, I’m a scientist” and “look at the poor polar bears”, it becomes a matter of emotional appeal and faith, a form of religion.
‘Climate change is too important for us to operate on faith. Citizens need and deserve reading material which shows context — how significant suggested actions are in the big picture — and which embeds models — formulas and algorithms which calculate that significance, for different scenarios, from primary-source data and explicit assumptions.’
Even the supposed ‘pros’ — insiders at the top of research fields in politically relevant areas — have to scramble around typing words into search engines, crawling around government websites, and scrolling through PDFs. Reliable data takes ages to find. Reliable models are even harder to find. Vast amounts of useful data and models exist but they cannot be found and used effectively because we lack the tools.
‘Authoring tools designed for arguing from evidence’
Why don’t we conduct public debates in the way his toy example does with interactive models? Why aren’t paragraphs in supposedly serious online newspapers written like this? Partly because of the culture, including the education of those who run governments and media organisations, but also because the resources for creating this sort of material don’t exist.
‘In order for model-driven material to become the norm, authors will need data, models, tools, and standards…
‘Suppose there were good access to good data and good models. How would an author write a document incorporating them? Today, even the most modern writing tools are designed around typing in words, not facts. These tools are suitable for promoting preconceived ideas, but provide no help in ensuring that words reflect reality, or any plausible model of reality. They encourage authors to fool themselves, and fool others…
‘Imagine an authoring tool designed for arguing from evidence. I don’t mean merely juxtaposing a document and reference material, but literally “autocompleting” sourced facts directly into the document. Perhaps the tool would have built-in connections to fact databases and model repositories, not unlike the built-in spelling dictionary. What if it were as easy to insert facts, data, and models as it is to insert emoji and cat photos?
‘Furthermore, the point of embedding a model is that the reader can explore scenarios within the context of the document. This requires tools for authoring “dynamic documents” — documents whose contents change as the reader explores the model. Such tools are pretty much non-existent.’
These sorts of tools for authoring dynamic documents should be seen as foundational technology like the integrated circuit or the internet.
‘Foundational technology appears essential only in retrospect. Looking forward, these things have the character of “unknown unknowns” — they are rarely sought out (or funded!) as a solution to any specific problem. They appear out of the blue, initially seem niche, and eventually become relevant to everything.
‘They may be hard to predict, but they have some common characteristics. One is that they scale well. Integrated circuits and the internet both scaled their “basic idea” from a dozen elements to a billion. Another is that they are purpose-agnostic. They are “material” or “infrastructure”, not applications.’
Victor ends with a very potent comment — that much of what we observe is ‘rearranging app icons on the deck of the Titanic’. Commercial incentives drive people towards trying to create ‘the next Facebook’ — not fixing big social problems. I will address this below.
If you are an arts graduate interested in these subjects but not expert (like me), here is an example that will be more familiar… If you look at any big historical subject, such as ‘why/how did World War I start?’, and examine leading scholarship carefully, you will see that all the leading books on such subjects provide false chronologies and mix facts with errors such that it is impossible for a careful reader to be sure about crucial things. It is routine for famous historians to write that ‘X happened because Y’ when Y happened after X. Part of the problem is culture but it could potentially be improved by tools. A very crude example: why doesn’t Kindle make it possible for readers to log factual errors, with users’ reliability ranked by others, so authors can easily check potential errors and fix them in online versions of books? Even better, this could be part of a larger system to develop gold standard chronologies with each ‘fact’ linked to original sources and so on. This would improve the reliability of historical analysis and create an ‘anti-entropy’ ratchet — as things stand, errors spread across all books on a subject and there is no mechanism to reverse this…
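The crude Kindle idea is easy to prototype: weight each reported error by the reporter’s track record, queue the most credible reports for the author, and update reputations when the author rules. A toy sketch — my own illustration of the ranking scheme, not an existing Kindle feature, and the example reports are invented:

```python
class ErrorLog:
    """Reader-reported errors, ranked by each reporter's track record."""
    def __init__(self):
        self.reliability = {}   # user -> weight, updated on author verdicts
        self.reports = []       # (user, location, claimed_error)

    def report(self, user, location, claim):
        self.reliability.setdefault(user, 1.0)
        self.reports.append((user, location, claim))

    def queue_for_author(self):
        """Most credible reports first, so the author's time goes furthest."""
        return sorted(self.reports,
                      key=lambda rep: self.reliability[rep[0]], reverse=True)

    def author_verdict(self, user, confirmed):
        # Confirmed corrections raise a reporter's weight; rejected ones
        # lower it -- this is the 'anti-entropy' ratchet.
        self.reliability[user] *= 1.5 if confirmed else 0.5

log = ErrorLog()
log.report("alice", "p.212", "mobilisation order dated before the ultimatum")
log.author_verdict("alice", confirmed=True)
log.report("bob", "p.40", "disputes a date, cites no source")
print([rep[0] for rep in log.queue_for_author()])  # → ['alice', 'bob']
```

Linking each confirmed correction to a primary source would then give the ‘gold standard chronology’ a data structure to accrete into.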
‘Seeing Rooms’: macro-tools to help make decisions
Victor also discusses another fundamental issue: the rooms/spaces in which most modern work and thinking occurs are not well-suited to the problems being tackled and we could do much better. Victor is addressing advanced manufacturing and robotics but his argument applies just as powerfully, perhaps more powerfully, to government analysis and decision-making.
Now, ‘software based tools are trapped in tiny rectangles’. We have very sophisticated tools but they all sit on computer screens on desks, just as you are reading this blog.
In contrast, ‘Real-world tools are in rooms where workers think with their bodies.’ Traditional crafts occur in spatial environments designed for that purpose. Workers walk around, use their hands, and think spatially. ‘The room becomes a macro-tool they’re embedded inside, an extension of the body.’ These rooms act like tools to help them understand their problems in detail and make good decisions.
Picture: rooms designed for the problems being tackled
The wave of 3D printing has developed ‘maker rooms’ and ‘Fab Labs’ where people work with a set of tools that are too expensive for an individual. The room is itself a network of tools. This approach is revolutionising manufacturing.
Why is this useful?
‘Modern projects have complex behavior… Understanding requires seeing and the best seeing tools are rooms.’ This is obviously particularly true of politics and government.
Here is a photo of a recent NASA mission control room. The room is set up so that all relevant people can see relevant data and models at different scales and preserve a common picture of what is important. NASA pioneered thinking about such rooms and the technology and tools needed in the 1960s.
Here are pictures of two control rooms for power grids.
Here is a panoramic photo of the unified control centre for the Large Hadron Collider – the biggest of ‘big data’ projects. Notice details like how they have removed all pillars so nothing interrupts visual communication between teams.
Now contrast these rooms with rooms from politics.
Here is the Cabinet room. I have been in this room. There are effectively no tools. In the 19th Century at least Lord Salisbury used the fireplace as a tool. He would walk around the table, gather sensitive papers, and burn them at the end of meetings. The fire is now blocked. The only other tool, the clock, did not work when I was last there. Over a century, the physical space in which politicians make decisions affecting potentially billions of lives has deteriorated.
British Cabinet room practically as it was July 1914
Here are JFK and EXCOM making decisions during the Cuban Missile Crisis, which moved much faster than July 1914, potentially compressing decisions that could lead to the destruction of global civilisation into just minutes.
Here is the only photo in the public domain of the room known as ‘COBRA’ (Cabinet Office Briefing Room) where a shifting set of characters at the apex of power in Britain meet to discuss crises.
Notice how poor it is compared to NASA, the LHC etc. There has clearly been no attempt to learn from our best examples about how to use the room as a tool. The screens at the end are a late add-on to a room that is essentially indistinguishable from the room in which Prime Minister Asquith sat in July 1914 while doodling notes to his girlfriend as he got bored. I would be surprised if the video technology used is as good as what is commercially available more cheaply (the justification will be ‘security’), and I would bet that many of the decisions about the operation of this room would not survive scrutiny from experts in how to construct such rooms.
I have not attended a COBRA meeting but I’ve spoken to many who have. The meetings, as you would expect looking at this room, are often normal political meetings. That is:
aims are unclear,
assumptions are not made explicit,
there is no use of advanced tools,
there is no use of quantitative models,
discussions are often dominated by lawyers so many actions are deemed ‘unlawful’ without proper scrutiny (and this device is routinely used by officials to stop discussion of options they dislike for non-legal reasons),
there is constant confusion between policy, politics and PR, then the cast disperses without clarity about what was discussed and agreed.
It has a few more screens but the picture is essentially the same: there are no interactive tools beyond the ability to speak and see someone at a distance, which was invented back in the 1950s/1960s in the pioneering programs of SAGE (automated air defence) and Apollo (man on the moon). Tools to help thinking in powerful ways are not taken seriously. It is largely the same, and decisions are made in the same way, as in the Cuban Missile Crisis. In some ways the use of technology now makes management worse as it encourages Presidents and their staff to try to micromanage things they should not be managing, often in response to, or fear of, the media.
Individual ministers’ offices are also hopeless. The computers are old and rubbish. Even colour printing is often a battle. Walls are for kids’ pictures. In the DfE officials resented even giving us paper maps of where schools were and only did it when bullied by the private office. It was impossible for officials to work on interactive documents. They had no technology even for sharing documents in a way that was then (2011) normal even in low-performing organisations. Using Google Docs was ‘against the rules’. (I’m told this has slightly improved.) The whole structure of ‘submissions’ and ‘red boxes’ is hopeless. It is extremely bureaucratic and slow. It prevents serious analysis of quantitative models. It reinforces the lack of proper scientific thinking in policy analysis. It guarantees confusion as ministers scribble notes and private offices interpret rushed comments by exhausted ministers after dinner instead of having proper face-to-face meetings that get to the heart of problems and resolve conflicts quickly. The whole approach reinforces the abject failure of the senior civil service to think about high performance project management.
Of course, most of the problems with the standards of policy and management in the civil service are low or no-tech problems — they involve the ‘unrecognised simplicities’ that are independent of, and prior to, the use of technology — but all these things negatively reinforce each other. Anybody who wants to do things much better is scuppered by Whitehall’s entangled disaster zone of personnel, training, management, incentives and tools.
Dynamic Land: ‘amazing’
I won’t go into this in detail. Dynamic Land is in a building in Berkeley. I visited last year. It is Victor’s attempt to turn the ideas above into a sort of living laboratory. It is a large connected set of rooms that have computing embedded in surfaces. For example, you can scribble equations on a bit of paper, cameras in the ceiling read your scribbles automatically, turn them into code, and execute them — for example, by producing graphics. You can then physically interact with models that appear on the table or wall while the cameras watch your hands and instantly turn gestures into new code and change the graphics or whatever you are doing. Victor has put these cutting edge tools into a space and made it open to the Berkeley community. This is all hard to explain/understand because you haven’t seen anything like it even in sci-fi films (it’s telling that the media still uses the 15-year-old Minority Report as its sci-fi illustration for such things).
This video gives a little taste. I visited with a physicist who works on the cutting edge of data science/AI. I was amazed but I know nothing about such things — I was interested to see his reaction as he scribbled gravitational equations on paper and watched the cameras turn them into models on the table in real-time, then he changed parameters and watched the graphics change in real-time on the table (projected from the ceiling): ‘Ohmygod, this is just obviously the future, absolutely amazing.’ The thought immediately struck us: imagine the implications of having policy discussions with such tools instead of the usual terrible meetings. Imagine discussing HS2 budgets or possible post-Brexit trading arrangements with the models running like this for decision-makers to interact with.
Video of Dynamic Land: the bits of coloured paper are ‘code’, graphics are projected from the ceiling
In his essay Thought as a Technology, Nielsen describes the feedback between thought and interfaces:
‘In extreme cases, to use such an interface is to enter a new world, containing objects and actions unlike any you’ve previously seen. At first these elements seem strange. But as they become familiar, you internalize the elements of this world. Eventually, you become fluent, discovering powerful and surprising idioms, emergent patterns hidden within the interface. You begin to think with the interface, learning patterns of thought that would formerly have seemed strange, but which become second nature. The interface begins to disappear, becoming part of your consciousness. You have been, in some measure, transformed.’
He describes how normal language and computer interfaces are cognitive technologies:
‘Language is an example of a cognitive technology: an external artifact, designed by humans, which can be internalized, and used as a substrate for cognition. That technology is made up of many individual pieces – words and phrases, in the case of language – which become basic elements of cognition. These elements of cognition are things we can think with…
‘In a similar way to language, maps etc, a computer interface can be a cognitive technology. To master an interface requires internalizing the objects and operations in the interface; they become elements of cognition. A sufficiently imaginative interface designer can invent entirely new elements of cognition… In general, what makes an interface transformational is when it introduces new elements of cognition that enable new modes of thought. More concretely, such an interface makes it easy to have insights or make discoveries that were formerly difficult or impossible. At the highest level, it will enable discoveries (or other forms of creativity) that go beyond all previous human achievement.’
Nielsen describes how powerful ways of thinking among mathematicians and physicists are hidden from view and not part of textbooks and normal teaching.
‘The reason is that traditional media are poorly adapted to working with such representations… If experts often develop their own representations, why do they sometimes not share those representations? To answer that question, suppose you think hard about a subject for several years… Eventually you push up against the limits of existing representations. If you’re strongly motivated – perhaps by the desire to solve a research problem – you may begin inventing new representations, to provide insights difficult through conventional means. You are effectively acting as your own interface designer. But the new representations you develop may be held entirely in your mind, and so are not constrained by traditional static media forms. Or even if based on static media, they may break social norms about what is an “acceptable” argument. Whatever the reason, they may be difficult to communicate using traditional media. And so they remain private, or are only discussed informally with expert colleagues.’
If we can create interfaces that reify deep principles, then ‘mastering the subject begins to coincide with mastering the interface.’ He gives the example of Photoshop, which builds in many deep principles of image manipulation.
‘As you master interface elements such as layers, the clone stamp, and brushes, you’re well along the way to becoming an expert in image manipulation… By contrast, the interface to Microsoft Word contains few deep principles about writing, and as a result it is possible to master Word’s interface without becoming a passable writer. This isn’t so much a criticism of Word, as it is a reflection of the fact that we have relatively few really strong and precise ideas about how to write well.’
He then describes what he calls ‘the cognitive outsourcing model’: ‘we specify a problem, send it to our device, which solves the problem, perhaps in a way we-the-user don’t understand, and sends back a solution.’ E.g. we ask Google a question and Google sends us an answer.
This is how most of us think about the idea of augmenting the human intellect but it is not the best approach. ‘Rather than just solving problems expressed in terms we already understand, the goal is to change the thoughts we can think.’
‘One challenge in such work is that the outcomes are so difficult to imagine. What new elements of cognition can we invent? How will they affect the way human beings think? We cannot know until they’ve been invented.
‘As an analogy, compare today’s attempts to go to Mars with the exploration of the oceans during the great age of discovery. These appear similar, but while going to Mars is a specific, concrete goal, the seafarers of the 15th through 18th centuries didn’t know what they would find. They set out in flimsy boats, with vague plans, hoping to find something worth the risks. In that sense, it was even more difficult than today’s attempts on Mars.
‘Something similar is going on with intelligence augmentation. There are many worthwhile goals in technology, with very specific ends in mind. Things like artificial intelligence and life extension are solid, concrete goals. By contrast, new elements of cognition are harder to imagine, and seem vague by comparison. By definition, they’re ways of thinking which haven’t yet been invented. There’s no omniscient problem-solving box or life-extension pill to imagine. We cannot say a priori what new elements of cognition will look like, or what they will bring. But what we can do is ask good questions, and explore boldly.’
In another essay, Using Artificial Intelligence to Augment Human Intelligence, Nielsen points out that breakthroughs in creating powerful new cognitive technologies such as musical notation or Descartes’ invention of algebraic geometry are rare but ‘modern computers are a meta-medium enabling the rapid invention of many new cognitive technologies’ and, further, AI will help us ‘invent new cognitive technologies which transform the way we think.’
Further, historically powerful new cognitive technologies, such as ‘Feynman diagrams’, have often appeared strange at first. We should not assume that new interfaces should be ‘user friendly’. Powerful interfaces that repay mastery may require sacrifices.
‘The purpose of the best interfaces isn’t to be user-friendly in some shallow sense. It’s to be user-friendly in a much stronger sense, reifying deep principles about the world, making them the working conditions in which users live and create. At that point what once appeared strange can instead become comfortable and familiar, part of the pattern of thought…
‘Unfortunately, many in the AI community greatly underestimate the depth of interface design, often regarding it as a simple problem, mostly about making things pretty or easy-to-use. In this view, interface design is a problem to be handed off to others, while the hard work is to train some machine learning system.
‘This view is incorrect. At its deepest, interface design means developing the fundamental primitives human beings think and create with. This is a problem whose intellectual genesis goes back to the inventors of the alphabet, of cartography, and of musical notation, as well as modern giants such as Descartes, Playfair, Feynman, Engelbart, and Kay. It is one of the hardest, most important and most fundamental problems humanity grapples with.
‘As discussed earlier, in one common view of AI our computers will continue to get better at solving problems, but human beings will remain largely unchanged. In a second common view, human beings will be modified at the hardware level, perhaps directly through neural interfaces, or indirectly through whole brain emulation.
‘We’ve described a third view, in which AIs actually change humanity, helping us invent new cognitive technologies, which expand the range of human thought. Perhaps one day those cognitive technologies will, in turn, speed up the development of AI, in a virtuous feedback cycle:
‘It would not be a Singularity in machines. Rather, it would be a Singularity in humanity’s range of thought… The long-term test of success will be the development of tools which are widely used by creators. Are artists using these tools to develop remarkable new styles? Are scientists in other fields using them to develop understanding in ways not otherwise possible?’
I would add: are governments using these tools to help them think in ways we already know are more powerful and to explore new ways of making decisions and shaping the complex systems on which we rely?
Nielsen also wrote this fascinating essay ‘Augmenting long-term memory’. This involves a computer tool (Anki) to aid long-term memory using ‘spaced repetition’ — i.e. testing yourself at intervals, which is shown to counter the normal (for most people) process of forgetting. This allows humans to turn memory into a choice so we can decide what to remember and achieve it systematically (without a ‘weird/extreme gift’ which is how memory is normally treated). (It’s fascinating that educated Greeks 2,500 years ago could build sophisticated mnemonic systems allowing them to remember vast amounts while almost all educated people now have no idea about such techniques.)
‘[It] incorporates new user interface ideas to help you remember what you read… this essay isn’t just a conventional essay, it’s also a new medium, a mnemonic medium which integrates spaced-repetition testing. The medium itself makes memory a choice… This essay will likely take you an hour or two to read. In a conventional essay, you’d forget most of what you learned over the next few weeks, perhaps retaining a handful of ideas. But with spaced-repetition testing built into the medium, a small additional commitment of time means you will remember all the core material of the essay. Doing this won’t be difficult, it will be easier than the initial read. Furthermore, you’ll be able to read other material which builds on these ideas; it will open up an entire world…
‘Mastering new subjects requires internalizing the basic terminology and ideas of the subject. The mnemonic medium should radically speed up this memory step, converting it from a challenging obstruction into a routine step. Frankly, I believe it would accelerate human progress if all the deepest ideas of our civilization were available in a form like this.’
This obviously has very important implications for education policy. It also shows how computers could be used to improve learning — something that has generally been a failure since the great hopes at PARC in the 1970s. I have used Anki since reading Nielsen’s blog and I can feel it making a big difference to my mind/thoughts — how often is this true of things you read? DOWNLOAD ANKI NOW AND USE IT!
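The scheduling idea behind spaced repetition is simple to sketch. This is a toy illustration only — Anki’s actual algorithm (a variant of SM-2) is more elaborate — but it shows the core mechanism: intervals between reviews grow geometrically when you remember a card and reset when you forget.

```python
# Toy spaced-repetition scheduler. NOT Anki's real algorithm (SM-2);
# a minimal sketch of the principle: successful recall pushes the next
# review further into the future, failure resets the clock.

def next_interval(current_interval_days: float, remembered: bool,
                  ease: float = 2.5) -> float:
    """Return the number of days until the card should be shown again."""
    if remembered:
        # intervals grow geometrically, so total review effort per card
        # shrinks rapidly over time
        return max(1.0, current_interval_days * ease)
    return 1.0  # forgot: start over with a short interval

# A card recalled successfully five times in a row:
interval = 1.0
schedule = []
for _ in range(5):
    interval = next_interval(interval, remembered=True)
    schedule.append(interval)
# schedule: reviews at roughly 2.5, 6, 16, 39, 98 days
```

A handful of minutes of review spread over months is enough to hold thousands of cards, which is why Nielsen can claim memory becomes ‘a choice’.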
We need similarly creative experiments with new mediums that are designed to improve standards of high stakes decision-making.
We could create systems for those making decisions about m/billions of lives and b/trillions of dollars, such as Downing Street or The White House, that integrate inter alia:
Cognitive toolkits compressing already existing useful knowledge such as checklists for rational thinking developed by the likes of Tetlock, Munger, Yudkowsky et al.
A Nielsen/Victor research program on ‘Seeing Rooms’, interface design, authoring tools, and cognitive technologies. Start with bunging a few million to Victor immediately in return for allowing some people to study what he is doing and apply it in Whitehall, then grow from there.
An alpha data science/AI operation — tapping into the world’s best minds including having someone like David Deutsch or Tim Gowers as a sort of ‘chief rationalist’ in the Cabinet (with Scott Alexander as deputy!) — to support rational decision-making where this is possible and explain when it is not possible (just as useful).
Tetlock/Hanson prediction tournaments could easily and cheaply be extended to consider ‘clusters’ of issues around themes like Brexit to improve policy and project management.
Groves/Mueller style ‘systems management’ integrated with the data science team.
Legally entrenched Red Teams where incentives are aligned to overcoming groupthink and error-correction of the most powerful. Warren Buffett points out that public companies considering an acquisition should employ a Red Team whose fees are dependent on the deal NOT going ahead. This is the sort of idea we need in No10.
Researchers could see the real operating environment of decision-makers at the apex of power, the sort of problems they need to solve under pressure, and the constraints of existing centralised systems. They could start with the safe level of ‘tools that we already know work really well’ — i.e things like cognitive toolkits and Red Teams — while experimenting with new tools and new ways of thinking.
Hedge funds like Bridgewater and some other interesting organisations think about such ideas though without the sophistication of Victor’s approach. The world of MPs, officials, the Institute for Government (a cheerleader for ‘carry on failing’), and pundits will not engage with these ideas if left to their own devices.
This is not the place to go into how to change this. We know that the normal approach is doomed to produce the normal results and normal results applied to things like repeated WMD crises means disaster sooner or later. As Buffett points out, ‘If there is only one chance in thirty of an event occurring in a given year, the likelihood of it occurring at least once in a century is 96.6%.’ It is not necessary to hope in order to persevere: optimism of the will, pessimism of the intellect…
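Buffett’s arithmetic is easy to check, assuming independent years:

```python
# Probability that a 1-in-30-per-year event occurs at least once in a
# century, assuming each year is independent (Buffett's figure).
p_year = 1 / 30
p_century = 1 - (1 - p_year) ** 100
# p_century ≈ 0.966, i.e. ~96.6%
```

Low annual probabilities compound into near-certainty over long horizons, which is the whole argument for taking ‘unlikely’ WMD crises seriously.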
A final thought…
A very interesting comment that I have heard from some of the most important scientists involved in the creation of advanced technologies is that ‘artists see things first’ — that is, artists glimpse possibilities before most technologists and long before most businessmen and politicians.
Pixar came from a weird combination of George Lucas getting divorced and the visionary Alan Kay suggesting to Steve Jobs that he buy a tiny special effects unit from Lucas, which Jobs did with completely wrong expectations about what would happen. For unexpected reasons this tiny unit turned into a huge success — as Jobs put it later, he was ‘sort of snookered’ into creating Pixar. Now Alan Kay says he struggles to get tech billionaires to understand the importance of Victor’s ideas.
The same story repeats: genuinely new ideas that could create huge value always seem so odd that almost all people in almost all organisations cannot see new possibilities. If this is true in Silicon Valley, how much more true is it in Whitehall or Washington…
If one were setting up a new party in Britain, one could incorporate some of these ideas. This would of course also require recruiting very different types of people to the norm in politics. The closed nature of Westminster/Whitehall combined with first-past-the-post means it is very hard to solve the coordination problem of how to break into this system with a new way of doing things. Even those interested in principle don’t want to commit to a 10-year (?) project that might get them blasted on the front pages. Vote Leave hacked the referendum but such opportunities are much rarer than VC-funded ‘unicorns’. On the other hand, arguably what is happening now is a once in 50 or 100 year crisis and such crises also are the waves that can be ridden to change things normally unchangeable. A second referendum in 2020 is quite possible (or two referendums under PM Corbyn, propped up by the SNP?) and might be the ideal launchpad for a completely new sort of entity, not least because if it happens the Conservative Party may well not exist in any meaningful sense (whether there is or isn’t another referendum). It’s very hard to create a wave and it’s much easier to ride one. It’s more likely in a few years you will see some of the above ideas in novels or movies or video games than in government — their pickup in places like hedge funds and intelligence services will be discreet — but you never know…
Ps. While I have talked to Michael Nielsen and Bret Victor about their ideas, in no way should this blog be taken as their involvement in anything to do with my ideas or plans or agreement with anything written above. I did not show this to them or even tell them I was writing about their work, we do not work together in any way, I have just read and listened to their work over a few years and thought about how their ideas could improve government.
‘People, ideas, machines — in that order!’ Colonel Boyd
‘[R]ational systems exhibit universal drives towards self-protection, resource acquisition, replication and efficiency. Those drives will lead to anti-social and dangerous behaviour if not explicitly countered. The current computing infrastructure would be very vulnerable to unconstrained systems with these drives.’ Omohundro.
‘For progress there is no cure…’ von Neumann
This blog sketches a few recent developments connecting AI and issues around ‘systems management’ and government procurement.
The biggest problem for governments with new technologies is that the limiting factor on applying new technologies is not the technology but management and operational ideas, which are extremely hard to change fast. This has been proved repeatedly: e.g. the tank in the 1920s-30s or the development of ‘precision strike’ in the 1970s. These problems are directly relevant to the application of AI by militaries and intelligence services. The Pentagon’s recent crash program, Project Maven, discussed below, was an attempt to grapple with these issues.
‘The good news is that Project Maven has delivered a game-changing AI capability… The bad news is that Project Maven’s success is clear proof that existing AI technology is ready to revolutionize many national security missions… The project’s success was enabled by its organizational structure.’
This blog sketches some connections between:
The example of ‘precision strike’ in the 1970s, Marshal Ogarkov and Andy Marshall, implications for now — ‘anti-access / area denial’ (A2/AD), ‘Air-Sea Battle’ etc.
Development of ‘precision strike’ to lethal autonomous cheap drone swarms hunting humans cowering underground.
Adding AI to already broken nuclear systems and doctrines, hacking the NSA etc — mix coke, Milla Jovovich and some alpha engineers and you get…?
A few thoughts on ‘systems management’ and procurement, lessons from the Manhattan Project etc.
The Chinese attitude to ‘systems management’ and Qian Xuesen, combined with AI, mass surveillance, ‘social credit’ etc.
A few recent miscellaneous episodes such as an interesting DARPA demo on ‘self-aware’ robots.
Charts on Moore’s Law: what scale would a ‘Manhattan Project for AGI’ be?
AGI safety — the alignment problem, the dangers of science as a ‘blind search algorithm’, closed vs open security architectures etc.
A theme of this blog since before the referendum campaign has been that thinking about organisational structure/dynamics can bring what Warren Buffett calls ‘lollapalooza’ results. What seems to be very esoteric and disconnected from ‘practical politics’ (studying things like the management of the Manhattan Project and Apollo) turns out to be extraordinarily practical (gives you models for creating super-productive processes).
Part of the reason lollapalooza results are possible is that almost nobody near the apex of power believes the paragraph above is true and they actively fight to stop people learning from extreme successes so there is gold lying on the ground waiting to be picked up for trivial costs. Nudging reality down an alternative branch of history in summer 2016 only cost ~£10⁶ so the ‘return on investment’ if you think about altered GDP, technology, hundreds of millions of lives over decades and so on was truly lollapalooza. Politics is not like the stock market where you need to be an extreme outlier like Buffett/Munger to find such inefficiencies and results consistently. The stock market is an exploitable market where being right means you get rich and you help the overall system error-correct which makes it harder to be right (the mechanism pushes prices close to random, they’re not quite random but few can exploit the non-randomness). Politics/government is not like this. Billionaires who want to influence politics could get better ‘returns on investment’ than from early stage Amazon.
This blog is not directly about Brexit at all but if you are thinking — how could we escape this nightmare and turn government institutions from hopeless to high performance and what should we focus on to replace the vision of ‘influencing the EU’ that has been blown up by Brexit? — it will be of interest. Lessons that have been lying around for over half a century could have pushed the Brexit negotiations in a completely different direction and still could do but require an extremely different ‘model of effective action’ to dominant models in Westminster.
Project Maven: new organisational approaches for rapid deployment of AI to war / hybrid-war
The quotes below are from a piece in The Bulletin of the Atomic Scientists about a recent AI project by the Pentagon. The most interesting aspect is not the technical details but the management approach and its implications for Pentagon-style bureaucracies.
‘Project Maven is a crash Defense Department program that was designed to deliver AI technologies … to an active combat theater within six months from when the project received funding… Technologies developed through Project Maven have already been successfully deployed in the fight against ISIS. Despite their rapid development and deployment, these technologies are getting strong praise from their military intelligence users. For the US national security community, Project Maven’s frankly incredible success foreshadows enormous opportunities ahead — as well as enormous organizational, ethical, and strategic challenges.
‘In late April, Robert Work — then the deputy secretary of the Defense Department — wrote a memo establishing the Algorithmic Warfare Cross-Functional Team, also known as Project Maven. The team had only six members to start with, but its small size belied the significance of its charter… Project Maven is the first time the Defense Department has sought to deploy deep learning and neural networks, at the level of state-of-the-art commercial AI, in department operations in a combat theater…
‘Every day, US spy planes and satellites collect more raw data than the Defense Department could analyze even if its whole workforce spent their entire lives on it. As its AI beachhead, the department chose Project Maven, which focuses on analysis of full-motion video data from tactical aerial drone platforms… These drone platforms and their full-motion video sensors play a major role in the conflict against ISIS across the globe. The tactical and medium-altitude video sensors of the Scan Eagle, MQ-1C, and MQ-9 produce imagery that more or less resembles what you see on Google Earth. A single drone with these sensors produces many terabytes of data every day. Before AI was incorporated into analysis of this data, it took a team of analysts working 24 hours a day to exploit only a fraction of one drone’s sensor data.
‘The Defense Department spent tens of billions of dollars developing and fielding these sensors and platforms, and the capabilities they offer are remarkable. Whenever a roadside bomb detonates in Iraq, the analysts can simply rewind the video feed to watch who planted it there, when they planted it, where they came from, and where they went. Unfortunately, most of the imagery analysis involves tedious work—people look at screens to count cars, individuals, or activities, and then type their counts into a PowerPoint presentation or Excel spreadsheet. Worse, most of the sensor data just disappears — it’s never looked at — even though the department has been hiring analysts as fast as it can for years… Plenty of higher-value analysis work will be available for these service members and contractors once low-level counting activity is fully automated.
‘The six founding members of Project Maven, though they were assigned to run an AI project, were not experts in AI or even computer science. Rather, their first task was building partnerships, both with AI experts in industry and academia and with the Defense Department’s communities of drone sensor analysts… AI experts and organizations who are interested in helping the US national security mission often find that the department’s contracting procedures are so slow, costly, and painful that they just don’t want to bother. Project Maven’s team — with the help of Defense Information Unit Experimental, an organization set up to accelerate the department’s adoption of commercial technologies — managed to attract the support of some of the top talent in the AI field (the vast majority of which lies outside the traditional defense contracting base). Figuring out how to effectively engage the tech sector on a project basis is itself a remarkable achievement…
‘Before Maven, nobody in the department had a clue how to properly buy, field, and implement AI. A traditional defense acquisition process lasts multiple years, with separate organizations defining the functions that acquisitions must perform, or handling technology development, production, or operational deployment. Each of these organizations must complete its activities before results are handed off to the next organization. When it comes to digital technologies, this approach often results in systems that perform poorly and are obsolete even before they are fielded.
‘Project Maven has taken a different approach, one modeled after project management techniques in the commercial tech sector: Product prototypes and underlying infrastructure are developed iteratively, and tested by the user community on an ongoing basis. Developers can tailor their solutions to end-user needs, and end users can prepare their organizations to make rapid and effective use of AI capabilities. Key activities in AI system development — labeling data, developing AI-computational infrastructure, developing and integrating neural net algorithms, and receiving user feedback — are all run iteratively and in parallel…
‘In Maven’s case, humans had to individually label more than 150,000 images in order to establish the first training data sets; the group hopes to have 1 million images in the training data set by the end of January. Such large training data sets are needed for ensuring robust performance across the huge diversity of possible operating conditions, including different altitudes, density of tracked objects, image resolution, view angles, and so on. Throughout the Defense Department, every AI successor to Project Maven will need a strategy for acquiring and labeling a large training data set…
‘From their users, Maven’s developers found out quickly when they were headed down the wrong track — and could correct course. Only this approach could have provided a high-quality, field-ready capability in the six months between the start of the project’s funding and the operational use of its output. In early December, just over six months from the start of the project, Maven’s first algorithms were fielded to defense intelligence analysts to support real drone missions in the fight against ISIS.
‘The good news is that Project Maven has delivered a game-changing AI capability… The bad news is that Project Maven’s success is clear proof that existing AI technology is ready to revolutionize many national security missions…
‘The project’s success was enabled by its organizational structure: a small, operationally focused, cross-functional team that was empowered to develop external partnerships, leverage existing infrastructure and platforms, and engage with user communities iteratively during development. AI needs to be woven throughout the fabric of the Defense Department, and many existing department institutions will have to adopt project management structures similar to Maven’s if they are to run effective AI acquisition programs. Moreover, the department must develop concepts of operations to effectively use AI capabilities—and train its military officers and warfighters in effective use of these capabilities…
‘Already the satellite imagery analysis community is working on its own version of Project Maven. Next up will be migrating drone imagery analysis beyond the campaign to defeat ISIS and into other segments of the Defense Department that use drone imagery platforms. After that, Project Maven copycats will likely be established for other types of sensor platforms and intelligence data, including analysis of radar, signals intelligence, and even digital document analysis… In October 2016, Michael Rogers (head of both the agency and US Cyber Command) said “Artificial Intelligence and machine learning … [are] foundational to the future of cybersecurity. … It is not the if, it’s only the when to me.”
‘The US national security community is right to pursue greater utilization of AI capabilities. The global security landscape — in which both Russia and China are racing to adapt AI for espionage and warfare — essentially demands this. Both Robert Work and former Google CEO Eric Schmidt have said that leadership in AI technology is critical to the future of economic and military power and that continued US leadership is far from guaranteed. Still, the Defense Department must explore this new technological landscape with a clear understanding of the risks involved…
‘The stakes are relatively low when AI is merely counting the number of cars filmed by a drone camera, but drone surveillance data can also be used to determine whether an individual is directly engaging in hostilities and is thereby potentially subject to direct attack. As AI systems become more capable and are deployed across more applications, they will engender ever more difficult ethical and legal dilemmas.
‘US military and intelligence agencies will have to develop effective technological and organizational safeguards to ensure that Washington’s military use of AI is consistent with national values. They will have to do so in a way that retains the trust of elected officials, the American people, and Washington’s allies. The arms-race aspect of artificial intelligence certainly doesn’t make this task any easier…
‘The Defense Department must develop and field AI systems that are reliably safe when the stakes are life and death — and when adversaries are constantly seeking to find or create vulnerabilities in these systems.
‘Moreover, the department must develop a national security strategy that focuses on establishing US advantages even though, in the current global security environment, the ability to implement advanced AI algorithms diffuses quickly. When the department and its contractors developed stealth and precision-guided weapons technology in the 1970s, they laid the foundation for a monopoly, nearly four decades long, on technologies that essentially guaranteed victory in any non-nuclear war. By contrast, today’s best AI tech comes from commercial and academic communities that make much of their research freely available online. In any event, these communities are far removed from the Defense Department’s traditional technology circles. For now at least, the best AI research is still emerging from the United States and allied countries, but China’s national AI strategy, released in July, poses a credible challenge to US technology leadership.’
Project Maven shows recurring lessons from history. Speed and adaptability are crucial to success in conflict and can be helped by new technologies. So is the capacity for new operational ideas about using new technologies. These ideas depend on unusual people. Bureaucracies naturally slow things down (for some good but mostly bad reasons), crush new ideas, and exclude unusual people in order to defend established interests. The limiting factor for the Pentagon in deploying advanced technology to conflict in a useful time period was not new technical ideas — overcoming its own bureaucracy was harder than overcoming enemy action. This is absolutely normal in conflict (e.g. it was true of the 2016 referendum, where dealing with internal problems was at least an order of magnitude harder and more costly than dealing with Cameron).
As Colonel Boyd used to shout to military audiences, ‘People, ideas, machines — in that order!’
DARPA, ‘precision strike’, the ‘Revolution in Military Affairs’ and bureaucracies
The Project Maven experience is similar to the famous example of the tank. Everybody could see tanks were possible from the end of World War I but over 20 years Britain and France were hampered by their own bureaucracies in thinking about the operational implications and how to use them most effectively. Some in Britain and France did point out the possibilities but the possibilities were not absorbed into official planning. Powerful bureaucratic interests reinforced the normal sort of blindness to new possibilities. Innovative thinking flourished, relatively, in Germany where people like Guderian and von Manstein could see the possibilities for a very big increase in speed turning into a huge nonlinear advantage — possibilities applied in the ‘von Manstein plan’ that shocked the world in 1940. This was partly because the destruction of German forces after 1918 meant everything had to be built from scratch, and this connects to another lesson about successful innovation: in the military, as in business, it is more likely if a new entity is given the job, as with the Manhattan Project to develop nuclear weapons. The consequences were devastating for the world in 1940 but, luckily for us, the nature of the Nazi regime meant that it made very similar errors itself, e.g. regarding the importance of air power in general and long-range bombers in particular. (This history is obviously very complex but this crude summary is roughly right about the main point.)
There was a similar story with the technological developments mainly sparked by DARPA in the 1970s, including stealth (developed in a classified program by the legendary ‘Skunk Works’, tested at ‘Area 51’), the global positioning system (GPS), ‘precision strike’ long-range conventional weapons, drones, advanced wide-area sensors, computerised command and control (C2), and new intelligence, surveillance and reconnaissance capabilities (ISR). The hope was that together these capabilities could automate the location and destruction of long-range targets and simultaneously improve the precision, destructiveness, and speed of operations.
The approach became known in America as ‘deep-strike architectures’ (DSA) and in the Soviet Union as ‘reconnaissance-strike complexes’ (RUK). The Soviet Marshal Ogarkov realised that these developments, based on America’s superior ability to develop micro-electronics and computers, constituted what he called a ‘Military-Technical Revolution’ (MTR) and were an existential threat to the Soviet Union. He wrote about them from the late 1970s. (The KGB successfully stole much of the technology but the Soviet system still could not compete.) His writings were analysed in America, particularly by Andy Marshall and others at the Pentagon’s Office of Net Assessment (ONA). ONA’s analyses of what they started calling the Revolution in Military Affairs (RMA) in turn affected Pentagon decisions. In 1991 the Gulf War demonstrated some of these technologies just as the Soviet Union was imploding. In 1992 the ONA wrote a very influential report (The Military-Technical Revolution) which, unusually, they made public (almost all ONA documents remain classified).
The ~1978 Assault Breaker concept
Soviet depiction of Assault Breaker (Sergeyev, ‘Reconnaissance-Strike Complexes,’ Red Star, 1985)
In many ways Marshal Ogarkov thought more deeply about how to develop the Pentagon’s own technologies than the Pentagon did, hampered by the normal problems that the operationalising of new ideas threatened established bureaucratic interests, including the Pentagon’s procurement system. These problems have continued. It is hard to overstate the scale of waste and corruption in the Pentagon’s horrific procurement system (see below).
China has studied this episode intensely. It has integrated the lessons into its ‘anti-access / area denial’ (A2/AD) efforts to limit American power projection in East Asia. America’s response to A2/AD is the ‘Air-Sea Battle’ concept. As Marshal Ogarkov predicted in the 1970s, the ‘revolution’ has evolved into opposing ‘reconnaissance-strike complexes’ facing each other, with each side striving to deploy near-nuclear force using extremely precise conventional weapons from far away, all increasingly complicated by possibilities for cyberwar to destroy the infrastructure on which all this depends and information operations to alter the enemy population’s perception (very Sun Tzu!).
Graphic: Operational risks of conventional US approach vs A2/AD (CSBA, 2016)
The penetration of the CIA by the KGB, the failure of the CIA to provide good predictions, the general American failure to understand the Soviet economy, doctrine and so on despite many billions spent over decades, the attempts by the Office of Net Assessment to correct institutional failings, the bureaucratic rivalries and so on — all this is a fascinating subject and one can see why China studies it so closely.
From experimental drones in the 1970s to drone swarms deployed via iPhone
The next step for reconnaissance-strike is the application of advanced robotics and artificial intelligence, which could bring further order(s) of magnitude performance improvements, cost reductions, and increases in tempo. This is central to the US-China military contest. It will also affect everyone else as much of the technology becomes available to Third World states and small terrorist groups.
I wrote in 2004 about the farce of the UK aircraft carrier procurement story (and many others have warned similarly). Regardless of elections, the farce has continued to squander billions of pounds, enriching some of the worst corporate looters and corrupting public life via the revolving door of officials/lobbyists. Scrutiny by our MPs has been contemptible. They have built platforms that already cannot be sent to a serious war against a serious enemy. A teenager will be able to deploy a drone from their smartphone to sink one of these multi-billion dollar platforms. Such a teenager could already take out the stage of a Downing Street photo op with a little imagination and initiative, as I wrote about years ago.
The drone industry is no longer dependent on its DARPA roots and is no longer tied to the economics of the Pentagon’s research budgets and procurement timetables. It is driven by the economics of the extremely rapidly developing smartphone market including Moore’s Law, plummeting costs for sensors and so on. Further, there are great advantages of autonomy including avoiding jamming counter-measures. Kalashnikov has just unveiled its drone version of the AK-47: a cheap anonymous suicide drone that flies to the target and blows itself up — it’s so cheap you don’t care. So you have a combination of exponentially increasing capabilities, exponentially falling costs, greater reliability, greater lethality, greater autonomy, and anonymity (if you’re careful and buy them through cut-outs etc). Then with a bit of added sophistication you add AI face recognition etc. Then you add an increasing capacity to organise many of these units at scale in a swarm, all running off your iPhone — and consider how effective swarming tactics were for people like Alexander the Great.
This is why one of the world’s leading AI researchers, Stuart Russell (professor of computer science at Berkeley) has made this warning:
‘The capabilities of autonomous weapons will be limited more by the laws of physics — for example, by constraints on range, speed and payload — than by any deficiencies in the AI systems that control them. For instance, as flying robots become smaller, their manoeuvrability increases and their ability to be targeted decreases… Despite the limits imposed by physics, one can expect platforms deployed in the millions, the agility and lethality of which will leave humans utterly defenceless…
‘A very, very small quadcopter, one inch in diameter can carry a one- or two-gram shaped charge. You can order them from a drone manufacturer in China. You can program the code to say: “Here are thousands of photographs of the kinds of things I want to target.” A one-gram shaped charge can punch a hole in nine millimeters of steel, so presumably you can also punch a hole in someone’s head. You can fit about three million of those in a semi-tractor-trailer. You can drive up I-95 with three trucks and have 10 million weapons attacking New York City. They don’t have to be very effective, only 5 or 10% of them have to find the target.
‘There will be manufacturers producing millions of these weapons that people will be able to buy just like you can buy guns now, except millions of guns don’t matter unless you have a million soldiers. You need only three guys to write the program and launch them. So you can just imagine that in many parts of the world humans will be hunted. They will be cowering underground in shelters and devising techniques so that they don’t get detected. This is the ever-present cloud of lethal autonomous weapons… There are really no technological breakthroughs that are required. Every one of the component technologies is available in some form commercially… It’s really a matter of just how much resources are invested in it.’
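The arithmetic in Russell’s scenario is easy to check. A minimal sketch, using only the figures from the quote (the 5–10% hit rate is his assumption, not a measured value):

```python
# Back-of-envelope check of the figures in Russell's scenario.
# All numbers come from the quoted passage; the hit rates are
# his assumptions, not measurements.
drones_per_trailer = 3_000_000
trailers = 3
total_drones = drones_per_trailer * trailers   # "10 million" in round numbers

hits_low = int(total_drones * 0.05)    # 5% find their target
hits_high = int(total_drones * 0.10)   # 10% find their target

print(total_drones, hits_low, hits_high)
```

Even at the low end that is hundreds of thousands of lethal strikes from three trucks, which is Russell’s point: per-unit effectiveness barely matters at that scale.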
There is some talk in London of ‘what if there is an AI arms race’ but there is already an AI/automation arms race between companies and between countries — it’s just that Europe is barely relevant to the cutting edge of it. Europe wants to be a world player but it has totally failed to generate anything approaching what is happening in coastal America and China. Brussels spends its time on posturing, publishing documents about ‘AI and trust’, whining, spreading fake news about fake news (while ignoring experts like Duncan Watts), trying to damage Silicon Valley companies rather than considering how to nourish European entities with real capabilities, and imposing bad regulation like GDPR (that ironically was intended to harm Google/Facebook but actually helped them in some ways because Brussels doesn’t understand them).
Britain had a valuable asset, DeepMind, and let Google buy it for trivial money without the powers-that-be in Whitehall understanding its significance — it is relevant but it is not under British control. Britain has other valuable assets — for example, it is a potential strategic asset to have the AI centre, financial centre, and political centre all in London, IF politicians cared and wanted to nourish AI research and companies. Very obviously, right now we have an MP/official class that is unfit to do this even if they had the vaguest idea what to do, which almost none do (there is a flash of hope on genomics/AI).
Unlike during the Cold War, when the Soviet Union could not compete in critical industries such as semiconductors and consumer electronics, China can compete, is competing, and in some areas is already ahead.
The automation arms race is already hitting all sorts of low-skilled jobs from baristas to factory cleaning, some of which will be largely eliminated much more quickly than economists and politicians expect. Many agricultural jobs are being rapidly eliminated, as are jobs in fields like mining and drilling. Look at a modern mine and you will see driverless trucks on the ground and drones overhead. The implications for the millions who make a living from driving are now well known. (This also has obvious implications for the wisdom of allowing millions of unskilled immigrants, and one of the oddities of Silicon Valley is that people there simultaneously argue a) politicians are clueless about the impact of automation on unskilled people and b) politicians should allow millions more unskilled immigrants into the country — an example of how technical people are not always as rational about politics as they think they are.)
This automation arms race will affect different countries at different speeds depending on their exposure to fields that are ripe for disruption sooner or later. If countries cannot tax those companies that lead in AI, they will have narrower options. They may even be forced into a sort of colony status. Those who think this is an exaggeration should look at China’s recent deals in Africa where countries are handing over vast amounts of data to China on extremely unfavourable terms. Huge server farms in China are processing facial recognition data on millions of Africans who have no idea their personal data has been handed over. The western media focuses on Facebook with almost no coverage of these issues.
In the extreme case, a significant lead in AI for country X could lead to a self-reinforcing cycle in which it increasingly dominates economically, scientifically, and militarily and perhaps cannot be caught as Ian Hogarth has argued and to which Putin recently alluded.
China’s investment in AI — more data = better product = more users = more revenue = better talent + more data in a beautiful flywheel…
China has about three times as many internet users as America, but the gap in internet and mobile usage is much larger. ‘In China, people use their mobile phones to pay for goods 50 times more often than Americans. Food delivery volume in China is 10 times more than that of the United States. And shared bicycle usage is 300 times that of the US. This proliferation of data — with more people generating far more information than any other country — is the fuel for improving China’s AI’ (report).
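The flywheel dynamic above can be sketched as a toy model. The coefficients here are invented purely to show the compounding shape — they are illustrative assumptions, not empirical estimates:

```python
# Toy model of the data flywheel: users generate data, data improves
# the product, a better product attracts more users. The growth
# coefficient is an invented illustration, not a measurement.
def flywheel(users: float, rounds: int, gain_per_user: float) -> float:
    for _ in range(rounds):
        data = users                      # each user contributes data
        growth = gain_per_user * data     # more data -> better product...
        users *= (1 + growth)             # ...which attracts more users
    return users

# Two actors, one starting with three times the users (and so the data):
small = flywheel(1.0, rounds=10, gain_per_user=0.01)
large = flywheel(3.0, rounds=10, gain_per_user=0.01)
print(large / small)  # the initial 3x lead has compounded into a bigger gap
```

Because the growth rate itself depends on the stock of data, the actor that starts ahead pulls further ahead each round — the self-reinforcing cycle the text describes.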
China’s AI policy priority is clear. The ‘Next Generation Artificial Intelligence Development Plan’, announced in July 2017, states that China should catch America by 2020 and be the global leader by 2030. Xi Jinping emphasises this repeatedly.
Some implications for entangling AI with WMD — take a Milla Jovovich lookalike then add some alpha engineers…
It is important to consider nuclear safety when thinking about AI safety.
This matches research just published in the Bulletin of the Atomic Scientists on the most secure (Level 3/enhanced and Level 4) bio-labs. It is now clear that laboratories conducting research on viruses that could cause a global pandemic are extremely dangerous. I am not aware of any mainstream media in Britain reporting this (story here).
Further, the systems for coping with nuclear crises have failed repeatedly. They are extremely vulnerable to false alarms, malicious attacks or even freaks like, famously, a bear (yes, a bear) triggering false alarms. We have repeatedly escaped accidental nuclear war because of flukes such as odd individuals not passing on ‘launch’ warnings or simply refusing to act. The US National Security Adviser has sat at the end of his bed looking at his sleeping wife ‘knowing’ she won’t wake up while pondering his advice to the President on a counterattack that will destroy half the world, only to be told minutes later the launch warning was the product of a catastrophic error. These problems have not been dealt with. We don’t know how bad this problem is: many details are classified and many incidents are totally unreported.
Further, the end of the Cold War gave many politicians and policy people in the West the completely false idea that established ideas about deterrence had been vindicated but they have not been vindicated (cf. Payne’s Fallacies of Cold War deterrence and The Great American Gamble). Senior decision-makers are confident that their very dangerous ideas are ‘rational’.
US and Russian nukes remain on ‘launch on warning’ — i.e a hair trigger — so the vulnerabilities could recur any time. Threats to use them are explicitly contemplated over crises such as Taiwan and Kashmir. Nuclear weapons have proliferated and are very likely to proliferate further. There are now thousands of people, including North Korean and Pakistani scientists, who understand the technology. And there is a large network of scientists involved in the classified Soviet bio-weapon programme that was largely unknown to western intelligence services before the end of the Cold War and has dispersed across the world.
Yes, you’re right to ask ‘why don’t I read about this stuff in the mainstream media?’. There is very little media coverage of reports on things like nuclear safety and pretty much nobody with real power pays any attention to all this. If those at the apex of power don’t take nuclear safety seriously, why would you think they are on top of anything? Markets and science have done wondrous things but they cannot by themselves fix such crazy incentive problems with government institutions.
Government procurement — ‘the horror, the horror’
The problem of ‘rational procurement’ is incredibly hard to solve and even during existential conflicts problems with incentives recur. If state agencies, out of fear of what opponents might be doing, create organisations that escape most normal bureaucratic constraints, then AI will escalate in importance to the military and intelligence services even more rapidly than it already is. It is possible that China will build organisations to deploy AI to war/pseudo-war/hybrid-war faster and better than America.
In January 2017 I wrote about systems engineering and systems management — an approach for delivering extremely complex and technically challenging projects. (It was already clear the Brexit negotiations were botched, that Heywood, Hammond et al had effectively destroyed any sort of serious negotiating position, and I suggested Westminster/Whitehall had to learn from successful management of complex projects to avert what would otherwise be a debacle.) These ideas were born with the Manhattan Project to build the first nuclear bomb, the ICBM project in the 1950s, and the Apollo program in the 1960s which put man on the moon. These projects combined a) some of the most astonishing intellects the world has seen, a subset of whom were also brilliant at navigating government (e.g von Neumann), and b) phenomenally successful practical managers: e.g General Groves on the Manhattan Project, Bernard Schriever on ICBMs and George Mueller on Apollo.
The story we are told about the Manhattan Project focuses almost exclusively on the extraordinary collection of physicists and mathematicians at Los Alamos but they were a relatively small part of the whole story which involved an engineer building an unprecedented operation at multiple sites across America in secret and with extraordinary speed while many doubted the project was possible — then coordinating multiple projects, integrating distributed expertise and delivering a functioning bomb.
If you read Groves’ fascinating book, Now It Can Be Told, and read a recent biography of him, in many important ways you will acquire what is effectively cutting-edge knowledge today about making huge endeavours work — ‘cutting-edge’ because almost nobody has learned from this (see below). If you are one of the many MPs aspiring to be not just Prime Minister but a Prime Minister who gets important things done, there are very few books that would repay careful study as much as Groves’. If you do then you could avoid joining the list of Major, Blair, Brown, Cameron and May who bungle around for a few years before being spat out to write very similar accounts about how they struggled to ‘find the levers of power’, couldn’t get officials to do what they want, and never understood how to get things done.
Systems management is generally relevant to the question: how best to manage very big complex projects? It was relevant to the referendum (Victoria Woodcock was Vote Leave’s George Mueller). It is relevant to the Brexit negotiations and the appalling management process between May/Hammond/Heywood/Robbins et al, which has been a case study in how not to manage a complex project (Parliament also deserves much blame for never scrutinising this process). It is relevant to China’s internal development and the US-China geopolitical struggle. It is relevant to questions like ‘how to avoid nuclear war’ and ‘how would you build a Manhattan Project for safe AGI?’. It is relevant to how you could develop a high performance team in Downing Street that could end the current farce. The same issues and lessons crop up in every account of a Presidency and the role of the Chief of Staff. If you want to change Whitehall from 1) ‘failure is normal’ to 2) ‘align incentives with predictive accuracy, operational excellence and high performance’, then systems management provides an extremely valuable anti-checklist for Whitehall.
Given vital principles were established more than half a century ago that were proved to do things much faster and more effectively than usual, it would be natural to assume that these lessons became integrated in training and practice both in the worlds of management and politics/government. This did not happen. In fact, these lessons have been ‘unlearned’.
General Groves was pushed out of the Pentagon (‘too difficult’). The ICBM project, conducted in extreme panic post-Sputnik, had to re-create an organisation outside the Pentagon and re-learn Groves’ lessons a decade later. NASA was a mess until Mueller took over and imported the lessons from Manhattan and ICBMs. After Apollo’s success in 1969, Mueller left and NASA reverted to being a ‘normal’ organisation and forgot his successful approach. (The plans Mueller left for developing a manned lunar base, space commercialisation, and man on Mars by the end of the 1980s were also tragically abandoned.)
While Mueller was putting man on the moon, McNamara’s ‘Whizz Kids’ in the Pentagon, who took America into the Vietnam War, were dismantling the successful approach to systems management, claiming that it was ‘wasteful’ and they could do it ‘more efficiently’. Their approach was a disaster, and not just regarding Vietnam. The combination of certain definitions of ‘efficiency’ and new legal processes ensured that procurement was routinely over-budget, over-schedule, over-promising, and generated more and more scandals. Regardless of failure, the McNamara approach metastasised across the Pentagon. Incentives are so disastrously misaligned that almost every attempt at reform makes these problems worse and lawyers and lobbyists get richer. Of course, if lawmakers knew how the Manhattan Project and Apollo were done — the lack of ‘legal process’, things happening with a mere handshake instead of years of reviews enriching lawyers! — they would be stunned.
Successes since the 1960s have often been freaks (e.g the F-16, Boyd’s brainchild) or ‘black’ projects (e.g stealth), often conducted in Skunk Works-style operations outside normal laws. It is striking that US classified special forces, JSOC (equivalent to SAS/SBS etc), routinely use a special process to procure technologies outside the normal law to avoid the delays. This connects to George Mueller saying late in life that Apollo would be impossible with the current legal/procurement system and could only be done as a ‘black’ program.
The lessons of success have been so widely ‘unlearned’ throughout the government system that when Obama tried to roll out ObamaCare, it blew up. When they investigated, the answer was: we didn’t use systems management so the parts didn’t connect and we never tested this properly. Remember: Obama had the support of the vast majority of Silicon Valley expertise but this did not avert disaster. All anyone had to do was read Groves’ book and call Sam Altman or Patrick Collison and they could have provided the expertise to do it properly but none of Obama’s staff or responsible officials did.
The UK is the same. MPs constantly repeat the absurd SW1 mantra that ‘there’s no money’ while handing out a quarter of a TRILLION pounds every year on procurement and contracting. I engaged with this many times in the Department for Education 2010-14. The Whitehall procurement system is embedded in the dominant framework of EU law (the EU law is bad but UK officials have made it worse). It is complex, slow and wasteful. It hugely favours large established companies with powerful political connections — true corporate looters. The likes of Carillion and lawyers love it because they gain from the complexity, delays, and waste. It is horrific for SMEs to navigate and few can afford even to try to participate. The officials in charge of multi-billion processes are mostly mediocre, often appalling. In the MoD corruption adds to the problems.
Because of mangled incentives and reinforcing culture, the senior civil service does not care about this and does not try to improve. Total failure is totally irrelevant to the senior civil service and is absolutely no reason to change behaviour even if it means thousands of people killed and many billions wasted. Occasionally incidents like Carillion blow up and the same stories are written and the same quotes given — ‘unbelievable’, ‘scandal’, ‘incompetence’, ‘heads will roll’. Nothing changes. The closed and dysfunctional Whitehall system fights to stay closed and dysfunctional. The media caravan soon rolls on. ‘Reform’ in response to botches and scandals almost inevitably makes things even slower and more expensive — even more focus on process rather than outcomes, with the real focus being ‘we can claim to have acted properly because of our Potemkin process’. Nobody is incentivised to care about high performance and error-correction. The MPs ignore it all. Select Committees issue press releases about ‘incompetence’ but never expose the likes of Heywood to persistent investigation to figure out what has really happened and why. Nobody cares.
This culture has been encouraged by the most senior leaders. The recent Cabinet Secretary Jeremy Heywood assured us all that the civil service could easily cope with Brexit and that ‘definitely on digital, project management we’ve got nothing to learn from the private sector’. His predecessor, O’Donnell, made similar asinine comments. The fact that Heywood could make such a laughable claim after years of presiding over expensive debacle after expensive debacle and be universally praised by Insiders tells you all you need to know about ‘the blind leading the blind’ in Westminster. Heywood was a brilliant courtier-fixer but he didn’t care about management and operational excellence. Whitehall now incentivises the promotion of courtier-fixers, not great managers like Groves and Mueller. Management, like science, is regarded contemptuously as something for the lower orders to think about, not the ‘strategists’ at the top.
Long-term leadership from the likes of O’Donnell and Heywood is why officials know that practically nobody is ever held accountable regardless of the scale of failure. Being in charge of massive screwups is no barrier to promotion. Operational excellence is no requirement for promotion. You will often see the official in charge of some debacle walking to the tube at 4pm (‘compressed hours’ old boy) while the debacle is live on TV (I know because I saw this regularly in the DfE). The senior civil service now operates like a protected caste to preserve its power and privileges regardless of who the ignorant plebs vote for.
You can see how crazy the incentives are when you consider elections. If you look back at recent British elections the difference in the spending plans between the two sides has been a tiny fraction of the £250 billion p/a procurement and contracting budget — yet nobody ever really talks about this budget, it is the great unmentionable subject in Westminster! There’s the odd slogan about ‘let’s cut waste’ but the public rightly ignores this and assumes both sides will do nothing about it out of a mix of ignorance, incompetence and flawed incentives so big powerful companies continue to loot the taxpayer. Look at both parties now just letting the HS2 debacle grow and grow with the budget out of control, the schedule out of control, officials briefing ludicrously that the ‘high speed’ rail will be SLOWED DOWN to reduce costs and so on, all while an army of privileged looters, lobbyists, and lawyers hoover up taxpayer cash.
And now, when Brexit means the entire legal basis for procurement is changing, do these MPs, ministers and officials finally examine it and see how they could improve? No, of course not! The top priority for Heywood et al regarding Brexit and procurement has been to get hapless ministers to lock Britain into the same nightmare system even after we leave the EU — nothing must disrupt the gravy train! There’s been a lot of talk about £350 million per week for the NHS since the referendum. I could find this in days and in ways that would have strong public support. But nobody is even trying to do this, and if some minister took a serious interest, they would soon find all sorts of things going wrong for them until the PermSec has a quiet word and the natural order is restored…
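A rough scale check on the two figures in the passage above (the £350m/week pledge and the ~£250bn annual procurement and contracting budget) shows why finding the NHS money inside procurement is plausible arithmetic:

```python
# Scale comparison of the two figures quoted in the text:
# the famous £350m-per-week pledge versus the roughly £250bn
# annual procurement and contracting budget.
weekly_pledge = 350e6                    # £350 million per week
annual_pledge = weekly_pledge * 52       # ≈ £18.2 billion per year
procurement_budget = 250e9               # ≈ £250 billion per year

share = annual_pledge / procurement_budget
print(f"£{annual_pledge / 1e9:.1f}bn per year ≈ {share:.1%} of procurement")
```

Finding it would mean squeezing roughly 7% of waste out of procurement and contracting — a large but hardly absurd target given the failures described above.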
‘[T]he management community may have badly underestimated the benefits of core management practices [and] it’s unwise to teach future leaders that strategic decision making and basic management processes are unrelated.’ [!]
The study of management, like politics, is not a field with genuine expertise. Like other social sciences there is widespread ‘cargo cult science’, with fads and charlatans drowning out core lessons. This makes it easier to understand the failure of politicians: when elite business schools now teach students NOT to value operational excellence, when supposed management gurus like McNamara actually push things in a worse direction, then it is less surprising that people like Cameron and Heywood don’t know which way to turn. Imagine the normal politician or senior official in Washington or London. They have almost no exposure to genuinely brilliant managers or very well run organisations. Their exposure is overwhelmingly to ‘normal’ CEOs of public companies and normal bureaucracies. As the most successful investors in world history, Buffett and Munger, have pointed out for over 50 years, many of these corporate CEOs, the supposedly ‘serious people’, don’t know what they are doing and have terrible incentives.
There is almost no research funded on ARPA-PARC principles worldwide. ARPA was deliberately remade to be less like the organisation it was when it created the internet. The man most responsible for PARC’s success, Robert Taylor, was fired and the most effective team in the history of computing research was disbanded. XEROX notoriously could not overcome its internal incentive problems and let Steve Jobs and Bill Gates develop the ideas. Although politicians love giving speeches about ‘innovation’ and launching projects for PR, governments have subsequently almost completely ignored the lessons of how to create superproductive processes and there are almost zero examples of the ARPA-PARC approach in the world today (an interesting partial exception is Janelia). Whitehall, as a subset of its general vandalism towards science, has successfully resisted all attempts at learning from ARPA for decades, helped by the attitude of leading scientists themselves, whose incentives push them toward supporting objectively bad funding models. In science as well as politics, incentives can be destructive and stop learning. As Alan Kay, one of the crucial PARC researchers, wrote:
‘The most interesting thing has been the contrast between appreciation/exploitation of the inventions/contributions versus the almost complete lack of curiosity and interest in the processes that produced them… [I]n most processes today — and sadly in most important areas of technology research — the administrators seem to prefer to be completely in control of mediocre processes to being “out of control” with superproductive processes. They are trying to “avoid failure” rather than trying to “capture the heavens”.’
Or as George Mueller said later in life about the institutional imperative and project failures:
‘Fascinating that the same problems recur time after time, in almost every program, and that the management of the program, whether it happened to be government or industry, continues to avoid reality.’
So, on one hand, radical improvements in non-military spheres would be a wonderful free lunch. We simply apply old lessons, scale them up with technology and there are massive savings for free.
But wouldn’t it be ironic if we don’t do this — if instead we keep our dysfunctional systems for non-military spheres and carry on the waste, failure and corruption, but channel the Cold War and, in the atmosphere of an arms race, America and China apply the lessons from Groves, Schriever and Mueller to military AI procurement?!
Not everybody has unlearned the lessons from Groves and Mueller…
China: a culture of learning from systems management
‘All stable processes we shall predict. All unstable processes we shall control.’ von Neumann.
In Science there was an interesting article on Qian Xuesen, the godfather of China’s nuclear and space programs, which also had a profound effect on ideas about government. Qian studied in California at Caltech, where he worked with the Hungarian mathematician Theodore von Kármán, who co-founded the Jet Propulsion Laboratory (JPL), which worked on rockets after 1945.
‘In the West, systems engineering’s heyday has long passed. But in China, the discipline is deeply integrated into national planning. The city of Wuhan is preparing to host in August the International Conference on Control Science and Systems Engineering, which focuses on topics such as autonomous transportation and the “control analysis of social and human systems.” Systems engineers have had a hand in projects as diverse as hydropower dam construction and China’s social credit system, a vast effort aimed at using big data to track citizens’ behavior. Systems theory “doesn’t just solve natural sciences problems, social science problems, and engineering technology problems,” explains Xue Huifeng, director of the China Aerospace Laboratory of Social System Engineering (CALSSE) and president of the China Academy of Aerospace Systems Science and Engineering in Beijing. “It also solves governance problems.”
‘The field has resonated with Chinese President Xi Jinping, who in 2013 said that “comprehensively deepening reform is a complex systems engineering problem.” So important is the discipline to the Chinese Communist Party that cadres in its Central Party School in Beijing are required to study it. By applying systems engineering to challenges such as maintaining social stability, the Chinese government aims to “not just understand reality or predict reality, but to control reality,” says Rogier Creemers, a scholar of Chinese law at the Leiden University Institute for Area Studies in the Netherlands…
‘In a building flanked by military guards, systems scientists from CALSSE sit around a large conference table, explaining to Science the complex diagrams behind their studies on controlling systems. The researchers have helped model resource management and other processes in smart cities powered by artificial intelligence. Xue, who oversees a project named for Qian at CALSSE, traces his work back to the U.S.-educated scientist. “You should not forget your original starting point,” he says…
‘The Chinese government claims to have wired hundreds of cities with sensors that collect data on topics including city service usage and crime. At the opening ceremony of China’s 19th Party Congress last fall, Xi said smart cities were part of a “deep integration of the internet, big data, and artificial intelligence with the real economy.”… Xue and colleagues, for example, are working on how smart cities can manage water resources. In Guangdong province, the researchers are evaluating how to develop a standardized approach for monitoring water use that might be extended to other smart cities.
‘But Xue says that smart cities are as much about preserving societal stability as streamlining transportation flows and mitigating air pollution. Samantha Hoffman, a consultant with the International Institute for Strategic Studies in London, says the program is tied to long-standing efforts to build a digital surveillance infrastructure and is “specifically there for social control reasons” (Science, 9 February, p. 628). The smart cities initiative builds on 1990s systems engineering projects — the “golden” projects — aimed at dividing cities into geographic grids for monitoring, she adds.
‘Layered onto the smart cities project is another systems engineering effort: China’s social credit system. In 2014, the country’s State Council outlined a plan to compile data on individuals, government officials, and companies into a nationwide tracking system by 2020. The goal is to shape behavior by using a mixture of carrots and sticks. In some citywide and commercial pilot projects already underway, individuals can be dinged for transgressions such as spreading rumors online. People who receive poor marks in the national system may eventually be barred from travel and denied access to social services, according to government documents…
‘Government documents refer to the social credit system as a “social systems engineering project.” Details about which systems engineers consulted on the project are scant. But one theory that may have proved useful is Qian’s “open complex giant system,” Zhu says. A quarter-century ago, Qian proposed that society is a system comprising millions of subsystems: individual persons, in human parlance. Maintaining control in such a system is challenging because people have diverse backgrounds, hold a broad spectrum of opinions, and communicate using a variety of media, he wrote in 1993 in the Journal of Systems Engineering and Electronics. His answer sounds like an early road map for the social credit system: to use then-embryonic tools such as artificial intelligence to collect and synthesize reams of data. According to published papers, China’s hard systems scientists also use approaches derived from Qian’s work to monitor public opinion and gauge crowd behavior…
‘Hard systems engineering worked well for rocket science, but not for more complex social problems, Gu says: “We realized we needed to change our approach.” He felt strongly that any methods used in China had to be grounded in Chinese culture.
‘The duo came up with what it called the WSR approach: It integrated wuli, an investigation of facts and future scenarios; shili, the mathematical and conceptual models used to organize systems; and renli. Though influenced by U.K. systems thinking, the approach was decidedly eastern, its precepts inspired by the emphasis on social relationships in Chinese culture. Instead of shunning mathematical approaches, WSR tried to integrate them with softer inquiries, such as taking stock of what groups a project would benefit or harm. WSR has since been used to calculate wait times for large events in China and to determine how China’s universities perform, among other projects…
‘Zhu … recently wrote that systems science in China is “under a rationalistic grip, with the ‘scientific’ leg long and the democratic leg short.” Zhu says he has no doubt that systems scientists can make projects such as the social credit system more effective. However, he cautions, “Systems approaches should not be just a convenient tool in the expert’s hands for realizing the party’s wills. They should be a powerful weapon in people’s hands for building a fair, just, prosperous society.”’
In Open Complex Giant System (1993), Qian Xuesen compares the study of physics, where large complex systems can be studied using the phenomenally successful tools of statistical mechanics, with the study of society, which has no such methods. He describes an overall approach in which fields spanning the physical sciences, study of the mind, medicine, geoscience and so on must be integrated into a sort of uber-field he calls ‘social systems engineering’.
‘Studies and practices have clearly proved that the only feasible and effective way to treat an open complex giant system is a metasynthesis from the qualitative to the quantitative, i.e. the meta-synthetic engineering method. This method has been extracted, generalized and abstracted from practical studies…’
This involves integrating scientific theories, data, quantitative models, and qualitative practical expert experience into ‘models built from empirical data and reference material, with hundreds and thousands of parameters’, which are then simulated.
‘This is quantitative knowledge arising from qualitative understanding. Thus metasynthesis from qualitative to quantitative approach is to unite organically the expert group, data, all sorts of information, and the computer technology, and to unite scientific theory of various disciplines and human experience and knowledge.’
He gives some examples and this diagram as a high-level summary:
So, China is combining:
A massive ~$150 billion data science/AI investment program with the goal of global scientific/technological leadership and economic dominance.
A massive investment program in associated science/technology such as quantum information/computing.
A massive domestic surveillance program combining AI, facial recognition, genetic identification, the ‘social credit system’ and so on.
A massive anti-access/area denial military program aimed at America/Taiwan.
A massive technology espionage program that, for example, successfully stole the software codes for the F-35.
The use of proven systems management techniques for integrating principles of effective action to predict and manage complex systems at large scale.
America led the development of AI technologies and has the huge assets of its universities, a tradition (weakening) of welcoming scientists (since they opened Princeton to Einstein, von Neumann and Gödel in the 1930s), and the ecosystem of places like Silicon Valley.
It is plausible that China could, within 15 years, find some nonlinear asymmetries that provide an edge while, channeling Marshal Ogarkov, it outthinks the Pentagon in management and operations.
A few interesting recent straws in the AI/robotics wind
I blogged recently about Judea Pearl. He is one of the most important scholars in the field of causal reasoning. He wrote a short paper about the limits of state-of-the-art AI systems using ‘deep learning’ neural networks — such as the AlphaGo system which recently conquered the game of GO — and how these systems could be improved. Humans can interrogate stored representations of their environment with counter-factual questions: how to instantiate this in machines? (Also economists, NB. Pearl’s statement that ‘I can hardly name a handful (<6) of economists who can answer even one causal question posed in ucla.in/2mhxKdO’.)
In an interview he said this about self-aware robots:
‘If a machine does not have a model of reality, you cannot expect the machine to behave intelligently in that reality. The first step, one that will take place in maybe 10 years, is that conceptual models of reality will be programmed by humans. The next step will be that machines will postulate such models on their own and will verify and refine them based on empirical evidence. That is what happened to science; we started with a geocentric model, with circles and epicycles, and ended up with a heliocentric model with its ellipses.
‘We’re going to have robots with free will, absolutely. We have to understand how to program them and what we gain out of it. For some reason, evolution has found this sensation of free will to be computationally desirable… Evidently, it serves some computational function.
‘I think the first evidence will be if robots start communicating with each other counterfactually, like “You should have done better.” If a team of robots playing soccer starts to communicate in this language, then we’ll know that they have a sensation of free will. “You should have passed me the ball — I was waiting for you and you didn’t!” “You should have” means you could have controlled whatever urges made you do what you did, and you didn’t.
[When will robots be evil?] When it appears that the robot follows the advice of some software components and not others, when the robot ignores the advice of other components that are maintaining norms of behavior that have been programmed into them or are expected to be there on the basis of past learning. And the robot stops following them.’
A DARPA project recently published this on self-aware robots.
‘A robot that learns what it is, from scratch, with zero prior knowledge of physics, geometry, or motor dynamics. Initially the robot does not know if it is a spider, a snake, an arm—it has no clue what its shape is. After a brief period of “babbling,” and within about a day of intensive computing, their robot creates a self-simulation. The robot can then use that self-simulator internally to contemplate and adapt to different situations, handling new tasks as well as detecting and repairing damage in its own body…
‘Initially, the robot moved randomly and collected approximately one thousand trajectories, each comprising one hundred points. The robot then used deep learning, a modern machine learning technique, to create a self-model. The first self-models were quite inaccurate, and the robot did not know what it was, or how its joints were connected. But after less than 35 hours of training, the self-model became consistent with the physical robot to within about four centimeters…
‘Lipson … notes that self-imaging is key to enabling robots to move away from the confinements of so-called “narrow-AI” towards more general abilities. “This is perhaps what a newborn child does in its crib, as it learns what it is,” he says. “We conjecture that this advantage may have also been the evolutionary origin of self-awareness in humans. While our robot’s ability to imagine itself is still crude compared to humans, we believe that this ability is on the path to machine self-awareness.”
‘Lipson believes that robotics and AI may offer a fresh window into the age-old puzzle of consciousness. “Philosophers, psychologists, and cognitive scientists have been pondering the nature of self-awareness for millennia, but have made relatively little progress,” he observes. “We still cloak our lack of understanding with subjective terms like ‘canvas of reality,’ but robots now force us to translate these vague notions into concrete algorithms and mechanisms.”
‘Lipson and Kwiatkowski are aware of the ethical implications. “Self-awareness will lead to more resilient and adaptive systems, but also implies some loss of control,” they warn. “It’s a powerful technology, but it should be handled with care.”’
This is from OpenAI’s recent announcement of a powerful new language model:
‘… a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization — all without task-specific training… The model is chameleon-like — it adapts to the style and content of the conditioning text. This allows the user to generate realistic and coherent continuations about a topic of their choosing… Our model is capable of generating samples from a variety of prompts that feel close to human quality and show coherence over a page or more of text… These samples have substantial policy implications: large language models are becoming increasingly easy to steer towards scalable, customized, coherent text generation, which in turn could be used in a number of beneficial as well as malicious ways.’ (bold added).
OpenAI has not released the full model yet because they take safety issues seriously. Cf. this for a discussion of some safety issues and links. As the author says re some of the complaints about OpenAI not releasing the full model, when you find normal cyber security flaws you do not publish the problem immediately — that is a ‘zero day attack’ and we should not ‘promote a norm that zero-day threats are OK in AI.’ Quite. It’s also interesting that it would probably only take ~$100,000 for a resourceful individual to re-create the full model quite quickly.
A few weeks ago, Deep Mind showed that their approach to beating human champions at GO can also beat the world’s best players at StarCraft, a game of imperfect information which is much closer to real-life human competition than perfect information games like chess and GO. OpenAI has shown something similar with a similar game, DOTA.
Moore’s Law: what if a country spends 1-10% GDP pushing such curves?
The march of Moore’s Law is entangled in many predictions. It is true that in some ways Moore’s Law has flattened out recently…
… BUT specialised chips developed for machine learning and other adaptations have actually kept it going. This chart shows how it actually started long before Moore and has been remarkably steady for ~120 years (NVIDIA in the top right is specialised for deep learning)…
NB. This is a logarithmic scale so makes progress seem much less dramatic than the ~20 orders of magnitude it represents.
Since von Neumann and Turing led the development of the modern computer in the 1940s, the price of computation has got ~x10 cheaper every five years (so x100 per decade), so over ~75 years that’s a factor of about a thousand trillion (10^15).
The industry seems confident the graph above will continue roughly as it has for at least another decade, though not because of continued transistor doubling: features have reached such a tiny nanometer scale that quantum effects will soon interfere with engineering. This means ~100-fold improvement before 2030, and combined with the ecosystem of entrepreneurs/VC/science investment etc this will bring many major disruptions even without significant progress with general intelligence.
Dominant companies like Apple, Amazon, Google, Baidu, Alibaba etc (NB. no big EU players) have extremely strong incentives to keep this trend going given the impact of mobile computing / the cloud etc on their revenues.
If this chart holds for another 20 years, computers will be ~10,000 times more powerful than today for the same price; if it holds for another 30 years, ~1 million times more powerful. Today’s multi-billion dollar supercomputer performance would be available for ~$1,000, just as the supercomputer power of a few decades ago is now available in your smartphone.
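These figures are just repeated applications of the x10-per-five-years rate quoted above; a minimal sketch of the arithmetic (the rate and time horizons are taken from the text, not new estimates):

```python
# Price-performance of computation improving ~10x every 5 years (rate from the text).
def improvement(years, factor=10, period=5):
    """Total improvement after `years` at `factor`x per `period` years."""
    return factor ** (years / period)

for y in (20, 30, 75):
    print(f"{y} years: {improvement(y):,.0f}x")
# 20 years → 10,000x; 30 years → 1,000,000x; 75 years → 10^15
```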
But there is another dimension to this trend. Look at this graph below. It shows the total amount of compute, in petaflop/s-days, that was used to train some selected AI projects using neural networks / deep learning.
‘Since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.5 month-doubling time (by comparison, Moore’s Law had an 18-month doubling period). Since 2012, this metric has grown by more than 300,000x (an 18-month doubling period would yield only a 12x increase)… The chart shows the total amount of compute, in petaflop/s-days, that was used to train selected results that are relatively well known, used a lot of compute for their time, and gave enough information to estimate the compute used. A petaflop/s-day (pfs-day) consists of performing 10^15 neural net operations per second for one day, or a total of about 10^20 operations.’ (Cf. OpenAI blog.)
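The quoted comparison can be sanity-checked with the standard formula: growth over a period is 2 raised to (elapsed months / doubling time). The ~64.5-month window below is my assumption, chosen so that the 18-month doubling reproduces the quoted ~12x figure:

```python
# Growth under a constant doubling time: 2 ** (elapsed / doubling_time).
months = 64.5                        # assumed window (2012 to publication)
ai_growth = 2 ** (months / 3.5)      # ~3.5-month doubling (AI training compute)
moore_growth = 2 ** (months / 18)    # 18-month doubling (Moore's-Law-style)
print(round(ai_growth), round(moore_growth))  # roughly 350,000x vs 12x
```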
The AlphaZero project in the top right is the recent Deep Mind project in which an AI system (a successor to the original AlphaGo that first beat human GO champions) zoomed by centuries of human knowledge on GO and chess in about one day of training.
Many dramatic breakthroughs in machine learning, particularly using neural networks (NNs), are open source. They are scaling up very fast. They will be networked together into ‘networks of networks’ and will become x10, x100, x1,000 more powerful. These NNs will keep demonstrating better than human performance in relatively narrowly defined tasks (like winning games) but these narrow definitions will widen unpredictably.
OpenAI’s blog showing the above graph concludes:
‘Overall, given the data above, the precedent for exponential trends in computing, work on ML specific hardware, and the economic incentives at play, we think it’d be a mistake to be confident this trend won’t continue in the short term. Past trends are not sufficient to predict how long the trend will continue into the future, or what will happen while it continues. But even the reasonable potential for rapid increases in capabilities means it is critical to start addressing both safety and malicious use of AI today. Foresight is essential to responsible policymaking and responsible technological development, and we must get out ahead of these trends rather than belatedly reacting to them.’ (Bold added)
This recent analysis of the extremely rapid growth of deep learning systems tries to estimate how long this rapid growth can continue and what interesting milestones may fall. It considers 1) the rate of growth of cost, 2) the cost of current experiments, and 3) the maximum amount that can be spent on an experiment in the future. Its rough answers are:
‘The cost of the largest experiments is increasing by an order of magnitude every 1.1 – 1.4 years.
‘The largest current experiment, AlphaGo Zero, probably cost about $10M.’
On the basis of the Manhattan Project costing ~1% of GDP, that gives ~$200 billion for one AI experiment. Given the growth rate, we could expect a $200B experiment in 5-6 years.
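A minimal check of the ‘5-6 years’ figure, assuming (per the quoted analysis) that the cost of the largest experiment grows an order of magnitude every 1.1-1.4 years from a ~$10M base:

```python
import math

# Orders of magnitude between today's ~$10M experiment and a ~$200B one.
current_cost, target_cost = 10e6, 200e9
orders = math.log10(target_cost / current_cost)  # ~4.3
for years_per_10x in (1.1, 1.4):
    print(f"{years_per_10x} years per 10x: {orders * years_per_10x:.1f} years")
# → ~4.7 years at the fast end, ~6.0 years at the slow end
```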
‘There is a range of estimates for how many floating point operations per second are required to simulate a human brain for one second. Those collected by AI Impacts have a median of 10^18 FLOPS (corresponding roughly to a whole-brain simulation using Hodgkin-Huxley neurons)’. [NB. many experts think 10^18 is off by orders of magnitude and it could easily be x1,000 or more higher.]
‘So for the shortest estimates … we have already reached enough compute to pass the human-childhood milestone. For the median estimate, and the Hodgkin-Huxley estimates, we will have reached the milestone within 3.5 years.’
We will not reach the bigger estimates (~10^25 FLOPS) within the 10 year window.
‘The AI-Compute trend is an extraordinarily fast trend that economic forces (absent large increases in GDP) cannot sustain beyond 3.5-10 more years. Yet the trend is also fast enough that if it is sustained for even a few years from now, it will sweep past some compute milestones that could plausibly correspond to the requirements for AGI, including the amount of compute required to simulate a human brain thinking for eighteen years, using Hodgkin Huxley neurons.’
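Using the median estimate quoted above (10^18 FLOPS to simulate a brain for one second) and the petaflop/s-day definition from the OpenAI quote (~10^20 operations), the ‘brain thinking for eighteen years’ milestone works out roughly as:

```python
# Compute needed to simulate a brain for 18 years at the median 10^18 FLOPS estimate.
SECONDS_PER_YEAR = 365 * 24 * 3600
total_flop = 1e18 * 18 * SECONDS_PER_YEAR      # ~5.7e26 operations
pfs_days = total_flop / 1e20                   # petaflop/s-day ≈ 1e20 operations
print(f"{pfs_days:.1e} petaflop/s-days")       # → 5.7e+06
```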
I can’t comment on the technical aspects of this but one political/historical point: I think this analysis is wrong about the Manhattan Project (MP). His argument is that the MP represents a reasonable upper bound for what America might spend. But the MP was not constrained by money — it was mainly constrained by theoretical and engineering challenges, constraints on non-financial resources and so on. Having studied the book by General Groves (who ran the MP), I can say he does not describe money as a problem — in fact, one of the extraordinary aspects of the story is the extreme (to today’s eyes) measures he took to ensure money was not a problem. If more than 1% of GDP had been needed, he’d have got it (until the intelligence came in from Europe that the Nazi programme was not threatening).
This is an important analogy. America and China are investing very heavily in AI but nobody knows — are there places at the edge of ‘breakthroughs with relatively narrow applications’ where suddenly you push ‘a bit’ and you get lollapalooza results with general intelligence? What if someone thinks — if I ‘only’ need to add some hardware and I can muster, say, 100 billion dollars to buy it, maybe I could take over the world? What if they’re right?
I think it is therefore more plausible to use the US defence budget at the height of the Cold War as a ‘reasonable estimate’ for what America might spend if they feel they are in an existential struggle. Washington knows that China is putting vast resources into AI research. If it starts taking over from Deep Mind and OpenAI as the place where the edge-of-the-art is discovered, then it WILL soon be seen as an existential struggle and there would rapidly be political pressures for a 1950s/1960s style ‘extreme’ response. So a reasonable upper bound might be at least 5-8 times bigger than 1% of GDP.
Further, unlike the nuclear race, an AGI race carries implications of not just ‘destroy global civilisation and most people’ but ‘potentially destroys ABSOLUTELY EVERYTHING not just on earth but, given time and the speed of light, everywhere’ — i.e. potentially all molecules re-assembled in the pursuit of some malign energy-information optimisation process. Once people realise just how bad AGI could go if the alignment problem is not solved (see below), would it not be reasonable to assume that even more money than ~8% of GDP will be found if/when this becomes a near-term fear of politicians?
Some in Silicon Valley who already have many billions at their disposal are already calculating numbers for these budgets. Surely people in Chinese intelligence are doodling the same as they listen to the week’s audio of Larry talking to Demis…?
General intelligence and safety
‘[R]ational systems exhibit universal drives towards self-protection, resource acquisition, replication and efficiency. Those drives will lead to anti-social and dangerous behaviour if not explicitly countered. The current computing infrastructure would be very vulnerable to unconstrained systems with these drives.’ Omohundro.
Shane Legg, co-founder and chief scientist of Deep Mind, said publicly a few years ago that there is a 50% probability that we will achieve human level AI by 2028, a 90% probability by 2050, and ‘I think human extinction will probably occur’. Given Deep Mind’s progress since he said this, it is surely unlikely he thinks the odds now are lower than 50% by 2028. Some at the leading edge of the field agree.
‘I think that within a few years we’ll be able to build an NN-based [neural network] AI (an NNAI) that incrementally learns to become at least as smart as a little animal, curiously and creatively learning to plan, reason and decompose a wide variety of problems into quickly solvable sub-problems. Once animal-level AI has been achieved, the move towards human-level AI may be small: it took billions of years to evolve smart animals, but only a few millions of years on top of that to evolve humans. Technological evolution is much faster than biological evolution, because dead ends are weeded out much more quickly. Once we have animal-level AI, a few years or decades later we may have human-level AI, with truly limitless applications. Every business will change and all of civilisation will change…
‘In 2050 there will be trillions of self-replicating robot factories on the asteroid belt. A few million years later, AI will colonise the galaxy. Humans are not going to play a big role there, but that’s ok. We should be proud of being part of a grand process that transcends humankind.’ Schmidhuber, one of the pioneers of ML, 2016.
Others have said they believe that estimates of AGI within 15-30 years are unlikely to be right. Two of the smartest people I’ve ever spoken to are physicists who understand the technical details and know the key researchers; they think that dozens of Nobel Prize scale ideas will probably be needed before AGI happens, and that the current wave of enthusiasm with machine learning/neural networks is more likely to repeat previous cycles in science (e.g. with quantum computing 20 years ago): great enthusiasm, the feeling that all barriers are quickly falling, then an increasingly obvious plateau, spreading disillusion, a search for new ideas, then a revival of hope and so on. They would bet more on a 50-80 year than a 20 year timescale.
Among the top people I have spoken to and/or whose predictions I have followed, there is a clear consensus that mainstream economic analysis (which is the foundation of politicians’ and media discussion) seriously underestimates the scale and speed of the social/economic/military/political disruption that narrow AI/automation will soon cause. But predictions on AGI are, unsurprisingly, all over the place.
Many argue that even if Moore’s Law continues for 30 years (a millionfold performance improvement) this may mean nothing significant for general intelligence, even if narrow AI transforms the world in many ways. Some experts think that estimates of the human brain’s computational capacity widely believed in the computer science world are actually orders of magnitude wrong. We still don’t know much about basics of the brain such as how long-term memories are formed. Maybe the brain’s processes will be much more resistant to understanding than ‘optimists’ assume.
But maybe relatively few big new ideas are needed to create world-changing capabilities. ‘Just’ applying great engineering and more resources to existing ideas allowed Deep Mind to blow past human performance metrics. I obviously cannot judge competing expert views but from a political perspective we know for sure that there is inherent uncertainty about how we discover new knowledge and this means we are bound to be surprised in all sorts of ways. We know that even brilliant researchers working right at the edge of progress are often clueless about what will happen quite soon and cannot reliably judge ‘is it less than 1% or more like 20% probability?’ questions. For example:
‘In 1901, two years before helping build the first heavier-than-air flyer, Wilbur Wright told his brother that powered flight was fifty years away. In 1939, three years before he personally oversaw the first critical chain reaction in a pile of uranium bricks, Enrico Fermi voiced 90% confidence that it was impossible to use uranium to sustain a fission chain reaction.’ (Yudkowsky)
Fermi’s experience suggests we should be extremely careful and put more resources into thinking very hard about how to minimise risks viz both narrow and general AI.
Those right at the edge of genetic engineering, such as George Church and Kevin Esvelt, are pushing for their field to be forcibly opened up to make it safer. As they argue, the current scientific approach and incentive system is essentially a ‘blind search algorithm’ in which small teams work in secret without being able to predict the consequences of their work and cannot be warned by those who do understand. A blind search algorithm is a bad approach for things like bioweapons that can destroy billions of lives and it is what we now have. The same argument applies to AGI.
We also know that political people and governments are slow to cope with major technological disruptions. Just look at TV. It’s been dominating politics since the 1950s. It is roughly 70 years old. Many politicians still do not understand it well. The UK state and political parties are in many ways much less sophisticated in their use of TV than groups like Hezbollah. This is even more true of social media. Also look at how unfounded conspiracy theories about fake news and social media viz the referendum and Trump have gripped much of the ‘educated’ class that thinks they see through fake news that fools the uneducated! Journalists are awarded THE ORWELL AWARD(!) for spreading fake news about fake news (and it’s not ‘lies’, they actually believe what they say)! (My experience is it’s much easier to fool people about politics if they have a degree than if they don’t because those with a degree tend to spend so much more energy fooling themselves.) This is not encouraging, particularly considering that politicians are directly incentivised to understand technologies like TV and internet polling for their own short-term interests yet most don’t.
From cars to planes, it has taken time for us to work out how to adapt to new things that can kill us. Given that 1) conventional research is ‘a blind search algorithm’, 2) our politicians are behind the curve on 70 year-old technologies and 3) there is little prospect of this changing without huge changes to conventional models of politics, we must ask another question about secrecy v openness and centralised v decentralised architectures.
One of the leaders of the 3D printing / FabLab revolution wrote this comparing the closed v open models of security:
‘The history of the Internet has shown that security through obscurity doesn’t work. Systems that have kept their inner workings a secret in the name of security have consistently proved more vulnerable than those that have allowed themselves to be examined — and challenged — by outsiders. The open protocols and programs used to protect Internet communications are the result of ongoing development and testing by a large expert community. Another historical lesson is that people, not technology, are the most common weakness when it comes to security. No matter how secure a system is, someone who has access to it can always be corrupted, wittingly or otherwise. Centralized control introduces a point of vulnerability that is not present in a distributed system.’ (Bold added)
As we saw above, the centralised approach has been a disaster for nuclear weapons and we survived by fluke. Overall the history of nuclear security is surely a very relevant and bad signal for AI safety. I would bet a lot that Deep Mind et al are all hacked and spied on by China and Russia (at least) so I think it’s safest to plan on the assumption that dangerous breakthroughs will leak almost instantly and could be applied by the sort of people who spy for intel agencies. So it is natural to ask, should we take an open/decentralised approach towards possible AGI?
(Tangential thought experiment: if you were in charge of an organisation like the KGB, why would you not hack hedge funds like Renaissance Technologies and use the information for your own ‘black’ hedge fund and thus dodge the need for arguments over funding (a ‘virtuous’ circle of espionage, free money, resources for more effective R&D and espionage plus it minimises the need for irritating interactions with politicians)? How hard would it be to detect such activity IF done with intelligent modesty? Given someone can hack the NSA without their identity being revealed, why would they not be hacking Renaissance and Deep Mind, with a bit of help from a Milla Jovovich lookalike who’s reading a book on n-dimensional string theory at the bar when that exhausted physics PhD with the access codes staggers in to relax?)
This seems to collide with another big problem — the alignment problem.
Stuart Russell, one of the world’s leading researchers, is one of those who has been very forceful about the fundamental importance of this: how do we GUARANTEE that entities more intelligent than us are aligned with humanity’s interests?
‘One [view] is: It’ll never happen, which is like saying we are driving towards the cliff but we’re bound to run out of gas before we get there. And that doesn’t seem like a good way to manage the affairs of the human race. And the other [view] is: Not to worry — we will just build robots that collaborate with us and we’ll be in human-robot teams. Which begs the question: If your robot doesn’t agree with your objectives, how do you form a team with it?’
Eliezer Yudkowsky, one of the few working on the alignment problem, described the difficulty:
‘How do you encode the goal functions of an A.I. such that it has an Off switch and it wants there to be an Off switch and it won’t try to eliminate the Off switch and it will let you press the Off switch, but it won’t jump ahead and press the Off switch itself? And if it self-modifies, will it self-modify in such a way as to keep the Off switch? We’re trying to work on that. It’s not easy… When you’re building something smarter than you, you have to get it right on the first try.’
So, we know centralised systems are very vulnerable and decentralised systems have advantages, but with AGI we also have to fear that we have no room for the trial-and-error of decentralised internet style security architectures — ‘you have to get it right on the first try’. Are we snookered?! And of course there is no guarantee it is even possible to solve the alignment problem. When you hear people in this field describing ideas about ‘abstracting human ethics and encoding them’ one wonders if solving the alignment problem might prove even harder than AGI — maybe only an AGI could solve it…
Given the media debate is dominated by endless pictures of the Terminator and politicians are what they are, researchers are, understandably, extremely worried about what might happen if the political-media system makes a sudden transition from complacency to panic. After all, consider the global reaction if reputable scientists suddenly announced they have discovered plausible signals that super-intelligent aliens will arrive on earth within 30 years: even when softened by caveats, such a warning would obviously transform our culture (in many ways positively!). As Peter Thiel has said, creating true AGI is a close equivalent to the ‘super-intelligent aliens arriving on earth’ scenario and the most important questions are not economic but political, and in particular: are they friendly and can we stop them eliminating us by design, bad luck, or indifference?
Further, in my experience extremely smart technical people are often naive about politics. They greatly over-estimate the abilities of prime ministers and presidents. They greatly under-estimate the incentive problems and the degree of focus that is required to get ANYTHING done in politics. They greatly exaggerate the potential for ‘rational argument’ to change minds and wrongly assume somewhere at the top of power ‘there must be’ a group of really smart people working on very dangerous problems who have real clout. Further, everybody thinks they understand ‘communication’ but almost nobody does. We can see from recent events that even the very best engineering companies like Facebook and Google can not only make huge mistakes in the political/communication world but also fail to learn from them (Facebook hiring Clegg was a sign of deep ignorance inside Facebook about their true problems). So it’s hard to be optimistic about the technical people educating the political people even assuming the technical people make progress with safety.
Hypothesis: 1) minimising nuclear/bio/AI risks and the potential for disastrous climate change requires a few very big things to change roughly simultaneously (‘normal’ political action will not be enough) and 2) this will require a weird alliance between a) technical people, b) political ‘renegades’, c) the public to ‘surround’ political Insiders locked into existing incentives:
1. Different ‘models for effective action’ among powerful people, which will only happen if either (A) some freak individual/group pops up, probably in a crisis environment or (B) somehow incentives are hacked. (A) can’t be relied on and (B) is very hard.
2. A new institution with global reach that can win global trust and support is needed. The UN is worse than useless for these purposes.
3. Public opinion will have to be mobilised to overcome the resistance of political Insiders, for example, regarding the potential for technology to bring very large gains ‘to me’ and simultaneously avert extreme dangers. This connects to the very widespread view that a) the existing economic model is extremely unfair and b) this model is sustained by a loose alliance of political elites and corporate looters who get richer by screwing the rest of us.
I have an idea about a specific project, mixing engineering/economics/psychology/politics, that might do this and will blog on it separately.
I suspect almost any idea that could do 1-3 will seem at least weird but without big changes, we are simply waiting for the law of averages to do its thing. We may have decades for AGI and climate change but we could collide with the WMD law of averages tomorrow so, impractical as this sounds, it seems to me people have to try new things and risk failure and ridicule.
Autonomous technology and the greater human good. Omohundro. ‘Military and economic pressures are driving the rapid development of autonomous systems. We show that these systems are likely to behave in anti-social and harmful ways unless they are very carefully designed. Designers will be motivated to create systems that act approximately rationally and rational systems exhibit universal drives towards self-protection, resource acquisition, replication and efficiency. Those drives will lead to anti-social and dangerous behaviour if not explicitly countered. The current computing infrastructure would be very vulnerable to unconstrained systems with these drives. We describe the use of formal methods to create provably safe but limited autonomous systems. We then discuss harmful systems and how to stop them. We conclude with a description of the ‘Safe-AI Scaffolding Strategy’ for creating powerful safe systems with a high confidence of safety at each stage of development.’ I strongly recommend reading this paper if interested in this blog.
Read this 1955 essay by von Neumann ‘Can we survive technology?’. VN was involved in the Manhattan Project, inventing computer science, game theory and much more. This essay explored the essential problem that the scale and speed of technological change have suddenly blown past political institutions. ‘For progress there is no cure…’
This blog looks at studies comparing expertise in many fields over decades, including work by Tetlock and Kahneman, and problems like — why people don’t learn to use even simple tools to stop children dying unnecessarily. There is a summary of some basic lessons at the end.
The reason for writing about this is that we will only improve the performance of government (at individual, team and institutional levels) if we reflect on:
what expertise really is and why some very successful fields cultivate it effectively while others, like government, do not;
how to select much higher quality people (it’s insane people as ignorant and limited as me can have the influence we do in the way we do — us limited duffers can help in limited ways but why do we deliberately exclude ~100% of the most intelligent, talented, relentless, high performing people from fields with genuine expertise, why do we not have people like Fields Medallist Tim Gowers or Michael Nielsen as Chief Scientist sitting ex officio in Cabinet?);
how to train people effectively to develop true expertise in skills relevant to government: it needs different intellectual content (PPE/economics are NOT good introductory degrees) and practice in practical skills (project management, making predictions and in general ‘thinking rationally’) with lots of fast, accurate feedback;
how to give them effective tools: e.g the Cabinet Room is worse in this respect than it was in July 1914 — at least then the clock and fireplace worked, and Lord Salisbury in the 1890s would walk round the Cabinet table gathering papers to burn in the grate — while today No10 is decades behind the state-of-the-art in old technologies like TV, doesn’t understand simple tools like checklists, and is nowhere with advanced technologies;
and how to ‘program’ institutions differently so that 1) people are more incentivised to optimise things we want them to optimise, like error-correction and predictive accuracy, and less incentivised to optimise bureaucratic process, prestige, and signalling as our institutions now do to a dangerous extent, and, connected, so that 2) institutions are much better at building high performance teams rather than continue normal rules that make this practically illegal, and so that 3) we have ‘immune systems’ to minimise the inevitable failures of even the best people and teams.
In SW1 now, those at the apex of power practically never think in a serious way about the reasons for the endemic dysfunctional decision-making that constitutes most of their daily experience or how to change it. What looks like omnishambles to the public and high performers in technology or business is seen by Insiders, always implicitly and often explicitly, as ‘normal performance’. ‘Crises’ such as the collapse of Carillion or our farcical multi-decade multi-billion ‘aircraft carrier’ project occasionally provoke a few days of headlines but it’s very rare anything important changes in the underlying structures and there is no real reflection on system failure.
This fact is why, for example, a startup created in a few months could win a referendum that should have been unwinnable. It was the systemic and consistent dysfunction of Establishment decision-making systems over a long period, with very poor mechanisms for good accurate feedback from reality, that created the space for a guerrilla operation to exploit.
This makes it particularly ironic that even after Westminster and Whitehall have allowed their internal consensus about UK national strategy to be shattered by the referendum, there is essentially no serious reflection on this system failure. It is much more psychologically appealing for Insiders to blame ‘lies’ (Blair and Osborne really say this without blushing), devilish use of technology to twist minds and so on. Perhaps the most profound aspect of broken systems is they cannot reflect on the reasons why they’re broken — never mind take effective action. Instead of serious thought, we have high status Insiders like Campbell reduced to bathos with whining on social media about Brexit ‘impacting mental health’. This lack of reflection is why Remain-dominated Insiders lurched from failure over the referendum to failure over negotiations. OODA loops across SW1 are broken and this is very hard to fix — if you can’t orient to reality how do you even see your problem well? (NB. It should go without saying that there is a faction of pro-Brexit MPs, ‘campaigners’ and ‘pro-Brexit economists’ who are at least as disconnected from reality, often more, as the May/Hammond bunker.)
In the commercial world, big companies mostly die within a few decades because they cannot maintain an internal system to keep them aligned to reality plus startups pop up. These two factors create learning at a system level — there is lots of micro failure but macro productivity/learning in which useful information is compressed and abstracted. In the political world, big established failing systems control the rules, suck in more and more resources rather than go bust, make it almost impossible for startups to contribute and so on. Even failures on the scale of the 2008 Crash or the 2016 referendum do not necessarily make broken systems face reality, at least quickly. Watching Parliament’s obsession with trivia in the face of the Cabinet’s and Whitehall’s contemptible failure to protect the interests of millions in the farcical Brexit negotiations is like watching the secretary to the Singapore Golf Club objecting to guns being placed on the links as the Japanese troops advanced.
Neither of the main parties has internalised the reality of these two crises. The Tories won’t face reality on things like corporate looting and the NHS, Labour won’t face reality on things like immigration and the limits of bureaucratic centralism. Neither can cope with the complexity of Brexit and both just look like I would in the ring with a professional fighter — baffled, terrified and desperate for a way to escape. There are so many simple ways to improve performance — and their own popularity! — but the system is stuck in such a closed loop it wilfully avoids seeing even the most obvious things and suppresses Insiders who want to do things differently…
But… there is a network of almost entirely younger people inside or close to the system thinking ‘we could do so much better than this’. Few senior Insiders are interested in these questions but that’s OK — few of them listened before the referendum either. It’s not the people now in power and running the parties and Whitehall who will determine whether we make Brexit a platform to contribute usefully to humanity’s biggest challenges but those that take over.
Doing better requires reflecting on what we know about real expertise…
How to distinguish between fields dominated by real expertise and those dominated by confident ‘experts’ who make bad predictions?
We know a lot about the distinction between fields in which there is real expertise and fields dominated by bogus expertise. Daniel Kahneman, who has published some of the most important research about expertise and prediction, summarises the two fundamental tests to ask about a field: 1) is there enough informational structure in the environment to allow good predictions, and 2) is there timely and effective feedback that enables error-correction and learning.
‘To know whether you can trust a particular intuitive judgment, there are two questions you should ask: Is the environment in which the judgment is made sufficiently regular to enable predictions from the available evidence? The answer is yes for diagnosticians, no for stock pickers. Do the professionals have an adequate opportunity to learn the cues and the regularities? The answer here depends on the professionals’ experience and on the quality and speed with which they discover their mistakes. Anesthesiologists have a better chance to develop intuitions than radiologists do. Many of the professionals we encounter easily pass both tests, and their off-the-cuff judgments deserve to be taken seriously. In general, however, you should not take assertive and confident people at their own evaluation unless you have independent reason to believe that they know what they are talking about.’ (Emphasis added.)
In fields where these two elements are present there is genuine expertise and people build new knowledge on the reliable foundations of previous knowledge. Some fields make a transition from stories (e.g Icarus) and authority (e.g ‘witch doctor’) to quantitative models (e.g modern aircraft) and evidence/experiment (e.g some parts of modern medicine/surgery). As scientists have said since Newton, they stand on the shoulders of giants.
How do we assess predictions / judgement about the future?
‘Good judgment is often gauged against two gold standards – coherence and correspondence. Judgments are coherent if they demonstrate consistency with the axioms of probability theory or propositional logic. Judgments are correspondent if they agree with ground truth. When gold standards are unavailable, silver standards such as consistency and discrimination can be used to evaluate judgment quality. Individuals are consistent if they assign similar judgments to comparable stimuli, and they discriminate if they assign different judgments to dissimilar stimuli.
‘Coherence violations range from base rate neglect and confirmation bias to overconfidence and framing effects (Gilovich, Griffith & Kahneman, 2002; Kahneman, Slovic & Tversky, 1982). Experts are not immune. Statisticians (Christensen-Szalanski & Bushyhead, 1981), doctors (Eddy, 1982), and nurses (Bennett, 1980) neglect base rates. Physicians and intelligence professionals are susceptible to framing effects and financial investors are prone to overconfidence.
‘Research on correspondence tells a similar story. Numerous studies show that human predictions are frequently inaccurate and worse than simple linear models in many domains (e.g. Meehl, 1954; Dawes, Faust & Meehl, 1989). Once again, expertise doesn’t necessarily help. Inaccurate predictions have been found in parole officers, court judges, investment managers in the US and Taiwan, and politicians. However, expert predictions are better when the forecasting environment provides regular, clear feedback and there are repeated opportunities to learn (Kahneman & Klein, 2009; Shanteau, 1992). Examples include meteorologists, professional bridge players, and bookmakers at the racetrack, all of whom are well-calibrated in their own domains.‘ (Tetlock, How generalizable is good judgment?, 2017.)
In another 2017 piece Tetlock explored the studies further. In the 1920s researchers built simple models based on expert assessments of 500 ears of corn and the price they would fetch in the market. They found that ‘to everyone’s surprise, the models that mimicked the judges’ strategies nearly always performed better than the judges themselves’ (Tetlock, cf. ‘What Is in the Corn Judge’s Mind?’, Journal of American Society for Agronomy, 1923). Banks found the same when they introduced models for credit decisions.
‘In other fields, from predicting the performance of newly hired salespeople to the bankruptcy risks of companies to the life expectancies of terminally ill cancer patients, the experience has been essentially the same. Even though experts usually possess deep knowledge, they often do not make good predictions…
‘When humans make predictions, wisdom gets mixed with “random noise.”… Bootstrapping, which incorporates expert judgment into a decision-making model, eliminates such inconsistencies while preserving the expert’s insights. But this does not occur when human judgment is employed on its own…
‘In fields ranging from medicine to finance, scores of studies have shown that replacing experts with models of experts produces superior judgments. In most cases, the bootstrapping model performed better than experts on their own. Nonetheless, bootstrapping models tend to be rather rudimentary in that human experts are usually needed to identify the factors that matter most in making predictions. Humans are also instrumental in assigning scores to the predictor variables (such as judging the strength of recommendation letters for college applications or the overall health of patients in medical cases). What’s more, humans are good at spotting when the model is getting out of date and needs updating…
‘Human experts typically provide signal, noise, and bias in unknown proportions, which makes it difficult to disentangle these three components in field settings. Whether humans or computers have the upper hand depends on many factors, including whether the tasks being undertaken are familiar or unique. When tasks are familiar and much data is available, computers will likely beat humans by being data-driven and highly consistent from one case to the next. But when tasks are unique (where creativity may matter more) and when data overload is not a problem for humans, humans will likely have an advantage…
‘One might think that humans have an advantage over models in understanding dynamically complex domains, with feedback loops, delays, and instability. But psychologists have examined how people learn about complex relationships in simulated dynamic environments (for example, a computer game modeling an airline’s strategic decisions or those of an electronics company managing a new product). Even after receiving extensive feedback after each round of play, the human subjects improved only slowly over time and failed to beat simple computer models. This raises questions about how much human expertise is desirable when building models for complex dynamic environments. The best way to find out is to compare how well humans and models do in specific domains and perhaps develop hybrid models that integrate different approaches.‘ (Tetlock)
Kahneman also recently published new work relevant to this.
In general organisations spend almost no effort figuring out how noisy the predictions made by senior staff are and how much this costs. Kahneman has done some ‘noise audits’ and shown companies that management make MUCH more variable predictions than people realise.
‘What prevents companies from recognizing that the judgments of their employees are noisy? The answer lies in two familiar phenomena: Experienced professionals tend to have high confidence in the accuracy of their own judgments, and they also have high regard for their colleagues’ intelligence. This combination inevitably leads to an overestimation of agreement. When asked about what their colleagues would say, professionals expect others’ judgments to be much closer to their own than they actually are. Most of the time, of course, experienced professionals are completely unconcerned with what others might think and simply assume that theirs is the best answer. One reason the problem of noise is invisible is that people do not go through life imagining plausible alternatives to every judgment they make.
‘High skill develops in chess and driving through years of practice in a predictable environment, in which actions are followed by feedback that is both immediate and clear. Unfortunately, few professionals operate in such a world. In most jobs people learn to make judgments by hearing managers and colleagues explain and criticize—a much less reliable source of knowledge than learning from one’s mistakes. Long experience on a job always increases people’s confidence in their judgments, but in the absence of rapid feedback, confidence is no guarantee of either accuracy or consensus.’
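A ‘noise audit’ of the kind Kahneman describes is simple to simulate (all numbers below are invented for illustration, not from any real audit): give several professionals the same cases and measure how far apart their answers are on identical inputs.

```python
import random
import statistics

random.seed(1)

# Hypothetical audit: five underwriters price the same batch of cases.
# Each shares the same broad policy but has a personal bias plus
# case-by-case wobble -- the two ingredients of "noise".
def make_underwriter(personal_bias):
    def judge(case_risk):
        return 1000 * case_risk + personal_bias + random.gauss(0, 150)
    return judge

underwriters = [make_underwriter(random.gauss(0, 200)) for _ in range(5)]
cases = [random.uniform(0.5, 2.0) for _ in range(40)]

# A simple proxy for Kahneman's noise measure: for each case, the spread
# of the professionals' judgments of that SAME case, relative to its mean.
def relative_noise(case):
    quotes = [judge(case) for judge in underwriters]
    return statistics.pstdev(quotes) / statistics.fmean(quotes)

avg_relative_noise = statistics.fmean(relative_noise(c) for c in cases)
print(f"average disagreement on identical cases: {avg_relative_noise:.0%}")
```

In Kahneman’s real audits executives typically guessed their professionals would disagree by around 10 per cent; the measured figure was reportedly around 50 per cent.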
Reviewing the point that Tetlock makes about simple models beating experts in many fields, Kahneman summarises the evidence:
‘People have competed against algorithms in several hundred contests of accuracy over the past 60 years, in tasks ranging from predicting the life expectancy of cancer patients to predicting the success of graduate students. Algorithms were more accurate than human professionals in about half the studies, and approximately tied with the humans in the others. The ties should also count as victories for the algorithms, which are more cost-effective…
‘The common assumption is that algorithms require statistical analysis of large amounts of data. For example, most people we talk to believe that data on thousands of loan applications and their outcomes is needed to develop an equation that predicts commercial loan defaults. Very few know that adequate algorithms can be developed without any outcome data at all — and with input information on only a small number of cases. We call predictive formulas that are built without outcome data “reasoned rules,” because they draw on commonsense reasoning.
‘The construction of a reasoned rule starts with the selection of a few (perhaps six to eight) variables that are incontrovertibly related to the outcome being predicted. If the outcome is loan default, for example, assets and liabilities will surely be included in the list. The next step is to assign these variables equal weight in the prediction formula, setting their sign in the obvious direction (positive for assets, negative for liabilities). The rule can then be constructed by a few simple calculations.
‘The surprising result of much research is that in many contexts reasoned rules are about as accurate as statistical models built with outcome data. Standard statistical models combine a set of predictive variables, which are assigned weights based on their relationship to the predicted outcomes and to one another. In many situations, however, these weights are both statistically unstable and practically unimportant. A simple rule that assigns equal weights to the selected variables is likely to be just as valid. Algorithms that weight variables equally and don’t rely on outcome data have proved successful in personnel selection, election forecasting, predictions about football games, and other applications.
‘The bottom line here is that if you plan to use an algorithm to reduce noise, you need not wait for outcome data. You can reap most of the benefits by using common sense to select variables and the simplest possible rule to combine them…
‘Uncomfortable as people may be with the idea, studies have shown that while humans can provide useful input to formulas, algorithms do better in the role of final decision maker. If the avoidance of errors is the only criterion, managers should be strongly advised to overrule the algorithm only in exceptional circumstances.’
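A ‘reasoned rule’ of the kind Kahneman describes takes a few lines to build. The sketch below follows his recipe (a handful of obviously relevant variables, standardised, equal weights, signs set by common sense, no outcome data); the variable names and numbers are illustrative, not from any real scoring model.

```python
import statistics

# Signs set by common sense: more assets and longer trading history
# should REDUCE predicted default risk; more liabilities and missed
# payments should INCREASE it. All variables get equal weight.
SIGNS = {"assets": -1, "liabilities": +1,
         "missed_payments": +1, "years_trading": -1}

def reasoned_rule(applicants):
    """Score applicants: higher score = higher predicted default risk."""
    scores = [0.0] * len(applicants)
    for var, sign in SIGNS.items():
        # Standardise each variable across the pool so that equal
        # weights are meaningful despite different units.
        values = [a[var] for a in applicants]
        mean, sd = statistics.fmean(values), statistics.pstdev(values)
        for i, v in enumerate(values):
            scores[i] += sign * (v - mean) / sd
    return scores

# Tiny illustrative pool of loan applicants.
applicants = [
    {"assets": 50, "liabilities": 90, "missed_payments": 3, "years_trading": 1},
    {"assets": 400, "liabilities": 60, "missed_payments": 0, "years_trading": 12},
    {"assets": 120, "liabilities": 150, "missed_payments": 1, "years_trading": 4},
]
scores = reasoned_rule(applicants)
riskiest = max(range(len(scores)), key=lambda i: scores[i])
print(f"scores: {[round(s, 2) for s in scores]}, riskiest applicant: {riskiest}")
# Applicant 0 (low assets, high missed payments, short history) scores riskiest.
```

Note what is absent: no training data, no fitted weights, no statistics beyond a mean and a standard deviation. That is Kahneman’s point about why you ‘need not wait for outcome data’.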
People fail to learn from even the great examples of success and the simplest lessons
One of the most interesting meta-lessons of studying high performance, though, is that simply demonstrating extreme success does NOT lead to much learning. For example:
ARPA and PARC created the internet and PC. The PARC research team was an extraordinary collection of about two dozen people who were managed in a very unusual way that created super-productive processes extremely different to normal bureaucracies. XEROX, which owned PARC, had the entire future of the computer industry in its own hands, paid for by its own budgets, and it simultaneously let Bill Gates and Steve Jobs steal everything and XEROX then shut down the research team that did it. And then, as Silicon Valley grew on the back of these efforts, almost nobody, including most of the billionaires who got rich from the dynamics created by ARPA-PARC, studied the nature of the organisation and processes and copied it. Even today, those trying to do edge-of-the-art research in a similar way to PARC right at the heart of the Valley ecosystem are struggling for long-term patient funding. As Alan Kay, one of the PARC team, said, ‘The most interesting thing has been the contrast between appreciation/exploitation of the inventions/contributions [of PARC] versus the almost complete lack of curiosity and interest in the processes that produced them.’

ARPA survived being abolished in the 1970s but it was significantly changed and is no longer the freewheeling place that it was in the 1960s when it funded the internet. In many ways DARPA’s approach now is explicitly different to the old ARPA (the addition of the ‘D’ was a sign of internal bureaucratic changes).
‘Systems management’ was invented in the 1950s and 1960s (partly based on wartime experience of large complex projects) to deal with the classified ICBM project and Apollo. It put man on the moon then NASA largely abandoned the approach and reverted to being (relative to 1963-9) a normal bureaucracy. Most of Washington has ignored the lessons ever since — look for example at the collapse of ObamaCare’s rollout, after which Insiders said ‘oh, looks like it was a system failure, wonder how we deal with this’, mostly unaware that America had developed a successful approach to such projects half a century earlier. This is particularly interesting given that China also studied Mueller’s approach to systems management in Apollo and as we speak is copying it in projects across China. The EU’s bureaucracy is, like Whitehall, an anti-checklist to high level systems management — i.e they violate almost every principle of effective action.
Buffett and Munger are the most successful investment partnership in world history. Every year for half a century they have explained some basic principles, particularly concerning incentives, behind organisational success. Practically no public companies take their advice and all around us in Britain we see vast corporate looting and politicians of all parties failing to act — they don’t even read the Buffett/Munger lessons and think about them. Even when given these lessons to read, they won’t read them (I know this because I’ve tried).
Perhaps you’re thinking — well, learning from these brilliant examples might be intrinsically really hard, much harder than Cummings thinks. I don’t think this is quite right. Why? Partly because millions of well-educated and normally-ethical people don’t learn even from much simpler things.
I will explore this separately soon but I’ll give just one example. The world of healthcare unnecessarily kills and injures people on a vast scale. Two aspects of this are 1) a deep resistance to learning from the success of very simple tools like checklists and 2) a deep resistance to face the fact that most medical experts do not understand statistics properly and their routine misjudgements cause vast suffering, plus warped incentives encourage widespread lies about statistics and irrational management. E.g. people are constantly told things like ‘you’ve tested positive for X therefore you have X’ and they then kill themselves. We KNOW how to practically eliminate certain sorts of medical injury/death. We KNOW how to teach and communicate statistics better. (Cf. Professor Gigerenzer for details. He was the motivation for including things like conditional probabilities in the new National Curriculum.) These are MUCH simpler than building ICBMs, putting man on the moon, creating the internet and PC, or being great investors. Yet our societies don’t do them.
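The ‘tested positive therefore you have it’ fallacy is pure base-rate neglect, and Gigerenzer’s ‘natural frequencies’ method exposes it in a few lines of arithmetic. The numbers below are illustrative screening-test figures of the kind he uses, not those of any specific real test.

```python
# A rare condition and a fairly accurate test.
prevalence = 0.01       # 1 in 100 people have the condition
sensitivity = 0.90      # P(test positive | condition)
false_positive = 0.09   # P(test positive | no condition)

# Gigerenzer's trick: think in a concrete population of 10,000 people
# instead of conditional probabilities.
n = 10_000
sick = n * prevalence                      # 100 people have it
true_pos = sick * sensitivity              # 90 of them test positive
false_pos = (n - sick) * false_positive    # 891 healthy people ALSO test positive

p_sick_given_positive = true_pos / (true_pos + false_pos)
print(f"P(condition | positive test) = {p_sick_given_positive:.0%}")
# -> about 9%: most positives are false, because the condition is rare.
```

Communicated as ‘of 981 people who test positive, only 90 actually have it’, the same fact is understood by patients and doctors alike; communicated as ‘the test is 90% accurate’, it kills people.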
Because we do not incentivise error-correction and predictive accuracy. People are not incentivised to consider the cost of their noisy judgements. Where incentives and culture are changed, performance magically changes. It is the nature of the systems, not (mostly) the nature of the people, that is the crucial ingredient in learning from proven simple success. In healthcare like in government generally, people are incentivised to engage in wasteful/dangerous signalling to a terrifying degree — not rigorous thinking and not solving problems.
I have experienced the problem with checklists first hand in the Department for Education when trying to get the social worker bureaucracy to think about checklists in the context of avoiding child killings like Baby P. Professionals tend to see them as undermining their status and bureaucracies fight against learning, even when some great officials try really hard (as some in the DfE did such as Pamela Dow and Victoria Woodcock). ‘Social work is not the same as an airline Dominic’. No shit. Airlines can handle millions of people without killing one of them because they align incentives with predictive accuracy and error-correction.
Some appalling killings are inevitable but the social work bureaucracy will keep allowing unnecessary killings because they will not align incentives with error-correction. Undoing flawed incentives threatens the system so they’ll keep killing children instead — and they’re not particularly bad people, they’re normal people in a normal bureaucracy. The pilot dies with the passengers. The ‘CEO’ on over £150,000 a year presiding over another unnecessary death despite constantly increasing taxpayers money pouring in? Issue a statement that ‘this must never happen again’, tell the lawyers to redact embarrassing cockups on the grounds of ‘protecting someone’s anonymity’ (the ECHR is a great tool to cover up death by incompetence), fuck off to the golf course, and wait for the media circus to move on.
Why do so many things go wrong? Because usually nobody is incentivised to work relentlessly to suppress entropy, never mind come up with something new.
We can see some reasonably clear conclusions from decades of study on expertise and prediction in many fields.
Some fields are like extreme sport or physics: genuine expertise emerges because of fast effective feedback on errors.
Abstracting human wisdom into models often works better than relying on human experts as models are often more consistent and less noisy.
Models are also often cheaper and simpler to use.
Models do not have to be complex to be highly effective — quite the opposite, often simpler models outperform more sophisticated and expensive ones.
In many fields (which I’ve explored before but won’t go into again here) low tech very simple checklists have been extremely effective: e.g flying aircraft or surgery.
Successful individuals like Warren Buffett and Ray Dalio also create cognitive checklists to trap and correct normal cognitive biases that degrade individual and team performance.
Fields make progress towards genuine expertise when they make a transition from stories (e.g Icarus) and authority (e.g ‘witch doctor’) to quantitative models (e.g modern aircraft) and evidence/experiment (e.g some parts of modern medicine/surgery).
In the intellectual realm, maths and physics are fields dominated by genuine expertise and provide a useful benchmark to compare others against. They are also hierarchical. Social sciences have little in common with this.
Even when we have great examples of learning and progress, and we can see the principles behind them are relatively simple and do not require high intelligence to understand, they are so psychologically hard and run so counter to the dynamics of normal big organisations, that almost nobody learns from them. Extreme success is ‘easy to learn from’ in one sense and ‘the hardest thing in the world to learn from’ in another sense.
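On the point above that simpler models often outperform sophisticated ones and noisy experts: the classic demonstration is Dawes’s ‘improper linear model’ — standardize each cue and add the z-scores with equal weights, no fitted coefficients at all. A minimal sketch (the cue names and numbers are invented for illustration):

```python
# Dawes-style "improper linear model": rank options by an equal-weight
# sum of standardized cues. Invented cues and numbers for illustration.
from statistics import mean, stdev

def unit_weight_scores(rows):
    """rows: list of dicts mapping cue name -> numeric value.
    Returns one score per row: the sum of per-cue z-scores."""
    cues = list(rows[0])
    stats = {c: (mean(r[c] for r in rows), stdev(r[c] for r in rows))
             for c in cues}
    return [sum((row[c] - stats[c][0]) / stats[c][1] for c in cues)
            for row in rows]

candidates = [
    {"track_record": 7, "test_score": 60, "references": 3},
    {"track_record": 9, "test_score": 80, "references": 5},
    {"track_record": 4, "test_score": 90, "references": 2},
]
scores = unit_weight_scores(candidates)
best = scores.index(max(scores))  # index 1 on these invented numbers
```

The surprising empirical result (Dawes, 1979) is that such unit-weight models routinely match or beat expert judges out of sample, because they are perfectly consistent: no noise from fatigue, mood, or irrelevant context.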
It is fascinating how remarkably little interest there is in the world of politics/government, and social sciences analysing politics/government, about all this evidence. This is partly because politics/government is an anti-learning and anti-expertise field, partly because the social sciences are swamped by what Feynman called ‘cargo cult science’ with very noisy predictions, little good feedback and learning, and a lot of chippiness at criticism whether it’s from statistics experts or the ‘ignorant masses’. Fields like ‘education research’ and ‘political science’ are particularly dreadful and packed with charlatans but much of economics is not much better (much pro- and anti-Brexit mainstream economics is classic ‘cargo cult’).
I have found there is overwhelmingly more interest in high technology circles than in government circles, but in high technology circles there is also a lot of incredulity and naivety about how government works — many assume politicians are trying and failing to achieve high performance and don’t realise that in fact nobody is actually trying. This illusion extends to many well-connected businessmen who just can’t internalise the reality of the apex of power. I find that uneducated people on 20k living hundreds of miles from SW1 generally have a more accurate picture of daily No10 work than extremely well-connected billionaires.
This is all sobering and is another reason to be pessimistic about the chances of changing government from ‘normal’ to ‘high performance’ — but, pessimism of the intellect, optimism of the will…
the science of prediction across different fields (e.g early warning systems, the Tetlock/IARPA project showing dramatic performance improvements),
what we know about high performance (individual/team/organisation) in different fields (e.g China’s application of ‘systems management’ to government),
technology and tools (e.g Bret Victor’s work, Michael Nielsen’s work on cognitive technologies, work on human-AI ‘minotaur’ teams),
political/government decision making affecting millions of people and trillions of dollars (e.g WMD, health), and
communication (e.g crisis management, applied psychology).
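On the first item above — the science of prediction — the Tetlock/IARPA tournaments measured the ‘dramatic performance improvements’ with, among other metrics, the Brier score: the mean squared gap between probability forecasts and binary outcomes, lower being better. A minimal sketch:

```python
# Brier score: mean squared error of probability forecasts against
# binary outcomes (1 = happened, 0 = didn't). Lower is better;
# a forecaster who always hedges at 50% scores exactly 0.25.

def brier_score(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes   = [1, 1, 0, 0]
hedger     = [0.5, 0.5, 0.5, 0.5]   # never commits
forecaster = [0.9, 0.8, 0.2, 0.1]   # confident and mostly right

print(brier_score(hedger, outcomes))      # 0.25
print(brier_score(forecaster, outcomes))  # roughly 0.025, ten times better
```

Because the Brier score is a strictly proper scoring rule, the only way to improve it is to report honest, well-calibrated probabilities — exactly the alignment of incentives with predictive accuracy argued for above.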
Progress requires attacking the ‘system of systems’ problem at the right ‘level’. Attacking the problems directly — let’s improve policy X and Y, let’s swap ‘incompetent’ A for ‘competent’ B — cannot touch the core problems, particularly the hardest meta-problem that government systems bitterly fight improvement. Solving the explicit surface problems of politics and government is best approached by a more general focus on applying abstract principles of effective action. We need to surround relatively specific problems with a more general approach. Attack at the right level will see specific solutions automatically ‘pop out’ of the system. One of the most powerful simplicities in all conflict (almost always unrecognised) is: ‘winning without fighting is the highest form of war’. If we approach the problem of government performance at the right level of generality then we have a chance to solve specific problems ‘without fighting’ — or, rather, without fighting nearly so much and the fighting will be more fruitful.
This is not a theoretical argument. If you look carefully at ancient texts and modern case studies, you see that applying a small number of very simple, powerful, but largely unrecognised principles (that are very hard for organisations to operationalise) can produce extremely surprising results.
How to jump from the Idea to Reality? More soon…
Ps. Just as I was about to hit publish on this, the DCMS Select Committee released their report on me. The sentence about the Singapore golf club at the top comes to mind.
The DCMS Select Committee has just sent me the following letter.
Here is my official reply…
Dear Damian et al
As you know I agreed to give evidence.
In April, I told you I could not do the date you suggested. On 12 April I suggested July.
You ignored this for weeks.
On 3 May you asked again if I could do a date I’d already said I could not do.
I replied that, as I’d told you weeks earlier, I could not.
You then threatened me with a Summons.
On 10 May, Collins wrote:
We have offered you different dates, and as I said previously we are not prepared to wait until July for you to give evidence to the committee. We have also discussed this with the Electoral Commission who have no objection to you giving evidence to us.
We are asking you to give evidence to the committee following evidence we have received that relates to the work of Vote Leave. We have extended a similar invitation to Arron Banks and Andy Wigmore, to respond to evidence we have received about Leave.EU, and they have both agreed to attend.
The committee will be sending you a summons to appear and I hope that you are able to respond positively to this.
The EC has NOT told me this.
Sending a summons is the behaviour of people looking for PR, not people looking to get to the bottom of this affair.
A summons will have ZERO positive impact on my decision and is likely only to mean I withdraw my offer of friendly cooperation, given you will have shown greater interest in grandstanding than truth-seeking, which is one of the curses of the committee system.
I hope you reconsider and put truth-seeking first.
You replied starting this charade.
You talk of ‘contempt of Parliament’.
You seem unaware that most of the country feels contempt for Parliament and this contempt is growing.
You have failed miserably over Brexit. You have not even bothered to educate yourselves on the basics of ‘what the Single Market is’, as Ivan Rogers explained in detail yesterday.
We want £350 million a week for the NHS plus long-term consistent funding and learning from the best systems in the world and instead you funnel our money to appalling companies like the parasites that dominate defence procurement.
We want action on unskilled immigration and you give us bullshit promises of ‘tens of thousands’ that you don’t even believe yourselves plus, literally, free movement for murderers, then you wonder why we don’t trust you.
We want a country MORE friendly to scientists and people from around the world with skills to offer and you give us ignorant persecution that is making our country a bad joke.
You spend your time on this sort of grandstanding instead of serving millions of people less fortunate than you and who rely on you.
If you had wanted my evidence you would have cooperated over dates.
You actually wanted to issue threats, watch me give in, then get higher audiences for your grandstanding.
I’m calling your bluff. Your threats are as empty as those from May/Hammond/DD to the EU. Say what you like, I will not come to your committee regardless of how many letters you send or whether you send characters in fancy dress to hand me papers.
If another Committee behaves reasonably and I can give evidence without compromising various legal actions then I will consider it. Once these legal actions have finished, presumably this year, it will be easy to arrange if someone else wants to do it.
Further, I’m told many of your committee support the Adonis/Mandelson/Campbell/Grieve/Goldman Sachs/FT/CBI campaign for a rematch against the country.
Do you know what Vote Leave 2 would feel like for the MPs who vote for that (and donors who fund it)?
It would feel like having Lawrence Taylor chasing you and smashing you into the ground over and over and over again.
Vote Leave 2 would not involve me — nobody will make that mistake again — but I know what it would feel like for every MP who votes for a rematch against the public.
Lawrence Taylor: relentless
So far you guys have botched things on an epic scale but it’s hard to break into the Westminster system — you rig the rules to stop competition. Vote Leave 1 needed Cameron’s help to hack the system. If you guys want to run with Adonis and create another wave, be careful what you wish for. ‘Unda fert nec regitur’ and VL2 would ride that wave right at the gates of Westminster.
A second referendum would be bad for the country and I hope it doesn’t happen but if you force the issue, then Vote Leave 2 would try to create out of the smoking wreck in SW1 something that can deliver what the public wants. Imagine Amazon-style obsession on customer satisfaction (not competitor and media obsession which is what you guys know) with Silicon Valley technology/scaling and Mueller-style ‘systems politics’ combined with the wave upon wave of emotion you will have created. Here’s some free political advice: when someone’s inside your OODA loop, it feels to them like you are working for them. If you go for a rematch, then this is what you will be doing for people like me. 350m would just be the starter.
‘Mixed emotions, Buddy, like Larry Wildman going off a cliff — in my new Maserati.’
I will happily discuss this with your colleagues on a different committee if they are interested, after the legal issues are finished…
Ps. If you’re running an inquiry on fake news, it would be better to stop spreading fake news yourselves and to correct your errors when made aware of them. If you’re running an inquiry on issues entangled with technologies, it would be better to provide yourself with technological expertise so you avoid spreading false memes. E.g your recent letter to Facebook asked them to explain to you the operational decision-making of Vote Leave. This is a meaningless question which it is impossible for Facebook to answer and could only be asked by people who do not understand the technology they are investigating.
‘Just like all British governments, they will act more or less in a hand to mouth way on the spur of the moment, but they will not think out and adopt a steady policy.’ Earl Cromer, 1896.
‘Fascinating that the same problems recur time after time, in almost every program, and that the management of the program, whether it happened to be government or industry, continues to avoid reality.’ George Mueller, pioneer of systems management and head of the Apollo programme to put man on the moon.
Traditional cultures, those that all humans lived in until quite recently and which still survive in pockets, don’t realise that they are living inside a particular perspective. They think that what they see is ‘reality’. It is, obviously, not their fault. It is not because they are stupid. It is a historical accident that they did/do not have access to mental models that help more accurate thinking about reality.
Westminster and the other political cultures dotted around the world are similar to these traditional cultures. They think they are living in ‘reality’. The MPs and pundits get up, read each other, tweet at each other, give speeches, send press releases, have dinner, attack, fuck or fight each other, do the same tomorrow and think ‘this is reality’. Like traditional cultures they are wrong. They are living inside a particular perspective that enormously distorts reality.
They are trapped in thinking about today and their careers. They are trapped in thinking about incremental improvements. Almost nobody has ever been part of a high performance team responsible for a complex project. The speciality is a hot take to explain post facto what one cannot predict. They mostly don’t know what they don’t know. They don’t understand the decentralised information processing that allows markets to enable complex coordination. They don’t understand how scientific research works and they don’t value it. Their daily activity is massively constrained by the party and state bureaucracies that incentivise behaviour very different to what humanity needs to create long-term value. As Michael Nielsen (author of Reinventing Discovery) writes:
‘[M]uch of our intellectual elite who think they have “the solutions” have actually cut themselves off from understanding the basis for much of the most important human progress.’
Unlike traditional cultures, our modern political cultures don’t have the excuse of our hunter-gatherer ancestors. We could do better. But it is very very hard to escape the core imperatives that make big bureaucracies — public companies as well as state bureaucracies — so bad at learning. Warren Buffett explained decades ago how institutions actively fight against learning and fight to stay in a closed and vicious feedback loop:
‘My most surprising discovery: the overwhelming importance in business of an unseen force that we might call “the institutional imperative”. In business school, I was given no hint of the imperative’s existence and I did not intuitively understand it when I entered the business world. I thought then that decent, intelligent, and experienced managers would automatically make rational business decisions. But I learned the hard way that isn’t so. Instead rationality frequently wilts when the institutional imperative comes into play.
‘For example, 1) As if governed by Newton’s First Law, any institution will resist any change in its current direction. 2) … Corporate projects will materialise to soak up available funds. 3) Any business craving of the leader, however foolish, will quickly be supported by … his troops. 4) The behaviour of peer companies … will be mindlessly imitated.’
Almost nobody really learns from the world’s most successful investor about investing and how to run a successful business with good corporate governance. (People read what he writes but almost no investors choose to operate long-term like him, I think it is still true that not a single public company has copied his innovations with corporate governance like ‘no pay for company directors’, and governments have consistently rejected his and Munger’s advice about controlling the looting of public companies by management.) Almost nobody really learns how to do things better from the experience of dealing with this ‘institutional imperative’. We fail over and over again in the same way, trusting in institutions that are programmed to fail.
It is very very hard for humans to lift our eyes from today and to go out into the future and think about what could be done to bring the future back to the present. Like ants crawling around on the leaf, we political people only know our leaf.
Science has shown us a different way. Newton looked up from his leaf, looked far away from today, and created a new perspective — a new model of reality. It took an extreme genius to discover something like calculus but once discovered billions of people who are far from being geniuses can use this new perspective. Science advances by turning new ideas into standard ideas so each generation builds on the last.
Politics does the equivalent of constantly trying to reinvent children’s arithmetic and botching it. It does not build reliable foundations of knowledge. Archimedes is no longer cutting edge. Thucydides and Sun Tzu are still cutting edge. Even though Tetlock and others have shown how to start making similar progress with politics, our political cultures fiercely resist learning and fight ferociously to stay in closed and failing feedback loops.
In many ways our political culture has regressed as it has become more and more audio-visual and less and less literate. (Only 31% of US college graduates can read at a basic level. I’d guess it’s similar here. See end.) I’ve experimented with the way Jeff Bezos runs meetings at Amazon: i.e start the meeting by giving people a 5-10 page memo to read. Impossible in Westminster, nobody will sit and read like that! Officials have tried and failed for a year to get senior ministers to engage with complex written material about the EU negotiations. TV news dominates politics and is extremely low-bandwidth: it contains a few hundred words and rarely uses graphics properly. Evan Davis illustrates a comment about ‘going down the plughole’ with a picture of water down a plughole and Nick Robinson illustrates a comment about ‘the economy taking off’ with a picture of a plane taking off. The constant flow of bullshit from the likes of Robert Peston and Jon Snow dominates the medium because competition has been impossible until recently. BUT, although technology is making these charlatans less relevant (good) it also creates new problems and will not necessarily improve the culture.
Today, the anniversary of the referendum, is a good day to forget the babble in the bubble and think about lessons from another project that changed the world, the famous ARPA/PARC team of the 1960s and 1970s.
ARPA/PARC and ‘capturing the heavens’: The best way to predict the future is to invent it
The panic over Sputnik brought many good things such as a huge increase in science funding. America also created the Advanced Research Projects Agency (ARPA, which later added ‘Defense’ and became DARPA). Its job was to fund high risk / high payoff technology development. In the 1960s and 1970s, a combination of unusual people and unusually wise funding from ARPA created a community that in turn invented the internet, or ‘the intergalactic network’ as Licklider originally called it, and the personal computer. One of the elements of this community was PARC, a research centre working for Xerox. As Bill Gates said, he and Steve Jobs essentially broke into PARC, stole their ideas, and created Microsoft and Apple.
The ARPA/PARC project has created over 35 TRILLION DOLLARS of value for society and counting.
The whole story is fascinating in many ways. I won’t go into the technological aspects. I just want to say something about the process.
What does a process that produces ideas that change the world look like?
One of the central figures was Alan Kay. One of the most interesting things about the project is that not only has almost nobody tried to repeat this sort of research but the business world has even gone out of its way to spread misinformation about it because it was seen as so threatening to business-as-usual.
I will sketch a few lessons from one of Kay’s pieces but I urge you to read the whole thing.
‘This is what I call “The power of the context” or “Point of view is worth 80 IQ points”. Science and engineering themselves are famous examples, but there are even more striking processes within these large disciplines. One of the greatest works of art from that fruitful period of ARPA/PARC research in the 60s and 70s was the almost invisible context and community that catalysed so many researchers to be incredibly better dreamers and thinkers. That it was a great work of art is confirmed by the world-changing results that appeared so swiftly, and almost easily. That it was almost invisible, in spite of its tremendous success, is revealed by the disheartening fact today that, as far as I’m aware, no governments and no companies do edge-of-the-art research using these principles.’
‘[W]hen I think of ARPA/PARC, I think first of good will, even before brilliant people… Good will and great interest in graduate students as “world-class researchers who didn’t have PhDs yet” was the general rule across the ARPA community.
‘[I]t is no exaggeration to say that ARPA/PARC had “visions rather than goals” and “funded people, not projects”. The vision was “interactive computing as a complementary intellectual partner for people pervasively networked world-wide”. By not trying to derive specific goals from this at the funding side, ARPA/PARC was able to fund rather different and sometimes opposing points of view.
‘The pursuit of Art always sets off plans and goals, but plans and goals don’t always give rise to Art. If “visions not goals” opens the heavens, it is important to find artistic people to conceive the projects.
‘Thus the “people not projects” principle was the other cornerstone of ARPA/PARC’s success. Because of the normal distribution of talents and drive in the world, a depressingly large percentage of organizational processes have been designed to deal with people of moderate ability, motivation, and trust. We can easily see this in most walks of life today, but also astoundingly in corporate, university, and government research. ARPA/PARC had two main thresholds: self-motivation and ability. They cultivated people who “had to do, paid or not” and “whose doings were likely to be highly interesting and important”. Thus conventional oversight was not only not needed, but was not really possible. “Peer review” wasn’t easily done even with actual peers. The situation was “out of control”, yet extremely productive and not at all anarchic.
‘”Out of control” because artists have to do what they have to do. “Extremely productive” because a great vision acts like a magnetic field from the future that aligns all the little iron particle artists to point to “North” without having to see it. They then make their own paths to the future. Xerox often was shocked at the PARC process and declared it out of control, but they didn’t understand that the context was so powerful and compelling and the good will so abundant, that the artists worked happily at their version of the vision. The results were an enormous collection of breakthroughs.
‘Our game is more like art and sports than accounting, in that high percentages of failure are quite OK as long as enough larger processes succeed… [I]n most processes today — and sadly in most important areas of technology research — the administrators seem to prefer to be completely in control of mediocre processes to being “out of control” with superproductive processes. They are trying to “avoid failure” rather than trying to “capture the heavens”.
‘All of these principles came together a little over 30 years ago to eventually give rise to 1500 Altos, Ethernetworked to: each other, Laserprinters, file servers and the ARPAnet, distributed to many kinds of end-users to be heavily used in real situations. This anticipated the commercial availability of this genre by 10-15 years. The best way to predict the future is to invent it.
‘[W]e should realize that many of the most important ARPA/PARC ideas haven’t yet been adopted by the mainstream. For example, it is amazing to me that most of Doug Engelbart’s big ideas about “augmenting the collective intelligence of groups working together” have still not taken hold in commercial systems. What looked like a real revolution twice for end-users, first with spreadsheets and then with Hypercard, didn’t evolve into what will be commonplace 25 years from now, even though it could have. Most things done by most people today are still “automating paper, records and film” rather than “simulating the future”. More discouraging is that most computing is still aimed at adults in business, and that aimed at nonbusiness and children is mainly for entertainment and apes the worst of television. We see almost no use in education of what is great and unique about computer modeling and computer thinking. These are not technological problems but a lack of perspective. Must we hope that the open-source software movements will put things right?
‘The ARPA/PARC history shows that a combination of vision, a modest amount of funding, with a felicitous context and process can almost magically give rise to new technologies that not only amplify civilization, but also produce tremendous wealth for the society. Isn’t it time to do this again by Reason, even with no Cold War to use as an excuse? How about helping children of the world grow up to think much better than most adults do today? This would truly create “The Power of the Context”.’
Note how this story runs contrary to how free market think tanks and pundits describe technological development. The impetus for most of this development came from government funding, not markets.
Also note that every attempt since the 1950s to copy ARPA and JASON (the semi-classified group that partly gave ARPA its direction) in the UK has been blocked by Whitehall. The latest attempt was in 2014 when the Cabinet Office swatted aside the idea. Hilariously its argument was ‘DARPA has had a lot of failures’ thus demonstrating extreme ignorance about the basic idea — the whole point is you must have failures and if you don’t have lots of failures then you are failing!
People later claimed that while PARC may have changed the world it never made any money for Xerox. This is ‘absolute bullshit’ (Kay). It made billions from the laser printer alone and overall Xerox made 250 times what they invested in PARC before they went bust. In 1983 they fired Bob Taylor, the manager of PARC and the guy who made it all happen.
‘They hated [Taylor] for the very reason that most companies hate people who are doing something different, because it makes middle and upper management extremely uncomfortable. The last thing they want to do is make trillions, they want to make a few millions in a comfortable way’ (Kay).
Someone finally listened to Kay recently. ‘YC Research’, the research arm of the world’s most successful (by far) technology incubator, is starting to fund people in this way. I am not aware of any similar UK projects though I know that a small network of people are thinking again about how something like this could be done here. If you can help them, take a risk and help them! Someone talk to science minister Jo Johnson but be prepared for the Treasury’s usual ignorant bullshit — ‘what are we buying for our money, and how can we put in place appropriate oversight and compliance?’ they will say!
Why is this relevant to the referendum?
As we ponder the future of the UK-EU relationship shaped amid the farce of modern Whitehall, we should think hard about the ARPA/PARC example: how a small group of people can make a huge breakthrough with little money but the right structure, the right ways of thinking, and the right motives.
Those of us outside the political system thinking ‘we know we can do so much better than this but HOW can we break through the bullshit?’ need to change our perspective and gain 80 IQ points.
This real picture is a metaphor for the political culture: ad hoc solutions that are either bad or don’t scale.
ARPA said ‘Let’s get rid of all the wires’. How do we ‘get rid of all the wires’ and build something different that breaks open the closed and failing political cultures? Winning the referendum was just one step that helps clear away dead wood but we now need to build new things.
The ARPA vision that aligned the artists ‘like little iron filings’ was:
‘Computers are destined to become interactive intellectual amplifiers for everyone in the world universally networked worldwide’ (Licklider).
We need a motivating vision aimed not at tomorrow but at changing the basic wiring of the whole system, a vision that can align ‘the little iron filings’, and then start building for the long-term.
I will go into what I think this vision could be and how to do it another day. I think it is possible to create something new that could scale very fast and enable us to do politics and government extremely differently, as different to today as the internet and PC were to the post-war mainframes. This would enable us to build huge long-term value for humanity in a relatively short time (less than 20 years). To create it we need a process as well suited to the goal as the ARPA/PARC project was and incorporating many of its principles.
We must try to escape the current system with its periodic meltdowns and international crises. These crises move 500-1,000 times faster than those of summer 1914. Our destructive potential is at least a million-fold greater than it was in 1914. Yet we have essentially the same hierarchical command-and-control decision-making systems in place now that could not even cope with 1914 technology and pace. We have dodged nuclear wars by fluke because individuals made snap judgements in minutes. Nobody who reads the history of these episodes can think that this is viable long-term, and we will soon have another wave of innovation to worry about with autonomous robots and genetic engineering. Technology gives us no option but to try to overcome evolved instincts like destroying out-group competitors.
Ironically, one of the very few people in politics who understood the sort of thinking needed was … Jean Monnet, the architect of the EEC/EU! Monnet understood how to step back from today and build institutions. He worked operationally to prepare the future:
‘If there was stiff competition round the centres of power, there was practically none in the area where I wanted to work – preparing the future.’
Monnet was one of the few people in modern politics who really deserve the label ‘genius’. The story of how he wangled the creation of his institutions through the daily chaos of post-war politics is a lesson to anybody who wants to get things done.
But the institutions he created are in many ways the opposite of what the world needs. Their core operating principle is perpetual centralisation of power in the hands of an all powerful bureaucracy (Commission) and Court (ECJ). Nothing that works well in the world works like this!
Thanks to the prominence of Farage the dominant story among educated people is that those who got us out of the EU want to take us back to the pre-1914 era of hostile competing nation states. Nothing could be further from the truth. The key people in Vote Leave wanted and want not just what is best for Britain but what is best for all humanity. We want more international cooperation, not less. The problem with the EU is not that it is about international cooperation but that it is so bad at it and actually undermines it.
Britain leaving forces those with power to ask: how can all European countries trade freely and cooperate without subscribing to Monnet’s bureaucratic centralism? This will help Europe in the long-term. To those who favour this bureaucratic centralism and uniformity, reflect on the different trajectories of Europe and China post-Renaissance. In Europe, regulatory competition (so Columbus could chase funding in Spain after rejection in Portugal) brought immense gains. In China, centrally directed uniformity led to centuries of stagnation. America’s model of competitive federalism created by the founding fathers has been a far more effective engine of civilisation, growth, and new knowledge than the Monnet-Delors Single Market model.
If Britain were to focus on science and education with huge resources and a new-found seriousness, then this regulatory diversity would help not just Britain but all Europe and the global science community. We could make Britain the best place in the world to be for those who can invent the future. Like Alan Kay and his colleagues, we could create whole new industries. We could call Jeff Bezos and say, ‘Ok Jeff, you want a permanent international manned moon base, let’s talk about who does what, but not with that old rocket technology.’ No country on earth funds science as well as we already know how it could be done — that is something for Britain to do that would create real long-term value for humanity, instead of the ‘punching above our weight’ and ‘special relationship’ bullshit that passes for strategy in London. How we change our domestic institutions is within our power and will have much, much greater influence on our long-term future than whatever deal is botched together with Brussels. We have the resources. But can we break the system open? If we don’t then we’re likely to go down the path we were already going down inside the EU, like the deluded Norma Desmond in Sunset Boulevard claiming ‘I am big, it’s the pictures that got small.’
Vote Leave and ‘good will’
Although Vote Leave was enmeshed in a sort of collective lunacy we managed, barely, to fend it off from the inner workings of the campaign. Much of my job (sadly) was just trying to maintain a cordon around the core team so they could deliver the campaign with as little disruption as possible. We managed this because among the core people we had great good will. The stories of the campaign focus on the lunacy, but the people who really made it work remember the goodwill.
A year ago tonight I was sitting alone in a room thinking ‘we’ve won, now…’ when the walls started rumbling. At first I couldn’t make it out; then, as Tim Shipman tells the story in his definitive book on the campaign, I heard ‘Dom, Dom, DOM’ — the team had declared victory. I went next door…
Thanks to everybody who sacrificed something. As I said that night and as I said in my long blog on the campaign, I’ve been given credit I don’t deserve and which rightly belongs to others — Cleo Watson, Richard ‘Ricardo’ Howell, Brother Starkie, Oliver Lewis, Lord Suart et al. Now, let’s think about what should come next…
Watch Alan Kay explain how to invent the future HERE and HERE.
Ps. Kay also points out that the real computer revolution won’t happen until people fulfil the original vision of enabling children to use this powerful way of thinking:
‘The real printing revolution was a qualitative change in thought and argument that lagged the hardware inventions by almost two centuries. The special quality of computers is their ability to rapidly simulate arbitrary descriptions, and the real computer revolution won’t happen until children can learn to read, write, argue and think in this powerful new way. We should all try to make this happen much sooner than 200 or even 20 more years!’
Almost nobody in education policy is aware of the educational context for the ARPA/PARC project which also speaks volumes about the abysmal field of ‘education research/policy’.
* Re the US literacy statistic, cf. A First Look at the Literacy of America’s Adults in the 21st Century, National Assessment of Adult Literacy, U.S. Dept of Education, NCES 2006.
This paper concerns a very interesting story combining politics, management, institutions, science and technology. When high technology projects passed a threshold of complexity post-1945 amid the extreme pressure of the early Cold War, new management ideas emerged. These ideas were known as ‘systems engineering’ and ‘systems management’. They were particularly connected to the classified program to build the first Intercontinental Ballistic Missiles (ICBMs) in the 1950s, and the ideas that worked were transplanted into a failing NASA by George Mueller and others from 1963, leading to the successful moon landing in 1969.
These ideas were then applied in other mission-critical teams and could be used to improve government performance. Urgently needed projects to lower the probability of catastrophes for humanity will benefit from considering why Mueller’s approach was 1) so successful and 2) so un-influential in politics. Could we develop a ‘systems politics’ that applies the unrecognised simplicities of effective action?
For those interested, it also looks briefly at an interesting element of the story – the role of John von Neumann, the brilliant mathematician who was deeply involved in the Manhattan Project, the project to build ICBMs, the first digital computers, and subjects like artificial intelligence, artificial life, possibilities for self-replicating machines made from unreliable components, and the basic problem that technological progress ‘gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we have known them, cannot continue.’
An obvious project with huge inherent advantages for humanity is the development of an international manned lunar base as part of developing space for commerce and science. It is the sort of thing that might change political dynamics on earth and could generate enormous support across international boundaries. After 23 June 2016, the UK has to reorient national policy on many dimensions. Developing basic science is one of the most important dimensions (for example, as I have long argued we urgently need a civilian version of DARPA similarly operating outside normal government bureaucratic systems including procurement and HR). Supporting such an international project would be a great focus for UK efforts and far more productive than our largely wasted decades of focus on the dysfunctional bureaucracy in Brussels, which is dominated by institutions that fail the most important test — the capacity for error-correction, the importance of which has been demonstrated over long periods and through many problems by the Anglo-American political system and its common law.
Please leave comments or email dmc2.cummings at gmail.com