‘The combination of physics and politics could render the surface of the earth uninhabitable.’ John von Neumann.
This series of blogs considers:
- the difference between fields with genuine expertise, such as fighting and physics, and fields dominated by bogus expertise, such as politics and economic forecasting;
- the big big problem we face – the world is ‘undersized and underorganised’ because of a collision between four forces: 1) our technological civilisation is inherently fragile and vulnerable to shocks, 2) the knowledge it generates is inherently dangerous, 3) our evolved instincts predispose us to aggression and misunderstanding, and 4) there is a profound mismatch between the scale and speed of destruction our knowledge can cause and the quality of individual and institutional decision-making in ‘mission critical’ institutions – our institutions are similar to those that failed so spectacularly in summer 1914 yet they face crises moving at least ~10³ times faster and involving ~10⁶ times more destructive power able to kill ~10¹⁰ people;
- what classic texts and case studies suggest about the unrecognised simplicities of effective action to improve the selection, education, training, and management of vital decision-makers to improve dramatically, reliably, and quantifiably the quality of individual and institutional decisions (particularly a) the ability to make accurate predictions and b) the quality of feedback);
- how we can change incentives to aim a much bigger fraction of the most able people at the most important problems;
- what tools and technologies can help decision-makers cope with complexity.
[I’ve tweaked a couple of things in response to this blog by physicist Steve Hsu.]
Summary of the big big problem
The investor Peter Thiel (founder of PayPal and Palantir, early investor in Facebook) asks people in job interviews: what billion (10⁹) dollar business is nobody building? The most successful investor in world history, Warren Buffett, illustrated what a quadrillion (10¹⁵) dollar business might look like in his 50th anniversary letter to Berkshire Hathaway investors.
‘There is, however, one clear, present and enduring danger to Berkshire against which Charlie and I are powerless. That threat to Berkshire is also the major threat our citizenry faces: a “successful” … cyber, biological, nuclear or chemical attack on the United States… The probability of such mass destruction in any given year is likely very small… Nevertheless, what’s a small probability in a short period approaches certainty in the longer run. (If there is only one chance in thirty of an event occurring in a given year, the likelihood of it occurring at least once in a century is 96.6%.) The added bad news is that there will forever be people and organizations and perhaps even nations that would like to inflict maximum damage on our country. Their means of doing so have increased exponentially during my lifetime. “Innovation” has its dark side.
‘There is no way for American corporations or their investors to shed this risk. If an event occurs in the U.S. that leads to mass devastation, the value of all equity investments will almost certainly be decimated.
‘No one knows what “the day after” will look like. I think, however, that Einstein’s 1949 appraisal remains apt: “I know not with what weapons World War III will be fought, but World War IV will be fought with sticks and stones.”’
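Buffett’s bracketed figure is just the complement rule applied to independent annual risks. A quick sanity check (his 1-in-30 annual probability is an illustration, not an estimate):

```python
# Chance an event with annual probability p happens at least once in n years:
# 1 - (1 - p)^n  (assumes the years are independent)
def prob_at_least_once(p_annual: float, n_years: int) -> float:
    return 1 - (1 - p_annual) ** n_years

print(f"{prob_at_least_once(1 / 30, 100):.1%}")  # prints 96.6%
```

The same formula shows why ‘a small probability in a short period approaches certainty in the longer run’: even a 1-in-100 annual risk gives ~63% over a century.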
Politics is profoundly nonlinear. (I have written a series of blogs about complexity and prediction HERE which are useful background for those interested.) Changing the course of European history via the referendum only involved about 10 crucial people controlling ~£10⁷ while its effects over ten years could be on the scale of ~10⁸–10⁹ people and ~£10¹²: like many episodes in history the resources put into it are extremely nonlinear in relation to the potential branching histories it creates. Errors dealing with Germany in 1914 and 1939 were costly on the scale of ~100,000,000 (10⁸) lives. If we carry on with normal human history – that is, international relations defined as out-groups competing violently – and combine this with modern technology then it is extremely likely that we will have a disaster on the scale of billions (10⁹) or even all humans (~10¹⁰). The ultimate disaster would kill about 100 times more people than our failure with Germany. Our destructive power is already much more than 100 times greater than it was then: nuclear weapons increased destructiveness by roughly a factor of a million.
Even if we dodge this particular bullet there are many others lurking. New genetic engineering techniques such as CRISPR allow radical possibilities for re-engineering organisms including humans in ways thought of as science fiction only a decade ago. We will soon be able to remake human nature itself. CRISPR-enabled ‘gene drives’ enable us to make changes to the germ-line of organisms permanent such that changes spread through the entire wild population, including making species extinct on demand. Unlike nuclear weapons, such technologies are not complex, expensive, or easy to keep secret for long. The world’s leading experts predict that people will be making them cheaply at home soon – perhaps they already are. These developments have been driven by exponential progress much faster than Moore’s Law, reducing the cost of DNA sequencing per genome from ~$10⁸ to ~$10³ in roughly 15 years.
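The ‘much faster than Moore’s Law’ claim can be sanity-checked with a one-line halving-time calculation, using the rough cost-per-genome figures above:

```python
import math

# Cost per genome fell from ~$1e8 to ~$1e3 over ~15 years (rough figures)
cost_start, cost_end, years = 1e8, 1e3, 15
halvings = math.log2(cost_start / cost_end)  # number of times the cost halved
halving_time = years / halvings
print(f"cost halves every ~{halving_time:.1f} years (Moore's Law: ~2 years)")
```

That is roughly one halving every 0.9 years against Moore’s two: more than twice the pace, sustained for a decade and a half.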
It is already practically possible to deploy a cheap, autonomous, and anonymous drone with facial-recognition software and a one-gram shaped charge to identify a relevant face and blow it up. Military logic is driving autonomy. For example, 1) the explosion in the volume of drone surveillance video (from 71 hours in 2004 to 300,000 hours in 2011 to millions of hours now) requires automated analysis, and 2) jamming and spoofing of drones strongly incentivise a push for autonomy. It is unlikely that promises to ‘keep humans in the loop’ will be kept. It is likely that state and non-state actors will deploy low-cost drone swarms using machine learning to automate the ‘find-fix-finish’ cycle now controlled by humans. (See HERE for a video just released for one such program and imagine the capability when they carry their own communication and logistics network with them.)
In the medium-term, many billions are being spent on finding the secrets of general intelligence. We know this secret is encoded somewhere in the roughly 125 million ‘bits’ of information that is the rough difference between the genome that produces the human brain and the genome that produces the chimp brain. This search space is remarkably small – the equivalent of just 25 million English words or 30 copies of the King James Bible. There is no fundamental barrier to decoding this information and it is possible that the ultimate secret could be described relatively simply (cf. this great essay by physicist Michael Nielsen). One of the world’s leading experts has told me they think a large proportion of this problem could be solved in about a decade with a few tens of billions and something like an Apollo programme level of determination.
Not only is our destructive and disruptive power still getting bigger quickly – it is also getting cheaper and faster every year. The change in speed adds another dimension to the problem. In the period between the Archduke’s murder and the outbreak of World War I a month later it is striking how general failures of individuals and institutions were compounded by the way in which events moved much faster than the ‘mission critical’ institutions could cope with such that soon everyone was behind the pace, telegrams were read in the wrong order and so on. The crisis leading to World War I was about 30 days from the assassination to the start of general war – about 700 hours. The timescale for deciding what to do between receiving a warning of nuclear missile launch and deciding to launch yourself is less than half an hour and the President’s decision time is less than this, maybe just minutes. This is a speedup factor of at least 10³.
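The ~10³ figure is a straight ratio of the two decision windows; sketched:

```python
# Back-of-envelope: decision window in the July Crisis of 1914
# versus a nuclear launch warning today.
hours_1914 = 30 * 24   # ~a month from assassination to general war
hours_nuclear = 0.5    # under half an hour from warning to launch decision
speedup = hours_1914 / hours_nuclear
print(f"speedup ~ {speedup:.0f}x, i.e. at least 10^3")
```

If the President in fact has only minutes, the factor is closer to 10⁴.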
Economic crises already occur far faster than human brains can cope with. The financial system has made a transition from people shouting at each other to a system dominated by high frequency ‘algorithmic trading’ (HFT), i.e. machine intelligence applied to robot trading with vast volumes traded on a global spatial scale and a microsecond (10⁻⁶) temporal scale far beyond the monitoring, understanding, or control of regulators and politicians. There is even competition for computer trading bases in specific locations based on calculations of Special Relativity as the speed of light becomes a factor in minimising trade delays (cf. Relativistic statistical arbitrage, Wissner-Gross). ‘The Flash Crash’ of 6 May 2010 saw the Dow lose hundreds of points in minutes. Mini ‘flash crashes’ now blow up and die out faster than humans can notice. Given our institutions cannot cope with economic decisions made at ‘human speed’, a fortiori they cannot cope with decisions made at ‘robot speed’. There is scope for worse disasters than 2008 which would further damage the moral credibility of decentralised markets and provide huge chances for extremist political entrepreneurs to exploit. (* See endnote.)
What about the individuals and institutions that are supposed to cope with all this?
Our brains have not evolved much in thousands of years and are subject to all sorts of constraints including evolved heuristics that lead to misunderstanding, delusion, and violence, particularly under pressure. There is a terrible mismatch between the sort of people who routinely dominate mission critical political institutions and the sort of people we need: high-ish IQ (we need more people >145 (+3 SD) while almost everybody important is between 115 and 130 (+1 to +2 SD)), a robust toolkit for not fooling yourself including quantitative problem-solving (almost totally absent at the apex of relevant institutions), determination, management skills, relevant experience, and ethics. While our ancestor chiefs at least had some intuitive feel for important variables like agriculture and cavalry, our contemporary chiefs (and those in the media responsible for scrutiny of decisions) generally do not understand their equivalents, and are often less experienced in managing complex organisations than their predecessors.
The national institutions we have to deal with such crises are pretty similar to those that failed so spectacularly in summer 1914 yet they face crises moving at least ~10³ times faster and involving ~10⁶ times more destructive power able to kill ~10¹⁰ people. The international institutions developed post-1945 (UN, EU etc) contribute little to solving the biggest problems and in many ways make them worse. These institutions fail constantly and do not – cannot – learn much.
If we keep having crises like we have experienced over the past century then this combination of problems pushes the probability of catastrophe towards ‘overwhelmingly likely’.
What Is To Be Done? There’s plenty of room at the top
‘In a knowledge-rich world, progress does not lie in the direction of reading information faster, writing it faster, and storing more of it. Progress lies in the direction of extracting and exploiting the patterns of the world… And that progress will depend on … our ability to devise better and more powerful thinking programs for man and machine.’ Herbert Simon, Designing Organizations for an Information-rich World, 1969.
‘Fascinating that the same problems recur time after time, in almost every program, and that the management of the program, whether it happened to be government or industry, continues to avoid reality.’ George Mueller, pioneer of ‘systems engineering’ and ‘systems management’ and the man most responsible for the success of the 1969 moon landing.
Somehow the world has to make a series of extremely traumatic and dangerous transitions over the next 20 years. The main transition needed is:
Embed reliably the unrecognised simplicities of high performance teams (HPTs), including personnel selection and training, in ‘mission critical’ institutions while simultaneously developing a focused project that radically improves the prospects for international cooperation and new forms of political organisation beyond competing nation states.
Big progress on this problem would automatically and for free bring big progress on other big problems. It could improve (even save) billions of lives and save a quadrillion dollars (~$10¹⁵). If we avoid disasters then the error-correcting institutions of markets and science will, patchily, spread peace, prosperity, and learning. We will make big improvements with public services and other aspects of ‘normal’ government. We will have a healthier political culture in which representative institutions, markets serving the public (not looters), and international cooperation are stronger.
Can a big jump in performance – ‘better and more powerful thinking programs for man and machine’ – somehow be systematised?
Feynman once gave a talk titled ‘There’s plenty of room at the bottom’ about the huge performance improvements possible if we could learn to do engineering at the atomic scale – what is now called nanotechnology. There is also ‘plenty of room at the top’ of political structures for huge improvements in performance. As I explained recently, the victory of the Leave campaign owed more to the fundamental dysfunction of the British Establishment than it did to any brilliance from Vote Leave. Despite having the support of practically every force with power and money in the world (including the main broadcasters) and controlling the timing and legal regulation of the referendum, they blew it. This was good if you support Leave but just how easily the whole system could be taken down should be frightening for everybody.
Creating high performance teams is obviously hard but in what ways is it really hard? It is not hard in the same sense that some things are hard like discovering profound new mathematical knowledge. HPTs do not require profound new knowledge. We have been able to read the basic lessons in classics for over two thousand years. We can see relevant examples all around us of individuals and teams showing huge gains in effectiveness.
The real obstacle is not financial. The financial resources needed are remarkably low and the return on small investments could be incalculably vast. We could significantly improve the decisions of the most powerful 100 people in the UK or the world for less than a million dollars (~£10⁶) and a decade-long project on a scale of just ~£10⁷ could have dramatic effects.
The real obstacle is not a huge task of public persuasion – quite the opposite. A government that tried in a disciplined way to do this would attract huge public support. (I’ve polled some ideas and am confident about this.) Political parties are locked in a game that in trying to win in conventional ways leads to the public despising them. Ironically if a party (established or new) forgets this game and makes the public the target of extreme intelligent focus then it would not only make the world better but would trounce their opponents.
The real obstacle is not a need for breakthrough technologies though technology could help. As Colonel Boyd used to shout, ‘People, ideas, machines – in that order!’
The real obstacle is that although we can all learn and study HPTs it is extremely hard to put this learning to practical use and sustain it against all the forces of entropy that constantly operate to degrade high performance once the original people have gone. HPTs are episodic. They seem to come out of nowhere, shock people, then vanish with the rare individuals. People write about them and many talk about learning from them but in fact almost nobody ever learns from them – apart, perhaps, from those very rare people who did not need to learn – and nobody has found a method to embed this learning reliably and systematically in institutions that can maintain it. The Prussian General Staff remained operationally brilliant but in other ways went badly wrong after the death of the elder Moltke. When George Mueller left NASA it reverted to what it had been before he arrived – management chaos. All the best companies quickly go downhill after the departure of people like Bill Gates – even when such very able people have tried very very hard to avoid exactly this problem.
Charlie Munger, half of the most successful investment team in world history, has a great phrase he uses to explain their success that gets to the heart of this problem:
‘There isn’t one novel thought in all of how Berkshire [Hathaway] is run. It’s all about … exploiting unrecognized simplicities… It’s a community of like-minded people, and that makes most decisions into no-brainers. Warren [Buffett] and I aren’t prodigies. We can’t play chess blindfolded or be concert pianists. But the results are prodigious, because we have a temperamental advantage that more than compensates for a lack of IQ points.’
The simplicities that bring high performance in general, not just in investing, are largely unrecognised because they conflict with many evolved instincts and are therefore psychologically very hard to implement. The principles of the Buffett-Munger success are clear – they have even gone to great pains to explain them and what the rest of us should do – and the results are clear, yet still almost nobody really listens to them and people of above-average intelligence instead constantly put their money into active fund management that is proven to destroy wealth every year!
Most people think they are already implementing these lessons and usually strongly reject the idea that they are not. This means that just explaining things is very unlikely to work:
‘I’d say the history that Charlie [Munger] and I have had of persuading decent, intelligent people who we thought were doing unintelligent things to change their course of action has been poor.’ Buffett.
Even more worrying, it is extremely hard to take over organisations that are not run right and make them excellent.
‘We really don’t believe in buying into organisations to change them.’ Buffett.
If people won’t listen to the world’s most successful investor in history on his own subject, and even he finds it too hard to take over failing businesses and turn them around, how likely is it that politicians and officials incentivised to keep things as they are will listen to ideas about how to do things better? How likely is it that a team can take over broken government institutions and make them dramatically better in a way that outlasts the people who do it? Bureaucracies are extraordinarily resistant to learning. Even after the debacles of 9/11 and the Iraq War, costing many lives and trillions of dollars, and even after the 2008 Crash, the security and financial bureaucracies in America and Europe are essentially the same and operate on the same principles.
Buffett’s success is partly due to his discipline in sticking within what he and Munger call their ‘circle of competence’. Within this circle they have proved the wisdom of avoiding trying to persuade people to change their minds and avoiding trying to fix broken institutions.
This option is not available in politics. The Enlightenment and the scientific revolution give us no choice but to try to persuade people and try to fix or replace broken institutions. In general ‘it is better to undertake revolution than undergo it’. How might we go about it? What can people who do not have any significant power inside the system do? What international projects are most likely to spark the sort of big changes in attitude we urgently need?
This is the first of a series. I will keep it separate from the series on the EU referendum though it is connected in the sense that I spent a year on the referendum in the belief that winning it was a necessary though not sufficient condition for Britain to play a part in improving the quality of government dramatically and improving the probability of avoiding the disasters that will happen if politics follows a normal path. I intended to implement some of these ideas in Downing Street if the Boris-Gove team had not blown up. The more I study this issue the more confident I am that dramatic improvements are possible and the more pessimistic I am that they will happen soon enough.
Please leave comments and corrections…
* A new transatlantic cable recently opened for financial trading. Its cost? £300 million. Its advantage? It shaves 2.6 milliseconds off the latency of financial trades. Innovative groups are discussing the application of military laser technology, unmanned drones circling the earth acting as routers, and even the use of neutrino communication (because neutrinos can go straight through the earth just as zillions pass through your body every second without colliding with its atoms) – cf. this recent survey in Nature.
Interesting (if terrifying) piece. Thanks for posting.
Have you read Jim Rickards’s (General Counsel of Long Term Capital Management when it was bailed out in 1998) new book (https://www.amazon.co.uk/dp/B01GGZPKPM/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1)?
– do you agree with his view (based on similar complexity theory to that you cite above) that a global collapse of fiat money is coming, followed by imposition of “world money” (IMF Special Drawing Rights)?
– do you think the events predicted by Rickards are more likely to lead to i. the disaster or ii. the change you describe above?
Beautifully clear in thought and delivery!
Will 2017 be the year of paradigm shifts….?
Unfortunately I think the chances of our anachronistic institutions embracing any form of change without a prior disaster are close to zero.
I’m trying really hard, against all evidence, not to be pessimistic and all the more so because of the intelligence, energy and positivity you exude. Those same qualities won you well-deserved plaudits for your organisation of the Vote Leave campaign, a remarkable achievement. Put simply, the proportion of clever people in the world has never been great, .01% maybe. The proportion of these in politics, pretty much zero. Scientists don’t gravitate to politics nor do (genuine) entrepreneurs. John Gray makes the excellent observation (if I may be permitted a license to translate crudely) that, much in contrast to the general belief, the ever-improving level of understanding and enlightenment in matters of science (progress) is not mirrored in societies. In contrast, societies can evaporate surprisingly rapidly and, in the process, everything that had hitherto been deemed “progress” disappears with little or no trace. Yes it is a depressing thought, nonetheless demonstrably true. Whilst I, almost without reservation, agree with your premises I can only see them coming into being as a result of cataclysmic or revolutionary events. Given that we, at least in the West, appear to be stuck with democracy with all the mechanisms designed to protect and defend it against precisely such interference/improvement – how do you see any of your excellent ideas coming into being?
You underestimate the cost of an ‘everyone dies’ disaster by several orders of magnitude: if everyone dies, there are no future generations of people who could have lived.
One Fermi (under)estimate: Earth will remain life-sustaining on the order of another 100 million (10^8) years. Assuming a population of around 10 billion and a lifespan of ~100 years, that leaves at least another 10^16 (8+10-2) future lives lost if humanity goes extinct now.
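The commenter’s arithmetic, spelled out (all three inputs are the comment’s own round numbers):

```python
import math

# Fermi estimate of future lives foregone if humanity goes extinct now.
years_remaining = 1e8   # Earth habitable ~100 million more years
population = 1e10       # ~10 billion people alive at any time
lifespan = 1e2          # ~100-year lifespan

future_lives = years_remaining * population / lifespan
print(f"~10^{round(math.log10(future_lives))} future lives")  # 8 + 10 - 2 = 16
```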
Re: the state of DIY genetics: It seems that currently DNA synthesis and assembly is a primary barrier / expense in doing DIY gene therapy on humans, according to https://www.technologyreview.com/s/603217/one-mans-quest-to-hack-his-own-genes/ . One sampled DIY Biology mailing list confirmed the impression that low-budget DIY DNA synthesis is not yet solved ( https://groups.google.com/forum/#!topic/diybio/FXaT-MjJQYI ); unfortunately I don’t have the bio background to know how close they are to getting anything working in practice. They’ve definitely CRISPR’d bacteria in their kitchens, though.
Regarding highish IQ:
I understand the argument for more high IQ people at the highest echelons of government. But for more mid-tier positions, isn’t the fact that people are, on average, IQ 120+ a good thing? That’s pretty high for an occupational position. How much higher should it be? And, perhaps more importantly, how much higher can it be? I’m also curious about the tradeoff involved here. Is it really worthwhile having a parliamentary undersecretary with an IQ of 140? Wouldn’t this person be better off working as a researcher or as an entrepreneur?
Regarding genetic engineering:
I imagine you will talk about this in more depth at some point in this series. However, the way I see it, genetic modification of intelligence will create a massive host of social problems. There will almost certainly be a split between enhanced and unenhanced persons. Even if accessibility problems are addressed (doubtful), people who are born right now (i.e. in 2016) will likely be in direct competition with the people born in 2030 who have been enhanced. Assuming we can produce significant enhancement of cognitive ability, this will almost certainly exacerbate the difference between haves and have-nots in our society (a problem that is already worsening because the complexity of modern technology inherently disadvantages those at the lower end of the spectrum, who lack the skills required to leverage these technologies productively).
Even if the obvious, quantifiable economic effects are not as severe as I think they will be, it seems to me that genetic engineering will challenge our notions of humanity and the way we think about human beings. It is difficult not to think of fetuses as objects once we have control over their characteristics. Is that the way we want to view human life?
People might respond that we already try to modify traits – through education, for example. But the difference in degree we are talking about here is so extreme that I would argue we should really be talking about a difference in kind. At present, we can make modest changes in a person’s nature via environmental influences. We can encourage people to be kinder, to be better listeners, to avoid lying or misrepresenting themselves, etc. However, these aims are always relatively circumscribed. We don’t suppose we’ll change everything about the target of such interventions. In other words, we respect that there are some relatively immutable aspects of that person’s character that will persist over time. With genetic engineering, this will almost certainly change. If we can design large parts of a person’s nature, it becomes extraordinarily tempting to think of human beings as merely a collection of traits, much in the way we think about customizable cars. I fear that change will be undesirable, that it will change our belief in the intrinsic worth of the individual.
I should say that I don’t think these changes in mindset are logical consequences of adopting genetic engineering for mass use. Instead, I merely think the same principle that motivates genetic engineering lends itself too readily to this conception of the individual, and that this change in mindset is more likely than not.
Very interesting. I also read your 250 page document, which I found interesting and led me to several new sources. There is a lot to discuss from that, but, in regards this blog post, you should talk more about the attributes of HPT – the unrecognized simplicity you talk of. Ok, maybe these don’t scale and can’t even be replicated. But what makes a HPT? You don’t really talk about this in your longer work either, so I’d be eager to hear your analysis of this.
I will, don’t worry!
Having worked at the highest levels of the US federal bureaucracy, I have some strong thoughts on this, and I eagerly await your own.
The concept of “unrecognised simplicities of effective action” is very interesting. You have attributed it to Charlie Munger and Warren Buffet. Can you provide a good reference (e.g. a book) that goes into the concept in more detail?
Pingback: Unrecognised simplicities of effective action #2: ‘Systems’ thinking — ideas from the Apollo programme for a ‘systems politics’ – Dominic Cummings's Blog
Pingback: The unrecognised simplicities of effective action #3: lessons on ‘capturing the heavens’ from the ARPA/PARC project that created the internet & PC – Dominic Cummings's Blog
What we also need to learn, at least from PARC, is the importance of what in medicine is called “translational research”: actually bringing ideas into production. This is much easier now than it was in the 1970s, thanks to standard operating systems and free software, yet Alan Kay and friends made exactly the same mistakes in 2008-2014 or so at VPRI as they made at PARC 35 years earlier: brilliant research and prototypes (funded largely by tax dollars) with only slim written reports to show for much of their work (Ian Piumarta and others have been better at releasing code, but it’s all woefully out of date and under-documented).
If anyone’s to blame for us having the bowdlerised, commercially-twisted world of Apple and Microsoft that we now inhabit, rather than the vision of computing power focused in users’ hands that Doug Engelbart offered 50 years ago (or Stephen Fry’s brilliant alternative in “Making History”), it’s Alan Kay.
And the saddest thing is that the gap between here and there, at least this time around, is so tiny: such is his influence that simply “tossing the code over the wall” would probably have been enough, because there’s an army of hackers just waiting to lap it up. As it is, some interesting work has made it into the wild and been influential, and we may have shaved 10 years off the next cycle compared with last time; but it’s far far short of what it could have been.
Given the involvement of public money, this is shameful and desperately sad.
Pingback: #29 On the referendum & #4c on Expertise: On the ARPA/PARC ‘Dream Machine’, science funding, high performance, and UK national strategy – Dominic Cummings's Blog