‘Two hands are a lot’ — we’re hiring data scientists, project managers, policy experts, assorted weirdos…

‘This is possibly the single largest design flaw contributing to the bad Nash equilibrium in which … many governments are stuck. Every individual high-functioning competent person knows they can’t make much difference by being one more face in that crowd.’ Eliezer Yudkowsky, AI expert, LessWrong etc.

‘[M]uch of our intellectual elite who think they have “the solutions” have actually cut themselves off from understanding the basis for much of the most important human progress.’ Michael Nielsen, physicist and one of the handful of most interesting people I’ve ever talked to.

‘People, ideas, machines — in that order.’ Colonel Boyd.

‘There isn’t one novel thought in all of how Berkshire [Hathaway] is run. It’s all about … exploiting unrecognized simplicities.’ Charlie Munger, Warren Buffett’s partner.

‘Two hands, it isn’t much considering how the world is infinite. Yet, all the same, two hands, they are a lot.’ Alexander Grothendieck, one of the great mathematicians.

*

There are many brilliant people in the civil service and politics. Over the past five months the No10 political team has been lucky to work with some fantastic officials. But there are also some profound problems at the core of how the British state makes decisions. This was seen by pundit-world as a very eccentric view in 2014. It is no longer seen as eccentric. Dealing with these deep problems is supported by many great officials, particularly younger ones, though of course there will naturally be many fears — some reasonable, most unreasonable.

Now there is a confluence of: a) Brexit requires many large changes in policy and in the structure of decision-making, b) some people in government are prepared to take risks to change things a lot, and c) a new government with a significant majority and little need to worry about short-term unpopularity while trying to make rapid progress with long-term problems.

There is a huge amount of low hanging fruit — trillion dollar bills lying on the street — in the intersection of:

  • the selection, education and training of people for high performance
  • the frontiers of the science of prediction
  • data science, AI and cognitive technologies (e.g. Seeing Rooms, ‘authoring tools designed for arguing from evidence’, Tetlock/IARPA prediction tournaments that could easily be extended to consider ‘clusters’ of issues around themes like Brexit to improve policy and project management)
  • communication (e.g. Cialdini)
  • decision-making institutions at the apex of government.

We want to hire an unusual set of people with different skills and backgrounds to work in Downing Street with the best officials, some as spads and perhaps some as officials. If you are already an official and you read this blog and think you fit one of these categories, get in touch.

The categories are roughly:

  • Data scientists and software developers
  • Economists
  • Policy experts
  • Project managers
  • Communication experts
  • Junior researchers, one of whom will also be my personal assistant
  • Weirdos and misfits with odd skills

We want to improve performance and make me much less important — and within a year largely redundant. At the moment I have to make decisions well outside what Charlie Munger calls my ‘circle of competence’ and we do not have the sort of expertise supporting the PM and ministers that is needed. This must change fast so we can properly serve the public.

A. Unusual mathematicians, physicists, computer scientists, data scientists

You must have exceptional academic qualifications from one of the world’s best universities or have done something that demonstrates equivalent (or greater) talents and skills. You do not need a PhD — as Alan Kay said, we are also interested in graduate students as ‘world-class researchers who don’t have PhDs yet’.

You should have the following:

  • PhD or MSc in maths or physics.
  • Outstanding mathematical skills are essential.
  • Experience of using analytical languages: e.g. Python, SQL, R.
  • Familiarity with data tools and technologies such as Postgres, Scikit Learn, NEO4J.
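As a hedged illustration of the baseline fluency expected with these tools, here is a toy sketch using Python’s built-in sqlite3 module as a stand-in for Postgres (the table and the Brier-score figures are invented for the example, chosen to echo the prediction-tournament theme above):

```python
import sqlite3

# In-memory database standing in for a Postgres instance.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE forecasts (forecaster TEXT, brier REAL)")
conn.executemany(
    "INSERT INTO forecasts VALUES (?, ?)",
    [("alice", 0.12), ("alice", 0.18), ("bob", 0.31), ("bob", 0.25)],
)

# Mean Brier score per forecaster, best first (lower is better).
rows = conn.execute(
    "SELECT forecaster, AVG(brier) FROM forecasts "
    "GROUP BY forecaster ORDER BY AVG(brier)"
).fetchall()
```

The same aggregate query would run essentially unchanged against a real Postgres database via a driver such as psycopg2.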

A few examples of papers that you will be considering:

You should be able to explain to other mathematicians, physicists and computer scientists the ideas in such papers, discuss what could be useful for our projects, synthesise ideas for other data scientists, and apply them to practical problems. You won’t be expert on the maths used in all these papers but you should be confident that you could study it and understand it.

We will be using machine learning and associated tools so it is important you can program. You do not need software development levels of programming but it would be an advantage.

Those applying must watch Bret Victor’s talks and study Dynamicland. If this excites you, then apply; if not, then don’t. I and others interviewing will discuss this with anybody who comes for an interview. If you want a sense of the sort of things you’d be working on, then read my previous blog on Seeing Rooms, cognitive technologies etc.

B. Unusual software developers

We are looking for great software developers who would love to work on these ideas, build tools and work with some great people. You should also look at some of Victor’s technical talks on programming languages and the history of computing.

You will be working with data scientists, designers and others.

C. Unusual economists

We are looking to hire some recent graduates in economics. You should a) have an outstanding record at a great university, b) understand conventional economic theories, c) be interested in arguments on the edge of the field — for example, work by physicists on ‘agent-based models’ or by the hedge fund Bridgewater on the failures/limitations of conventional macro theories/prediction, and d) have very strong maths and be interested in working with mathematicians, physicists, and computer scientists.

The ideal candidate might, for example, have a degree in maths and economics, worked at the LHC in one summer, worked with a quant fund another summer, and written software for a YC startup in a third summer!

We’ve found one of these but want at least one more.

The sort of conversation you might have is discussing these two papers in Science (2015): Computational rationality: A converging paradigm for intelligence in brains, minds, and machines, Gershman et al and Economic reasoning and artificial intelligence, Parkes & Wellman

You will see in these papers an intersection of:

  • von Neumann’s foundation of game theory and ‘expected utility’,
  • mainstream economic theories,
  • modern theories about auctions,
  • theoretical computer science (including problems like the complexity of probabilistic inference in Bayesian networks, which is in the NP-hard complexity class),
  • ideas on ‘computational rationality’ and meta-reasoning from AI, cognitive science and so on.
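On the complexity point: exact inference by brute-force enumeration is exponential in the number of variables, which is why the general problem is NP-hard, yet on a toy network it takes only a few lines. A sketch using the standard rain/sprinkler/wet-grass textbook example (the probability values are the conventional textbook ones, not anything from this post):

```python
from itertools import product

# Classic three-node network: rain -> sprinkler, (rain, sprinkler) -> wet grass.
p_rain = {True: 0.2, False: 0.8}
p_sprinkler = {True: {True: 0.01, False: 0.99},   # given rain
               False: {True: 0.4, False: 0.6}}    # given no rain
p_wet = {(True, True): 0.99, (True, False): 0.9,  # keyed (sprinkler, rain)
         (False, True): 0.8, (False, False): 0.0}

def joint(rain, sprinkler, wet):
    p = p_rain[rain] * p_sprinkler[rain][sprinkler]
    p_w = p_wet[(sprinkler, rain)]
    return p * (p_w if wet else 1 - p_w)

# P(rain | grass wet) by summing the joint over the hidden variable.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
posterior = num / den  # rain explains roughly 36% of wet grass
```

Enumeration like this scales as 2^n in the number of variables, which is exactly why the general case is NP-hard and approximate methods dominate in practice.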

If this sort of thing is interesting, then you will find this project interesting.

It’s a bonus if you can code but it isn’t necessary.

D. Great project managers.

If you think you are one of a small group of people in the world who are truly GREAT at project management, then we want to talk to you. Victoria Woodcock ran Vote Leave — she was a truly awesome project manager and without her Cameron would certainly have won. We need people like this who have a 1 in 10,000 or higher level of skill and temperament.

The Oxford Handbook on Megaprojects points out that it is possible to quantify lessons from the failures of projects like high speed rail projects because almost all fail so there is a large enough sample to make statistical comparisons, whereas there can be no statistical analysis of successes because they are so rare.

It is extremely interesting that the lessons of Manhattan (1940s), ICBMs (1950s) and Apollo (1960s) remain absolutely cutting edge because it is so hard to apply them and almost nobody has managed to do it. The Pentagon systematically de-programmed itself from more effective approaches to less effective approaches from the mid-1960s, in the name of ‘efficiency’. Is this just another way of saying that people like General Groves and George Mueller are rarer than Fields Medallists?

Anyway — it is obvious that improving government requires vast improvements in project management. The first project will be improving the people and skills already here.

If you want an example of the sort of people we need to find in Britain, look at this on CC Myers — the legendary builders. SPEED. We urgently need people with this sort of skill and attitude. (If you think you are such a company and you could dual carriageway the A1 north of Newcastle in record time, then get in touch!)

E. Junior researchers

In many aspects of government, as in the tech world and investing, brains and temperament smash experience and seniority out of the park.

We want to hire some VERY clever young people either straight out of university or recently out, with extreme curiosity and capacity for hard work.

One of you will be a sort of personal assistant to me for a year — this will involve a mix of very interesting work and lots of uninteresting trivia that makes my life easier which you won’t enjoy. You will not have weekday date nights, you will sacrifice many weekends — frankly it will be hard having a boy/girlfriend at all. It will be exhausting but interesting and if you can cut it you will be involved in things at the age of ~21 that most people never see.

I don’t want confident public school bluffers. I want people who are much brighter than me who can work in an extreme environment. If you play office politics, you will be discovered and immediately binned.

F. Communications

In SW1 communication is generally treated as almost synonymous with ‘talking to the lobby’. This is partly why so much punditry is ‘narrative from noise’.

With no election for years and huge changes in the digital world, there is a chance and a need to do things very differently.

We’re particularly interested in deep experts on TV and digital. We are also interested in people who have worked in movies or on advertising campaigns. There are some very interesting possibilities in the intersection of technology and storytelling — if you’ve done something weird, this may be the place for you.

I noticed in the recent campaign that the world of digital advertising has changed very fast since I was last involved in 2016. This is partly why so many journalists wrongly looked at things like Corbyn’s Facebook stats and thought Labour was doing better than us — the ecosystem evolves rapidly while political journalists are still behind the 2016 tech, which is why so many fell for Carole’s conspiracy theories. The digital people involved in the last campaign really knew what they were doing, which is incredibly rare in this world of charlatans and clients who don’t know what they should be buying. If you are interested in being right at the very edge of this field, join.

We have some extremely able people but we also must upgrade skills across the spad network.

G. Policy experts

One of the problems with the civil service is the way in which people are shuffled such that they either do not acquire expertise or they are moved out of areas they really know to do something else. One Friday, X is in charge of special needs education, the next week X is in charge of budgets.

There are, of course, general skills. Managing a large organisation involves some general skills. Whether it is Coca Cola or Apple, some things are very similar — how to deal with people, how to build great teams and so on. Experience is often over-rated. When Warren Buffett needed someone to turn around his insurance business he did not hire someone with experience in insurance: ‘When Ajit entered Berkshire’s office on a Saturday in 1986, he did not have a day’s experience in the insurance business’ (Buffett).

Shuffling some people who are expected to be general managers is a natural thing but it is clear Whitehall does this too much while also not training general management skills properly. There are not enough people with deep expertise in specific fields.

If you want to work in the policy unit or a department and you really know your subject so that you could confidently argue about it with world-class experts, get in touch.

It’s also the case that wherever you are most of the best people are inevitably somewhere else. This means that governments must be much better at tapping distributed expertise. Of the top 20 people in the world who best understand the science of climate change and could advise us what to do with COP 2020, how many now work as a civil servant/spad or will become one in the next 5 years?

H. Super-talented weirdos

People in SW1 talk a lot about ‘diversity’ but they rarely mean ‘true cognitive diversity’. They are usually babbling about ‘gender identity diversity blah blah’. What SW1 needs is not more drivel about ‘identity’ and ‘diversity’ from Oxbridge humanities graduates but more genuine cognitive diversity.

We need some true wild cards, artists, people who never went to university and fought their way out of an appalling hell hole, weirdos from William Gibson novels like that girl hired by Bigend as a brand ‘diviner’ who feels sick at the sight of Tommy Hilfiger or that Chinese-Cuban free runner from a crime family hired by the KGB. If you want to figure out what characters around Putin might do, or how international criminal gangs might exploit holes in our border security, you don’t want more Oxbridge English graduates who chat about Lacan at dinner parties with TV producers and spread fake news about fake news.

By definition I don’t really know what I’m looking for but I want people around No10 to be on the lookout for such people.

We need to figure out how to use such people better without asking them to conform to the horrors of ‘Human Resources’ (which also obviously need a bonfire).

*

Send a max 1 page letter plus CV to ideasfornumber10@gmail.com and put in the subject line ‘job/’ and add after the / one of: data, developer, econ, comms, projects, research, policy, misfit.

I’ll have to spend time helping you so don’t apply unless you can commit to at least 2 years.

I’ll bin you within weeks if you don’t fit — don’t complain later because I made it clear now. 

I will try to answer as many as possible but last time I publicly asked for job applications in 2015 I was swamped and could not, so I can’t promise an answer. If you think I’ve insanely ignored you, persist for a while.

I will use this blog to throw out ideas. It’s important when dealing with large organisations to dart around at different levels, not be stuck with formal hierarchies. It will seem chaotic and ‘not proper No10 process’ to some. But the point of this government is to do things differently and better and this always looks messy. We do not care about trying to ‘control the narrative’ and all that New Labour junk and this government will not be run by ‘comms grid’.

As Paul Graham and Peter Thiel say, most ideas that seem bad are bad but great ideas also seem at first like bad ideas — otherwise someone would have already done them. Incentives and culture push people in normal government systems away from encouraging ‘ideas that seem bad’. Part of the point of a small, odd No10 team is to find and exploit, without worrying about media noise, what Andy Grove called ‘very high leverage ideas’ and these will almost inevitably seem bad to most.

I will post some random things over the next few weeks and see what bounces back — it is all upside, there’s no downside if you don’t mind a bit of noise and it’s a fast cheap way to find good ideas…

Unrecognised simplicities of effective action #1: expertise and a quadrillion dollar business

‘The combination of physics and politics could render the surface of the earth uninhabitable.’ John von Neumann.

Introduction

This series of blogs considers:

  • the difference between fields with genuine expertise, such as fighting and physics, and fields dominated by bogus expertise, such as politics and economic forecasting;
  • the big big problem we face – the world is ‘undersized and underorganised’ because of a collision between four forces: 1) our technological civilisation is inherently fragile and vulnerable to shocks, 2) the knowledge it generates is inherently dangerous, 3) our evolved instincts predispose us to aggression and misunderstanding, and 4) there is a profound mismatch between the scale and speed of destruction our knowledge can cause and the quality of individual and institutional decision-making in ‘mission critical’ institutions – our institutions are similar to those that failed so spectacularly in summer 1914 yet they face crises moving at least ~10^3 times faster and involving ~10^6 times more destructive power able to kill ~10^10 people;
  • what classic texts and case studies suggest about the unrecognised simplicities of effective action to improve the selection, education, training, and management of vital decision-makers to improve dramatically, reliably, and quantifiably the quality of individual and institutional decisions (particularly a) the ability to make accurate predictions and b) the quality of feedback);
  • how we can change incentives to aim a much bigger fraction of the most able people at the most important problems;
  • what tools and technologies can help decision-makers cope with complexity.

[I’ve tweaked a couple of things in response to this blog by physicist Steve Hsu.]

*

Summary of the big big problem

The investor Peter Thiel (founder of PayPal and Palantir, early investor in Facebook) asks people in job interviews: what billion (10^9) dollar business is nobody building? The most successful investor in world history, Warren Buffett, illustrated what a quadrillion (10^15) dollar business might look like in his 50th anniversary letter to Berkshire Hathaway investors.

‘There is, however, one clear, present and enduring danger to Berkshire against which Charlie and I are powerless. That threat to Berkshire is also the major threat our citizenry faces: a “successful” … cyber, biological, nuclear or chemical attack on the United States… The probability of such mass destruction in any given year is likely very small… Nevertheless, what’s a small probability in a short period approaches certainty in the longer run. (If there is only one chance in thirty of an event occurring in a given year, the likelihood of it occurring at least once in a century is 96.6%.) The added bad news is that there will forever be people and organizations and perhaps even nations that would like to inflict maximum damage on our country. Their means of doing so have increased exponentially during my lifetime. “Innovation” has its dark side.

‘There is no way for American corporations or their investors to shed this risk. If an event occurs in the U.S. that leads to mass devastation, the value of all equity investments will almost certainly be decimated.

‘No one knows what “the day after” will look like. I think, however, that Einstein’s 1949 appraisal remains apt: “I know not with what weapons World War III will be fought, but World War IV will be fought with sticks and stones.”’
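Buffett’s 96.6% figure is just the complement of a survival probability and is easy to check: if the annual chance is 1 in 30, the chance of at least one occurrence in a century is 1 minus the chance of a hundred consecutive misses.

```python
# Chance of at least one "hit" in 100 years at 1-in-30 per year.
annual = 1 / 30
p_century = 1 - (1 - annual) ** 100
# This comes out at ~0.966, matching Buffett's 96.6%.
```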

Politics is profoundly nonlinear. (I have written a series of blogs about complexity and prediction HERE which are useful background for those interested.) Changing the course of European history via the referendum only involved about 10 crucial people controlling ~£10^7 while its effects over ten years could be on the scale of ~10^8–10^9 people and ~£10^12: like many episodes in history the resources put into it are extremely nonlinear in relation to the potential branching histories it creates. Errors dealing with Germany in 1914 and 1939 were costly on the scale of ~100,000,000 (10^8) lives. If we carry on with normal human history – that is, international relations defined as out-groups competing violently – and combine this with modern technology then it is extremely likely that we will have a disaster on the scale of billions (10^9) or even all humans (~10^10). The ultimate disaster would kill about 100 times more people than our failure with Germany. Our destructive power is already much more than 100 times greater than it was then: nuclear weapons increased destructiveness by roughly a factor of a million.

Even if we dodge this particular bullet there are many others lurking. New genetic engineering techniques such as CRISPR allow radical possibilities for re-engineering organisms including humans in ways thought of as science fiction only a decade ago. We will soon be able to remake human nature itself. CRISPR-enabled ‘gene drives’ enable us to make changes to the germ-line of organisms permanent such that changes spread through the entire wild population, including making species extinct on demand. Unlike nuclear weapons such technologies are not complex, expensive, and able to be kept secret for a long time. The world’s leading experts predict that people will be making them cheaply at home soon – perhaps they already are. These developments have been driven by exponential progress much faster than Moore’s Law reducing the cost of DNA sequencing per genome from ~$10^8 to ~$10^3 in roughly 15 years.
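The ‘much faster than Moore’s Law’ claim is a straightforward factor comparison. A rough check, taking the widely cited fall in sequencing cost from ~$10^8 to ~$10^3 per genome over ~15 years and treating Moore’s Law as a doubling every two years (both round numbers, assumed for the arithmetic):

```python
# Sequencing cost fell by five orders of magnitude in ~15 years.
sequencing_factor = 1e8 / 1e3             # 100,000x cheaper
# Moore's Law over the same 15 years: a doubling every two years.
moore_factor = 2 ** (15 / 2)              # ~181x
ratio = sequencing_factor / moore_factor  # sequencing improved ~550x faster
```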

[Figure: cost per genome of DNA sequencing over time, falling far faster than Moore’s Law]

It is already practically possible to deploy a cheap, autonomous, and anonymous drone with facial-recognition software and a one gram shaped-charge to identify a relevant face and blow it up. Military logic is driving autonomy. For example, 1) the explosion in the volume of drone surveillance video (from 71 hours in 2004 to 300,000 hours in 2011 to millions of hours now) requires automated analysis, and 2) jamming and spoofing of drones strongly incentivise a push for autonomy. It is unlikely that promises to ‘keep humans in the loop’ will be kept. It is likely that state and non-state actors will deploy low-cost drone swarms using machine learning to automate the ‘find-fix-finish’ cycle now controlled by humans. (See HERE for a video just released for one such program and imagine the capability when they carry their own communication and logistics network with them.)

In the medium-term, many billions are being spent on finding the secrets of general intelligence. We know this secret is encoded somewhere in the roughly 125 million ‘bits’ of information that is the rough difference between the genome that produces the human brain and the genome that produces the chimp brain. This search space is remarkably small – the equivalent of just 25 million English words or 30 copies of the King James Bible. There is no fundamental barrier to decoding this information and it is possible that the ultimate secret could be described relatively simply (cf. this great essay by physicist Michael Nielsen). One of the world’s leading experts has told me they think a large proportion of this problem could be solved in about a decade with a few tens of billions and something like an Apollo programme level of determination.
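The ‘30 copies of the King James Bible’ equivalence works out if you assume roughly 5 bits of information per English word (in the spirit of Shannon’s ~1 bit per character entropy estimate) and ~790,000 words in the KJV; both of those figures are assumptions made here for the check, not from the post:

```python
genome_gap_bits = 125_000_000  # rough human-vs-chimp difference cited above
bits_per_word = 5              # assumed: ~1 bit/char Shannon entropy, ~5-char words
kjv_words = 790_000            # approximate King James Bible word count

words = genome_gap_bits / bits_per_word  # 25 million words
bibles = words / kjv_words               # roughly 30 copies
```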

Not only is our destructive and disruptive power still getting bigger quickly – it is also getting cheaper and faster every year. The change in speed adds another dimension to the problem. In the period between the Archduke’s murder and the outbreak of World War I a month later it is striking how general failures of individuals and institutions were compounded by the way in which events moved much faster than the ‘mission critical’ institutions could cope with such that soon everyone was behind the pace, telegrams were read in the wrong order and so on. The crisis leading to World War I was about 30 days from the assassination to the start of general war – about 700 hours. The timescale for deciding what to do between receiving a warning of nuclear missile launch and deciding to launch yourself is less than half an hour and the President’s decision time is less than this, maybe just minutes. This is a speedup factor of at least 10^3.

Economic crises already occur far faster than human brains can cope with. The financial system has made a transition from people shouting at each other to a system dominated by high frequency ‘algorithmic trading’ (HFT), i.e. machine intelligence applied to robot trading with vast volumes traded on a global spatial scale and a microsecond (10^-6) temporal scale far beyond the monitoring, understanding, or control of regulators and politicians. There is even competition for computer trading bases in specific locations based on calculations of Special Relativity as the speed of light becomes a factor in minimising trade delays (cf. Relativistic statistical arbitrage, Wissner-Gross). ‘The Flash Crash’ of 6 May 2010 saw the Dow lose hundreds of points in minutes. Mini ‘flash crashes’ now blow up and die out faster than humans can notice. Given our institutions cannot cope with economic decisions made at ‘human speed’, a fortiori they cannot cope with decisions made at ‘robot speed’. There is scope for worse disasters than 2008 which would further damage the moral credibility of decentralised markets and provide huge chances for extremist political entrepreneurs to exploit. (* See endnote.)
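The ~10^3 speedup, and the further jump to machine speed, can be made concrete from the timescales in the last two paragraphs (700 hours for the 1914 crisis, under half an hour for a nuclear launch decision, microseconds for algorithmic trading):

```python
import math

crisis_1914_s = 700 * 3600  # ~30 days from assassination to general war
nuclear_s = 0.5 * 3600      # under half an hour to decide on launch
hft_s = 1e-6                # microsecond scale of algorithmic trading

speedup_1914_to_nuclear = crisis_1914_s / nuclear_s  # 1400, i.e. ~10^3
orders_of_magnitude = math.log10(speedup_1914_to_nuclear)  # ~3.1
speedup_nuclear_to_hft = nuclear_s / hft_s  # ~1.8e9: another ~9 orders
```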

What about the individuals and institutions that are supposed to cope with all this?

Our brains have not evolved much in thousands of years and are subject to all sorts of constraints including evolved heuristics that lead to misunderstanding, delusion, and violence particularly under pressure. There is a terrible mismatch between the sort of people that routinely dominate mission critical political institutions and the sort of people we need: high-ish IQ (we need more people >145 (+3SD) while almost everybody important is between 115-130 (+1 or 2SD)), a robust toolkit for not fooling yourself including quantitative problem-solving (almost totally absent at the apex of relevant institutions), determination, management skills, relevant experience, and ethics. While our ancestor chiefs at least had some intuitive feel for important variables like agriculture and cavalry our contemporary chiefs (and those in the media responsible for scrutiny of decisions) generally do not understand their equivalents, and are often less experienced in managing complex organisations than their predecessors.

The national institutions we have to deal with such crises are pretty similar to those that failed so spectacularly in summer 1914 yet they face crises moving at least ~10^3 times faster and involving ~10^6 times more destructive power able to kill ~10^10 people. The international institutions developed post-1945 (UN, EU etc) contribute little to solving the biggest problems and in many ways make them worse. These institutions fail constantly and do not – cannot – learn much.

If we keep having crises like we have experienced over the past century then this combination of problems pushes the probability of catastrophe towards ‘overwhelmingly likely’.

*

What Is To Be Done? There’s plenty of room at the top

‘In a knowledge-rich world, progress does not lie in the direction of reading information faster, writing it faster, and storing more of it. Progress lies in the direction of extracting and exploiting the patterns of the world… And that progress will depend on … our ability to devise better and more powerful thinking programs for man and machine.’ Herbert Simon, Designing Organizations for an Information-rich World, 1969.

‘Fascinating that the same problems recur time after time, in almost every program, and that the management of the program, whether it happened to be government or industry, continues to avoid reality.’ George Mueller, pioneer of ‘systems engineering’ and ‘systems management’ and the man most responsible for the success of the 1969 moon landing.

Somehow the world has to make a series of extremely traumatic and dangerous transitions over the next 20 years. The main transition needed is:

Embed reliably the unrecognised simplicities of high performance teams (HPTs), including personnel selection and training, in ‘mission critical’ institutions while simultaneously developing a focused project that radically improves the prospects for international cooperation and new forms of political organisation beyond competing nation states.

Big progress on this problem would automatically and for free bring big progress on other big problems. It could improve (even save) billions of lives and save a quadrillion dollars (~$10^15). If we avoid disasters then the error-correcting institutions of markets and science will, patchily, spread peace, prosperity, and learning. We will make big improvements with public services and other aspects of ‘normal’ government. We will have a healthier political culture in which representative institutions, markets serving the public (not looters), and international cooperation are stronger.

Can a big jump in performance – ‘better and more powerful thinking programs for man and machine’ – somehow be systematised?

Feynman once gave a talk titled ‘There’s plenty of room at the bottom’ about the huge performance improvements possible if we could learn to do engineering at the atomic scale – what is now called nanotechnology. There is also ‘plenty of room at the top’ of political structures for huge improvements in performance. As I explained recently, the victory of the Leave campaign owed more to the fundamental dysfunction of the British Establishment than it did to any brilliance from Vote Leave. Despite having the support of practically every force with power and money in the world (including the main broadcasters) and controlling the timing and legal regulation of the referendum, they blew it. This was good if you support Leave but just how easily the whole system could be taken down should be frightening for everybody.

Creating high performance teams is obviously hard but in what ways is it really hard? It is not hard in the same sense that some things are hard like discovering profound new mathematical knowledge. HPTs do not require profound new knowledge. We have been able to read the basic lessons in classics for over two thousand years. We can see relevant examples all around us of individuals and teams showing huge gains in effectiveness.

The real obstacle is not financial. The financial resources needed are remarkably low and the return on small investments could be incalculably vast. We could significantly improve the decisions of the most powerful 100 people in the UK or the world for less than a million dollars (~£10^6) and a decade-long project on a scale of just ~£10^7 could have dramatic effects.

The real obstacle is not a huge task of public persuasion – quite the opposite. A government that tried in a disciplined way to do this would attract huge public support. (I’ve polled some ideas and am confident about this.) Political parties are locked in a game that in trying to win in conventional ways leads to the public despising them. Ironically if a party (established or new) forgets this game and makes the public the target of extreme intelligent focus then it would not only make the world better but would trounce their opponents.

The real obstacle is not a need for breakthrough technologies though technology could help. As Colonel Boyd used to shout, ‘People, ideas, machines – in that order!’

The real obstacle is that although we can all learn and study HPTs it is extremely hard to put this learning to practical use and sustain it against all the forces of entropy that constantly operate to degrade high performance once the original people have gone. HPTs are episodic. They seem to come out of nowhere, shock people, then vanish with the rare individuals. People write about them and many talk about learning from them but in fact almost nobody ever learns from them – apart, perhaps, from those very rare people who did not need to learn – and nobody has found a method to embed this learning reliably and systematically in institutions that can maintain it. The Prussian General Staff remained operationally brilliant but in other ways went badly wrong after the death of the elder Moltke. When George Mueller left NASA it reverted to what it had been before he arrived – management chaos. All the best companies quickly go downhill after the departure of people like Bill Gates – even when such very able people have tried very very hard to avoid exactly this problem.

Charlie Munger, half of the most successful investment team in world history, has a great phrase he uses to explain their success that gets to the heart of this problem:

‘There isn’t one novel thought in all of how Berkshire [Hathaway] is run. It’s all about … exploiting unrecognized simplicities… It’s a community of like-minded people, and that makes most decisions into no-brainers. Warren [Buffett] and I aren’t prodigies. We can’t play chess blindfolded or be concert pianists. But the results are prodigious, because we have a temperamental advantage that more than compensates for a lack of IQ points.’

The simplicities that bring high performance in general, not just in investing, are largely unrecognised because they conflict with many evolved instincts and are therefore psychologically very hard to implement. The principles of the Buffett–Munger success are clear – they have even gone to great pains to explain them and what the rest of us should do – and the results are clear. Yet still almost nobody really listens to them, and people of above-average intelligence instead constantly put their money into active fund management that is proven, year after year, to destroy wealth!

Most people think they are already implementing these lessons and usually strongly reject the idea that they are not. This means that just explaining things is very unlikely to work:

‘I’d say the history that Charlie [Munger] and I have had of persuading decent, intelligent people who we thought were doing unintelligent things to change their course of action has been poor.’ Buffett.

Even more worrying, it is extremely hard to take over organisations that are not run right and make them excellent.

‘We really don’t believe in buying into organisations to change them.’ Buffett.

If people won’t listen to the most successful investor in history on his own subject, and even he finds it too hard to take over failing businesses and turn them around, how likely is it that politicians and officials incentivised to keep things as they are will listen to ideas about how to do things better? How likely is it that a team can take over broken government institutions and make them dramatically better in a way that outlasts the people who do it? Bureaucracies are extraordinarily resistant to learning. Even after the debacles of 9/11 and the Iraq War, costing many lives and trillions of dollars, and even after the 2008 Crash, the security and financial bureaucracies in America and Europe are essentially the same and operate on the same principles.

Buffett’s success is partly due to his discipline in sticking within what he and Munger call their ‘circle of competence’. Within this circle they have proved the wisdom of avoiding trying to persuade people to change their minds and avoiding trying to fix broken institutions.

This option is not available in politics. The Enlightenment and the scientific revolution give us no choice but to try to persuade people and try to fix or replace broken institutions. In general ‘it is better to undertake revolution than undergo it’. How might we go about it? What can people who do not have any significant power inside the system do? What international projects are most likely to spark the sort of big changes in attitude we urgently need?

This is the first of a series. I will keep it separate from the series on the EU referendum though it is connected in the sense that I spent a year on the referendum in the belief that winning it was a necessary though not sufficient condition for Britain to play a part in improving the quality of government dramatically and improving the probability of avoiding the disasters that will happen if politics follows a normal path. I intended to implement some of these ideas in Downing Street if the Boris-Gove team had not blown up. The more I study this issue the more confident I am that dramatic improvements are possible and the more pessimistic I am that they will happen soon enough.

Please leave comments and corrections…

* A new transatlantic cable recently opened for financial trading. Its cost? £300 million. Its advantage? It shaves 2.6 milliseconds off the latency of financial trades. Innovative groups are discussing the application of military laser technology, unmanned drones circling the earth acting as routers, and even the use of neutrino communication (because neutrinos can go straight through the earth just as zillions pass through your body every second without colliding with its atoms) – cf. this recent survey in Nature.

On the Referendum #2: The Lisbon Treaty compared with the Articles of Confederation & US Constitution

In 2004, I invited Professor Richard Epstein (Chicago) to London to give a lecture on the EU Constitution, which became the Lisbon Treaty. That lecture became this essay, PDF HERE. Professor Epstein is one of the foremost legal minds in America and one of the great experts on the US Constitution.

His essay is a fascinating comparison of the EU Constitution with the original American Articles of Confederation (after the Declaration of Independence) and the American Constitution that replaced those Articles. He examined how power became ever more centralised in the US federal government despite theoretical protections. For example, the Commerce Clause in the US Constitution (not in the original Articles) that allowed the regulation of trade between states was, by the time of the New Deal, profoundly re-interpreted by the courts to allow the regulation of trade within states.

Given the US Constitution had far greater protections than the EU Constitution / Lisbon Treaty, he predicted that the Lisbon Treaty would be more dangerous. While the Tenth Amendment to the US Constitution explicitly reserves those powers to the states that are not conferred to the centre by the Constitution, the EU Constitution allowed the members to do only what the EU does not. Given the objectives of the EU are so widely drawn, almost no activity can be confidently guaranteed to be outside the EU’s jurisdiction.

Unsurprisingly, the developments since Professor Epstein’s lecture have proved him right. The EU system has worked as intended to centralise power in Brussels and the European Court of Justice. Of course David Cameron famously made a ‘cast iron’ promise to give a referendum on Lisbon / EU Constitution because, he said, he opposed it. It is near-certain, however, that his renegotiation will not undo the main elements of the Lisbon Treaty. Almost all the dangers that Professor Epstein explained therefore remain relevant.

Epstein made a further argument of relevance to the question: what should the trading relationship between European states be? Epstein argued that Europe should replace its system of regulatory harmonisation (adopted to further political, not trading, ends) with a simple agreement on non-discrimination, along the lines of the Articles of Confederation. This would maximise trade gains without damaging markets, individual rights, or democratic accountability. The diversity of institutional structures and the competition between them that would follow would enable faster and more effective adaptation to globalisation’s challenges than bureaucratic uniformity.

His advice to Britain was:

‘For those who want a strong state with weak individual rights, then this Constitution achieves many of their goals. But for those who think that private markets and private property are the keys to social progress and stability, this Constitution should be stillborn. It promises little gain from the federation of defense that was so central to the American Founding, and its internal structures are sure to invite power dominance from the center…

‘My recommendation is therefore this: Opt for the economic free trade zone and consign the EU Constitution to the dust heap.’

I thought it would be interesting to repost the PDF since I cannot find it anywhere else on the web and this historical comparison is, I think, very useful.

His essay starts on page 9 and is preceded by an Introduction written by me in 2005.

Please leave comments below.

Complexity, ‘fog and moonlight’, prediction, and politics III – von Neumann and economics as a science

The two previous blogs in this series were:

Part I HERE.

Part II HERE.

All page references unless otherwise stated are to my essay, HERE.

Since the financial crisis, there has been a great deal of media and Westminster discussion about why so few people predicted it and what the problems are with economics and financial theory.

Absent from most of this discussion is the history of the subject and its intellectual origins. Economics is clearly a vital area of prediction for people in politics. I therefore will explore some intellectual history to provide context for contemporary discussions about ‘what is wrong with economics and what should be done about it’.

*

It has often been argued that the ‘complexity’ of human behaviour renders precise mathematical treatment of economics impossible, or that the undoubted errors of modern economics in applying the tools of mathematical physics are evidence of the irredeemable hopelessness of the goal.

For example, Kant wrote in Critique of Judgement:

‘For it is quite certain that in terms of merely mechanical principles of nature we cannot even adequately become familiar with, much less explain, organized beings and how they are internally possible. So certain is this that we may boldly state that it is absurd for human beings even to attempt it, or to hope that perhaps some day another Newton might arise who would explain to us, in terms of natural laws unordered by any intention, how even a mere blade of grass is produced. Rather, we must absolutely deny that human beings have such insight.’

In the middle of the 20th Century, one of the great minds of the century turned to this question. John von Neumann was one of the leading mathematicians of the 20th Century. He was also a major contributor to the mathematisation of quantum mechanics, created the field of ‘quantum logic’ (1936), worked as a consultant to the Manhattan Project and other wartime technological projects, and was one of the two most important creators of modern computer science and artificial intelligence (with Turing), which he developed partly for immediate problems he was working on (e.g. the hydrogen bomb and ICBMs) and partly to probe the general field of understanding complex nonlinear systems. In an Endnote of my essay I discuss some of these things.

Von Neumann was regarded as an extraordinary phenomenon even by the cleverest people in the world. The Nobel-winning physicist and mathematician Wigner said of von Neumann:

‘I have known a great many intelligent people in my life. I knew Planck, von Laue and Heisenberg. Paul Dirac was my brother in law; Leo Szilard and Edward Teller have been among my closest friends; and Albert Einstein was a good friend, too. But none of them had a mind as quick and acute as Jancsi von Neumann. I have often remarked this in the presence of those men and no one ever disputed me… Perhaps the consciousness of animals is more shadowy than ours and perhaps their perceptions are always dreamlike. On the opposite side, whenever I talked with the sharpest intellect whom I have known – with von Neumann – I always had the impression that only he was fully awake, that I was halfway in a dream.’

Von Neumann also had a big impact on economics. During breaks from pressing wartime business, he wrote ‘Theory of Games and Economic Behaviour’ (TGEB) with Morgenstern. This practically created the field of ‘game theory’, to which one sees so many references now. TGEB was one of the most influential books ever written on economics. (The movie A Beautiful Mind gave a false impression of Nash’s contribution.) In the Introduction, his explanation of some foundational issues concerning economics, mathematics, and prediction is clearer for non-specialists than anything else I have seen on the subject and cuts through a vast amount of contemporary discussion which fogs the issues.

This documentary on von Neumann is also interesting:

*

There are some snippets from pre-20th Century figures explaining concepts in terms recognisable through the prism of Game Theory. For example, Ampère wrote ‘Considérations sur la théorie mathématique du jeu’ in 1802 and credited Buffon’s 1777 essay on ‘moral arithmetic’ (Buffon figured out many elements that Darwin would later harmonise in his theory of evolution). Cournot discussed what would later be described as a specific example of a ‘Nash equilibrium’ – duopoly – in 1838. The French mathematician Émile Borel also made contributions to early ideas.

However, Game Theory really was born with von Neumann. In December 1926, he presented the paper ‘Zur Theorie der Gesellschaftsspiele’ (On the Theory of Parlour Games, published in 1928, translated version here) while working on the Hilbert Programme [cf. Endnote on Computing] and quantum mechanics. The connection between the Hilbert Programme and the intellectual origins of Game Theory can perhaps first be traced in a 1912 lecture by one of the world’s leading mathematicians and founders of modern set theory, Zermelo, titled ‘On the Application of Set Theory to Chess’ which stated of its purpose:

‘… it is not dealing with the practical method for games, but rather is simply giving an answer to the following question: can the value of a particular feasible position in a game for one of the players be mathematically and objectively decided, or can it at least be defined without resorting to more subjective psychological concepts?’

He presented a theorem that chess is strictly determined: that is, either (i) white can force a win, or (ii) black can force a win, or (iii) both sides can force at least a draw. Which of these is the actual solution to chess remains unknown. (Cf. ‘Zermelo and the Early History of Game Theory’, by Schwalbe & Walker (1997), which argues that modern scholarship is full of errors about this paper. According to Leonard (2006), Zermelo’s paper was part of a general interest in the game of chess among intellectuals in the first third of the 20th century. Lasker (world chess champion 1897–1921) knew Zermelo and both were taught by Hilbert.)

Von Neumann later wrote:

‘[I]f the theory of Chess were really fully known there would be nothing left to play.  The theory would show which of the three possibilities … actually holds, and accordingly the play would be decided before it starts…  But our proof, which guarantees the validity of one (and only one) of these three alternatives, gives no practically usable method to determine the true one. This relative, human difficulty necessitates the use of those incomplete, heuristic methods of playing, which constitute ‘good’ Chess; and without it there would be no element of ‘struggle’ and ‘surprise’ in that game.’ (p.125)

Elsewhere, he said:

‘Chess is not a game. Chess is a well-defined computation. You may not be able to work out the answers, but in theory there must be a solution, a right procedure in any position. Now, real games are not like that at all. Real life is not like that. Real life consists of bluffing, of little tactics of deception, of asking yourself what is the other man going to think I mean to do. And that is what games are about in my theory.’

Von Neumann’s 1928 paper proved that there is a rational solution to every two-person zero-sum game. That is, in a rigorously defined game with precise payoffs, there is a mathematically rational strategy for both sides – an outcome which neither party can hope to improve upon. This introduced the concept of the minimax: choose a strategy that minimises the possible maximum loss.
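To make the minimax idea concrete, here is a minimal sketch (my illustration, not von Neumann’s notation) of the pure-strategy case. In this hypothetical payoff matrix, the row player’s guaranteed floor and the column player’s guaranteed ceiling coincide, so the game has a saddle point:

```python
# Pure-strategy minimax for a two-person zero-sum game.
# Entries are the row player's payoffs; the column player receives the negation.
payoffs = [
    [3, 1, 4],
    [2, 0, 1],
    [5, 1, 2],
]

# Row player: assume the worst column reply to each row, then pick the best row.
row_floor = max(min(row) for row in payoffs)

# Column player: assume the worst (largest) row reply in each column, then minimise.
col_ceiling = min(max(col) for col in zip(*payoffs))

print(row_floor, col_ceiling)  # 1 1 — they coincide, so a saddle point exists
```

When the two numbers differ there is no pure-strategy solution and the players must randomise, which is the role of the ‘mixed’ strategies discussed below.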

Zero-sum games are those where the payoffs ‘sum’ to zero. For example, chess or Go are zero-sum games because the gain (+1) and the loss (-1) sum to zero; one person’s win is another’s loss. The famous Prisoners’ Dilemma is a non-zero-sum game because the payoffs do not sum to zero: it is possible for both players to make gains. In some games the payoffs to the players are symmetrical (e.g. Prisoners’ Dilemma); in others, the payoffs are asymmetrical (e.g. the Dictator or Ultimatum games). Sometimes the strategies can be completely stated without the need for probabilities (‘pure’ strategies); sometimes, probabilities have to be assigned for particular actions (‘mixed’ strategies).

While the optimal minimax strategy might be a ‘pure’ strategy, von Neumann showed it would often have to be a ‘mixed’ strategy, and this means a spontaneous reappearance of probability even if the game itself involves no element of chance.

‘Although … chance was eliminated from the games of strategy under consideration (by introducing expected values and eliminating ‘draws’), it has now made a spontaneous reappearance. Even if the rules of the game do not contain any elements of ‘hazard’ … in specifying the rules of behaviour for the players it becomes imperative to reconsider the element of ‘hazard’. The dependence on chance (the ‘statistical’ element) is such an intrinsic part of the game itself (if not of the world) that there is no need to introduce it artificially by way of the rules of the game itself: even if the formal rules contain no trace of it, it still will assert itself.’
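The simplest illustration is matching pennies, a game with no saddle point. A minimal sketch (my example, using the standard closed-form solution for 2×2 zero-sum games, not anything from TGEB itself):

```python
# Matching pennies: each player shows heads or tails; the row player wins on a match.
# Payoff matrix [[a, b], [c, d]] for the row player; no entry is simultaneously a
# row minimum and a column maximum, so no pure strategy is safe.
a, b = 1, -1   # row plays heads, against column heads / tails
c, d = -1, 1   # row plays tails

# Standard closed-form mixed-strategy solution for a 2x2 zero-sum game.
denom = a - b - c + d
p = (d - c) / denom              # probability the row player should play heads
value = (a * d - b * c) / denom  # guaranteed expected payoff

print(p, value)  # 0.5 0.0 — play each move half the time; chance has reappeared
```

The rules of matching pennies contain no dice and no shuffling, yet the rational prescription is to randomise — exactly the ‘spontaneous reappearance’ of the statistical element von Neumann describes.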

In 1932, he gave a lecture titled ‘On Certain Equations of Economics and A Generalization of Brouwer’s Fixed-Point Theorem’. It was published in German in 1938 but not in English until 1945 when it was published as ‘A Model of General Economic Equilibrium’. This paper developed what is sometimes called von Neumann’s Expanding Economic Model and has been described as the most influential article in mathematical economics. It introduced the use of ‘fixed-point theorems’. (Brouwer’s ‘fixed point theorem’ in topology proved that, in crude terms, if you lay a map of the US on the ground anywhere in the US, one point on the map will lie precisely over the point it represents on the ground beneath.)

‘The mathematical proof is possible only by means of a generalisation of Brouwer’s Fix-Point Theorem, i.e. by the use of very fundamental topological facts… The connection with topology may be very surprising at first, but the author thinks that it is natural in problems of this kind. The immediate reason for this is the occurrence of a certain ‘minimum-maximum’ problem… It is closely related to another problem occurring in the theory of games.’

Von Neumann’s application of this topological proof to economics was very influential in post-war mathematical economics and in particular was used by Arrow and Debreu in their seminal 1954 paper on general equilibrium, perhaps the central paper in modern traditional economics.
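The one-dimensional case of Brouwer’s theorem is easy to see computationally: any continuous f mapping [0, 1] into itself must cross the line y = x somewhere. A sketch (illustration only, not von Neumann’s generalisation):

```python
import math

# Brouwer in one dimension: a continuous f: [0, 1] -> [0, 1] has a point where
# f(x) = x. Bisect on g(x) = f(x) - x, which is >= 0 at 0 and <= 0 at 1.
def fixed_point(f, lo=0.0, hi=1.0, tol=1e-9):
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) - mid > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = fixed_point(math.cos)  # cos maps [0, 1] into itself
print(round(x, 6))  # 0.739085 — the point where cos(x) = x
```

In higher dimensions no such simple bisection exists, which is why the full theorem — and von Neumann’s generalisation of it — requires ‘very fundamental topological facts’.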

*

In the late 1930s, von Neumann, based at the IAS in Princeton, to which Gödel and Einstein had also fled to escape the Nazis, met the economist Oskar Morgenstern, who was deeply dissatisfied with the state of economics. In 1940, von Neumann began his collaboration with Morgenstern on games, while working on war business including the Manhattan Project and computers, that became The Theory of Games and Economic Behavior (TGEB). By December 1942, he had finished his work on this though it was not published until 1944.

In the Introduction of TGEB, von Neumann explained the real problems in applying mathematics to economics and why Kant was wrong.

‘It is not that there exists any fundamental reason why mathematics should not be used in economics.  The arguments often heard that because of the human element, of the psychological factors etc., or because there is – allegedly – no measurement of important factors, mathematics will find no application, can all be dismissed as utterly mistaken.  Almost all these objections have been made, or might have been made, many centuries ago in fields where mathematics is now the chief instrument of analysis [e.g. physics in the 16th Century or chemistry and biology in the 18th]…

‘As to the lack of measurement of the most important factors, the example of the theory of heat is most instructive; before the development of the mathematical theory the possibilities of quantitative measurements were less favorable there than they are now in economics.  The precise measurements of the quantity and quality of heat (energy and temperature) were the outcome and not the antecedents of the mathematical theory…

‘The reason why mathematics has not been more successful in economics must be found elsewhere… To begin with, the economic problems were not formulated clearly and are often stated in such vague terms as to make mathematical treatment a priori appear hopeless because it is quite uncertain what the problems really are. There is no point using exact methods where there is no clarity in the concepts and issues to which they are applied. [Emphasis added] Consequently the initial task is to clarify the knowledge of the matter by further careful descriptive work. But even in those parts of economics where the descriptive problem has been handled more satisfactorily, mathematical tools have seldom been used appropriately. They were either inadequately handled … or they led to mere translations from a literary form of expression into symbols…

‘Next, the empirical background of economic science is definitely inadequate. Our knowledge of the relevant facts of economics is incomparably smaller than that commanded in physics at the time when mathematization of that subject was achieved.  Indeed, the decisive break which came in physics in the seventeenth century … was possible only because of previous developments in astronomy. It was backed by several millennia of systematic, scientific, astronomical observation, culminating in an observer of unparalleled calibre, Tycho de Brahe. Nothing of this sort has occurred in economics. It would have been absurd in physics to expect Kepler and Newton without Tycho – and there is no reason to hope for an easier development in economics…

‘Very frequently the proofs [in economics] are lacking because a mathematical treatment has been attempted in fields which are so vast and so complicated that for a long time to come – until much more empirical knowledge is acquired – there is hardly any reason at all to expect progress more mathematico. The fact that these fields have been attacked in this way … indicates how much the attendant difficulties are being underestimated. They are enormous and we are now in no way equipped for them.

‘[We will need] changes in mathematical technique – in fact, in mathematics itself…  It must not be forgotten that these changes may be very considerable. The decisive phase of the application of mathematics to physics – Newton’s creation of a rational discipline of mechanics – brought about, and can hardly be separated from, the discovery of the infinitesimal calculus…

‘The importance of the social phenomena, the wealth and multiplicity of their manifestations, and the complexity of their structure, are at least equal to those in physics.  It is therefore to be expected – or feared – that mathematical discoveries of a stature comparable to that of calculus will be needed in order to produce decisive success in this field… A fortiori, it is unlikely that a mere repetition of the tricks which served us so well in physics will do for the social phenomena too.  The probability is very slim indeed, since … we encounter in our discussions some mathematical problems which are quite different from those which occur in physical science.’

Von Neumann therefore exhorted economists to humility and the task of ‘careful, patient description’, a ‘task of vast proportions’. He stressed that economics could not attack the ‘big’ questions – much more modesty is needed to establish an exact theory for very simple problems, and build on those foundations.

‘The everyday work of the research physicist is … concerned with special problems which are “mature”… Unifications of fields which were formerly divided and far apart may alternate with this type of work. However, such fortunate occurrences are rare and happen only after each field has been thoroughly explored. Considering the fact that economics is much more difficult, much less understood, and undoubtedly in a much earlier stage of its evolution as a science than physics, one should clearly not expect more than a development of the above type in economics either…

‘The great progress in every science came when, in the study of problems which were modest as compared with ultimate aims, methods were developed which could be extended further and further. The free fall is a very trivial physical example, but it was the study of this exceedingly simple fact and its comparison with astronomical material which brought forth mechanics. It seems to us that the same standard of modesty should be applied in economics… The sound procedure is to obtain first utmost precision and mastery in a limited field, and then to proceed to another, somewhat wider one, and so on.’

Von Neumann therefore aims in TGEB at ‘the behavior of the individual and the simplest forms of exchange’ with the hope that this can be extended to more complex situations.

‘Economists frequently point to much larger, more ‘burning’ questions…  The experience of … physics indicates that this impatience merely delays progress, including that of the treatment of the ‘burning’ questions. There is no reason to assume the existence of shortcuts…

‘It is a well-known phenomenon in many branches of the exact and physical sciences that very great numbers are often easier to handle than those of medium size. An almost exact theory of a gas, containing about 10²⁵ freely moving particles, is incomparably easier than that of the solar system, made up of 9 major bodies… This is … due to the excellent possibility of applying the laws of statistics and probabilities in the first case.

‘This analogy, however, is far from perfect for our problem. The theory of mechanics for 2,3,4,… bodies is well known, and in its general theoretical …. form is the foundation of the statistical theory for great numbers. For the social exchange economy – i.e. for the equivalent ‘games of strategy’ – the theory of 2,3,4… participants was heretofore lacking. It is this need that … our subsequent investigations will endeavor to satisfy. In other words, only after the theory for moderate numbers of participants has been satisfactorily developed will it be possible to decide whether extremely great numbers of participants simplify the situation.’

[This last bit has changed slightly as I forgot to include a few things.]

While some of von Neumann’s ideas were extremely influential on economics, his general warning here about the right approach to the use of mathematics was not widely heeded.

Most economists initially ignored von Neumann’s ideas. Martin Shubik, then at Princeton, recounted the scene he found:

‘The contrast of attitudes between the economics department and mathematics department was stamped on my mind… The former projected an atmosphere of dull-business-as-usual conservatism… The latter was electric with ideas… When von Neumann gave his seminar on his growth model, with a few exceptions, the serried ranks of Princeton economists could scarce forebear to yawn.’

However, a small but influential number, including mathematicians at the RAND Corporation (the first recognisable modern ‘think tank’) led by John Williams, applied it to nuclear strategy as well as economics. For example, Albert Wohlstetter published his Selection and Use of Strategic Air Bases (RAND, R-266, sometimes referred to as The Basing Study) in 1954. Williams persuaded the RAND Board and the infamous SAC General Curtis LeMay to develop a social science division at RAND that could include economists and psychologists to explore the practical potential of Game Theory further. He also hired von Neumann as a consultant; when the latter said he was too busy, Williams told him he only wanted the time it took von Neumann to shave in the morning. (Kubrick’s Dr Strangelove satirised RAND’s use of game theory.)

In the 1990s, the movie A Beautiful Mind brought John Nash into pop culture, giving the misleading impression that he was the principal developer of Game Theory. Nash’s fame rests principally on work he did in 1950–51 that became known as ‘the Nash equilibrium’. In Non-Cooperative Games (1950), he wrote:

‘[TGEB] contains a theory of n-person games of a type which we would call cooperative. This theory is based on an analysis of the interrelationships of the various coalitions which can be formed by the players of the game. Our theory, in contradistinction, is based on the absence of coalitions in that it is assumed each participant acts independently, without collaboration or communication with any of the others… [I have proved] that a finite non-cooperative game always has at least one equilibrium point.’
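Nash’s definition is simple enough to check by brute force. A sketch (mine, using the classic Prisoners’ Dilemma payoffs discussed below): a strategy pair is an equilibrium when neither player can gain by deviating unilaterally.

```python
from itertools import product

# Pure-strategy Nash equilibria of a two-player game by enumeration.
# Classic Prisoners' Dilemma payoffs: strategy 0 = cooperate, 1 = defect.
A = [[3, 0], [5, 1]]  # row player's payoffs
B = [[3, 5], [0, 1]]  # column player's payoffs

def is_nash(i, j):
    # Neither player can improve by changing strategy on their own.
    row_best = all(A[i][j] >= A[k][j] for k in range(2))
    col_best = all(B[i][j] >= B[i][k] for k in range(2))
    return row_best and col_best

equilibria = [(i, j) for i, j in product(range(2), range(2)) if is_nash(i, j)]
print(equilibria)  # [(1, 1)] — mutual defection, although mutual cooperation pays more
```

The same enumeration works for any finite game, though Nash’s theorem guarantees only that some equilibrium exists once mixed strategies are allowed.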

Von Neumann remarked of Nash’s results, ‘That’s trivial, you know. It’s just a fixed point theorem.’ Nash himself said that von Neumann was a ‘European gentleman’ who was not impressed by his results.

In 1949–50, Merrill Flood, another RAND researcher, began experimenting with staff at RAND (and his own children) playing various games. Nash’s results prompted Flood to create what became known as the ‘Prisoners’ Dilemma’ game, the most famous and studied game in Game Theory. It was initially known as ‘a non-cooperative pair’; the name ‘Prisoners’ Dilemma’ was given to it by Tucker later in 1950 when, needing a way to explain the concept to a psychology class at Stanford, he hit on an anecdote putting the payoff matrix in the form of two prisoners in separate cells weighing the pros and cons of ratting on each other.

The game was discussed and played at RAND without being published. Flood wrote up the results in 1952 as an internal RAND memo accompanied by the real-time comments of the players. In 1958, Flood published the results formally (Some Experimental Games). Flood concluded that ‘there was no tendency to seek as the final solution … the Nash equilibrium point.’ Prisoners’ Dilemma has been called ‘the E. coli of social psychology’ by Axelrod, so popular has it become in so many different fields. Many studies of Iterated Prisoners’ Dilemma games have shown that generally neither human nor evolved genetic-algorithm players converge on the Nash equilibrium; they choose to cooperate far more than Nash’s theory predicts.
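The gap between the one-shot Nash prescription and observed cooperation is easy to reproduce. A minimal sketch (mine, with the conventional payoff values) of an iterated game:

```python
# Iterated Prisoners' Dilemma: two strategies over repeated rounds, illustrating
# why cooperation can beat the one-shot Nash prescription of mutual defection.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play(strat1, strat2, rounds=100):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = strat1(h2), strat2(h1)  # each strategy sees the opponent's history
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2); s1 += p1; s2 += p2
    return s1, s2

tit_for_tat = lambda opp: 'C' if not opp else opp[-1]  # cooperate, then mirror
always_defect = lambda opp: 'D'                        # the one-shot Nash play

print(play(tit_for_tat, tit_for_tat))      # (300, 300): sustained cooperation
print(play(always_defect, always_defect))  # (100, 100): the Nash-style outcome
```

Repetition changes the strategic landscape: a player who defects today can be punished tomorrow, which is roughly what Axelrod’s later tournaments explored.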

Section 7 of my essay discusses some recent breakthroughs, particularly the paper by Press & Dyson. This is also a good example of how mathematicians can invade fields. Dyson’s professional fields are maths and physics. He was persuaded to look at the Prisoners’ Dilemma. He very quickly saw that there was a previously unseen class of strategies that has opened up a whole new field for exploration. This article HERE is a good summary of recent developments.

Von Neumann’s brief forays into economics were very much a minor sideline for him but there is no doubt of his influence. Despite von Neumann’s reservations about neoclassical economics, Paul Samuelson admitted that, ‘He darted briefly into our domain, and it has never been the same since.’

In 1987, the Santa Fe Institute, founded by Gell-Mann and others, organised a ten-day meeting to discuss economics. On one side, they invited leading economists such as Kenneth Arrow and Larry Summers; on the other side, they invited physicists, biologists, and computer scientists, such as the Nobel-winning Philip Anderson and John Holland (inventor of genetic algorithms). When the economists explained their assumptions, Phil Anderson said to them, ‘You guys really believe that?’

One physicist later described the meeting as like visiting Cuba – the cars are all from the 1950s, so on one hand you admire them for keeping them going, but on the other hand they are old technology; similarly, the economists were ingeniously applying 19th Century maths and physics to very out-of-date models. The physicists were shocked at how content the economists were with simplifying assumptions obviously contradicted by reality, and surprised at how unconcerned they seemed about the poverty of their predictions.

Twenty-seven years later, this problem is more acute. Some economists are listening to the physicists about fundamental problems with the field. Some are angrily rejecting the physicists’ incursions into their field.

Von Neumann explained the scientifically accurate approach to economics and mathematics. [Inserted later. I mean – the first part of his comments above that discusses maths, prediction, models, and economics and physics. As far as I know, nobody seriously disputes these comments – i.e. that Kant and the general argument that ‘maths cannot make inroads into economics’ are wrong. The later comments about building up economic theories from theories of 2, 3, 4 agents etc is a separate topic. See comments.] In other blogs in this series I will explore some of the history of economic thinking as part of a description of the problem for politicians and other decision-makers who need to make predictions.

Please leave corrections and comments below.

 

‘Standin’ by the window, where the light is strong’: de-extinction, machine intelligence, the search for extra-solar life, autonomous drone swarms bombing Parliament, genetics & IQ, science & politics, and much more @ SciFoo 2014

‘SciFoo’ 8-10 August 2014, the Googleplex, Silicon Valley, California.

On Friday 8 August, I woke up in Big Sur (the coast of Northern California), looked out over the waves breaking on the wild empty coastline, munched a delicious Mexican breakfast at Deetjen’s, then drove north on Highway 1 towards Palo Alto where a few hours later I found myself looking through the windows of Google’s HQ at a glittering sunset in Silicon Valley.

I was going to ‘SciFoo’. SciFoo is a weekend science conference. It is hosted by Larry Page at Google’s HQ in Silicon Valley and organised by various people including the brilliant Timo Hannay from Digital Science.

I was invited because of my essay that became public last year (cf. HERE). Of the 200+ people, I was probably the only one who made zero positive contribution to the fascinating weekend and therefore wasted a place, so although it was a fantastic experience for me the organisers should not invite me back and I feel guilty about the person who could not go because I was there. At least I can let others know about some of the things discussed… (Although it was theoretically ‘on the record unless stated otherwise’, I could tell that many scientists were not thinking about this and so I have left out some things that I think they would not want attributed. Given they were not experienced politicians being interviewed but scientists at a scientific conference, I’m erring on the side of caution, particularly given the subjects discussed.)

It was very interesting to see many of the people whose work I mentioned in my essay and watch them interacting with each other – intellectually and psychologically / physically.

I will describe some of the things that struck me though, because there are about 7-10 sessions going on simultaneously, this is only a small snapshot.

In my essay, I discuss some of the background to many of these subjects so I will put references [in square brackets] so people can refer to it if they want.

Please note that below I am reporting what I think others were saying – unless it is clear, I am not giving my own views. On technical issues, I do not have my ‘own’ views – I do not have relevant skills. All I can do is judge where consensus lies and how strong it is. Many important issues involve asking at least: a) is there a strong scientific consensus on X among physical scientists, with hard quantitative data to support their ideas (the uber-example being the Standard Model of particle physics); and b) what are the non-science issues, such as ‘what will it cost, who pays/suffers and why?’ On a), I can only try to judge what technically skilled people think. b) is a different matter.

Whether you were there or not, please leave corrections / additions / questions in the comments box. Apologies for errors…

In a nutshell, a few likely scenarios / ideas, without spelling out caveats… 1) Extinct species are soon going to be brought back to life and the same technology will be used to modify existing species to help prevent them going extinct. 2) CRISPR – a new gene editing technology – will be used to cure diseases and ‘enhance’ human performance but may also enable garage bio-hackers to make other species extinct. 3) With the launch of satellites in 2017/18, we may find signs of life by 2020 among the ~10^11 exoplanets we now know exist just in our own galaxy though it will probably take 20-30 years, but the search will also soon get crowdsourced in a way schools can join in. 4) There is a reasonable chance we will have found many of the genes for IQ within a decade via BGI’s project, and the rich may use this information for embryo selection. 5) ‘Artificial neural networks’ are already outperforming humans on various pattern-recognition problems and will continue to advance rapidly. 6) Automation will push issues like a negative income tax onto the political agenda as millions lose their jobs to automation. 7) Autonomous drones will be used for assassinations in Europe and America shortly. 8) Read Neil Gershenfeld’s book ‘FAB’ if you haven’t and are interested in science education / 3D printing / computer science (or at least watch his TED talks). 9) Scientists are desperate to influence policy and politics but do not know how.

Biological engineering / computational biology / synthetic biology [Section 4]

George Church (Harvard), a world-leading biologist, spoke at a few sessions and his team’s research interests were much discussed.  (Don’t assume he said any specific thing below.)

The falling cost of DNA sequencing continues to spur all sorts of advances. It has fallen from a billion dollars per genome a decade ago to less than a thousand dollars now (a million-fold improvement), and the Pentagon is planning on it reaching $100 soon. We can also sequence cancer cells to track their evolution in the body.

CRISPR. CRISPR is a new (2012) and very hot technology that is a sort of ‘cut and paste’ gene editing tool. It allows much more precise and effective engineering of genomes. Labs across America are rushing to apply it to all sorts of problems. In March this year, it was used to correct faulty genes in mice and cure them of a liver condition. It plays a major part in many of the biological issues sketched below.

‘De-extinction’ (bringing extinct species back to life). People are now planning the practical steps for de-extinction to the extent that they are scoping out land in Siberia where woolly mammoths will roam. As well as creating whole organisms, they will also grow organs modified by particular genes to test what specific genes and combinations do. This is no longer sci-fi – it is being planned and is likely to happen. The buffalo population was recently re-built (Google serves buffalo burgers in its amazing kitchens) from a tiny population to hundreds of thousands and there seems no reason to think it is impossible to build a significant population from scratch.

What does this mean? You take the DNA from an animal, say a woolly mammoth buried in the ground, sequence it, then use the digitised genome to create an embryo and either grow it in a similar animal (e.g. elephant for a mammoth) or in an artificial womb. (I missed the bit explaining the rationale for some of the proposed projects but, apart from the scientific reasons, one rationale for the mammoth was described as a conservation effort to preserve the frozen tundra and prevent massive amounts of greenhouse gases being released from beneath it.)

There are also possibilities of using this technology for conservation. For example, one could re-engineer the Asian elephant so that it could survive in less hospitable climates (e.g. modify the genes that produce haemoglobin so it is viable in colder places).

Now that we have sequenced the genome for Neanderthals (and learned that humans interbred with them, so you have traces of their DNA – unless you’re an indigenous sub-Saharan African), there is no known physical reason why we could not bring a Neanderthal back to life once the technology has been refined on other animals. This obviously raises many ethical issues – e.g. if we did it, they would have to be given the same legal rights as us (one distinguished person said that if there were one in the room with us we would not notice, contra the pictures often used to illustrate them). It is assumed by many that this will happen (nobody questioned the assumption) – just as it seemed to be generally assumed that human cloning will happen – though probably not in a western country but somewhere with fewer legal restrictions, after the basic technologies have been refined. (The Harvard team gets emails from women volunteering to be the Neanderthal’s surrogate mum.)

‘Biohacking’. Biohacking is advancing faster than Moore’s Law. CRISPR editing will allow us to enhance ourselves. E.g. Tibetans have evolved much more efficient systems for coping with high altitude, and some Africans have much stronger bones than the rest of us (see below). Will we reengineer ourselves to obtain these advantages? CRISPR obviously also empowers all sorts of malevolent actors too – cf. this very recent paper (by Church et al). It may soon be possible for people in their garages to edit genomes and accidentally or deliberately drive species to extinction as well as attempt to release deadly pathogens. I could not understand why people were not more worried about this – I hope I was missing a lot. (Some had the attitude that ‘nature already does bio-terrorism’ so we should relax. I did not find this comforting and I’m sure I am in the majority so for anybody influential reading this I would strongly advise you not to use this argument in public advocacy or it is likely to accelerate calls for your labs to be shut down.)

‘Junk’. There is more and more analysis of what used to be called ‘junk DNA’. It is now clear that far from being ‘junk’ much of this has functions we do not understand. This connects to the issue that although we sequenced the human genome over a decade ago, the quality of the ‘reference’ version is not great and (it sounded like from the discussions) it needs upgrading.

‘Push button’ cheap DNA sequencers are around the corner. Might such devices become as ubiquitous as desktop printers? Why doesn’t someone create a ‘gene web browser’ that can cope with all the different data formats for genomes?

Privacy. There was a lot of talk about ‘do you want your genome on the web?’. I asked a quick informal pop quiz (someone else’s idea): there was unanimity that ‘I’d much rather my genome was on the web than my browsing history’. [UPDATE: n<10 and perhaps they were tongue in cheek!? One scientist pointed out in a session that when he informed his insurance company, after sequencing his own genome, that he had a very high risk of getting colon cancer, they raised his premiums. There are all sorts of reasons one would want to control genomic information and I was being a bit facetious.]

In many ways, computational biology and synthetic biology have that revolutionary feeling of the PC revolution in the 1970s – huge energy, massive potential for people without big resources to make big contributions, the young crowding in, the feeling of dramatic improvements imminent. Will this all seem ‘too risky’? It’s hard to know how the public will respond to risk. We put up with predictable annual carnage from car accidents but freak out over trivia. We ignore millions of deaths in the Congo but freak out over a handful in Israel/Gaza. My feeling is some of the scientists are too blasé about how the public will react to the risks, but I was wrong about how much fear there would be about the news that scientists recently deliberately engineered a much more dangerous version of an animal flu.

AI / machine learning / neuroscience [Section 5].

Artificial neural networks (NNs), now often referred to as ‘deep learning’, were first created 50 years ago but languished for a while when progress slowed. The field is now hot again. (Last year Google bought some companies leading the field, and a company, Boston Dynamics, that has had a long-term collaboration with DARPA.)

Jurgen Schmidhuber explained progress and how NNs have recently approached or surpassed human performance in various fields. E.g. recently NNs have surpassed human performance in recognising traffic signals (0.56% error rate for the best NN versus 1.16% for humans). Progress in all sorts of pattern recognition problems is clearly going to continue rapidly. E.g. NNs are now being used to automate a) the analysis of scans for cancer cells and b) the labelling of scans of human brains – so artificial neural networks are now scanning and labelling natural neural networks.
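For readers new to the field, here is a minimal sketch of the kind of supervised pattern recognition involved. It is a single 1950s-style perceptron learning a toy pattern – nothing like the deep multi-layer networks described above – but the core idea of nudging weights towards correct answers is the same one that backpropagation scales up:

```python
# A single perceptron, the 1950s ancestor of deep neural networks:
# it learns a linear decision boundary by nudging its weights whenever
# it classifies a training example incorrectly.

def train_perceptron(samples, epochs=20, lr=1.0):
    """samples: list of ((x1, x2), label) pairs with label 0 or 1."""
    w1 = w2 = bias = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            predicted = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
            error = label - predicted
            # Move the weights towards the correct answer.
            w1 += lr * error * x1
            w2 += lr * error * x2
            bias += lr * error
    return w1, w2, bias

def predict(weights, x1, x2):
    w1, w2, bias = weights
    return 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

# Learn logical AND, a linearly separable toy 'pattern'.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = train_perceptron(data)
print([predict(weights, x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

Modern ‘deep learning’ stacks many layers of such units and trains millions of weights at once, which is what makes tasks like traffic-sign recognition tractable.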

Steve Hsu has blogged about this session here:

http://infoproc.blogspot.co.uk/2014/08/neural-networks-and-deep-learning.html?m=1

Michael Nielsen is publishing an education project online for people to teach themselves the basics of neural networks. It is brilliant and I would strongly advise teachers reading this blog to consider introducing it into their schools and doing the course with the pupils.

http://neuralnetworksanddeeplearning.com

Neil Gershenfeld (MIT) gave a couple of presentations. One was on developments in computer science connecting: non-‘von Neumann architecture’, programmable matter, 3D printing, ‘the internet of things’ etc. [Cf. Section 3.] NB. IBM announced this month substantial progress in their quest for a new computer architecture that is ‘non-Von Neumann’: cf. this –

http://venturebeat.com/2014/08/07/ibms-synapse-marshals-the-power-of-the-human-brain-in-a-computer/view-all/

Another was on the idea of an ‘interspecies internet’. We now know many species can recognise each other, think, and communicate much better than we realised. He showed bonobos playing music with Peter Gabriel and dolphins communicating. He and others are plugging them into the internet. Some are doing this to help the general goal of figuring out how we might communicate with intelligent aliens – or how they might communicate with us.

(Gershenfeld’s book FAB led me to push 3D printing into the new National Curriculum and I would urge school science teachers to watch his TED talks and read this book. [INSERTED LATER: Some people have asked about this point. I (I thought obviously) did not mean I wrote the NC document. I meant – I pushed the subject into the discussions with the committees/drafters who wrote the NC. Experts in the field agreed it belonged. When it came out, this was not controversial. We also funded pilots with 3D printers so schools could get good advice about how to teach the subject well.] His point about 3D printers restoring the connection between thinking and making – lost post-Renaissance – is of great importance and could help end the foolishly entrenched ‘knowledge’ vs ‘skills’ and academic vs vocational trench wars. Gove actually gave a speech about this not long before he was moved and as far as I could tell it got less coverage than any speech he ever gave, thus proving the cliché about speeches on ‘skills’.)

There were a few presentations about ‘computational neuroscience’. I could not understand anything much as they were too technical. It was clear that there is deep concern among EU neuroscientists about the EU’s  huge funding for Henry Markram’s Human Brain Project. One leading neuroscientist said to me that the whole project is misguided as it does not have clear focused goals and the ‘overhype’ will lead to public anger in a few years. Apparently, the EU is reconsidering the project and its goals. I have no idea about the merits of these arguments. I have a general prejudice that, outside special circumstances, experience suggests that it is better to put funding into many pots and see what works, as DARPA does.

There are all sorts of crossovers between: AI / neuroscience / big data / NNs / algorithmic pattern recognition in other fields.

Peter Norvig, a leader in machine intelligence, said that he is more worried about the imminent social implications of continued advances making millions unemployed than he is about a sudden ‘Terminator / SKYNET’ scenario of a general purpose AI bootstrapping itself to greater than human intelligence and exterminating us all. Let’s hope so. It is obvious that this field is going to keep pushing boundaries – in open, commercial, and classified projects – so we are essentially going to be hoping for the best as we make more and more advances in AI. The idea of a ‘negative income tax’ – or some other form of essentially paying people X just to live – seems bound to return to the agenda. I think it could be a way around all sorts of welfare arguments. The main obstacle, it seems to me, is that people won’t accept paying for it if they think uncontrolled immigration will continue as it is now.

Space [Section 2]

There was great interest in various space projects and some senior people from NASA. There is much sadness at how NASA, despite many great people, has become a normal government institution – i.e. caught in DC politics, very bureaucratic, and dysfunctional in various ways. On the other hand, many private ventures are now growing. E.g. Elon Musk is lowering the $/kg of getting material into orbit and planning a non-government Mars mission. As I said in my essay, really opening up space requires a space economy – not just pure science and research (such as putting telescopes on the far side of the moon, which we obviously should do). Columbus opened up America – not the Vikings.

There is another obvious motive. As Carl Sagan said, if the dinosaurs had had a space programme, they’d still be here. In the long-term we either develop tools for dealing with asteroids or we will be destroyed. We know this for sure. I think I heard that NASA is planning to park a small asteroid close to the moon around 2020 but I may have misheard / misunderstood.

Mario Livio led a great session on the search for life on exoplanets. The galaxy has ~10^11 stars and there is ~1 planet on average per star. There are ~10^11 galaxies, so a Fermi estimate is there are ~10^22 planets – 10 billion trillion planets – in the observable universe (this number is roughly 1,000 times bigger than the number you get in the fable of putting a grain of rice on the first square of a chessboard and doubling on each subsequent square). Many of them are in the ‘habitable zone’ around stars.
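The arithmetic is easy to check:

```python
# Back-of-envelope check of the numbers above: ~1e11 stars per galaxy,
# ~1 planet per star, ~1e11 galaxies, compared with the rice-on-a-chessboard
# fable (1 grain on the first square, doubling on each of the 64 squares).

stars_per_galaxy = 1e11
planets_per_star = 1
galaxies = 1e11

planets = stars_per_galaxy * planets_per_star * galaxies
grains = 2**64 - 1  # total grains on the chessboard

print(f"planets ~ {planets:.0e}")           # ~1e+22
print(f"grains  ~ {grains:.1e}")            # ~1.8e+19
print(f"ratio   ~ {planets / grains:.0f}")  # a few hundred, i.e. roughly 1,000x
```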

In 2017/18, there are two satellites launching that will be able to do spectroscopy on exoplanets – i.e. examine their atmospheres and detect things like oxygen and water. ‘If we get lucky’, these satellites will find ‘bio-signatures’ of life. If they find life having looked at only a few planets, then it would mean that life is very common. ‘More likely’ is it will take 20-30 years and a new generation of space-based telescopes to find life. If planets are found with likely biosignatures, then it would make sense to turn SETI’s instruments towards them to see if they find anything. (However, we are already phasing out the use of radio waves for various communications – perhaps the use of radio waves is only a short window in the lifetime of a civilisation.) There are complex Bayesian arguments about what we might infer about our own likely future given various discoveries but I won’t go into those now. (E.g. if we find life is common but no traces of intelligent life, does this mean a) the evolution of complex life is not a common development from simple life; b) intelligent life is also common but it destroys itself; c) they’re hiding, etc.)

A very impressive (and helpful towards the ignorant like me) young scientist working on exoplanets called Olivier Guyon demonstrated a fascinating project to crowdsource the search for exoplanets by building a global network of automated cameras – PANOPTES (www.projectpanoptes.org). His team has built a simple system that can find exoplanets using normal digital cameras costing less than $1,000. They sit in a box connected to a 12V power supply, automatically take pictures of the night sky every few seconds, then email the data to the cloud. There, the data is aggregated and algorithms search for exoplanets. These units are cheap (can’t remember what he said but I think <$5,000). Everything is open-source, open-hardware. They will start shipping later this year and will make a brilliant school science project. Guyon has made the project with schools in mind so that assembling and operating the units will not require professional level skills. They are also exploring the next move to connect smartphone cameras.
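PANOPTES’ real pipeline is far more sophisticated than anything I could sketch, but the transit method it relies on is simple in principle: a planet crossing its star blocks a small, repeating fraction of the light. A hypothetical toy version (all numbers invented):

```python
# Hypothetical sketch of transit-method detection: look for small, periodic
# dips in a star's brightness. A real pipeline must cope with noise, clouds,
# camera drift, etc.; this toy version assumes a clean signal.

def brightness(t, period=10, transit_len=1, depth=0.01):
    """Toy light curve: flux 1.0, dipping by `depth` during each transit."""
    return 1.0 - depth if t % period < transit_len else 1.0

def find_dips(flux, threshold=0.995):
    """Return the time indices where flux drops below the threshold."""
    return [t for t, f in enumerate(flux) if f < threshold]

def estimate_period(dip_times):
    """Median gap between successive dips = candidate orbital period
    (gaps of 1 sample are points within the same transit, so skip them)."""
    gaps = sorted(b - a for a, b in zip(dip_times, dip_times[1:]) if b - a > 1)
    return gaps[len(gaps) // 2] if gaps else None

flux = [brightness(t) for t in range(100)]
dips = find_dips(flux)
print(estimate_period(dips))  # 10 -- recovers the planet's orbital period
```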

Building the >15m diameter space telescopes we need to search for life seems to me an obvious priority for scientific budgets –  it is one of the handful of the most profound questions facing us.

There was an interesting cross-over discussion about ‘space and genetics’ in which people discussed various ways in which space exploration would encourage / require genetic modification. E.g.1 some sort of rocket fuel has recently been discovered to exist in large quantities on Mars. This is very handy but the substance is toxic. It might therefore make sense to modify humans going to live on Mars to be resistant. E.g.2 Space travel weakens bones. It has been discovered that mutations in the human population can improve bone strength by 8 standard deviations. This is a massive improvement – for comparison, 8 SDs in IQ covers people from severely mentally disabled to Nobel-winners. This was discovered by a team of scientists in Africa who noticed that people in a local tribe who got hit by cars did not suffer broken bones, so they sequenced the locals’ genomes. (Someone said there have already been successful clinical trials testing this discovery in a real drug to deal with osteoporosis.) E.g.3 Engineering E. Coli shows that just four mutations can improve resistance to radiation by ?1,000 times (can’t read my note).

Craig Venter and others are thinking about long-term projects to send ‘von Neumann-bots’ (self-replicating space drones) across the universe containing machines that could create biological life once they arrive somewhere interesting, thus avoiding the difficult problems of keeping humans alive for thousands of years on spaceships. (Nobel-winning physicist Gerard ’t Hooft explains the basic principles of this in his book Playing with Planets.)

This paper (August 2014) summarises issues in the search for life:

http://www.pnas.org/content/early/2014/08/01/1304213111.full.pdf

Finding the genes for IQ and engineering possibilities [Section 5].

When my essay came out last year, there was a lot of mistaken reporting that encouraged many in the education world to grab the wrong end of the stick about IQ, though the BBC documentary about the controversy (cf. below) was excellent and a big step forward. It remains the case that very few people realise that in the last couple of years direct examination of DNA has now vindicated the consistent numbers on IQ heritability from decades of twin/adoption studies.

The rough heritability numbers for IQ are no longer in doubt among physical scientists who study this field: it is roughly 50% heritable at age ~18-20 and this number rises towards 70-80% for older adults. This is important because IQ is such a good predictor of the future – it is a better predictor than social class. E.g. The long-term Study of Mathematically Precocious Youth, which follows what has happened to children with 1:10,000 ability, shows among many things that a) a simple ‘noisy’ test administered at age 12-13 can make amazingly accurate predictions about their future, and b) achievements such as scientific breakthroughs correlate strongly with IQ. (If people looked at the data from SMPY, then I think some of the heat and noise in the debate  would fade but it is a sad fact that approximately zero senior powerful people in the English education world had even heard of this study before the furore over Plomin last year.)

Further, the environmental effects that are important are not the things that people assume. If you test the IQ of an adopted child in adulthood and the parents who adopted it, you find approximately zero correlation – all those anguished parenting discussions had approximately no measurable impact on IQ. (This does not mean that ‘parenting doesn’t matter’ – parents can transfer narrow skills such as playing the violin.) In the technical language, the environmental effects that are important are ‘non-shared’ environmental effects – i.e. they are things that two identical twins do not experience in the same way. We do not know what they are. It is reasonable to think that they are effectively random tiny events with nonlinear effects that we may never be able to track in detail – cf. this paper for a discussion of this issue in the context of epidemiology: http://ije.oxfordjournals.org/content/40/3/537.full.pdf+html

There remains widespread confusion on this subject among social scientists, education researchers, and the worlds of politics and the media where people were told misleading things in the 1980s and 1990s and do not realise that the debates have been transformed. To be fair, however, it was clear from this weekend that even many biologists do not know about new developments in this field so it is not surprising that political journalists and education researchers do not.

(An example of confusion in the political/media world… In my essay, I used the technical term ‘heritable’ which is a population statistic – not a statement about an individual. I also predicted that media coverage would confuse the subject (e.g. by saying things like ‘70% of your IQ comes from genes’). Sure enough some journalists claimed I said the opposite of what I actually said then they quoted scientists attacking me for making a mistake that not only did I not make but which I actually warned about. Possibly the most confused sentence of all those in the media about my essay was the line ‘wealth is more heritable than genes’, which was in Polly Toynbee’s column and accompanying headline in the Guardian. This sentence is a nonsense sentence as it completely mangles the meaning of the term ‘heritable’. Much prominent commentary from politicians and sociologists/economists on ‘social mobility’ is gibberish because of mistaken assumptions about genes and environment. The Endnote in my essay has links to work by Plomin, Hsu et al that explains it all properly. This interview with Plomin is excellent: http://www.spectator.co.uk/features/8970941/sorry-but-intelligence-really-is-in-the-genes/. This recent BBC radio programme is excellent and summarises the complex issues well: http://www.bbc.co.uk/programmes/b042q944/episodes/guide)

I had a fascinating discussion/tutorial at SciFoo with Steve Hsu. Steve Hsu is a professor of theoretical physics (and successful entrepreneur) with a long interest in IQ (he also runs a brilliant blog that will keep you up to speed on all sorts). He now works part time on the BGI project in China to discover the genes responsible for IQ.

IQ is very similar to height from the perspective of behavioural genetics. Height has the advantage that it is obviously easier to measure than IQ, but it has roughly the same heritability. Large-scale GWAS (genome-wide association studies) are already identifying some of the genes responsible for height. Hsu recently watched a talk by Fields Medallist Terry Tao and realised that a branch of maths could be used to examine the question: how many genomes do we need to scan to identify a substantial number of the genes for IQ? His answer: ‘roughly 10k moderately rare causal variants of mostly negative effect are responsible for normal population variation’, and finding them will require sequencing roughly a million genomes. The falling cost of sequencing DNA means that this is within reach. ‘At the time of this writing SNP genotyping costs are below $50 USD per individual, meaning that a single super-wealthy benefactor could independently fund a crash program for less than $100 million’ (Hsu).

The BGI project to find these genes has hit some snags recently (e.g. a US lawsuit between the two biggest suppliers of gene sequencing machines). However, it is now expected to start again soon. Hsu thinks that within a decade we could find many of the genes responsible for IQ. He has just put his fascinating paper on this subject on his blog (there is also a Q&A on p.27 that will be very useful for journalists):

http://infoproc.blogspot.co.uk/2014/08/genetic-architecture-of-intelligence.html

Just discovering a substantial fraction of the genes would be momentous in itself but there is more. It is already the case that farmers use genomes to make predictions about cows’ properties and behaviour (‘genotype to phenotype’ predictions). It is already the case that rich people could use in vitro fertilisation to select the egg which they think will be most advantageous, because they can sequence genomes of multiple eggs and examine each one to look for problems then pick the one they prefer. Once we identify a substantial number of IQ genes, there is no obvious reason why rich people will not select the egg that has the highest prediction for IQ. 
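The logic of such selection is simple order statistics and easy to simulate. The sketch below is purely illustrative: the correlation r between a hypothetical genomic prediction and the trait is a made-up parameter, not a number from Hsu’s paper or anywhere else:

```python
# Illustrative only: expected gain (in standard-deviation units) from
# 'pick the best of n' selection using a noisy predictor, via Monte Carlo.
# The correlation r between prediction and trait is an invented assumption.
import random

def selection_gain(n_embryos, r, trials=20000, seed=0):
    """Mean true trait value of the embryo with the best predicted score,
    when the prediction and the trait correlate r."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        best_pred, best_true = float('-inf'), 0.0
        for _ in range(n_embryos):
            true = rng.gauss(0, 1)
            # Predictor = r * truth + noise, giving corr(pred, true) = r.
            pred = r * true + (1 - r**2) ** 0.5 * rng.gauss(0, 1)
            if pred > best_pred:
                best_pred, best_true = pred, true
        total += best_true
    return total / trials

# With a perfect predictor, picking 1 of 10 gains ~1.5 SD on average
# (the expected maximum of 10 standard normals); with a weak predictor
# (r = 0.3) the average gain shrinks proportionally, to ~0.46 SD.
print(round(selection_gain(10, 1.0), 2))
print(round(selection_gain(10, 0.3), 2))
```

The point of the sketch is that even a weak predictor yields a systematic average shift once selection is repeated across many families, which is why the social implications arrive well before the science is ‘complete’.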

This clearly raises many big questions. If the poor cannot do the same, then the rich could quickly embed advantages and society could become not only more unequal but also based on biological classes. One response is that if this sort of thing does become possible, then a national health system should fund everybody to do this. (I.e. It would not mandate such a process but it would give everybody a choice of whether to make use of it.) Once the knowledge exists, it is hard to see what will stop some people making use of it and offering services to – at least – the super-rich.

It is vital to separate two things: a) the basic science of genetics and cognition (which must be allowed to develop), and b) the potential technological applications and their social implications. The latter will rightly make people deeply worried, given our history, and clearly require extremely serious public debate. One of the reasons I wrote my essay was to try to stimulate such debate on the biggest – and potentially most dangerous – scientific issues. By largely ignoring such issues, Westminster, Whitehall, and the political media are wasting the time we have to discuss them so technological breakthroughs will be unnecessarily  shocking when they come.

Hsu’s contribution to this research – and his insight when listening to Tao about how to apply a branch of mathematics to a problem – is also a good example of how the more abstract fields of maths and physics often make contributions to the messier study of biology and society. The famous mathematician von Neumann practically invented some new fields outside maths and made many contributions to others. The physicist-mathematician Freeman Dyson recently made a major contribution to Game Theory which had lain unnoticed for decades until he realised that a piece of maths could be applied to uncover new strategies (Google “Dyson zero determinant strategies” and cf. this good piece: http://www.americanscientist.org/issues/id.16112,y.0,no.,content.true,page.1,css.print/issue.aspx).

However, this also raises a difficult issue. There is a great deal of Hsu’s paper – and the subject of IQ and heritability generally – that I do not have the mathematical skills to understand. This will be true of a large fraction of education researchers in education departments – I would bet a large majority. This problem is similar for many other vital issues (and applies to MPs and their advisers) and requires general work on translating such research into forms that can be explained by the media.

Kathryn Asbury also did a session on genes and education, but I went to a conflicting one with George Church, so unfortunately I missed it.

‘Big data’, simulations, and distributed systems [Section 6&7]

The rival to Markram’s Brain Project for mega EU funding was Dirk Helbing (ETH Zurich) and his project for new simulations to aid policy-making. Helbing was also at SciFoo and gave a couple of presentations. I will write separately about this.

Helbing says convincingly: ‘science must become a fifth pillar of democracies, besides legislation, executive, jurisdiction, and the public media’. Many in politics hope that technology will help them control things that now feel out of control. This is unlikely. The amount of data is growing faster than processing power, and the complexity of networked systems grows factorially, so top-down control will become less and less effective.
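A back-of-envelope illustration of why that claim matters: if the complexity of a networked system grows factorially with its size n while processing power grows ‘only’ exponentially (doubling per step), the gap widens without bound. The numbers below are purely illustrative.

```python
import math

# Compare factorial growth (e.g. possible orderings / configurations in a
# fully coupled system of n parts) with exponential growth (processing
# power doubling n times, Moore's-law style).
for n in (10, 20, 30):
    ratio = math.factorial(n) / 2 ** n
    print(f"n={n}: n!/2^n = {ratio:.3g}")

# Each increment of n multiplies the ratio by n/2, so for n >= 3 the
# factorial term pulls away from the exponential one permanently.
assert math.factorial(30) > 2 ** 30
```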

The alternative? ‘Distributed (self-)control, i.e. bottom-up self-regulation’. E.g. Helbing’s team has invented self-regulating traffic lights driven by traffic flows that can ‘outperform the classical top-down control by a conventional traffic center.’
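To make the traffic-light example concrete, here is a toy sketch of bottom-up control: each junction decides locally which approach gets green, using only the queues it can sense, with hysteresis so it does not thrash between phases. This is my own simplification for illustration, not Helbing’s actual algorithm; the function name and threshold are invented.

```python
def choose_phase(queues, current, threshold=2):
    """Keep the current green phase unless another approach's queue
    exceeds the currently served queue by more than `threshold`."""
    best = max(range(len(queues)), key=lambda i: queues[i])
    if queues[best] > queues[current] + threshold:
        return best   # switch: pressure elsewhere is much higher
    return current    # stay: switching has a cost, so be sticky

# The rule responds to measured demand rather than a fixed timetable:
assert choose_phase([9, 2], current=1) == 0  # long queue claims the green
assert choose_phase([3, 2], current=1) == 1  # small gap: keep current phase
```

The point of the hysteresis threshold is the same trade-off any distributed controller faces: react to local demand, but not so eagerly that the system oscillates.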

‘Can we transfer and extend this principle to socio-economic systems? Indeed, we are now developing mechanisms to overcome coordination and cooperation failures, conflicts, and other age-old problems. This can be done with suitably designed social media and sensor networks for real-time measurements, which will eventually weave a Planetary Nervous System. Hence, we can finally realize the dream of self-regulating systems… [S]uitable institutions such as certain social media – combined with suitable reputation systems – can promote other-regarding decision-making. The quick spreading of social media and reputation systems, in fact, indicates the emergence of a superior organizational principle, which creates collective intelligence by harvesting the value of diversity…’

His project’s website is here:

http://www.futurict.eu

I wish MPs and spads in all parties would look at this project and Helbing’s work. It offers technologically viable, theoretically justified mechanisms that could move us beyond the current sterile party debates about delivery of services. We must move from Whitehall control to distributed systems…

Science and politics

Unsurprisingly, there was a lot of grumbling about politicians, regulation, Washington gridlock, bureaucracy and so on.

Much of it is clearly justified. Some working in genetics had stories about how regulations forbid them from telling people about imminently life-threatening medical problems they discover. Others bemoaned the lack of action on asteroid defence and climate change.

Some of these problems are inherently extremely difficult, as I discuss in my essay. On top of this, though, is the problem that many (most?) scientists do not know how to go about changing things.

It was interesting that some very eminent scientists, all with higher IQ than ~100% of those in politics, have naive views about how politics works. In group discussions there was little focused discussion of how they could influence politics better, even though it is clearly a subject they care about very much. (Gershenfeld said that scientists have recently launched a bid to take over various local government functions in Barcelona, which sounds interesting.)

A few times I nearly joined in the discussion but thought it would disrupt things and distract them. In retrospect this may have been a mistake and I should have spoken up. But I am also not articulate, and I worried I would fail to explain their errors and simply waste their time.

I will blog on this issue separately. A few simple observations…

To get things changed in politics, scientists need mechanisms a) to agree priorities, in order to focus their actions on b) roadmaps with specifics. Generalised whining never works. The way to influence politicians is to make it easy for them to fall down certain paths without much thought: give them a general set of goals plus a detailed roadmap they can apply, otherwise they will drift by default into the daily fog of chaos and moonlight.

Scientists also need to be prepared to put their heads above the parapet and face controversy. Many comments amounted to ‘why don’t politicians do the obviously rational thing without me having to take a risk of being embroiled in media horrors’. Sorry guys but this is not how it works.

Many academics are entirely focused on their research and do not want to lose time to politics. This is entirely reasonable. But if you won’t get involved you can have little influence other than lending your name to the efforts of others.

Working in the Department for Education, I saw that very few scientists in England were prepared to face controversy over A Levels (exams at 18) and university entry / undergraduate standards, even though this problem directly affected their own research areas. Many dozens sought me out between 2007 and 2014 to complain about existing systems. I can count on the fingers of one hand those who rolled the dice and did things in the public domain that could have caused them problems. I have heard many scientists complain about media reports, but when I have said ‘write a blog explaining why they’re wrong’, the answer is almost invariably ‘oh, the VC’s office would go mad’. If they won’t put their heads above the parapet on an issue that directly touches their own subject and career, how much are they likely to achieve in moving political debate in areas outside their own fields?

So long as scientists a) want to avoid controversy and b) remain isolated, they cannot have the leverage they want. The way to minimise controversy is to combine in groups – for the evolutionary biologists reading this, think SHOALS! – so that each individual is less exposed. But you will only join a shoal if you agree a common purpose.

I’m going to do a blog on ‘How scientists can learn from Bismarck and Jean Monnet to influence politics’. Monnet avoided immediate battles for power in favour of ‘preparing the future’ – i.e. having plans in his pocket for when crises hit and politicians were desperate. He created the EEC in this way. Just as people find it extremely hard to operationalise the lessons of Thucydides or Bismarck, they do not operationalise the lessons of Monnet. It would be interesting if scientists did this in a disciplined way; in some ways it seems to me vital if we are to avoid various disasters. It is also necessary, however, to expose scientists to the non-scientific factors in play.

Anyway, it would be worth exploring this question: can very high IQ people with certain personality traits (like von Neumann, not like Gödel) learn enough in half a day’s exposure to case studies of successful political action to enable them to change something significant in politics, provided someone else can do most of the admin donkey work? I’m willing to bet the answer is YES. Whether they will then take personal risks by ACTING is another question.

A physicist remarked: ‘we’re bitching about politicians but we can’t even sort out our own field of scientific publishing which is a mess’.

NB. For scientists who haven’t read anything I’ve written before: do not make the mistake of thinking I am defending politicians. If you read other things I’ve written you will see that I have made all the criticisms you have. But that does not mean scientists cannot do much better than they are at influencing policy.

A few general comments

1. It has puzzled me for over a decade that a) one of the few things the UK still has that is world class is Oxbridge, b) we have the example of Silicon Valley and our own history of post-1945 bungling to compare it with (e.g. how the Pentagon treated von Neumann versus how we treated Turing over the development of computer science), yet c) we persistently fail to develop venture capital-based hubs around Oxbridge on the scale they deserve. As I pottered down University Avenue in Palo Alto looking for a haircut, past venture capital offices that can provide billions in start-up investment, I thought: you’ve made a few half-hearted attempts to persuade people to do more on this; when you get home, try again. So I will…

2. It was interesting to see how physicists have core mathematical skills that allow them to grasp fundamentals of other fields without prior study. Watching them reminded me of Mandelbrot’s comment that:

‘It is an extraordinary feature of science that the most diverse, seemingly unrelated, phenomena can be described with the same mathematical tools. The same quadratic equation with which the ancients drew right angles to build their temples can be used today by a banker to calculate the yield to maturity of a new, two-year bond. The same techniques of calculus developed by Newton and Leibniz two centuries ago to study the orbits of Mars and Mercury can be used today by a civil engineer to calculate the maximum stress on a new bridge… But the variety of natural phenomena is boundless while, despite all appearances to the contrary, the number of really distinct mathematical concepts and tools at our disposal is surprisingly small… When we explore the vast realm of natural and human behavior, we find the most useful tools of measurement and calculation are based on surprisingly few basic ideas.’

3. High status people have more confidence in asking basic / fundamental / possibly stupid questions. One can see people thinking ‘I thought that but didn’t say it in case people thought it was stupid and now the famous guy’s said it and everyone thinks he’s profound’. The famous guys don’t worry about looking stupid and they want to get down to fundamentals in fields outside their own.

4. I do not mean this critically but watching some of the participants I was reminded of Freeman Dyson’s comment:

‘I feel it myself, the glitter of nuclear weapons. It is irresistible if you come to them as a scientist. To feel it’s there in your hands. To release the energy that fuels the stars. To let it do your bidding. And to perform these miracles, to lift a million tons of rock into the sky, it is something that gives people an illusion of illimitable power, and it is in some ways responsible for all our troubles, I would say, this is what you might call ‘technical arrogance’ that overcomes people when they see what they can do with their minds.’ 

People talk about rationales for all sorts of things, but looking in their eyes the fundamental driver seems to be: am I right, can I do it, do the patterns in my mind reflect something real? People like this are going to do new things if they can, and they are cleverer than the regulators. As a community, I think it is fair to say, they believe that pushing the barriers of knowledge is right and inevitable – outside odd fields like nuclear weapons research, which is odd because it still requires not only a large collection of highly skilled people but also a lot of money and all sorts of elements that are hard (but not impossible) for a non-state actor to acquire and use without detection. Fifteen years on from the publication by Silicon Valley legend Bill Joy of his famous essay (‘Why the future doesn’t need us’), it is clear that many of the things he feared have advanced, and there remains no coherent government approach or serious international discussion. (I am not suggesting that banning things is generally the way forward.)

5. The only field in which a group of people was openly lobbying for something to be made illegal was autonomous lethal drones. (There is a remorseless logic here: countermeasures against non-autonomous drones (e.g. GPS-spoofing) incentivise one to make one’s drones autonomous. They can move about waiting to spot someone’s face, then destroy them without any need for human input.) However, the discussion confirmed my view that even if a ban might be a good idea, it is doomed, in the short term at least. I wonder what is to stop someone sending a drone swarm across the river and bombing Parliament during PMQs. Given that it will be possible to deploy autonomous drones anonymously, a new era of assassinations may be coming, quite apart from the other implications of drones. And given that one may need a drone swarm to defend against a drone swarm, I can’t see them being outlawed any time soon. (Cf. Suarez’s Kill Decision for a great techno-thriller on the subject.)

(Also, I thought that this was an area where those involved in cutting edge issues could benefit from talking to historians. E.g. my understanding is that we filmed the use of anthrax on a Scottish island and delivered the footage to the Nazis with the message that we would anthrax Germany if they used chemical weapons – i.e. the lack of chemical warfare in WWII was a case of successful deterrence, not international law.)

6. A common comment is: ‘technology X [e.g. in vitro fertilisation] was denounced at the time, but humans adapt to such changes amazingly fast, so technology Y will be just the same’. This is a reasonable argument in some ways, but I cannot help thinking that de-extinction, engineered bio-weapons, or human clones will be perceived as qualitative changes far beyond things like in vitro fertilisation.

7. Daniel Suarez told me what his next techno-thriller is about but if I put it on my blog he will deploy an autonomous drone with face recognition AI to kill me, so I’m keeping quiet. If you haven’t read Daemon, read it – it’s a rare book that makes you laugh out loud about how clever the plot is.

8. Von Neumann was heavily involved not only in the Manhattan Project but also the birth of the modern computer, the creation of the hydrogen bomb, and nuclear strategy. Before his tragic early death, he wrote a brilliant essay about the political problem of dealing with advanced technology which should be compulsory reading for all politicians aspiring to lead. It summarises the main problems that we face – ‘for progress, there is no cure…’

http://features.blogs.fortune.cnn.com/2013/01/13/can-we-survive-technology/

As I said at the top, any participants please tell me where I went wrong, and thanks for such a wonderful weekend.