On the referendum #33: High performance government, ‘cognitive technologies’, Michael Nielsen, Bret Victor, & ‘Seeing Rooms’

‘People, ideas, machines — in that order!’ Colonel Boyd.

‘The main thing that’s needed is simply the recognition of how important seeing is, and the will to do something about it.’ Bret Victor.

‘[T]he transfer of an entirely new and quite different framework for thinking about, designing, and using information systems … is immensely more difficult than transferring technology.’ Robert Taylor, one of the handful of people most responsible for the creation of the internet and personal computing, and an inspiration to Bret Victor.

‘[M]uch of our intellectual elite who think they have “the solutions” have actually cut themselves off from understanding the basis for much of the most important human progress.’ Michael Nielsen, physicist. 

Introduction

This blog looks at an intersection of decision-making, technology, high performance teams and government. It sketches some ideas of physicist Michael Nielsen about cognitive technologies and of computer visionary Bret Victor about the creation of dynamic tools to help understand complex systems and ‘argue with evidence’, such as tools for authoring ‘dynamic documents’, and ‘Seeing Rooms’ for decision-makers — i.e. rooms designed to support decisions in complex environments. It compares normal Cabinet rooms, such as that used in summer 1914 or October 1962, with state-of-the-art Seeing Rooms. There is very powerful feedback between: a) creating dynamic tools to see complex systems deeper (to see inside, see across time, and see across possibilities), thus making it easier to work with reliable knowledge and interactive quantitative models, semi-automating error-correction etc, and b) the potential for big improvements in the performance of political and government decision-making.

It is relevant to Brexit and anybody thinking ‘how on earth do we escape this nightmare’ but 1) these ideas are not at all dependent on whether you support or oppose Brexit, about which reasonable people disagree, and 2) they are generally applicable to how to improve decision-making — for example, they are relevant to problems like ‘how to make decisions during a fast moving nuclear crisis’ which I blogged about recently, or if you are a journalist ‘what future media could look like to help improve debate of politics’. One of the tools Nielsen discusses is a tool to make memory a choice by embedding learning in long-term memory rather than, as it is for almost all of us, an accident. I know from my days working on education reform in government that it’s almost impossible to exaggerate how little those who work on education policy think about ‘how to improve learning’.

Fields make huge progress when they move from stories (e.g Icarus) and authority (e.g ‘witch doctor’) to evidence/experiment (e.g physics, wind tunnels) and quantitative models (e.g design of modern aircraft). Political ‘debate’ and the processes of government are largely what they have always been: conflict over stories and authorities in which almost nobody even tries to keep track of the facts/arguments/models they’re supposedly arguing about, or tries to learn from evidence, or tries to infer useful principles from examples of extreme success/failure. We can see much better than people could in the past how to shift towards processes of government being ‘partially rational discussion over facts and models and learning from the best examples of organisational success’. But one of the most fundamental and striking aspects of government is that practically nobody involved in it has the faintest interest in or knowledge of how to create high performance teams to make decisions amid uncertainty and complexity. This blindness is connected to another fundamental fact: critical institutions (including the senior civil service and the parties) are programmed to fight to stay dysfunctional; they fight to stay closed and avoid learning about high performance; they fight to exclude the most able people.

I wrote about some reasons for this before the referendum (cf. The Hollow Men). The Westminster and Whitehall response was along the lines of ‘natural party of government’, ‘Rolls Royce civil service’ blah blah. But the fact that Cameron, Heywood (the most powerful civil servant) et al did not understand many basic features of how the world works is why I and a few others gambled on the referendum — we knew that the systemic dysfunction of our institutions and the influence of grotesque incompetents provided an opportunity for extreme leverage. 

Since then, after three years in which the parties, No10 and the senior civil service have imploded (after doing the opposite of what Vote Leave said should happen on every aspect of the negotiations), one thing has held steady — Insiders refuse to ask basic questions about the reasons for this implosion, such as: why did Heywood not even put together a sane regular weekly meeting schedule, and why did ministers not even notice all the tricks with agendas/minutes etc? How are decisions really made in No10? Why are so many of the people below some cognitive threshold for understanding basic concepts (cf. the current GATT A24 madness)? What does it say about Westminster that both the Adonis-Remainers and the Cash-ERGers have become more detached from reality while a large section of the best-educated have effectively run information operations against their own brains to convince themselves of fairy stories about Facebook, Russia and Brexit…

It’s a mix of amusing and depressing — but not surprising to me — to hear Heywood explain HERE how the British state decided it couldn’t match the resources of a single multinational company or a single university in funding people to think about what the future might hold, which is linked to his failure to make serious contingency plans for losing the referendum. And of course Heywood claimed after the referendum that we didn’t need to worry about the civil service because on project management it has ‘nothing to learn’ from the best private companies. The elevation of Heywood in the pantheon of SW1 is the elevation of the courtier-fixer at the expense of the thinker and the manager — the universal praise for him recently is a beautifully eloquent signal that those in charge are the blind leading the blind and SW1 has forgotten skills of high value, the skills of public servants such as Alanbrooke or Michael Quinlan.

This blog is hopefully useful for some of those thinking about a) improving government around the world and/or b) ‘what comes after the coming collapse and reshaping of the British parties, and how to improve drastically the performance of critical institutions?’

Some old colleagues have said ‘Don’t put this stuff on the internet, we don’t want the second referendum mob looking at it.’ Don’t worry! Ideas like this have to be forced down people’s throats practically at gunpoint. Silicon Valley itself has barely absorbed Bret Victor’s ideas so how likely is it that there will be a rush to adopt them by the world of Blair and Grieve?! These guys can’t tell the difference between courtier-fixers and people with models for truly effective action like General Groves (HERE). Not one in a thousand will read a 10,000 word blog on the intersection of management and technology and the few who do will dismiss it as the babbling of a deluded fool, they won’t learn any more than they learned from the 2004 referendum or from Vote Leave. And if I’m wrong? Great. Things will improve fast and a second referendum based on both sides applying lessons from Bret Victor would be dynamite.

NB. Bret Victor’s project, Dynamic Land, is a non-profit. For an amount of money that a government department like the Department for Education loses weekly without any minister realising it’s lost (in the millions per week in my experience because the quality of financial control is so bad), it could provide crucial funding for Victor and help itself. Of course, any minister who proposed such a thing would be told by officials ‘this is illegal under EU procurement law and remember minister that we must obey EU procurement law forever regardless of Brexit’ — something I know from experience officials say to ministers whether it is legal or not when they don’t like something. And after all, ministers meekly accepted the Kafka-esque order from Heywood to prioritise duties of goodwill to the EU under A50 over preparations to leave A50, so habituated had Cameron’s children become to obeying the real deputy prime minister…

Below are 4 sections:

  1. The value found in intersections of fields
  2. Some ideas of Bret Victor
  3. Some ideas of Michael Nielsen
  4. A summary

*

1. Extreme value is often found in the intersection of fields

The legendary Colonel Boyd (he of the ‘OODA loop’) would shout at audiences ‘People, ideas, machines — in that order.‘ Fundamental political problems we face require large improvements in the quality of all three and, harder, systems to integrate all three. Such improvements require looking carefully at the intersection of roughly five entangled areas of study. Extreme value is often found at such intersections.

  • Explore what we know about the selection, education and training of people for high performance (individual/team/organisation) in different fields. We should be selecting people much deeper in the tails of the ability curve — people who are +3 (~1:1,000) or +4 (~1:30,000) standard deviations above average on intelligence, relentless effort, operational ability and so on (now practically entirely absent from the ’50 most powerful people in Britain’). We should train them in the general art of ‘thinking rationally’ and making decisions amid uncertainty (e.g Munger/Tetlock-style checklists, exercises on the SlateStarCodex blog). We should train them in the practical reasons for normal ‘mega-project failure’ and in case studies such as the Manhattan Project (General Groves), ICBMs (Bernard Schriever), Apollo (George Mueller), and ARPA-PARC (Robert Taylor) that illustrate how the ‘unrecognised simplicities’ of high performance bring extreme success, and we should make them work on such projects before they are responsible for billions, rather than putting people like Cameron in charge (after no experience other than bluffing through PPE then PR). NB. China’s leaders have studied these episodes intensely while American and British institutions have actively ‘unlearned’ these lessons.
  • Explore the frontiers of the science of prediction across different fields from physics to weather forecasting to finance and epidemiology. For example, ideas from physics about early warning systems in physical systems have application in many fields, including questions like: to what extent is it possible to predict which news will persist over different timescales, or predict wars from news and social media? There is interesting work combining game theory, machine learning, and Red Teams to predict security threats and improve penetration testing (physical and cyber). The Tetlock/IARPA project showed dramatic performance improvements in political forecasting are possible, contra what people such as Kahneman had thought possible. A recent Nature article by Duncan Watts explained fundamental problems with the way normal social science treats prediction and suggested new approaches — which have been almost entirely ignored by mainstream economists/social scientists. There is vast scope for applying ideas and tools from the physical sciences and data science/AI — largely ignored by mainstream social science, political parties, government bureaucracies and media — to social/political/government problems (as Vote Leave showed in the referendum, though this has been almost totally obscured by all the fake news: clue — it was not ‘microtargeting’).
  • Explore technology and tools. For example, Bret Victor’s work and Michael Nielsen’s work on cognitive technologies. The edge of performance in politics/government will be defined by teams that can combine the ancient ‘unrecognised simplicities of high performance’ with edge-of-the-art technology. No10 is decades behind the pace in old technologies like TV, doesn’t understand simple tools like checklists, and is nowhere with advanced technologies.
  • Explore the frontiers of communication (e.g crisis management, applied psychology). Technology enables people to improve communication with unprecedented speed, scale and iterative testing. It also allows people to wreak chaos with high leverage. The technologies are already beyond the ability of traditional centralised government bureaucracies to cope with. They will develop rapidly such that most such centralised bureaucracies lose more and more control while a few high performance governments use the leverage they bring (cf. China’s combination of mass surveillance, AI, genetic identification, cellphone tracking etc as they desperately scramble to keep control). The better educated think that psychological manipulation is something that happens to ‘the uneducated masses’ but they are extremely deluded — in many ways people like FT pundits are much easier to manipulate, their education actually makes them more susceptible to manipulation, and historically they are the ones who fall for things like Russian fake news (cf. the Guardian and New York Times on Stalin/terror/famine in the 1930s) just as now they fall for fake news about fake news. Despite the centrality of communication to politics it is remarkable how little attention Insiders pay to what works — never mind the question ‘what could work much better?’. The fact that so much of the media believes total rubbish about social media and Brexit shows that the media is incapable of analysing the intersection of politics and technology but, although it is obviously bad that the media disinforms the public, the only rational planning assumption is that this problem will continue and even get worse. The media cannot explain either the use of TV or traditional polling well; these have been extremely important for over 70 years and there is no trend towards improvement, so a sound planning assumption is surely that the media will do even worse with new technologies and data science.
This will provide large opportunities for good and evil. A new approach able to adapt to the environment an order of magnitude faster than now would disorient political opponents (desperately scrolling through Twitter) to such a degree — in Boyd’s terms it would ‘collapse their OODA loops’ — that it could create crucial political space for focus on the extremely hard process of rewiring government institutions which now seems impossible for Insiders to focus on given their psychological/operational immersion in the hysteria of 24 hour rolling news and the constant crises generated by dysfunctional bureaucracies.
  • Explore how to re-program political/government institutions at the apex of decision-making authority so that a) people are more incentivised to optimise things we want them to optimise, like error-correction and predictive accuracy, and less incentivised to optimise bureaucratic process, prestige, and signalling as our institutions now do; b) institutions are incentivised to build high performance teams rather than make this practically illegal at the apex of government; and c) we have ‘immune systems’ based on decentralisation and distributed control to minimise the inevitable failures of even the best people and teams.

Example 1: Red Teams and pre-mortems can combat groupthink and normal cognitive biases but they are practically nowhere in the formal structure of governments. There is huge scope for a Parliament-mandated small and extremely elite Red Team operating next to, and in some senses above, the Cabinet Office to ensure diversity of opinions, fight groupthink and other standard biases, make sure lessons are learned and so on. Cost: a few million that it would recoup within weeks by stopping blunders.

Example 2: prediction tournaments/markets could improve policy and project management, with people able to ‘short’ official delivery timetables — imagine being able to short Grayling’s transport announcements, for example. In many areas new markets could help — e.g markets to allow shorting of house prices to dampen bubbles, as Chris Dillow and others have suggested. The way in which the IARPA/Tetlock work has been ignored in SW1 is proof that MPs and civil servants are not actually interested in — or incentivised to be interested in — who is right, who is actually an ‘expert’, and so on. There are tools available if new people do want to take these things seriously. Cost: a few million at most, possibly thousands, that it would recoup within a year by stopping blunders.
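The Tetlock/IARPA tournaments mentioned above rank forecasters by proper scoring rules rather than by confidence or credentials. A minimal sketch (my illustration, not IARPA's code) of the standard Brier score for binary forecasts shows why this exposes who is actually an ‘expert’:

```python
# The Brier score: mean squared error between stated probabilities and what
# actually happened. Lower is better; permanently hedging at 0.5 is penalised,
# so forecasters who are confident AND right rise to the top of the table.

def brier_score(forecasts, outcomes):
    """forecasts: probabilities in [0, 1]; outcomes: 1 if it happened, else 0."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

calibrated = brier_score([0.9, 0.8, 0.1], [1, 1, 0])  # confident and right
hedger = brier_score([0.5, 0.5, 0.5], [1, 1, 0])      # never commits
print(f"calibrated forecaster: {calibrated:.3f}")
print(f"permanent hedger:      {hedger:.3f}")
```

Run over a department's delivery predictions, a running table of such scores would make ‘who is right’ a matter of record rather than status.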

Example 3: we need to consider projects that could bootstrap new international institutions that help solve more general coordination problems such as the risk of accidental nuclear war. The most obvious example of a project like this I can think of is a manned international lunar base which would be useful for a) basic science, b) the practical purposes of building urgently needed near-Earth infrastructure for space industrialisation, and c) to force the creation of new practical international institutions for cooperation between Great Powers. George Mueller’s team that put man on the moon in 1969 developed a plan to do this that would have been built by now if their plans had not been tragically abandoned in the 1970s. Jeff Bezos is explicitly trying to revive the Mueller vision and Britain should be helping him do it much faster. The old institutions like the UN and EU — built on early 20th Century assumptions about the performance of centralised bureaucracies — are incapable of solving global coordination problems. It seems to me more likely that institutions with qualities we need are much more likely to emerge out of solving big problems than out of think tank papers about reforming existing institutions. Cost = 10s/100s of billions, return = trillions, or near infinite if shifting our industrial/psychological frontiers into space drastically reduces the chances of widespread destruction.

A) Some fields have fantastic predictive models and there is a huge amount of high quality research, though there is a lot of low-hanging fruit in bringing methods from one field to another.

B) We know a lot about high performance including ‘systems management’ for complex projects but very few organisations use this knowledge and government institutions overwhelmingly try to ignore and suppress the knowledge we have.

C) Some fields have amazing tools for prediction and visualisation but very few organisations use these tools and almost nobody in government does (where colour photocopying is a major challenge).

D) We know a lot about successful communication but very few organisations use this knowledge and most base action on false ideas. E.g political parties spend millions on spreading ideas but almost nothing on thinking about whether the messages are psychologically compelling or whether their methods/distribution work, and TV companies spend billions on news but almost nothing on understanding what science says about how to convey complex ideas — hence you see massively overpaid presenters like Evan Davis babbling metaphors like ‘economic takeoff’ in front of an airport while his crew films a plane ‘taking off’, or ‘the economy down the plughole’ with pictures of — a plughole.

E) Many thousands worldwide are thinking about all sorts of big government issues but very few can bring them together into coherent plans that a government can deliver and there is almost no application of things like Red Teams and prediction markets. E.g it is impossible to describe the extent to which politicians in Britain do not even consider ‘the timetable and process for turning announcement X into reality’ as something to think about — for people like Cameron and Blair the announcement IS the only reality and ‘management’ is a dirty word for junior people to think about while they focus on ‘strategy’. As I have pointed out elsewhere, it is fascinating that elite business schools have been collecting billions in fees to teach their students WRONGLY that operational excellence is NOT a source of competitive advantage, so it is no surprise that politicians and bureaucrats get this wrong.

But I can see almost nobody integrating the very best knowledge we have about A+B+C+D with E and I strongly suspect there are trillion dollar bills lying on the ground that could be grabbed for trivial cost — trillion dollar bills that people with power are not thinking about and are incentivised not to think about. I might be wrong but I would remind readers that Vote Leave was itself a bet on this proposition being right and I think its success should make people update their beliefs on the competence of elite political institutions and the possibilities for improvement.

Here I want to explore one set of intersections — the ideas of Bret Victor and Michael Nielsen.

*

2. Bret Victor: Cognitive technologies, dynamic tools, interactive quantitative models, Seeing Rooms — making it as easy to insert facts, data, and models in political discussion as it is to insert emoji 

In the 1960s visionaries such as Joseph Licklider, Robert Taylor and Doug Engelbart developed a vision of networked interactive computing that provided the foundation not just for new technologies (the internet, PC etc) but for whole new industries. Licklider, Sutherland, Taylor et al provided a model (ARPA) for how science funding can work. Taylor provided a model (PARC) of how to manage a team of extremely talented people who turned a profound vision into reality. The original motivation for the vision of networked interactive computing was to help humans make good decisions in a complex world — or, ‘augmenting human intelligence’ and ‘man-machine symbiosis’. This story shows how to make big improvements in the world with very few resources if they are structured right: PARC involved ~25 key people and tens of millions over roughly a decade and generated trillions of dollars in value. If interested in the history and the super-productive processes behind the success of ARPA-PARC read THIS.

It’s fascinating that in many ways the original 1960s Licklider vision has still not been implemented. The Silicon Valley ecosystem developed parts of the vision but not others for complex reasons I don’t understand (cf. The Future of Programming). One of those who is trying to implement parts of the vision that have not been implemented is Bret Victor. Bret Victor is a rare thing: a genuine visionary in the computing world according to some of those ‘present at the creation’ of ARPA-PARC such as Alan Kay. His ideas lie at critical intersections between fields sketched above. Watch talks such as Inventing on Principle and Media for Thinking the Unthinkable and explore his current project, Dynamic Land in Berkeley.

Victor has described, and now demonstrates in Dynamic Land, how existing tools fail and what is possible. His core principle is that creators need an immediate connection to what they are creating. Current programming languages and tools are mostly based on very old ideas from before computers even had screens, when there was essentially no interactivity — they date from the era of punched cards. They do not allow users to interact dynamically. New dynamic tools enable us to think previously unthinkable thoughts and allow us to see and interact with complex systems: to see inside, see across time, and see across possibilities.

I strongly recommend spending a few days exploring his whole website but I will summarise below his ideas on two things:

  1. His ideas about how to build new dynamic tools for working with data and interactive models.
  2. His ideas about transforming the physical spaces in which teams work so that dynamic tools are embedded in their environment — people work inside a tool.

Applying these ideas would radically improve how people make decisions in government and how the media reports politics/government.

Language and writing were cognitive technologies created thousands of years ago which enabled us to think previously unthinkable thoughts. Mathematical notation did the same over the past 1,000 years. For example, take a mathematics problem described by the 9th Century mathematician al-Khwarizmi (who gave us the word algorithm):

[Image: al-Khwarizmi’s problem, stated entirely in words]

Once modern notation was invented, this could be written instead as:

x² + 10x = 39
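The gain is not just compactness: the notation makes the solution mechanical. Al-Khwarizmi solved such problems by ‘completing the square’, which in modern notation takes a few lines of symbol manipulation:

```latex
x^2 + 10x = 39
\;\Longrightarrow\; x^2 + 10x + 25 = 64
\;\Longrightarrow\; (x+5)^2 = 64
\;\Longrightarrow\; x + 5 = 8
\;\Longrightarrow\; x = 3
```

(taking the positive root, the only kind al-Khwarizmi accepted).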

Michael Nielsen uses a similar analogy. Descartes and Fermat demonstrated that equations can be represented on a diagram and a diagram can be represented as an equation. This was a new cognitive technology, a new way of seeing and thinking: algebraic geometry. Changes to the ‘user interface’ of mathematics were critical to its evolution and allowed us to think unthinkable thoughts (Using Artificial Intelligence to Augment Human Intelligence, see below).

[Image: equations represented as diagrams and diagrams as equations]

Similarly, in the 18th Century data graphics were created to display trade figures. Before this, people could only read huge tables. This is the first data graphic:

[Image: the first data graphic, an 18th Century chart of trade figures]

The Jedi of data visualisation, Edward Tufte, describes this extraordinary graphic of Napoleon’s invasion of Russia as ‘probably the best statistical graphic ever drawn’. It shows the losses of Napoleon’s army: starting from the Polish-Russian border, the thick band shows the size of the army at each position; the path of Napoleon’s winter retreat from Moscow is shown by the dark lower band, which is tied to temperature and time scales (you can see some of the disastrous icy river crossings famously described by Tolstoy). NB. The Cabinet makes life-and-death decisions now with technology far inferior to this graphic from the 19th Century (see below).

[Image: the graphic of Napoleon’s invasion of Russia]

If we look at contemporary scientific papers they represent extremely compressed information conveyed through a very old fashioned medium, the scientific journal. Printed journals are centuries old but the ‘modern’ internet versions are usually similarly static. They do not show the behaviour of systems in a visual interactive way so we can see the connections between changing values in the models and changes in behaviour of the system. There is no immediate connection. Everything is pretty much the same as a paper and pencil version of a paper. In Media for Thinking the Unthinkable, Victor shows how dynamic tools can transform normal static representations so systems can be explored with immediate feedback. This dramatically shows how much more richly and deeply ideas can be explored. With Victor’s tools we can interact with the systems described and immediately grasp important ideas that are hidden in normal media.

Picture: the very dense writing of a famous paper (by chance the paper itself is at the intersection of politics/technology and Watts has written excellent stuff on fake news but has been ignored because it does not fit what ‘the educated’ want to believe)

[Image: a page of the paper in its normal dense format]

Picture: the same information presented differently. Victor’s tools make the information less compressed so there’s less work for the brain to do ‘decompressing’. They not only provide visualisations: the little ‘sliders’ over the graphics are buttons you can drag to interact with the data, so you see the connection between changing data and changing model. A dynamic tool transforms a scientific paper from ‘pencil and paper’ technology to modern interactive technology.

[Image: the same paper presented with interactive graphics and sliders]

Victor’s essay on climate change

Victor explains in detail how policy analysis and public debate of climate change could be transformed. Leave aside the subject matter: of course it’s extremely important, anybody interested in this issue will gain from reading the whole thing, and it would be great material for a school to use for an integrated science/economics/programming/politics project. But my focus is on his ideas about tools and thinking, not the specific subject matter.

Climate change is a great example to consider because it involves a) a lot of deep scientific knowledge, b) complex computer modelling which is understood in detail by a tiny fraction of 1% (and almost none of the social science trained ‘experts’ who are largely responsible for interpreting such models for politicians/journalists, cf HERE for the science of this), c) many complex political, economic, cultural issues, d) very tricky questions about how policy is discussed in mainstream culture, and e) the problem of how governments try to think about and act on important, complex, and long-term problems. Scientific knowledge is crucial but it cannot by itself answer the question: what to do? The ideas BV describes to transform the debate on climate change apply generally to how we approach all important political issues.

In the section Languages for technical computing, BV describes his overall philosophy (if you look at the original you will see dynamic graphics to help make each point but I can’t make them play on my blog — a good example of the failure of normal tools!):

‘The goal of my own research has been tools where scientists see what they’re doing in realtime, with immediate visual feedback and interactive exploration. I deeply believe that a sea change in invention and discovery is possible, once technologists are working in environments designed around:

  • ubiquitous visualization and in-context manipulation of the system being studied;
  • actively exploring system behavior across multiple levels of abstraction in parallel;
  • visually investigating system behavior by transforming, measuring, searching, abstracting;
  • seeing the values of all system variables, all at once, in context;
  • dynamic notations that embed simulation, and show the effects of parameter changes;
  • visually improvising special-purpose dynamic visualizations as needed.’
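Two of these bullets — seeing the values of all system variables at once, and seeing the effects of parameter changes — can be given a crude plain-code flavour, a world away from Victor's actual environments but enough to show the contrast with a single opaque output. This is my own toy sketch, not anything from his essay:

```python
# A toy dynamical system (the logistic map). Instead of reporting one opaque
# final number, we expose every state variable at every step, across several
# parameter choices, so behaviour across time and across possibilities is
# visible at once. Victor's environments make this live and visual; this is
# the barest textual equivalent.

def simulate(r, x0=0.1, steps=6):
    """Yield (step, x) for the logistic map x <- r*x*(1-x)."""
    x = x0
    for t in range(steps):
        yield t, x
        x = r * x * (1 - x)

for r in (1.5, 2.5, 3.9):  # a slice of the parameter space
    trajectory = {t: round(x, 3) for t, x in simulate(r)}
    print(f"r={r}: {trajectory}")
```

In Victor's tools the equivalent of the `r` loop is a slider and the trajectories redraw as you drag it, which is the ‘immediate connection’ his principle demands.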

He then describes how the community of programming language developers have failed to create appropriate languages for scientists, which I won’t go into but which is fascinating.

He then describes the problem of how someone can usefully get to grips with a complex policy area involving technological elements.

‘How can an eager technologist find their way to sub-problems within other people’s projects where they might have a relevant idea? How can they be exposed to process problems common across many projects?… She wishes she could simply click on “gas turbines”, and explore the space:

  • What are open problems in the field?
  • Who’s working on which projects?
  • What are the fringe ideas?
  • What are the process bottlenecks?
  • What dominates cost? What limits adoption?
  • Why make improvements here? How would the world benefit?

‘None of this information is at her fingertips. Most isn’t even openly available — companies boast about successes, not roadblocks. For each topic, she would have to spend weeks tracking down and meeting with industry insiders. What she’d like is a tool that lets her skim across entire fields, browsing problems and discovering where she could be most useful…

‘Suppose my friend uncovers an interesting problem in gas turbines, and comes up with an idea for an improvement. Now what?

  • Is the improvement significant?
  • Is the solution technically feasible?
  • How much would the solution cost to produce?
  • How much would it need to cost to be viable?
  • Who would use it? What are their needs?
  • What metrics are even relevant?

‘Again, none of this information is at her fingertips, or even accessible. She’d have to spend weeks doing an analysis, tracking down relevant data, getting price quotes, talking to industry insiders.

‘What she’d like are tools for quickly estimating the answers to these questions, so she can fluidly explore the space of possibilities and identify ideas that have some hope of being important, feasible, and viable.

‘Consider the Plethora on-demand manufacturing service, which shows the mechanical designer an instant price quote, directly inside the CAD software, as they design a part in real-time. In what other ways could inventors be given rapid feedback while exploring ideas?’

Victor then describes a debate over a piece of public policy. Ideas were put forward. Everybody argued.

‘Who to believe? The real question is — why are readers and decision-makers forced to “believe” anything at all? Many claims made during the debate offered no numbers to back them up. Claims with numbers rarely provided context to interpret those numbers. And never — never! — were readers shown the calculations behind any numbers. Readers had to make up their minds on the basis of hand-waving, rhetoric, bombast.’

And there was no progress because nobody could really learn from the debate or even just be clear about exactly what was being proposed. Sound familiar?!! This is absolutely normal and Victor’s description applies to over 99% of public policy debates.

Victor then describes how you can take the policy argument he had sketched and change its nature. Instead of discussing words and stories, DISCUSS INTERACTIVE MODELS. 

Here you need to click to the original to understand the power of what he is talking about as he programs a simple example.

‘The reader can explore alternative scenarios, understand the tradeoffs involved, and come to an informed conclusion about whether any such proposal could be a good decision.

‘This is possible because the author is not just publishing words. The author has provided a model — a set of formulas and algorithms that calculate the consequences of a given scenario… Notice how the model’s assumptions are clearly visible, and can even be adjusted by the reader.

‘Readers are thus encouraged to examine and critique the model. If they disagree, they can modify it into a competing model with their own preferred assumptions, and use it to argue for their position. Model-driven material can be used as grounds for an informed debate about assumptions and tradeoffs.

‘Modeling leads naturally from the particular to the general. Instead of seeing an individual proposal as “right or wrong”, “bad or good”, people can see it as one point in a large space of possibilities. By exploring the model, they come to understand the landscape of that space, and are in a position to invent better ideas for all the proposals to come. Model-driven material can serve as a kind of enhanced imagination.’
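The idea of ‘arguing with models’ can be made concrete with a toy sketch. This is not Victor’s code, and every number in it is an illustrative placeholder rather than real data — the point is only the structure: the author publishes the formulas and assumptions behind a claim, so a sceptical reader can change an assumption and recompute the conclusion instead of merely ‘believing’ or ‘disbelieving’ it.

```python
# A toy "model-driven" policy claim, in the spirit of Victor's example.
# This is NOT Victor's code and every number below is an illustrative
# placeholder, not real data. The claim is backed by explicit formulas
# and assumptions that a reader can inspect and change.

def co2_saved_tonnes(assumptions):
    """Annual CO2 saved if some commuters switch from car to bus."""
    a = assumptions
    switched = a["commuters"] * a["share_switching"]
    km = switched * a["km_per_commuter_per_year"]
    grams_saved = km * (a["car_g_co2_per_km"] - a["bus_g_co2_per_km"])
    return grams_saved / 1e6  # grams -> tonnes

# The author's published assumptions (all hypothetical):
baseline = {
    "commuters": 1_000_000,
    "share_switching": 0.05,
    "km_per_commuter_per_year": 5_000,
    "car_g_co2_per_km": 180,
    "bus_g_co2_per_km": 80,
}
print(co2_saved_tonnes(baseline))  # the headline claim: 25000.0 tonnes/year

# A sceptical reader thinks only 1% of commuters would switch.
# Instead of arguing rhetorically, they edit one assumption and recompute:
sceptic = dict(baseline, share_switching=0.01)
print(co2_saved_tonnes(sceptic))  # 5000.0 tonnes/year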

Victor then looks at some standard materials from those encouraging people to take personal action on climate change and concludes:

‘These are lists of proverbs. Little action items, mostly dequantified, entirely decontextualized. How significant is it to “eat wisely” and “trim your waste”? How does it compare to other sources of harm? How does it fit into the big picture? How many people would have to participate in order for there to be appreciable impact? How do you know that these aren’t token actions to assuage guilt?

‘And why trust them? Their rhetoric is catchy, but so is the horrific “denialist” rhetoric from the Cato Institute and similar. When the discussion is at the level of “trust me, I’m a scientist” and “look at the poor polar bears”, it becomes a matter of emotional appeal and faith, a form of religion.

‘Climate change is too important for us to operate on faith. Citizens need and deserve reading material which shows context — how significant suggested actions are in the big picture — and which embeds models — formulas and algorithms which calculate that significance, for different scenarios, from primary-source data and explicit assumptions.’

Even the supposed ‘pros’ — insiders at the top of research fields in politically relevant areas — have to scramble around typing words into search engines, crawling around government websites, and scrolling through PDFs. Reliable data takes ages to find. Reliable models are even harder to find. Vast amounts of useful data and models exist but they cannot be found and used effectively because we lack the tools.

‘Authoring tools designed for arguing from evidence’

Why don’t we conduct public debates with interactive models, as his toy example does? Why aren’t paragraphs in supposedly serious online newspapers written like this? Partly because of the culture, including the education of those who run governments and media organisations, but also because the resources for creating this sort of material don’t exist.

‘In order for model-driven material to become the norm, authors will need data, models, tools, and standards…

‘Suppose there were good access to good data and good models. How would an author write a document incorporating them? Today, even the most modern writing tools are designed around typing in words, not facts. These tools are suitable for promoting preconceived ideas, but provide no help in ensuring that words reflect reality, or any plausible model of reality. They encourage authors to fool themselves, and fool others.

‘Imagine an authoring tool designed for arguing from evidence. I don’t mean merely juxtaposing a document and reference material, but literally “autocompleting” sourced facts directly into the document. Perhaps the tool would have built-in connections to fact databases and model repositories, not unlike the built-in spelling dictionary. What if it were as easy to insert facts, data, and models as it is to insert emoji and cat photos?

‘Furthermore, the point of embedding a model is that the reader can explore scenarios within the context of the document. This requires tools for authoring “dynamic documents” — documents whose contents change as the reader explores the model. Such tools are pretty much non-existent.’

These sorts of tools for authoring dynamic documents should be seen as foundational technology like the integrated circuit or the internet.

‘Foundational technology appears essential only in retrospect. Looking forward, these things have the character of “unknown unknowns” — they are rarely sought out (or funded!) as a solution to any specific problem. They appear out of the blue, initially seem niche, and eventually become relevant to everything.

‘They may be hard to predict, but they have some common characteristics. One is that they scale well. Integrated circuits and the internet both scaled their “basic idea” from a dozen elements to a billion. Another is that they are purpose-agnostic. They are “material” or “infrastructure”, not applications.’

Victor ends with a very potent comment — that much of what we observe is ‘rearranging app icons on the deck of the Titanic’. Commercial incentives drive people towards trying to create ‘the next Facebook’, not towards fixing big social problems. I will address this below.

If you are an arts graduate interested in these subjects but not expert (like me), here is an example that will be more familiar… If you look at any big historical subject, such as ‘why/how did World War I start?’, and examine the leading scholarship carefully, you will see that all the leading books on such subjects provide false chronologies and mix facts with errors such that it is impossible for a careful reader to be sure about crucial things. It is routine for famous historians to write that ‘X happened because Y’ when Y happened after X. Part of the problem is culture but this could potentially be improved by tools. A very crude example: why doesn’t Kindle make it possible for readers to log factual errors, with users’ reliability ranked by others, so authors can easily check potential errors and fix them in online versions of books? Even better, this could be part of a larger system to develop gold standard chronologies with each ‘fact’ linked to original sources and so on. This would improve the reliability of historical analysis and it would create an ‘anti-entropy’ ratchet — as things stand, errors spread across all the books on a subject and there is no mechanism to reverse this…

 

‘Seeing Rooms’: macro-tools to help make decisions

Victor also discusses another fundamental issue: the rooms/spaces in which most modern work and thinking occurs are not well-suited to the problems being tackled and we could do much better. Victor is addressing advanced manufacturing and robotics but his argument applies just as powerfully, perhaps more powerfully, to government analysis and decision-making.

Now, ‘software based tools are trapped in tiny rectangles’. We have very sophisticated tools but they all sit on computer screens on desks, just as you are reading this blog.

In contrast, ‘Real-world tools are in rooms where workers think with their bodies.’ Traditional crafts occur in spatial environments designed for that purpose. Workers walk around, use their hands, and think spatially. ‘The room becomes a macro-tool they’re embedded inside, an extension of the body.’ These rooms act like tools to help them understand their problems in detail and make good decisions.

Picture: rooms designed for the problems being tackled


The wave of 3D printing has produced ‘maker rooms’ and ‘Fab Labs’: spaces where people work with a shared set of tools that would be too expensive for any individual. The room is itself a network of tools. This approach is revolutionising manufacturing.

Why is this useful?

‘Modern projects have complex behavior… Understanding requires seeing and the best seeing tools are rooms.’ This is obviously particularly true of politics and government.

Here is a photo of a recent NASA mission control room. The room is set up so that all relevant people can see relevant data and models at different scales and preserve a common picture of what is important. NASA pioneered thinking about such rooms and the technology and tools needed in the 1960s.


Here are pictures of two control rooms for power grids.


Here is a panoramic photo of the unified control centre for the Large Hadron Collider – the biggest of ‘big data’ projects. Notice details like how they have removed all pillars so nothing interrupts visual communication between teams.


Now contrast these rooms with rooms from politics.

Here is the Cabinet room. I have been in this room. There are effectively no tools. In the 19th century Lord Salisbury at least used the fireplace as a tool: he would walk around the table, gather sensitive papers, and burn them at the end of meetings. The fire is now blocked. The only other tool, the clock, did not work when I was last there. Over a century, the physical space in which politicians make decisions affecting potentially billions of lives has deteriorated.

British Cabinet room practically as it was July 1914


Here are JFK and ExComm making decisions during the Cuban Missile Crisis — a crisis that moved much faster than July 1914, potentially compressing decisions leading to the destruction of global civilisation into just minutes.


Here is the only photo in the public domain of the room known as ‘COBRA’ (Cabinet Office Briefing Room) where a shifting set of characters at the apex of power in Britain meet to discuss crises.


Notice how poor it is compared to NASA, the LHC etc. There has clearly been no attempt to learn from our best examples about how to use the room as a tool. The screens at the end are a late add-on to a room that is essentially indistinguishable from the room in which Prime Minister Asquith sat in July 1914, doodling notes to his girlfriend as he got bored. I would be surprised if the video technology used is as good as what is commercially available more cheaply (the justification will be ‘security’), and I would bet that many of the decisions about the operation of this room would not survive scrutiny from experts in how to construct such rooms.

I have not attended a COBRA meeting but I’ve spoken to many who have. The meetings, as you would expect looking at this room, are often normal political meetings. That is:

  • aims are unclear,
  • assumptions are not made explicit,
  • there is no use of advanced tools,
  • there is no use of quantitative models,
  • discussions are often dominated by lawyers so many actions are deemed ‘unlawful’ without proper scrutiny (and this device is routinely used by officials to stop discussion of options they dislike for non-legal reasons),
  • there is constant confusion between policy, politics and PR, and then the cast disperses without clarity about what was discussed and agreed.

Here is a photo of the American equivalent – the Situation Room.


It has a few more screens but the picture is essentially the same: there are no interactive tools beyond the ability to speak to and see someone at a distance, which was invented back in the 1950s/1960s in the pioneering programs of SAGE (automated air defence) and Apollo (man on the moon). Tools to help thinking in powerful ways are not taken seriously. The room, and the way decisions are made in it, is largely the same as during the Cuban Missile Crisis. In some ways the use of technology now makes management worse, as it encourages Presidents and their staff to try to micromanage things they should not be managing, often in response to, or fear of, the media.

Individual ministers’ offices are also hopeless. The computers are old and rubbish. Even colour printing is often a battle. Walls are for kids’ pictures. In the DfE, officials resented even giving us paper maps of where schools were and only did it when bullied by the private office. It was impossible for officials to work on interactive documents. They had no technology even for sharing documents in a way that was then (2011) normal even in low-performing organisations. Using Google Docs was ‘against the rules’. (I’m told this has slightly improved.) The whole structure of ‘submissions’ and ‘red boxes’ is hopeless. It is extremely bureaucratic and slow. It prevents serious analysis of quantitative models. It reinforces the lack of proper scientific thinking in policy analysis. It guarantees confusion as ministers scribble notes and private offices interpret rushed comments by exhausted ministers after dinner, instead of proper face-to-face meetings that get to the heart of problems and resolve conflicts quickly. The whole approach reinforces the abject failure of the senior civil service to think about high performance project management.

Of course, most of the problems with the standards of policy and management in the civil service are low or no-tech problems — they involve the ‘unrecognised simplicities’ that are independent of, and prior to, the use of technology — but all these things negatively reinforce each other. Anybody who wants to do things much better is scuppered by Whitehall’s entangled disaster zone of personnel, training, management, incentives and tools.

*

Dynamic Land: ‘amazing’

I won’t go into this in detail. Dynamic Land is in a building in Berkeley. I visited last year. It is Victor’s attempt to turn the ideas above into a sort of living laboratory. It is a large connected set of rooms that have computing embedded in surfaces. For example, you can scribble equations on a bit of paper, and cameras in the ceiling read your scribbles automatically, turn them into code, and execute them — for example, by producing graphics. You can then physically interact with models that appear on the table or wall while the cameras watch your hands and instantly turn gestures into new code and change the graphics or whatever you are doing. Victor has put these cutting edge tools into a space and made it open to the Berkeley community. This is all hard to explain/understand because you haven’t seen anything like it even in sci-fi films (it’s telling that the media still uses the 15-year-old Minority Report as its sci-fi illustration for such things).

This video gives a little taste. I visited with a physicist who works on the cutting edge of data science/AI. I was amazed but I know nothing about such things — I was interested to see his reaction as he scribbled gravitational equations on paper and watched the cameras turn them into models on the table in real-time, then he changed parameters and watched the graphics change in real-time on the table (projected from the ceiling): ‘Ohmygod, this is just obviously the future, absolutely amazing.’ The thought immediately struck us: imagine the implications of having policy discussions with such tools instead of the usual terrible meetings. Imagine discussing HS2 budgets or possible post-Brexit trading arrangements with the models running like this for decision-makers to interact with.

Video of Dynamic Land: the bits of coloured paper are ‘code’, graphics are projected from the ceiling

 


*

3. Michael Nielsen and cognitive technologies

Connected to Victor’s ideas are those of the brilliant physicist Michael Nielsen. Nielsen wrote the standard textbook on quantum computation and a great book, Reinventing Discovery, on how networked tools are changing the scientific method. For example, instead of waiting for the lucky coincidence of Grossmann helping Einstein with some crucial maths, new tools could create a sort of ‘designed serendipity’ to help potential collaborators find each other.

In his essay Thought as a Technology, Nielsen describes the feedback between thought and interfaces:

‘In extreme cases, to use such an interface is to enter a new world, containing objects and actions unlike any you’ve previously seen. At first these elements seem strange. But as they become familiar, you internalize the elements of this world. Eventually, you become fluent, discovering powerful and surprising idioms, emergent patterns hidden within the interface. You begin to think with the interface, learning patterns of thought that would formerly have seemed strange, but which become second nature. The interface begins to disappear, becoming part of your consciousness. You have been, in some measure, transformed.’

He describes how normal language and computer interfaces are cognitive technologies:

‘Language is an example of a cognitive technology: an external artifact, designed by humans, which can be internalized, and used as a substrate for cognition. That technology is made up of many individual pieces – words and phrases, in the case of language – which become basic elements of cognition. These elements of cognition are things we can think with…

‘In a similar way to language, maps etc, a computer interface can be a cognitive technology. To master an interface requires internalizing the objects and operations in the interface; they become elements of cognition. A sufficiently imaginative interface designer can invent entirely new elements of cognition… In general, what makes an interface transformational is when it introduces new elements of cognition that enable new modes of thought. More concretely, such an interface makes it easy to have insights or make discoveries that were formerly difficult or impossible. At the highest level, it will enable discoveries (or other forms of creativity) that go beyond all previous human achievement.’

Nielsen describes how powerful ways of thinking among mathematicians and physicists are hidden from view and not part of textbooks and normal teaching.

‘The reason is that traditional media are poorly adapted to working with such representations… If experts often develop their own representations, why do they sometimes not share those representations? To answer that question, suppose you think hard about a subject for several years… Eventually you push up against the limits of existing representations. If you’re strongly motivated – perhaps by the desire to solve a research problem – you may begin inventing new representations, to provide insights difficult through conventional means. You are effectively acting as your own interface designer. But the new representations you develop may be held entirely in your mind, and so are not constrained by traditional static media forms. Or even if based on static media, they may break social norms about what is an “acceptable” argument. Whatever the reason, they may be difficult to communicate using traditional media. And so they remain private, or are only discussed informally with expert colleagues.’

If we can create interfaces that reify deep principles, then ‘mastering the subject begins to coincide with mastering the interface.’ He gives the example of Photoshop which builds in many deep principles of image manipulation.

‘As you master interface elements such as layers, the clone stamp, and brushes, you’re well along the way to becoming an expert in image manipulation… By contrast, the interface to Microsoft Word contains few deep principles about writing, and as a result it is possible to master Word’s interface without becoming a passable writer. This isn’t so much a criticism of Word, as it is a reflection of the fact that we have relatively few really strong and precise ideas about how to write well.’

He then describes what he calls ‘the cognitive outsourcing model’: ‘we specify a problem, send it to our device, which solves the problem, perhaps in a way we-the-user don’t understand, and sends back a solution.’ E.g. we ask Google a question and Google sends us an answer.

This is how most of us think about the idea of augmenting the human intellect but it is not the best approach. ‘Rather than just solving problems expressed in terms we already understand, the goal is to change the thoughts we can think.’

‘One challenge in such work is that the outcomes are so difficult to imagine. What new elements of cognition can we invent? How will they affect the way human beings think? We cannot know until they’ve been invented.

‘As an analogy, compare today’s attempts to go to Mars with the exploration of the oceans during the great age of discovery. These appear similar, but while going to Mars is a specific, concrete goal, the seafarers of the 15th through 18th centuries didn’t know what they would find. They set out in flimsy boats, with vague plans, hoping to find something worth the risks. In that sense, it was even more difficult than today’s attempts on Mars.

‘Something similar is going on with intelligence augmentation. There are many worthwhile goals in technology, with very specific ends in mind. Things like artificial intelligence and life extension are solid, concrete goals. By contrast, new elements of cognition are harder to imagine, and seem vague by comparison. By definition, they’re ways of thinking which haven’t yet been invented. There’s no omniscient problem-solving box or life-extension pill to imagine. We cannot say a priori what new elements of cognition will look like, or what they will bring. But what we can do is ask good questions, and explore boldly.’

In another essay, Using Artificial Intelligence to Augment Human Intelligence, Nielsen points out that breakthroughs in creating powerful new cognitive technologies such as musical notation or Descartes’ invention of algebraic geometry are rare but ‘modern computers are a meta-medium enabling the rapid invention of many new cognitive technologies‘ and, further, AI will help us ‘invent new cognitive technologies which transform the way we think.’

Further, historically powerful new cognitive technologies, such as ‘Feynman diagrams’, have often appeared strange at first. We should not assume that new interfaces should be ‘user friendly’. Powerful interfaces that repay mastery may require sacrifices.

‘The purpose of the best interfaces isn’t to be user-friendly in some shallow sense. It’s to be user-friendly in a much stronger sense, reifying deep principles about the world, making them the working conditions in which users live and create. At that point what once appeared strange can instead become comfortable and familiar, part of the pattern of thought…

‘Unfortunately, many in the AI community greatly underestimate the depth of interface design, often regarding it as a simple problem, mostly about making things pretty or easy-to-use. In this view, interface design is a problem to be handed off to others, while the hard work is to train some machine learning system.

‘This view is incorrect. At its deepest, interface design means developing the fundamental primitives human beings think and create with. This is a problem whose intellectual genesis goes back to the inventors of the alphabet, of cartography, and of musical notation, as well as modern giants such as Descartes, Playfair, Feynman, Engelbart, and Kay. It is one of the hardest, most important and most fundamental problems humanity grapples with.

‘As discussed earlier, in one common view of AI our computers will continue to get better at solving problems, but human beings will remain largely unchanged. In a second common view, human beings will be modified at the hardware level, perhaps directly through neural interfaces, or indirectly through whole brain emulation.

‘We’ve described a third view, in which AIs actually change humanity, helping us invent new cognitive technologies, which expand the range of human thought. Perhaps one day those cognitive technologies will, in turn, speed up the development of AI, in a virtuous feedback cycle:

[Diagram: cognitive technologies and AI improving one another in a virtuous feedback cycle]

‘It would not be a Singularity in machines. Rather, it would be a Singularity in humanity’s range of thought… The long-term test of success will be the development of tools which are widely used by creators. Are artists using these tools to develop remarkable new styles? Are scientists in other fields using them to develop understanding in ways not otherwise possible?’

I would add: are governments using these tools to help them think in ways we already know are more powerful and to explore new ways of making decisions and shaping the complex systems on which we rely?

Nielsen also wrote this fascinating essay, ‘Augmenting long-term memory’. It involves a computer tool (Anki) that aids long-term memory using ‘spaced repetition’ — i.e. testing yourself at intervals, which is shown to counter the normal (for most people) process of forgetting. This allows humans to turn memory into a choice: we can decide what to remember and achieve it systematically, without the ‘weird/extreme gift’ that good memory is normally treated as. (It’s fascinating that educated Greeks 2,500 years ago could build sophisticated mnemonic systems allowing them to remember vast amounts while almost all educated people now have no idea about such techniques.)
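The core scheduling idea behind spaced repetition is simple enough to sketch in a few lines. This is a deliberate simplification, not Anki’s actual algorithm (which is based on SM-2 and tracks a per-card ‘ease factor’): each successful recall stretches the interval before the next test, and a failure resets it.

```python
# A deliberately simplified sketch of the spaced-repetition idea behind
# Anki (NOT Anki's actual SM-2-based scheduler): each successful recall
# multiplies the interval before the next review; a failure resets it.
# So well-known material is tested rarely, weak material often.

def next_interval(days, recalled, ease=2.5):
    """Return the number of days until the next review of a card."""
    if not recalled:
        return 1  # forgot: see the card again tomorrow
    return max(1, round(days * ease))  # recalled: wait ~2.5x longer

# A card recalled successfully on four consecutive reviews:
interval = 1
schedule = []
for _ in range(4):
    interval = next_interval(interval, recalled=True)
    schedule.append(interval)
print(schedule)  # intervals stretch out roughly geometrically
```

The geometric growth is the whole trick: after a handful of successful reviews a card is only seen every few months, so maintaining thousands of facts costs minutes a day.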

Connected to this, Nielsen also recently wrote an essay teaching fundamentals of quantum mechanics and quantum computers — but it is an essay with a twist:

‘[It] incorporates new user interface ideas to help you remember what you read… this essay isn’t just a conventional essay, it’s also a new medium, a mnemonic medium which integrates spaced-repetition testing. The medium itself makes memory a choice. This essay will likely take you an hour or two to read. In a conventional essay, you’d forget most of what you learned over the next few weeks, perhaps retaining a handful of ideas. But with spaced-repetition testing built into the medium, a small additional commitment of time means you will remember all the core material of the essay. Doing this won’t be difficult, it will be easier than the initial read. Furthermore, you’ll be able to read other material which builds on these ideas; it will open up an entire world…

‘Mastering new subjects requires internalizing the basic terminology and ideas of the subject. The mnemonic medium should radically speed up this memory step, converting it from a challenging obstruction into a routine step. Frankly, I believe it would accelerate human progress if all the deepest ideas of our civilization were available in a form like this.’

This obviously has very important implications for education policy. It also shows how computers could be used to improve learning — something that has generally been a failure since the great hopes at PARC in the 1970s. I have used Anki since reading Nielsen’s blog and I can feel it making a big difference to my mind/thoughts — how often is this true of things you read? DOWNLOAD ANKI NOW AND USE IT!

We need similarly creative experiments with new mediums that are designed to improve standards of high stakes decision-making.

*

4. Summary

We could create systems for those making decisions about m/billions of lives and b/trillions of dollars, such as Downing Street or The White House, that integrate inter alia:

  • Cognitive toolkits compressing already existing useful knowledge such as checklists for rational thinking developed by the likes of Tetlock, Munger, Yudkowsky et al.
  • A Nielsen/Victor research program on ‘Seeing Rooms’, interface design, authoring tools, and cognitive technologies. Start with bunging a few million to Victor immediately in return for allowing some people to study what he is doing and apply it in Whitehall, then grow from there.
  • An alpha data science/AI operation — tapping into the world’s best minds including having someone like David Deutsch or Tim Gowers as a sort of ‘chief rationalist’ in the Cabinet (with Scott Alexander as deputy!) — to support rational decision-making where this is possible and explain when it is not possible (just as useful).
  • Tetlock/Hanson prediction tournaments could easily and cheaply be extended to consider ‘clusters’ of issues around themes like Brexit to improve policy and project management.
  • Groves/Mueller style ‘systems management’ integrated with the data science team.
  • Legally entrenched Red Teams where incentives are aligned to overcoming groupthink and error-correction of the most powerful. Warren Buffett points out that public companies considering an acquisition should employ a Red Team whose fees are dependent on the deal NOT going ahead. This is the sort of idea we need in No10.

Researchers could see the real operating environment of decision-makers at the apex of power, the sort of problems they need to solve under pressure, and the constraints of existing centralised systems. They could start with the safe level of ‘tools that we already know work really well’ — i.e things like cognitive toolkits and Red Teams — while experimenting with new tools and new ways of thinking.

Hedge funds like Bridgewater and some other interesting organisations think about such ideas though without the sophistication of Victor’s approach. The world of MPs, officials, the Institute for Government (a cheerleader for ‘carry on failing’), and pundits will not engage with these ideas if left to their own devices.

This is not the place to go into how to change this. We know that the normal approach is doomed to produce the normal results, and normal results applied to things like repeated WMD crises mean disaster sooner or later. As Buffett points out, ‘If there is only one chance in thirty of an event occurring in a given year, the likelihood of it occurring at least once in a century is 96.6%.’ It is not necessary to hope in order to persevere: optimism of the will, pessimism of the intellect…
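Buffett’s figure is easy to check, in the spirit of the ‘show the calculations behind the numbers’ argument above: with a 1-in-30 annual chance and (assuming) independent years, the probability of at least one occurrence in a century is one minus the probability of a century with none.

```python
# 1 - (29/30)^100: probability of at least one occurrence in 100 years,
# given a 1-in-30 chance each year and independence between years.
p_year = 1 / 30
p_century = 1 - (1 - p_year) ** 100
print(f"{p_century:.1%}")  # 96.6%
```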

*

A final thought…

A very interesting comment that I have heard from some of the most important scientists involved in the creation of advanced technologies is that ‘artists see things first’ — that is, artists glimpse possibilities before most technologists and long before most businessmen and politicians.

Pixar came from a weird combination of George Lucas getting divorced and the visionary Alan Kay suggesting to Steve Jobs that he buy a tiny special effects unit from Lucas, which Jobs did with completely wrong expectations about what would happen. For unexpected reasons this tiny unit turned into a huge success — as Jobs put it later, he was ‘sort of snookered’ into creating Pixar. Now Alan Kay says he struggles to get tech billionaires to understand the importance of Victor’s ideas.

The same story repeats: genuinely new ideas that could create huge value always seem so odd that almost all people in almost all organisations cannot see new possibilities. If this is true in Silicon Valley, how much more true is it in Whitehall or Washington… 

If one were setting up a new party in Britain, one could incorporate some of these ideas. This would of course also require recruiting very different types of people to the norm in politics. The closed nature of Westminster/Whitehall combined with first-past-the-post means it is very hard to solve the coordination problem of how to break into this system with a new way of doing things. Even those interested in principle don’t want to commit to a 10-year (?) project that might get them blasted on the front pages. Vote Leave hacked the referendum but such opportunities are much rarer than VC-funded ‘unicorns’. On the other hand, arguably what is happening now is a once-in-50-or-100-year crisis, and such crises are also the waves that can be ridden to change things normally unchangeable. A second referendum in 2020 is quite possible (or two referendums under PM Corbyn, propped up by the SNP?) and might be the ideal launchpad for a completely new sort of entity, not least because if it happens the Conservative Party may well not exist in any meaningful sense (whether or not there is another referendum). It’s very hard to create a wave and much easier to ride one. It’s more likely that in a few years you will see some of the above ideas in novels or movies or video games than in government — their pickup in places like hedge funds and intelligence services will be discreet — but you never know…

*

Ps. While I have talked to Michael Nielsen and Bret Victor about their ideas, in no way should this blog be taken as implying their involvement in anything to do with my ideas or plans, or their agreement with anything written above. I did not show this to them or even tell them I was writing about their work; we do not work together in any way. I have just read and listened to their work over a few years and thought about how their ideas could improve government.

Further Reading

If interested in how to make things work much better, read this (lessons for government from the Apollo project) and this (lessons for government from ARPA-PARC’s creation of the internet and PC).

Links to recent reports on AI/ML.

On the referendum #32: Science/productivity — a) small teams are more disruptive, b) ‘science is becoming far less efficient’

This blog considers two recent papers on the dynamics of scientific research: one in Nature and one by the brilliant physicist, Michael Nielsen, and the brilliant founder of Stripe, Patrick Collison, who is a very unusual CEO. These findings are very important to the question: how can we make economies more productive and what is the relationship between basic science and productivity? The papers are also interesting for those interested in the general question of high performance teams.

These issues are also crucial to the debate about what on earth Britain focuses on now the 2016 referendum has destroyed the Insiders’ preferred national strategy of ‘influencing the EU project’.

For as long as I have watched British politics carefully (sporadically since about 1998), these issues about science, technology and productivity have been almost totally ignored in the Insider debate because the incentives + culture of Westminster programs this behaviour: people with power are not incentivised to optimise for ‘improve science research and productivity’. E.g. everything Vote Leave said about funding science research during the referendum (including cooperation with EU programs) was treated as somewhere between eccentric, irrelevant and pointless by Insiders.

This recent Nature paper gives evidence that a) small teams are more disruptive in science research and b) solo researchers/small teams are significantly underfunded.

‘One of the most universal trends in science and technology today is the growth of large teams in all areas, as solitary researchers and small teams diminish in prevalence. Increases in team size have been attributed to the specialization of scientific activities, improvements in communication technology, or the complexity of modern problems that require interdisciplinary solutions. This shift in team size raises the question of whether and how the character of the science and technology produced by large teams differs from that of small teams. Here we analyse more than 65 million papers, patents and software products that span the period 1954–2014, and demonstrate that across this period smaller teams have tended to disrupt science and technology with new ideas and opportunities, whereas larger teams have tended to develop existing ones. Work from larger teams builds on more recent and popular developments, and attention to their work comes immediately. By contrast, contributions by smaller teams search more deeply into the past, are viewed as disruptive to science and technology and succeed further into the future — if at all. Observed differences between small and large teams are magnified for higher impact work, with small teams known for disruptive work and large teams for developing work. Differences in topic and research design account for a small part of the relationship between team size and disruption; most of the effect occurs at the level of the individual, as people move between smaller and larger teams. These results demonstrate that both small and large teams are essential to a flourishing ecology of science and technology, and suggest that, to achieve this, science policies should aim to support a diversity of team sizes.

‘Although much has been demonstrated about the professional and career benefits of team size for team members, there is little evidence that supports the notion that larger teams are optimized for knowledge discovery and technological invention. Experimental and observational research on groups reveals that individuals in large groups … generate fewer ideas, recall less learned information, reject external perspectives more often and tend to neutralize each other’s viewpoints

‘Small teams disrupt science and technology by exploring and amplifying promising ideas from older and less-popular work. Large teams develop recent successes, by solving acknowledged problems and refining common designs. Some of this difference results from the substance of science and technology that small versus large teams tackle, but the larger part appears to emerge as a consequence of team size itself. Certain types of research require the resources of large teams, but large teams demand an ongoing stream of funding and success to ‘pay the bills’, which makes them more sensitive to the loss of reputation and support that comes from failure. Our findings are consistent with field research on teams in other domains, which demonstrate that small groups with more to gain and less to lose are more likely to undertake new and untested opportunities that have the potential for high growth and failure

‘In contrast to Nobel Prize papers, which have an average disruption among the top 2% of all contemporary papers, funded papers rank near the bottom 31%. This could result from a conservative review process, proposals designed to anticipate such a process or a planning effect whereby small teams lock themselves into large-team inertia by remaining accountable to a funded proposal. When we compare two major policy incentives for science (funding versus awards), we find that Nobel-prize-winning articles significantly oversample small disruptive teams, whereas those that acknowledge US National Science Foundation funding oversample large developmental teams. Regardless of the dominant driver, these results paint a unified portrait of underfunded solo investigators and small teams who disrupt science and technology by generating new directions on the basis of deeper and wider information search. These results suggest the need for government, industry and non-profit funders of science and technology to investigate the critical role that small teams appear to have in expanding the frontiers of knowledge, even as large teams rapidly develop them.’

Recently Michael Nielsen and Patrick Collison published some research on the question:

‘are we getting a proportional increase in our scientific understanding [for increased investment]?  Or are we investing vastly more merely to sustain (or even see a decline in) the rate of scientific progress?

They explored, inter alia, ‘how scientists think the quality of Nobel Prize–winning discoveries has changed over the decades.’

They conclude:

‘The picture this survey paints is bleak: Over the past century, we’ve vastly increased the time and money invested in science, but in scientists’ own judgement, we’re producing the most important breakthroughs at a near-constant rate. On a per-dollar or per-person basis, this suggests that science is becoming far less efficient.’

It’s also interesting that:

‘In fact, just three [physics] discoveries made since 1990 have been awarded Nobel Prizes. This is too few to get a good quality estimate for the 1990s, and so we didn’t survey those prizes. However, the paucity of prizes since 1990 is itself suggestive. The 1990s and 2000s have the dubious distinction of being the decades over which the Nobel Committee has most strongly preferred to skip, and instead award prizes for earlier work. Given that the 1980s and 1970s themselves don’t look so good, that’s bad news for physics.’

There is a similar story in chemistry.

Why has science got so much more expensive without commensurate gains in understanding?

‘A partial answer to this question is suggested by work done by the economists Benjamin Jones and Bruce Weinberg. They’ve studied how old scientists are when they make their great discoveries. They found that in the early days of the Nobel Prize, future Nobel scientists were 37 years old, on average, when they made their prizewinning discovery. But in recent times that has risen to an average of 47 years, an increase of about a quarter of a scientist’s working career.

‘Perhaps scientists today need to know far more to make important discoveries. As a result, they need to study longer, and so are older, before they can do their most important work. That is, great discoveries are simply getting harder to make. And if they’re harder to make, that suggests there will be fewer of them, or they will require much more effort.

‘In a similar vein, scientific collaborations now often involve far more people than they did a century ago. When Ernest Rutherford discovered the nucleus of the atom in 1911, he published it in a paper with just a single author: himself. By contrast, the two 2012 papers announcing the discovery of the Higgs particle had roughly a thousand authors each. On average, research teams nearly quadrupled in size over the 20th century, and that increase continues today. For many research questions, it requires far more skills, expensive equipment, and a large team to make progress today.

They suggest that ‘the optimistic view is that science is an endless frontier, and we will continue to discover and even create entirely new fields, with their own fundamental questions’. If science is slowing now, then perhaps it ‘is because science has remained too focused on established fields, where it’s becoming ever harder to make progress. We hope the future will see a more rapid proliferation of new fields, giving rise to major new questions. This is an opportunity for science to accelerate.’ They give the example of the birth of computer science after Gödel’s and Turing’s papers in the 1930s.

They also consider the arguments among economists concerning the productivity slowdown. Tyler Cowen and others have argued that the breakthroughs of the 19th and early 20th centuries were more significant than recent discoveries: e.g. the large-scale deployment of powerful general-purpose technologies such as electricity, the internal-combustion engine, radio, telephones, air travel, the assembly line, fertiliser and so on. Productivity growth in the 1950s was ‘roughly six times higher than today. That means we see about as much change over a decade today as we saw in 18 months in the 1950s.’ Yes, the computer and internet have been fantastic but they haven’t, so far, contributed as much as all those powerful technologies like electricity.

They also argue ‘there has been little institutional response’ either among the scientific community or government.

‘Perhaps this lack of response is in part because some scientists see acknowledging diminishing returns as betraying scientists’ collective self-interest. Most scientists strongly favor more research funding. They like to portray science in a positive light, emphasizing benefits and minimizing negatives. While understandable, the evidence is that science has slowed enormously per dollar or hour spent. That evidence demands a large-scale institutional response. It should be a major subject in public policy, and at grant agencies and universities. Better understanding the cause of this phenomenon is important, and identifying ways to reverse it is one of the greatest opportunities to improve our future.’

Slate Star Codex also discussed these issues recently. We often look at charts of exponential progress like Moore’s Law but:

‘There are eighteen times more people involved in transistor-related research today than in 1971. So if in 1971 it took 1000 scientists to increase transistor density 35% per year, today it takes 18,000 scientists to do the same task. So apparently the average transistor scientist is eighteen times less productive today than fifty years ago. That should be surprising and scary.’
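The arithmetic behind that claim is just a ratio: if the same ~35%-per-year density growth is now sustained by eighteen times the headcount, output per researcher has fallen eighteen-fold. A minimal sketch using the figures as quoted (the variable names are mine):

```python
# Same annual transistor-density growth, sustained by a much larger workforce.
growth_rate = 0.35            # ~35% density improvement per year, both eras
researchers_1971 = 1_000      # rough figure quoted for 1971
researchers_today = 18_000    # 'eighteen times more people' today

per_capita_1971 = growth_rate / researchers_1971
per_capita_today = growth_rate / researchers_today

print(per_capita_1971 / per_capita_today)  # ~18: each researcher is ~18x less productive
```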

Similar arguments seem to apply in many areas.

‘All of these lines of evidence lead me to the same conclusion: constant growth rates in response to exponentially increasing inputs is the null hypothesis. If it wasn’t, we should be expecting 50% year-on-year GDP growth, easily-discovered-immortality, and the like.’

SSC also argues that the explanation for this phenomenon is the ‘low hanging fruit argument’:

‘For example, element 117 was discovered by an international collaboration who got an unstable isotope of berkelium from the single accelerator in Tennessee capable of synthesizing it, shipped it to a nuclear reactor in Russia where it was attached to a titanium film, brought it to a particle accelerator in a different Russian city where it was bombarded with a custom-made exotic isotope of calcium, sent the resulting data to a global team of theorists, and eventually found a signature indicating that element 117 had existed for a few milliseconds. Meanwhile, the first modern element discovery, that of phosphorous in the 1670s, came from a guy looking at his own piss. We should not be surprised that discovering element 117 needed more people than discovering phosphorous

‘I worry even this isn’t dismissive enough. My real objection is that constant progress in science in response to exponential increases in inputs ought to be our null hypothesis, and that it’s almost inconceivable that it could ever be otherwise.

How likely is it that this will change radically?

‘At the end of the conference, the moderator asked how many people thought that it was possible for a concerted effort by ourselves and our institutions to “fix” the “problem”… Almost the entire room raised their hands. Everyone there was smarter and more prestigious than I was (also richer, and in many cases way more attractive), but with all due respect I worry they are insane. This is kind of how I imagine their worldview looking:

[Screenshot illustrating this worldview]

*

I don’t know what the answers are to the tricky questions explored above. I do know that the existing systems for funding science are bad and we already have great ideas about how to improve our chances of making dramatic breakthroughs, even if we cannot escape the general problem that a lot of low-hanging fruit in traditional subjects like high energy physics is gone.

I have repeated this theme ad nauseam on this blog:

1) We KNOW how effective the very unusual funding for computer science was in the 1960s/1970s — ARPA-PARC created the internet and personal computing — and there are other similar case studies but

2) almost no science is funded in this way and

3) there is practically no debate about this even among scientists, many of whom are wholly ignorant about this. As Alan Kay has observed, there is an amazing contrast between the huge amount of interest in the internet/PC revolution and the near-zero interest in what created the super-productive processes that sparked this revolution.

One of the reasons is the usual problem of bad incentives reinforcing a dysfunctional equilibrium: successful scientists have a lot of power and a strong personal interest in preserving current funding systems that let them build empires. These empires often include bad treatment of young postdocs, who are abused as cheap labour. This is connected to the point above about the growing average age of Nobel-winners. Much of the 1930s quantum revolution was done by people aged ~20-35, and so was the internet/PC revolution in the 1960s/1970s. The latter was deliberate: Licklider et al funded not short-term projects but the creation of whole new departments and institutions for young people. They funded a healthy ecosystem: ‘people, not projects’ was one of the core principles. People in their twenties now have very little power or money in the research ecosystem. Further, they have to operate in an appalling time-wasting grant-writing bureaucracy that Heisenberg, Dirac et al did not face in the 1920s/30s. The politicians and officials don’t care, so there is no force pushing sensible experiments with new ideas. Almost all ‘reform’ from the central bureaucracy pushes in the direction of more power for the central bureaucracy, not fixing problems.

For example, for an amount of money that the Department for Education loses every week without ministers/officials even noticing it’s lost — I know from experience this is single figure millions — we could transform the funding of masters and PhDs in maths, physics, chemistry, biology, and computer science. There is so much good that could be done for trivial money that isn’t even wasted in the normal sense of ‘spent on rubbish gimmicks and procurement disasters’, it just disappears into the aether without anybody noticing.

The government spends about 250 billion pounds a year with extreme systematic incompetence. If we ‘just’ applied what we know about high performance project management and procurement we could take savings from this budget and put it into ARPA-PARC style high-risk-high-payoff visions including creating whole new fields. This would create powerful self-reinforcing dynamics that would give Britain real assets of far, far greater value than the chimerical ‘influence’ in Brussels meeting rooms where ‘economic and monetary union’ is the real focus.

A serious government or a serious new party (not TIG obviously, which is business as usual with the usual suspects) would focus on these things. Under Major, Blair, Brown, Cameron and May these issues have been neglected for a quarter of a century. The Conservative Party now has almost no intellectual connection to crucial debates about the ecosystem of science, productivity, universities, funding, startups and so on. I know from personal experience that even billionaire entrepreneurs whose donations are vital to the survival of CCHQ cannot get people like Hammond to listen to anything about all this — Hammond’s focus is obeying his orders from Goldman Sachs. Downing Street is much more interested in protecting corporate looting by large banks and companies, and protecting rent-seekers, than in productivity and entrepreneurs. Having an interest in this subject is seen as a sign of eccentricity to say the least, while the ambitious focus on ‘strategy’, speeches, interviews and all the other parts of their useless implicit ‘model for effective action’. The Tories are reduced to slogans about ‘freedom’, ‘deregulation’ and so on, which provide no answers to our productivity problem and, ironically, lie somewhere between pointless and self-destructive for them politically — but, crucially, they play well in the self-referential world of Parliament, ‘think tanks’ and pundit-world, which debates ‘the next leader’ and provides the real incentives that drive behaviour.

There is no force in British politics that prioritises science and productivity. Hopefully soon someone will say ‘there is such a party’…

Further reading

If interested in practical ideas for changing science funding in the UK, read my paper last year, which has a lot of links to important papers, or this by two brilliant young neuroscientists who have experienced the funding system’s problems.

For example:

  • Remove bureaucracy like the multi-stage procurement processes for buying a lightbulb. ‘Rather than invigilate every single decision, we should do spot checks retrospectively, as is done with tax returns.’
  • ‘We should return to funding university departments more directly, allowing more rapid, situation-aware decision-making of the kind present in start-ups, and create a diversity of funding systems.’
  • There are many simple steps like guaranteed work visas for spouses that could make the UK a magnet for talented young scientists.

Unrecognised simplicities of effective action #2: ‘Systems’ thinking — ideas from the Apollo programme for a ‘systems politics’

This is the second in a series: click this link 201702-effective-action-2-systems-engineering-to-systems-politics. The first is HERE.

This paper concerns a very interesting story combining politics, management, institutions, science and technology. When high technology projects passed a threshold of complexity post-1945, amid the extreme pressure of the early Cold War, new management ideas emerged, known as ‘systems engineering’ and ‘systems management’. These ideas were particularly connected to the classified program to build the first Intercontinental Ballistic Missiles (ICBMs) in the 1950s; successful ideas were transplanted into a failing NASA by George Mueller and others from 1963, leading to the successful moon landing in 1969.

These ideas were then applied in other mission critical teams and could be used to improve government performance. Urgently needed projects to lower the probability of catastrophes for humanity will benefit from considering why Mueller’s approach was 1) so successful and 2) so un-influential in politics. Could we develop a ‘systems politics’ that applies the unrecognised simplicities of effective action?

For those interested, it also looks briefly at an interesting element of the story – the role of John von Neumann, the brilliant mathematician who was deeply involved in the Manhattan Project, the project to build ICBMs, the first digital computers, and subjects like artificial intelligence, artificial life, possibilities for self-replicating machines made from unreliable components, and the basic problem that technological progress ‘gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we have known them, cannot continue.’

An obvious project with huge inherent advantages for humanity is the development of an international manned lunar base as part of developing space for commerce and science. It is the sort of thing that might change political dynamics on earth and could generate enormous support across international boundaries. After 23 June 2016, the UK has to reorient national policy on many dimensions. Developing basic science is one of the most important (for example, as I have long argued, we urgently need a civilian version of DARPA similarly operating outside normal government bureaucratic systems, including procurement and HR). Supporting such an international project would be a great focus for UK efforts and far more productive than our largely wasted decades of focus on the dysfunctional bureaucracy in Brussels, which is dominated by institutions that fail the most important test: the capacity for error-correction, the importance of which has been demonstrated over long periods and through many problems by the Anglo-American political system and its common law.

Please leave comments or email dmc2.cummings at gmail.com

 

Please help: how to make a big improvement in the alignment of political parties’ incentives with the public interest?

I am interested in these questions:

1) What incentives drive good/bad behaviour for UK political parties?

2) How could they be changed (legal and non-legal) to align interests of existing parties better with the public interest?

3) If one were setting up a new party from scratch what principles could be established in order to align the party’s interests with the public interest much more effectively than is now the case anywhere in the world, and how could one attract candidates very different to those who now dominate Parliament (cleverer, quantitative problem-solving skills, experience in managing complex organisations etc)?

4) Is there a good case for banning political parties (as sometimes was attempted in ancient Greece), how to do it, what would replace them, why would this be better etc (I assume this is a bad and/or impractical idea but it’s worth asking why)?

5) In what ways do existing or plausible technologies affect these old questions?

What are the best things written on these problems?

What are the best examples around the world of how people have made big improvements?

Assume that financial resources are effectively unlimited for the entity trying to make these changes, let me worry about things like ‘would the public buy it’ etc – focus on policy not communication/PR advice.

The more specific the better: an ideal bit of help would be detailed draft legislation. I don’t expect anybody to produce this, but just to show what I mean…

The overall problem is: how to make government performance dramatically, quantifiably, and sustainably better?

Please leave ideas in comments or email dmc2.cummings@gmail.com

Thanks

D

Times op-ed: What Is To Be Done? An answer to Dean Acheson’s famous quip

On Tuesday 2 December, the Times ran an op-ed by me you can see HERE. It got cut slightly for space. Below is the original version that makes a few other points.

I will use this as a start of a new series on what can be done to improve the system including policy, institutions, and management.

NB1. The article is not about the election or party politics. My suggested answer to Acheson is, I think, powerful partly because it is something that could be agreed upon, in various dimensions, across the political spectrum. I left the DfE in January partly because I wanted to have nothing to do with the election and this piece should not be seen as advocating ‘something Tories should say for the election’. I do not think any of the three leaders are interested in or could usefully pursue this goal – I am suggesting something for the future when they are all gone, and they could quite easily all be gone by summer 2016.

NB2. My view is not – ‘public bad, private good’. As I explained in The Hollow Men II, a much more accurate and interesting distinction is between a) large elements of state bureaucracies, dreadful NGOs like the CBI, and many large companies (that have many of the same HR and incentive problems as bureaucracies), where very similar types rise to power because the incentives encourage political skills rather than problem-solving skills, and b) start-ups, where entrepreneurs and technically trained problem-solvers can create organisations that operate extremely differently, move extremely fast, create huge value, and so on.

(For a great insight into start-up world I recommend two books. 1. Peter Thiel’s new book ‘Zero To One‘. 2. An older book telling the story of a mid-90s start-up that was embroiled in the Netscape/Microsoft battle and ended up selling itself to the much better organised Bill Gates – ‘High Stakes, No Prisoners‘ by Charles Ferguson. This blog, Creators and Rulers, by physicist Steve Hsu also summarises some crucial issues excellently.)

Some parts of government can work like start-ups but the rest of the system tries to smother them. For example, DARPA (originally ARPA) was set up as part of the US panic about Sputnik. It operates on very different principles from the rest of the Pentagon’s R&D system. Because it is organised differently, it has repeatedly produced revolutionary breakthroughs (e.g. the internet) despite a relatively tiny budget. But also note – DARPA has been around for decades and its operating principles are clear but nobody else has managed to create an equivalent (openly at least). Also note that despite its track record, D.C. vultures constantly circle trying to make it conform to the normal rules or otherwise clip its wings. (Another interesting case study would be the alternative paths taken by a) the US government developing computers with one genius mathematician, von Neumann, post-1945 (a lot of ‘start-up’ culture) and b) the UK government’s awful decisions in the same field with another genius mathematician, Turing, post-1945.)

When I talk about new and different institutions below, this is one of the things I mean. I will write a separate blog just on DARPA but I think there are two clear action points:

1. We should create a civilian version of DARPA aimed at high-risk/high-impact breakthroughs in areas like energy science and other fundamental areas such as quantum information and computing that clearly have world-changing potential. For it to work, it would have to operate outside all existing Whitehall HR rules, EU procurement rules and so on – otherwise it would be as dysfunctional as the rest of the system (defence procurement is in a much worse state than the DfE, hence, for example, billions spent on aircraft carriers that in classified war-games cannot be deployed to warzones). We could easily afford this if we could prioritise – UK politicians spend far more than DARPA’s budget on gimmicks every year – and it would provide huge value with cascading effects through universities and businesses.

2. The lessons of why and how it works – such as incentivising goals, not micromanaging methods – have general application that are useful when we think generally about Whitehall reform.

Finally, government institutions also operate to exclude from power scientists, mathematicians, and people from the start-up world – the Creators, in Hsu’s term. We need to think very hard about how to use their very rare and valuable skills as a counterweight to the inevitable psychological type that politics will always tend to promote.

Please leave comments, corrections etc below.

DC


 

What Is to Be Done?

There is growing and justified contempt for Westminster. Number Ten has become a tragi-comic press office with the prime minister acting as Über Pundit. Cameron, Miliband, and Clegg see only the news’s flickering shadows on their cave wall – they cannot see the real world behind them. As they watch floundering MPs, officials know they will stay in charge regardless of an election that won’t significantly change Britain’s trajectory.

Our institutions failed pre-1914, pre-1939, and with Europe. They are now failing to deal with a combination of debts, bad public services, security threats, and profound transitions in geopolitics, economics, and technology. They fail in crises because they are programmed to fail. The public knows we need to reorient national policy and reform these institutions. How?

First, we need a new goal. In 1962, Dean Acheson quipped that Britain had failed to find a post-imperial role. The romantic pursuit of ‘the special relationship’ and the deluded pursuit of a leading EU role have failed. This role should focus on making Britain the best country for education and science. Pericles described Athens as ‘the school of Greece’: we could be the school of the world because this role depends on thought and organisation, not size.

This would give us a central role in tackling humanity’s biggest problems and shaping the new institutions, displacing the EU and UN, that will emerge as the world makes painful transitions in coming decades. It would provide a focus for financial priorities and Whitehall’s urgent organisational surgery. It’s a goal that could mobilise very large efforts across political divisions as the pursuit of knowledge is an extremely powerful motive.

Second, we must train aspirant leaders very differently so they have basic quantitative skills and experience of managing complex projects. We should stop selecting leaders from a subset of Oxbridge egomaniacs with a humanities degree and a spell as spin doctor.

In 2012, Fields Medallist Tim Gowers sketched a ‘maths for presidents’ course to teach 16-18 year-olds crucial maths skills, including probability and statistics, that can help solve real problems. It starts next year. [NB. The DfE funded MEI to turn this blog into a real course.] A version should be developed for MPs and officials. (A similar ‘Physics for Presidents‘ course has been a smash hit at Berkeley.) Similarly, pioneering work by Philip Tetlock on ‘The Good Judgement Project‘ has shown that training can reduce common cognitive errors and can sharply improve the quality of political predictions, hitherto characterised by great self-confidence and constant failure.

New interdisciplinary degrees such as ‘World history and maths for presidents’ would improve on PPE but theory isn’t enough. If we want leaders to make good decisions amid huge complexity, and learn how to build great teams, then we should send them to learn from people who’ve proved they can do it. Instead of long summer holidays, embed aspirant leaders with Larry Page or James Dyson so they can experience successful leadership.

Third, because better training can only do so much, we must open political institutions to people and ideas from outside SW1.

A few people repeatedly prove able to solve hard problems in theoretical and practical fields, creating important new ideas and huge value. Whitehall and Westminster operate to exclude them from influence. Instead, they tend to promote hacks and apparatchiks and incentivise psychopathic narcissism and bureaucratic infighting skills – not the pursuit of the public interest.

How to open up the system? First, a Prime Minister should be able to appoint Secretaries of State from outside Parliament. [How? A quick and dirty solution would be: a) shove them in the Lords, b) give Lords ministers ‘rights of audience’ in the Commons, c) strengthen the Select Committee system.]

Second, the 150 year experiment with a permanent civil service should end and Whitehall must open to outsiders. The role of Permanent Secretary should go and ministers should appoint departmental chief executives so they are really responsible for policy and implementation. Expertise should be brought in as needed with no restrictions from the destructive civil service ‘human resources’ system that programmes government to fail. Mass collaborations are revolutionising science [cf. Michael Nielsen’s brilliant book]; they could revolutionise policy. Real openness would bring urgent focus to Whitehall’s disastrous lack of skills in basic functions such as budgeting, contracts, procurement, legal advice, and project management.

Third, Whitehall’s functions should be amputated. The Department for Education improved as Gove shrank it. Other departments would benefit from extreme focus, simplification, and firing thousands of overpaid people. If the bureaucracy ceases to be ‘permanent’, it can adapt quickly. Instead of obsessing on process, distorting targets, and micromanaging methods, it could shift to incentivising goals and decentralising methods.

Fourth, existing legal relationships with the EU and ECHR must change. They are incompatible with democratic and effective government.

Fifth, Number Ten must be reoriented from ‘government by punditry’ to a focus on the operational planning and project management needed to convert priorities to reality over months and years.

Technological changes such as genetic engineering and machine intelligence are bringing revolution. It would be better to undertake it than undergo it.

 

 

UPDATE DOC – Open Policy Experiment 1: School Direct and Initial Teacher Training

This link is to a PDF of an update on the Open Policy experiment on teacher training and School Direct that I began with a blog on 22 July.

Please leave all comments / corrections etc in the comments on THIS blog, not the original (unless you are specifically replying to a comment on the original).

I do not mind any degree of disagreement with me provided it is explained. I will maintain the same strict policy on comments. Please think about your comment – how could someone use this to improve the document, or avoid a mistake that I can explain etc?

Thanks to all for making the effort to help and apologies for new errors I have introduced – please fix them.

I will watch comments and, if there is sufficient interest, I will update this document with additions, corrections of my mistakes etc.

Hopefully your collective efforts will yield some progress…

‘Given enough eyeballs, all bugs are shallow’.

DC

Ps.

I make a few references to ‘Cargo Cult science’. This refers to a famous speech by the Nobel-winning physicist Richard Feynman about education research and scientific methodology. It explains the difference between a) the methods and ‘extreme honesty’ that, at its best, characterise the scientific method when applied to physics and b) ‘cargo cult science’ – social science research that has the form of the scientific method without its substance – which characterises so much education research (and politicians’ use of research). It should be on the reading list for all trainee teachers. A PDF is here.

‘Standin’ by the window, where the light is strong’: de-extinction, machine intelligence, the search for extra-solar life, autonomous drone swarms bombing Parliament, genetics & IQ, science & politics, and much more @ SciFoo 2014

‘SciFoo’ 8-10 August 2014, the Googleplex, Silicon Valley, California.

On Friday 8 August, I woke up in Big Sur (the coast of Northern California), looked out over the waves breaking on the wild empty coastline, munched a delicious Mexican breakfast at Deetjen’s, then drove north on Highway 1 towards Palo Alto where a few hours later I found myself looking through the windows of Google’s HQ at a glittering sunset in Silicon Valley.

I was going to ‘SciFoo’. SciFoo is a weekend science conference. It is hosted by Larry Page at Google’s HQ in Silicon Valley and organised by various people including the brilliant Timo Hannay from Digital Science.

I was invited because of my essay that became public last year (cf. HERE). Of the 200+ people, I was probably the only one who made zero positive contribution to the fascinating weekend and therefore wasted a place. So although it was a fantastic experience for me, the organisers should not invite me back, and I feel guilty about the person who could not go because I was there. At least I can let others know about some of the things discussed… (Although it was theoretically ‘on the record unless stated otherwise’, I could tell that many scientists were not thinking about this, so I have left out some things that I think they would not want attributed. Given they were not experienced politicians being interviewed but scientists at a scientific conference, I’m erring on the side of caution, particularly given the subjects discussed.)

It was very interesting to see many of the people whose work I mentioned in my essay and watch them interacting with each other – intellectually and psychologically / physically.

I will describe some of the things that struck me, though since about 7-10 sessions ran simultaneously this is only a small snapshot.

In my essay, I discuss some of the background to many of these subjects so I will put references [in square brackets] so people can refer to it if they want.

Please note that below I am reporting what I think others were saying – unless it is clear, I am not giving my own views. On technical issues, I do not have my ‘own’ views – I do not have relevant skills. All I can do is judge where consensus lies and how strong it is. Many important issues involve asking at least a) is there a strong scientific consensus on X among physical scientists with hard quantitative data to support their ideas (uber-example, the Standard Model of particle physics), and b) what are the non-science issues, such as ‘what will it cost, who pays/suffers and why?’ On a), I can only try to judge what technically skilled people think. b) is a different matter.

Whether you were there or not, please leave corrections / additions / questions in the comments box. Apologies for errors…

In a nutshell, a few likely scenarios / ideas, without spelling out caveats… 1) Extinct species are soon going to be brought back to life and the same technology will be used to modify existing species to help prevent them going extinct. 2) CRISPR – a new gene editing technology – will be used to cure diseases and ‘enhance’ human performance but may also enable garage bio-hackers to make other species extinct. 3) With the launch of satellites in 2017/18, we may find signs of life by 2020 among the ~10^11 exoplanets we now know exist just in our own galaxy, though it will probably take 20-30 years, and the search will also soon get crowdsourced in a way schools can join in. 4) There is a reasonable chance we will have found many of the genes for IQ within a decade via BGI’s project, and the rich may use this information for embryo selection. 5) ‘Artificial neural networks’ are already outperforming humans on various pattern-recognition problems and will continue to advance rapidly. 6) Automation will push issues like a negative income tax onto the political agenda as millions lose their jobs. 7) Autonomous drones will be used for assassinations in Europe and America shortly. 8) Read Neil Gershenfeld’s book ‘FAB’ if you haven’t and are interested in science education / 3D printing / computer science (or at least watch his TED talks). 9) Scientists are desperate to influence policy and politics but do not know how.

Biological engineering / computational biology / synthetic biology [Section 4]

George Church (Harvard), a world-leading biologist, spoke at a few sessions and his team’s research interests were much discussed.  (Don’t assume he said any specific thing below.)

The falling cost of DNA sequencing continues to spur all sorts of advances. It has fallen from a billion dollars per genome a decade ago to less than a thousand dollars now (a million-fold improvement), and the Pentagon is planning on it reaching $100 soon. We can also sequence cancer cells to track their evolution in the body.
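As a quick sanity check on that rate (my own arithmetic, assuming the round numbers above – roughly $1bn per genome a decade ago, roughly $1,000 now):

```python
from math import log2

# Round numbers from the text: ~$1e9 per genome a decade ago, ~$1e3 now --
# a million-fold fall over ~10 years.
old_cost, new_cost, years = 1e9, 1e3, 10

halvings = log2(old_cost / new_cost)          # ~20 halvings for a million-fold fall
months_per_halving = years * 12 / halvings    # ~6 months per halving

print(f"{halvings:.1f} halvings -> one every {months_per_halving:.1f} months")
```

An implied halving time of roughly six months is several times faster than Moore’s law (a doubling every 18-24 months), which is why sequencing is often described as outpacing computing.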

CRISPR. CRISPR is a new (2012) and very hot technology that is a sort of ‘cut and paste’ gene editing tool. It allows much more precise and effective engineering of genomes. Labs across America are rushing to apply it to all sorts of problems. In March this year, it was used to correct faulty genes in mice and cure them of a liver condition. It plays a major part in many of the biological issues sketched below.

‘De-extinction’ (bringing extinct species back to life). People are now planning the practical steps for de-extinction to the extent that they are scoping out land in Siberia where woolly mammoths will roam. As well as creating whole organisms, they will also grow organs modified by particular genes to test what specific genes and combinations do. This is no longer sci-fi – it is being planned and is likely to happen. The buffalo population was recently re-built (Google serves buffalo burgers in its amazing kitchens) from a tiny population to hundreds of thousands and there seems no reason to think it is impossible to build a significant population from scratch.

What does this mean? You take the DNA from an animal, say a woolly mammoth buried in the ground, sequence it, then use the digitised genome to create an embryo and either grow it in a similar animal (e.g. elephant for a mammoth) or in an artificial womb. (I missed the bit explaining the rationale for some of the proposed projects but, apart from the scientific reasons, one rationale for the mammoth was described as a conservation effort to preserve the frozen tundra and prevent massive amounts of greenhouse gases being released from beneath it.)

There are also possibilities of using this technology for conservation. For example, one could re-engineer the Asian elephant so that it could survive in less hospitable climates (e.g. modify the genes that produce haemoglobin so it is viable in colder places).

Now that we have sequenced the genome for Neanderthals (and learned that humans interbred with them, so you have traces of their DNA – unless you’re an indigenous sub-Saharan African), there is no known physical reason why we could not bring a Neanderthal back to life once the technology has been refined on other animals. This obviously raises many ethical issues – e.g. if we did it, they would have to be given the same legal rights as us (one distinguished person said that if there were one in the room with us we would not notice, contra the pictures often used to illustrate them). It is assumed by many that this will happen (nobody questioned the assumption) – just as it seemed to be generally assumed that human cloning will happen – though probably not in a western country but somewhere with fewer legal restrictions, after the basic technologies have been refined. (The Harvard team gets emails from women volunteering to be the Neanderthal’s surrogate mum.)

‘Biohacking’. Biohacking is advancing faster than Moore’s Law. CRISPR editing will allow us to enhance ourselves. E.g. Tibetans have evolved much more efficient systems for coping with high altitude, and some Africans have much stronger bones than the rest of us (see below). Will we reengineer ourselves to obtain these advantages? CRISPR obviously also empowers all sorts of malevolent actors – cf. this very recent paper (by Church et al). It may soon be possible for people in their garages to edit genomes and accidentally or deliberately drive species to extinction as well as attempt to release deadly pathogens. I could not understand why people were not more worried about this – I hope I was missing a lot. (Some had the attitude that ‘nature already does bio-terrorism’ so we should relax. I did not find this comforting and I’m sure I am in the majority, so for anybody influential reading this I would strongly advise you not to use this argument in public advocacy or it is likely to accelerate calls for your labs to be shut down.)

‘Junk’. There is more and more analysis of what used to be called ‘junk DNA’. It is now clear that far from being ‘junk’ much of this has functions we do not understand. This connects to the issue that although we sequenced the human genome over a decade ago, the quality of the ‘reference’ version is not great and (it sounded like from the discussions) it needs upgrading.

‘Push button’ cheap DNA sequencers are around the corner. Might such devices become as ubiquitous as desktop printers? Why doesn’t someone create a ‘gene web browser’ that can cope with all the different data formats for genomes?

Privacy. There was a lot of talk about ‘do you want your genome on the web?’. I asked a quick informal pop quiz (someone else’s idea): there was unanimity that ‘I’d much rather my genome was on the web than my browsing history’. [UPDATE: n<10 and perhaps they were tongue in cheek!? One scientist pointed out in a session that when he informed his insurance company, after sequencing his own genome, that he had a very high risk of getting colon cancer, they raised his premiums. There are all sorts of reasons one would want to control genomic information and I was being a bit facetious.]

In many ways, computational biology and synthetic biology have that revolutionary feeling of the PC revolution in the 1970s – huge energy, massive potential for people without big resources to make big contributions, the young crowding in, the feeling of dramatic improvements imminent. Will this all seem ‘too risky’? It’s hard to know how the public will respond to risk. We put up with predictable annual carnage from car accidents but freak out over trivia. We ignore millions of deaths in the Congo but freak out over a handful in Israel/Gaza. My feeling is some of the scientists are too blasé about how the public will react to the risks, but I was wrong about how much fear there would be about the news that scientists recently deliberately engineered a much more dangerous version of an animal flu.

AI / machine learning / neuroscience [Section 5].

Artificial neural networks (NNs), now often referred to as ‘deep learning’, were first created 50 years ago but languished for a while when progress slowed. The field is now hot again. (Last year Google bought some companies leading the field, and a company, Boston Dynamics, that has had a long-term collaboration with DARPA.)

Jurgen Schmidhuber explained progress and how NNs have recently approached or surpassed human performance in various fields. E.g. recently NNs have surpassed human performance in recognising traffic signals (0.56% error rate for the best NN versus 1.16% for humans). Progress in all sorts of pattern recognition problems is clearly going to continue rapidly. E.g. NNs are now being used to automate a) the analysis of scans for cancer cells and b) the labelling of scans of human brains – so artificial neural networks are now scanning and labelling natural neural networks.

Steve Hsu has blogged about this session here:

http://infoproc.blogspot.co.uk/2014/08/neural-networks-and-deep-learning.html?m=1

Michael Nielsen is publishing an education project online for people to teach themselves the basics of neural networks. It is brilliant and I would strongly advise teachers reading this blog to consider introducing it into their schools and doing the course with the pupils.

http://neuralnetworksanddeeplearning.com
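For a flavour of what the course covers, here is a minimal sketch (mine, not taken from the course) of a forward pass through a small network of the kind Nielsen uses for handwritten digits; the 784-30-10 layer sizes are his standard opening example:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    """Squash each activation into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Toy three-layer network: 784 inputs (28x28 pixels), 30 hidden neurons,
# 10 outputs (one per digit class). Weights are random, i.e. untrained.
sizes = [784, 30, 10]
weights = [rng.standard_normal((y, x)) for x, y in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal((y, 1)) for y in sizes[1:]]

def feedforward(a):
    """Propagate a column vector of activations through every layer."""
    for w, b in zip(weights, biases):
        a = sigmoid(w @ a + b)
    return a

x = rng.random((784, 1))   # a fake 'image' as a column vector
out = feedforward(x)
print(out.shape)           # one activation per digit class
```

Training – adjusting the weights by gradient descent so the outputs match labels – is what the rest of the course builds up, a few lines at a time.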

Neil Gershenfeld (MIT) gave a couple of presentations. One was on developments in computer science connecting: non-‘von Neumann architecture’, programmable matter, 3D printing, ‘the internet of things’ etc. [Cf. Section 3.] NB. IBM announced this month substantial progress in their quest for a new computer architecture that is ‘non-von Neumann’: cf. this –

http://venturebeat.com/2014/08/07/ibms-synapse-marshals-the-power-of-the-human-brain-in-a-computer/view-all/

Another was on the idea of an ‘interspecies internet’. We now know many species can recognise each other, think, and communicate much better than we realised. He showed bonobos playing music with Peter Gabriel and dolphins communicating. He and others are plugging them into the internet. Some are doing this to help the general goal of figuring out how we might communicate with intelligent aliens – or how they might communicate with us.

(Gershenfeld’s book FAB led me to push 3D printing into the new National Curriculum and I would urge school science teachers to watch his TED talks and read this book. [INSERTED LATER: Some people have asked about this point. I (I thought obviously) did not mean I wrote the NC document. I meant – I pushed the subject into the discussions with the committees/drafters who wrote the NC. Experts in the field agreed it belonged. When it came out, this was not controversial. We also funded pilots with 3D printers so schools could get good advice about how to teach the subject well.] His point about 3D printers restoring the connection between thinking and making – lost post-Renaissance – is of great importance and could help end the foolishly entrenched ‘knowledge’ vs ‘skills’ and academic vs vocational trench wars. Gove actually gave a speech about this not long before he was moved and as far as I could tell it got less coverage than any speech he ever gave, thus proving the cliché about speeches on ‘skills’.)

There were a few presentations about ‘computational neuroscience’. I could not understand anything much as they were too technical. It was clear that there is deep concern among EU neuroscientists about the EU’s  huge funding for Henry Markram’s Human Brain Project. One leading neuroscientist said to me that the whole project is misguided as it does not have clear focused goals and the ‘overhype’ will lead to public anger in a few years. Apparently, the EU is reconsidering the project and its goals. I have no idea about the merits of these arguments. I have a general prejudice that, outside special circumstances, experience suggests that it is better to put funding into many pots and see what works, as DARPA does.

There are all sorts of crossovers between: AI / neuroscience / big data / NNs / algorithmic pattern recognition in other fields.

Peter Norvig, a leader in machine intelligence, said that he is more worried about the imminent social implications of continued advances making millions unemployed than he is about a sudden ‘Terminator / SKYNET’ scenario of a general purpose AI bootstrapping itself to greater than human intelligence and exterminating us all. Let’s hope so. It is obvious that this field is going to keep pushing boundaries – in open, commercial, and classified projects – so we are essentially going to be hoping for the best as we make more and more advances in AI. The idea of a ‘negative income tax’ – or some other form of essentially paying people X just to live – seems bound to return to the agenda. I think it could be a way around all sorts of welfare arguments. The main obstacle, it seems to me, is that people won’t accept paying for it if they think uncontrolled immigration will continue as it is now.

Space [Section 2]

There was great interest in various space projects, and some senior people from NASA attended. There is much sadness at how NASA, despite many great people, has become a normal government institution – i.e. caught in DC politics, very bureaucratic, and dysfunctional in various ways. On the other hand, many private ventures are now growing. E.g. Elon Musk is lowering the $/kg of getting material into orbit and planning a non-government Mars mission. As I said in my essay, really opening up space requires a space economy – not just pure science and research (such as putting telescopes on the far side of the moon, which we obviously should do). Columbus opened up America – not the Vikings.

There is another obvious motive. As Carl Sagan said, if the dinosaurs had had a space programme, they’d still be here. In the long-term we either develop tools for dealing with asteroids or we will be destroyed. We know this for sure. I think I heard that NASA is planning to park a small asteroid close to the moon around 2020 but I may have misheard / misunderstood.

Mario Livio led a great session on the search for life on exoplanets. The galaxy has ~10^11 stars and there is ~1 planet on average per star. There are ~10^11 galaxies, so a Fermi estimate is there are ~10^22 planets – 10 billion trillion planets – in the observable universe (this number is roughly 1,000 times bigger than the number you get in the fable of putting a grain of rice on the first square of a chessboard and doubling on each subsequent square). Many of them are in the ‘habitable zone’ around stars.
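The back-of-envelope arithmetic checks out (a sketch using the round numbers quoted above):

```python
stars_per_galaxy = 1e11     # ~10^11 stars in a typical galaxy
planets_per_star = 1.0      # ~1 planet per star on average
galaxies = 1e11             # ~10^11 galaxies in the observable universe

planets = stars_per_galaxy * planets_per_star * galaxies  # ~10^22
rice_grains = 2**64 - 1     # total grains in the chessboard fable

print(f"~{planets:.0e} planets vs ~{rice_grains:.1e} grains of rice")
# The ratio comes out at a few hundred -- the same order of magnitude
# as the 'roughly 1,000 times bigger' quoted above.
print(f"ratio: ~{planets / rice_grains:.0f}x")
```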

In 2017/18, there are two satellites launching that will be able to do spectroscopy on exoplanets – i.e. examine their atmospheres and detect things like oxygen and water. ‘If we get lucky’, these satellites will find ‘bio-signatures’ of life. If they find life having looked at only a few planets, then it would mean that life is very common. ‘More likely’ is it will take 20-30 years and a new generation of space-based telescopes to find life. If planets are found with likely biosignatures, then it would make sense to turn SETI’s instruments towards them to see if they find anything. (However, we are already phasing out the use of radio waves for various communications – perhaps the use of radio waves is only a short window in the lifetime of a civilisation.) There are complex Bayesian arguments about what we might infer about our own likely future given various discoveries but I won’t go into those now. (E.g. if we find life is common but no traces of intelligent life, does this mean a) the evolution of complex life is not a common development from simple life; b) intelligent life is also common but it destroys itself; c) they’re hiding, etc.)
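One toy version of the ‘life is very common’ inference, assuming a uniform prior over the fraction of life-bearing planets (this is my illustration using Laplace’s rule of succession, not anything presented in the session):

```python
from fractions import Fraction

def posterior_life_frequency(found, examined):
    """Posterior mean fraction of life-bearing planets under a uniform
    (Beta(1,1)) prior -- Laplace's rule of succession: (k+1)/(n+2)."""
    return Fraction(found + 1, examined + 2)

# If even 1 of the first 5 planets examined shows biosignatures, the
# posterior expectation is already that life is common:
print(posterior_life_frequency(1, 5))   # 2/7 -- roughly 29% of planets
# Whereas 5 empty planets barely move us -- hence the 20-30 year estimate:
print(posterior_life_frequency(0, 5))   # 1/7 -- far from ruling life out
```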

A very impressive (and helpful towards the ignorant like me) young scientist working on exoplanets called Olivier Guyon demonstrated a fascinating project to crowdsource the search for exoplanets by building a global network of automated cameras – PANOPTES (www.projectpanoptes.org). His team has built a simple system that can find exoplanets using normal digital cameras costing less than $1,000. They sit in a box connected to a 12V power supply, automatically take pictures of the night sky every few seconds, then email the data to the cloud. There, the data is aggregated and algorithms search for exoplanets. These units are cheap (can’t remember what he said but I think <$5,000). Everything is open-source, open-hardware. They will start shipping later this year and will make a brilliant school science project. Guyon has made the project with schools in mind so that assembling and operating the units will not require professional-level skills. They are also exploring the next move to connect smartphone cameras.

Building the >15m diameter space telescopes we need to search for life seems to me an obvious priority for scientific budgets –  it is one of the handful of the most profound questions facing us.

There was an interesting cross-over discussion about ‘space and genetics’ in which people discussed various ways in which space exploration would encourage / require genetic modification. E.g.1 some sort of rocket fuel has recently been discovered to exist in large quantities on Mars. This is very handy but the substance is toxic. It might therefore make sense to modify humans going to live on Mars to be resistant. E.g.2 Space travel weakens bones. It has been discovered that mutations in the human population can improve bone strength by 8 standard deviations. This is a massive improvement – for comparison, 8 SDs in IQ covers people from severely mentally disabled to Nobel-winners. This was discovered by a team of scientists in Africa who noticed that people in a local tribe who got hit by cars did not suffer broken bones, so they sequenced the locals’ genomes. (Someone said there have already been successful clinical trials testing this discovery in a real drug to deal with osteoporosis.) E.g.3 Engineering E. Coli shows that just four mutations can improve resistance to radiation by ?1,000 times (can’t read my note).
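To put the 8-standard-deviation figure in perspective (my own arithmetic, just restating the text’s standard IQ scaling of mean 100, SD 15):

```python
from statistics import NormalDist

# How extreme is an 8-standard-deviation difference? The chance of a
# random draw from a normal distribution landing more than 8 SD above
# the mean is vanishingly small:
tail = 1 - NormalDist().cdf(8)
print(f"P(> +8 SD) ~ {tail:.1e}")

# IQ is scaled to mean 100, SD 15, so a span of 8 SD (4 below the mean
# to 4 above) runs from IQ 40 to IQ 160 -- roughly 'severely disabled'
# to 'Nobel-winner', as the comparison in the text puts it.
lo, hi = 100 - 4 * 15, 100 + 4 * 15
print(lo, hi)   # 40 160
```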

Craig Venter and others are thinking about long-term projects to send ‘von Neumann-bots’ (self-replicating space drones) across the universe containing machines that could create biological life once they arrive somewhere interesting, thus avoiding the difficult problems of keeping humans alive for thousands of years on spaceships. (Nobel-winning physicist Gerard ’t Hooft explains the basic principles of this in his book Playing with Planets.)

This paper (August 2014) summarises issues in the search for life:

http://www.pnas.org/content/early/2014/08/01/1304213111.full.pdf

Finding the genes for IQ and engineering possibilities [Section 5].

When my essay came out last year, there was a lot of mistaken reporting that encouraged many in the education world to grab the wrong end of the stick about IQ, though the BBC documentary about the controversy (cf. below) was excellent and a big step forward. It remains the case that very few people realise that in the last couple of years direct examination of DNA has now vindicated the consistent numbers on IQ heritability from decades of twin/adoption studies.

The rough heritability numbers for IQ are no longer in doubt among physical scientists who study this field: it is roughly 50% heritable at age ~18-20 and this number rises towards 70-80% for older adults. This is important because IQ is such a good predictor of the future – it is a better predictor than social class. E.g. The long-term Study of Mathematically Precocious Youth, which follows what has happened to children with 1:10,000 ability, shows among many things that a) a simple ‘noisy’ test administered at age 12-13 can make amazingly accurate predictions about their future, and b) achievements such as scientific breakthroughs correlate strongly with IQ. (If people looked at the data from SMPY, then I think some of the heat and noise in the debate  would fade but it is a sad fact that approximately zero senior powerful people in the English education world had even heard of this study before the furore over Plomin last year.)

Further, the environmental effects that are important are not the things that people assume. If you test the IQ of an adopted child in adulthood and the parents who adopted it, you find approximately zero correlation – all those anguished parenting discussions had approximately no measurable impact on IQ. (This does not mean that ‘parenting doesn’t matter’ – parents can transfer narrow skills such as playing the violin.) In the technical language, the environmental effects that are important are ‘non-shared’ environmental effects – i.e. they are things that two identical twins do not experience in the same way. We do not know what they are. It is reasonable to think that they are effectively random tiny events with nonlinear effects that we may never be able to track in detail – cf. this paper for a discussion of this issue in the context of epidemiology: http://ije.oxfordjournals.org/content/40/3/537.full.pdf+html

There remains widespread confusion on this subject among social scientists, education researchers, and the worlds of politics and the media where people were told misleading things in the 1980s and 1990s and do not realise that the debates have been transformed. To be fair, however, it was clear from this weekend that even many biologists do not know about new developments in this field so it is not surprising that political journalists and education researchers do not.

(An example of confusion in the political/media world… In my essay, I used the technical term ‘heritable’ which is a population statistic – not a statement about an individual. I also predicted that media coverage would confuse the subject (e.g. by saying things like ‘70% of your IQ comes from genes’). Sure enough some journalists claimed I said the opposite of what I actually said then they quoted scientists attacking me for making a mistake that not only did I not make but which I actually warned about. Possibly the most confused sentence of all those in the media about my essay was the line ‘wealth is more heritable than genes’, which was in Polly Toynbee’s column and accompanying headline in the Guardian. This sentence is a nonsense sentence as it completely mangles the meaning of the term ‘heritable’. Much prominent commentary from politicians and sociologists/economists on ‘social mobility’ is gibberish because of mistaken assumptions about genes and environment. The Endnote in my essay has links to work by Plomin, Hsu et al that explains it all properly. This interview with Plomin is excellent: http://www.spectator.co.uk/features/8970941/sorry-but-intelligence-really-is-in-the-genes/. This recent BBC radio programme is excellent and summarises the complex issues well: http://www.bbc.co.uk/programmes/b042q944/episodes/guide)

I had a fascinating discussion/tutorial at SciFoo with Steve Hsu. Steve Hsu is a professor of theoretical physics (and successful entrepreneur) with a long interest in IQ (he also runs a brilliant blog that will keep you up to speed on all sorts). He now works part time on the BGI project in China to discover the genes responsible for IQ.

IQ is very similar to height from the perspective of behavioural genetics. Height has the advantage of being obviously easier to measure, but it has roughly the same heritability. Large-scale GWAS are already identifying some of the genes responsible for height. Hsu recently watched a talk by Fields Medallist Terry Tao and realised that a branch of maths could be used to examine the question: how many genomes do we need to scan to identify a substantial number of the genes for IQ? His answer: ‘roughly 10k moderately rare causal variants of mostly negative effect are responsible for normal population variation’, and finding them will require sequencing roughly a million genomes. The falling cost of sequencing DNA means that this is within reach. ‘At the time of this writing SNP genotyping costs are below $50 USD per individual, meaning that a single super-wealthy benefactor could independently fund a crash program for less than $100 million’ (Hsu).
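The cost claim is easy to sanity-check. A back-of-envelope sketch using only the figures quoted above (roughly a million genomes, genotyping below $50 each):

```python
# Back-of-envelope check of the figures quoted above (both numbers are
# Hsu's, from the passage: ~1 million genomes, SNP genotyping under $50 each)
genomes_needed = 1_000_000
cost_per_genotype = 50          # USD per individual (genotyping, not full sequencing)
total_cost = genomes_needed * cost_per_genotype
print(f"${total_cost:,}")       # $50,000,000 -- comfortably under the quoted $100M
```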

The BGI project to find these genes has hit some snags recently (e.g. a US lawsuit between the two biggest suppliers of gene sequencing machines). However, it is now expected to start again soon. Hsu thinks that within a decade we could find many of the genes responsible for IQ. He has just put his fascinating paper on this subject on his blog (there is also a Q&A on p.27 that will be very useful for journalists):

http://infoproc.blogspot.co.uk/2014/08/genetic-architecture-of-intelligence.html

Just discovering a substantial fraction of the genes would be momentous in itself but there is more. It is already the case that farmers use genomes to make predictions about cows’ properties and behaviour (‘genotype to phenotype’ predictions). It is already the case that rich people could use in vitro fertilisation to select the egg which they think will be most advantageous, because they can sequence genomes of multiple eggs and examine each one to look for problems then pick the one they prefer. Once we identify a substantial number of IQ genes, there is no obvious reason why rich people will not select the egg that has the highest prediction for IQ. 
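To see why selection among a handful of candidates could matter quantitatively, here is a toy simulation (my illustration with hypothetical numbers, not anything from Hsu's paper): in standard-deviation units of the predicted score, choosing the best of n candidates shifts the expectation by roughly the expected maximum of n standard normals.

```python
import numpy as np

# Toy model of 'pick the highest-predicted candidate' (my illustration with
# hypothetical numbers, not from Hsu's paper). In standard-deviation units of
# the predicted score, picking the best of n candidates shifts the expectation
# by E[max of n standard normals].
rng = np.random.default_rng(0)

def expected_gain(n, trials=200_000):
    scores = rng.standard_normal((trials, n))   # predicted scores, n per trial
    return scores.max(axis=1).mean()            # average of the best pick

for n in (1, 5, 10):
    print(n, round(expected_gain(n), 2))        # the gain grows only slowly with n
```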

This clearly raises many big questions. If the poor cannot do the same, the rich could quickly embed advantages and society could become not only more unequal but also divided into biological classes. One response is that if this sort of thing does become possible, a national health system should fund everybody to do it – i.e. it would not mandate the process but would give everybody the choice of whether to use it. Once the knowledge exists, it is hard to see what will stop some people making use of it and offering services to – at least – the super-rich.

It is vital to separate two things: a) the basic science of genetics and cognition (which must be allowed to develop), and b) the potential technological applications and their social implications. The latter will rightly make people deeply worried, given our history, and clearly require extremely serious public debate. One of the reasons I wrote my essay was to try to stimulate such debate on the biggest – and potentially most dangerous – scientific issues. By largely ignoring such issues, Westminster, Whitehall, and the political media are wasting the time we have to discuss them, so technological breakthroughs will be unnecessarily shocking when they come.

Hsu’s contribution to this research – and his insight, when listening to Tao, about how to apply a branch of mathematics to the problem – is also a good example of how the more abstract fields of maths and physics often contribute to the messier study of biology and society. The famous mathematician von Neumann practically invented some new fields outside maths and made many contributions to others. The physicist-mathematician Freeman Dyson recently made a major contribution to game theory, realising that a piece of maths could be applied to uncover new strategies that had lain unnoticed for decades (Google “Dyson zero determinant strategies” and cf. this good piece: http://www.americanscientist.org/issues/id.16112,y.0,no.,content.true,page.1,css.print/issue.aspx).
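Dyson's result is concrete enough to check numerically. The sketch below (my illustration; the payoffs and the extortionate strategy are the standard ones from Press and Dyson's 2012 paper) computes stationary payoffs when a ‘zero-determinant’ extortioner meets an unconditional cooperator in the iterated prisoner's dilemma: whatever the opponent does, the extortioner's surplus over the punishment payoff is pinned at three times the opponent's.

```python
import numpy as np

# Memory-one strategies give the probability of cooperating after each joint
# outcome (CC, CD, DC, DD). This extortionate zero-determinant strategy
# (standard payoffs T=5, R=3, P=1, S=0, from Press & Dyson 2012) enforces
# s_X - P = 3 * (s_Y - P) against any opponent; here we check it against
# an unconditional cooperator.
p = np.array([11/13, 1/2, 7/26, 0.0])   # extortioner X (extortion factor 3)
q = np.array([1.0, 1.0, 1.0, 1.0])      # opponent Y: always cooperate

q_swapped = q[[0, 2, 1, 3]]             # Y sees each outcome with roles swapped

# 4x4 Markov transition matrix over joint states (CC, CD, DC, DD)
M = np.array([[p[i] * q_swapped[i],
               p[i] * (1 - q_swapped[i]),
               (1 - p[i]) * q_swapped[i],
               (1 - p[i]) * (1 - q_swapped[i])] for i in range(4)])

# stationary distribution = left eigenvector of M for eigenvalue 1
vals, vecs = np.linalg.eig(M.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()

Sx = np.array([3, 0, 5, 1])             # X's payoff in each joint state
Sy = np.array([3, 5, 0, 1])             # Y's payoff
sx, sy = pi @ Sx, pi @ Sy
print(sx, sy, (sx - 1) / (sy - 1))      # the ratio comes out at 3
```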

However, this also raises a difficult issue. There is a great deal in Hsu’s paper – and in the subject of IQ and heritability generally – that I do not have the mathematical skills to understand. The same will be true of a large fraction of education researchers in education departments – I would bet a large majority. The problem recurs for many other vital issues (and applies to MPs and their advisers), and it requires general work on translating such research into forms that the media can explain.

Kathryn Ashbury also did a session on genes and education but I went to a conflicting one with George Church so unfortunately I missed it.

‘Big data’, simulations, and distributed systems [Section 6&7]

The rival to Markram’s Brain Project for mega EU funding was Dirk Helbing (ETH Zurich) and his project for new simulations to aid policy-making. Helbing was also at SciFoo and gave a couple of presentations. I will write separately about this.

Helbing says convincingly: ‘science must become a fifth pillar of democracies, besides legislation, executive, jurisdiction, and the public media’. Many in politics hope that technology will help them control things that now feel out of control. This is unlikely: the amount of data is growing faster than processing power, and the complexity of networked systems grows factorially, so top-down control will become less and less effective.

The alternative? ‘Distributed (self-)control, i.e. bottom-up self-regulation’. E.g. Helbing’s team has invented self-regulating traffic lights driven by traffic flows that can ‘outperform the classical top-down control by a conventional traffic center.’
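The principle behind such lights can be illustrated with a toy model (my own sketch, not Helbing's actual algorithm): give green to whichever approach currently has the longer queue, and compare with a rigid fixed cycle imposed from above.

```python
import random

# Toy sketch of bottom-up vs top-down signal control (my illustration of the
# principle, not Helbing's actual algorithm). Two approaches feed one
# intersection; a controller picks which gets green each step.
def simulate(controller, arrivals=(0.4, 0.2), steps=5000):
    random.seed(42)                        # same traffic for every controller
    queues = [0, 0]
    total_waiting = 0
    for t in range(steps):
        for i, rate in enumerate(arrivals):
            if random.random() < rate:     # Bernoulli arrivals
                queues[i] += 1
        g = controller(t, queues)          # which approach gets green
        queues[g] = max(0, queues[g] - 1)  # one car clears per green step
        total_waiting += sum(queues)
    return total_waiting / steps           # mean queue length

fixed = lambda t, q: (t // 10) % 2                # rigid 10-step cycle
adaptive = lambda t, q: 0 if q[0] >= q[1] else 1  # green to the longer queue

print(simulate(fixed), simulate(adaptive))
```

The flow-driven rule never wastes green time on an empty approach, which is why it beats the fixed plan despite having no central coordination.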

‘Can we transfer and extend this principle to socio-economic systems? Indeed, we are now developing mechanisms to overcome coordination and cooperation failures, conflicts, and other age-old problems. This can be done with suitably designed social media and sensor networks for real-time measurements, which will eventually weave a Planetary Nervous System. Hence, we can finally realize the dream of self-regulating systems… [S]uitable institutions such as certain social media – combined with suitable reputation systems – can promote other-regarding decision-making. The quick spreading of social media and reputation systems, in fact, indicates the emergence of a superior organizational principle, which creates collective intelligence by harvesting the value of diversity…’

His project’s website is here:

http://www.futurict.eu

I wish MPs and spads in all parties would look at this project and Helbing’s work. It provides technologically viable and theoretically justifiable mechanisms to avoid the current sterile party debates about delivery of services. We must move from Whitehall control to distributed systems…

Science and politics

Unsurprisingly, there was a lot of grumbling about politicians, regulation, Washington gridlock, bureaucracy and so on.

Much of it is clearly justified. Some working in genetics had stories about how regulations forbid them from telling people about imminently life-threatening medical problems they discover. Others bemoaned the lack of action on asteroid defence and climate change.

Some of these problems are inherently extremely difficult, as I discuss in my essay. On top of this, though, is the problem that many (most?) scientists do not know how to go about changing things.

It was interesting that some very eminent scientists, all with higher IQ than ~100% of those in politics, have naive views about how politics works. In group discussions there was little focused discussion of how they could influence politics better, even though it is clearly a subject they care about very much. (Gershenfeld said that scientists have recently launched a bid to take over various local government functions in Barcelona, which sounds interesting.)

A few times I nearly joined in the discussion but thought it would disrupt things and distract them. In retrospect this may have been a mistake and I should have spoken up. But I am also not articulate, and I worried I would not be able to explain their errors and would just waste their time.

I will blog on this issue separately. A few simple observations…

To get things changed in politics, scientists need mechanisms a) to agree priorities, in order to focus their actions on b) roadmaps with specifics. Generalised whining never works. The way to influence politicians is to make it easy for them to fall down certain paths without much thought, and this means having a general set of goals but also a detailed roadmap the politicians can apply; otherwise they will drift by default into the daily fog of chaos and moonlight.

Scientists also need to be prepared to put their heads above the parapet and face controversy. Many comments amounted to ‘why don’t politicians do the obviously rational thing without me having to take a risk of being embroiled in media horrors’. Sorry guys but this is not how it works.

Many academics are entirely focused on their research and do not want to lose time to politics. This is entirely reasonable. But if you won’t get involved you can have little influence other than lending your name to the efforts of others.

Working in the Department for Education, I saw that very few scientists in England were prepared to face controversy over A Levels (exams at 18) and university entry / undergraduate standards, even though the problem directly affected their own research areas. Many dozens sought me out in 2007–14 to complain about existing systems. I can count on the fingers of one hand those who rolled the dice and did things in the public domain that could have caused them problems. I have heard many scientists complain about media reports, but when I’ve said ‘write a blog explaining why they’re wrong’, the answer is almost invariably ‘oh, the VC’s office would go mad’. If they won’t put their heads above the parapet on an issue that directly touches their own subject and career, how much are they likely to achieve in moving political debate in areas outside their own fields?

As long as scientists a) want to avoid controversy and b) remain isolated, they cannot have the leverage they want. The way to minimise controversy is to combine in groups – for the evolutionary biologists reading this, think SHOALS! – so that each individual is less exposed. But you will only join a shoal if you agree a common purpose.

I’m going to do a blog on ‘How scientists can learn from Bismarck and Jean Monnet to influence politics’. Monnet avoided immediate battles for power in favour of ‘preparing the future’ – i.e. having plans in his pocket for when crises hit and politicians were desperate. He created the EEC this way. Just as people find it extremely hard to operationalise the lessons of Thucydides or Bismarck, they do not operationalise the lessons of Monnet. It would be interesting if scientists did this in a disciplined way; in some ways it seems to me vital if we are to avoid various disasters. It is also necessary, however, to expose scientists to the non-scientific factors in play.

Anyway, it would be worth exploring this question: can very high IQ people with certain personality traits (like von Neumann, not like Gödel) learn enough in half a day’s exposure to case studies of successful political action to enable them to change something significant in politics, provided someone else can do most of the admin donkey work? I’m willing to bet the answer is YES. Whether they will then take personal risks by ACTING is another question.

A physicist remarked: ‘we’re bitching about politicians but we can’t even sort out our own field of scientific publishing which is a mess’.

NB. for scientists who haven’t read anything I’ve written before: do not make the mistake of thinking I am defending politicians. If you read other things I’ve written you will see that I have made all the criticisms you have. But that doesn’t mean scientists cannot do much better than they are at influencing policy.

A few general comments

1. It has puzzled me for over a decade that a) one of the few world-class things the UK still has is Oxbridge, b) we have the example of Silicon Valley and our own history of post-1945 bungling to compare it with (e.g. how the Pentagon treated von Neumann versus how we treated Turing over developing computer science), yet c) we persistently fail to develop venture-capital-based hubs around Oxbridge on the scale they deserve. As I pottered down University Avenue in Palo Alto looking for a haircut, past venture capital offices that can provide billions in start-up investment, I thought: you’ve made a few half-hearted attempts to persuade people to do more on this; when you get home, try again. So I will…

2. It was interesting to see how physicists have core mathematical skills that allow them to grasp fundamentals of other fields without prior study. Watching them reminded me of Mandelbrot’s comment that:

‘It is an extraordinary feature of science that the most diverse, seemingly unrelated, phenomena can be described with the same mathematical tools. The same quadratic equation with which the ancients drew right angles to build their temples can be used today by a banker to calculate the yield to maturity of a new, two-year bond. The same techniques of calculus developed by Newton and Leibniz two centuries ago to study the orbits of Mars and Mercury can be used today by a civil engineer to calculate the maximum stress on a new bridge… But the variety of natural phenomena is boundless while, despite all appearances to the contrary, the number of really distinct mathematical concepts and tools at our disposal is surprisingly small… When we explore the vast realm of natural and human behavior, we find the most useful tools of measurement and calculation are based on surprisingly few basic ideas.’
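Mandelbrot's bond example is literal. With hypothetical numbers of my own (a two-year bond with face value 100, annual coupon 5, priced at 98), the yield to maturity really is the root of the ancients' quadratic:

```python
import math

# Mandelbrot's banker, made literal (the numbers are my own hypothetical
# example): a two-year bond, face value 100, annual coupon 5, price 98.
# With x = 1/(1+y), price = 5x + 105x^2, so the yield y solves a quadratic.
face, coupon, price = 100.0, 5.0, 98.0
a, b, c = face + coupon, coupon, -price            # 105x^2 + 5x - 98 = 0
x = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)  # positive root
y = 1 / x - 1                                      # yield to maturity
print(f"{100 * y:.2f}% per year")                  # a little over the 5% coupon,
                                                   # since the bond trades below par
```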

3. High status people have more confidence in asking basic / fundamental / possibly stupid questions. One can see people thinking ‘I thought that but didn’t say it in case people thought it was stupid and now the famous guy’s said it and everyone thinks he’s profound’. The famous guys don’t worry about looking stupid and they want to get down to fundamentals in fields outside their own.

4. I do not mean this critically but watching some of the participants I was reminded of Freeman Dyson’s comment:

‘I feel it myself, the glitter of nuclear weapons. It is irresistible if you come to them as a scientist. To feel it’s there in your hands. To release the energy that fuels the stars. To let it do your bidding. And to perform these miracles, to lift a million tons of rock into the sky, it is something that gives people an illusion of illimitable power, and it is in some ways responsible for all our troubles, I would say, this is what you might call ‘technical arrogance’ that overcomes people when they see what they can do with their minds.’ 

People talk about rationales for all sorts of things, but looking in their eyes the fundamental driver seems to be: am I right, can I do it, do the patterns in my mind reflect something real? People like this are going to do new things if they can, and they are cleverer than the regulators. As a community, I think it is fair to say that – outside odd fields like nuclear weapons research (odd because it still requires not only a large collection of highly skilled people but also a lot of money and all sorts of elements that are hard, though not impossible, for a non-state actor to acquire and use without detection) – they believe that pushing the barriers of knowledge is right and inevitable. Fifteen years on from the publication of Silicon Valley legend Bill Joy’s famous essay (‘Why the future doesn’t need us’), it is clear that many of the things he feared have come to pass, and there remains no coherent government approach or serious international discussion. (I am not suggesting that banning things is generally the way forward.)

5. The only field where a group of people was openly lobbying for something to be made illegal was autonomous lethal drones. (There is a remorseless logic here: countermeasures against non-autonomous drones (e.g. GPS-spoofing) incentivise one to make one’s drones autonomous. They can move about waiting to spot someone’s face, then destroy them without any need for human input.) However, the discussion confirmed my view that even if a ban might be a good idea, it is doomed, in the short term at least. I wonder what is to stop someone sending a drone swarm across the river and bombing Parliament during PMQs. Given that it will be possible to deploy autonomous drones anonymously, there may be a new era of assassinations coming, quite apart from all the other implications of drones. And given that one may need a drone swarm to defend against a drone swarm, I can’t see them being outlawed any time soon. (Cf. Suarez’s Kill Decision for a great techno-thriller on the subject.)

(Also, I thought that this was an area where those involved in cutting edge issues could benefit from talking to historians. E.g. my understanding is that we filmed the use of anthrax on a Scottish island and delivered the footage to the Nazis with the message that we would anthrax Germany if they used chemical weapons – i.e. the lack of chemical warfare in WWII was a case of successful deterrence, not international law.)

6. A common comment is: ‘technology X [e.g. in vitro fertilisation] was denounced at the time, but humans adapt to such changes amazingly fast, so technology Y will be just the same’. This is a reasonable argument in some ways, but I cannot help thinking that de-extinction, engineered bio-weapons, or human clones will be perceived as qualitative changes far beyond things like in vitro fertilisation.

7. Daniel Suarez told me what his next techno-thriller is about but if I put it on my blog he will deploy an autonomous drone with face recognition AI to kill me, so I’m keeping quiet. If you haven’t read Daemon, read it – it’s a rare book that makes you laugh out loud about how clever the plot is.

8. Von Neumann was heavily involved not only in the Manhattan Project but also the birth of the modern computer, the creation of the hydrogen bomb, and nuclear strategy. Before his tragic early death, he wrote a brilliant essay about the political problem of dealing with advanced technology which should be compulsory reading for all politicians aspiring to lead. It summarises the main problems that we face – ‘for progress, there is no cure…’

http://features.blogs.fortune.cnn.com/2013/01/13/can-we-survive-technology/

As I said at the top, any participants please tell me where I went wrong, and thanks for such a wonderful weekend.