On the referendum #33: High performance government, ‘cognitive technologies’, Michael Nielsen, Bret Victor, & ‘Seeing Rooms’

‘People, ideas, machines — in that order!’ Colonel Boyd.

‘The main thing that’s needed is simply the recognition of how important seeing is, and the will to do something about it.’ Bret Victor.

‘[T]he transfer of an entirely new and quite different framework for thinking about, designing, and using information systems … is immensely more difficult than transferring technology.’ Robert Taylor, one of the handful most responsible for the creation of the internet and personal computing, and an inspiration to Bret Victor.

‘[M]uch of our intellectual elite who think they have “the solutions” have actually cut themselves off from understanding the basis for much of the most important human progress.’ Michael Nielsen, physicist. 

Introduction

This blog looks at an intersection of decision-making, technology, high performance teams and government. It sketches some ideas of physicist Michael Nielsen about cognitive technologies and of computer visionary Bret Victor about the creation of dynamic tools to help understand complex systems and ‘argue with evidence’, such as tools for authoring ‘dynamic documents’, and ‘Seeing Rooms’ for decision-makers — i.e. rooms designed to support decisions in complex environments. It compares normal Cabinet rooms, such as that used in summer 1914 or October 1962, with state-of-the-art Seeing Rooms. There is very powerful feedback between: a) creating dynamic tools to see complex systems deeper (to see inside, see across time, and see across possibilities), thus making it easier to work with reliable knowledge and interactive quantitative models, semi-automating error-correction etc, and b) the potential for big improvements in the performance of political and government decision-making.

It is relevant to Brexit and anybody thinking ‘how on earth do we escape this nightmare’ but 1) these ideas are not at all dependent on whether you support or oppose Brexit, about which reasonable people disagree, and 2) they are generally applicable to how to improve decision-making — for example, they are relevant to problems like ‘how to make decisions during a fast moving nuclear crisis’ which I blogged about recently, or if you are a journalist ‘what future media could look like to help improve debate of politics’. One of the tools Nielsen discusses makes memory a choice by embedding learning in long-term memory rather than, as it is for almost all of us, an accident. I know from my days working on education reform in government that it’s almost impossible to exaggerate how little those who work on education policy think about ‘how to improve learning’.

Fields make huge progress when they move from stories (e.g Icarus) and authority (e.g ‘witch doctor’) to evidence/experiment (e.g physics, wind tunnels) and quantitative models (e.g design of modern aircraft). Political ‘debate’ and the processes of government are largely what they have always been: conflict over stories and authorities where almost nobody even tries to keep track of the facts/arguments/models they’re supposedly arguing about, or tries to learn from evidence, or tries to infer useful principles from examples of extreme success/failure. We can see much better than people could in the past how to shift towards processes of government being ‘partially rational discussion over facts and models and learning from the best examples of organisational success’. But one of the most fundamental and striking aspects of government is that practically nobody involved in it has the faintest interest in or knowledge of how to create high performance teams to make decisions amid uncertainty and complexity. This blindness is connected to another fundamental fact: critical institutions (including the senior civil service and the parties) are programmed to fight to stay dysfunctional, they fight to stay closed and avoid learning about high performance, they fight to exclude the most able people.

I wrote about some reasons for this before the referendum (cf. The Hollow Men). The Westminster and Whitehall response was along the lines of ‘natural party of government’, ‘Rolls Royce civil service’ blah blah. But the fact that Cameron, Heywood (the most powerful civil servant) et al did not understand many basic features of how the world works is why I and a few others gambled on the referendum — we knew that the systemic dysfunction of our institutions and the influence of grotesque incompetents provided an opportunity for extreme leverage. 

Since then, after three years in which the parties, No10 and the senior civil service have imploded (after doing the opposite of what Vote Leave said should happen on every aspect of the negotiations) one thing has held steady — Insiders refuse to ask basic questions about the reasons for this implosion, such as: ‘why Heywood didn’t even put together a sane regular weekly meeting schedule and ministers didn’t even notice all the tricks with agendas/minutes etc’, how are decisions really made in No10, why are so many of the people below some cognitive threshold for understanding basic concepts (cf. the current GATT A24 madness), what does it say about Westminster that both the Adonis-Remainers and the Cash-ERGers have become more detached from reality while a large section of the best-educated have effectively run information operations against their own brains to convince themselves of fairy stories about Facebook, Russia and Brexit…

It’s a mix of amusing and depressing — but not surprising to me — to hear Heywood explain HERE how the British state decided it couldn’t match the resources of a single multinational company or a single university in funding people to think about what the future might hold, which is linked to his failure to make serious contingency plans for losing the referendum. And of course Heywood claimed after the referendum that we didn’t need to worry about the civil service because on project management it has ‘nothing to learn’ from the best private companies. The elevation of Heywood in the pantheon of SW1 is the elevation of the courtier-fixer at the expense of the thinker and the manager — the universal praise for him recently is a beautifully eloquent signal that those in charge are the blind leading the blind and SW1 has forgotten skills of high value, the skills of public servants such as Alanbrooke or Michael Quinlan.

This blog is hopefully useful for some of those thinking about a) improving government around the world and/or b) ‘what comes after the coming collapse and reshaping of the British parties, and how to improve drastically the performance of critical institutions?’

Some old colleagues have said ‘Don’t put this stuff on the internet, we don’t want the second referendum mob looking at it.’ Don’t worry! Ideas like this have to be forced down people’s throats practically at gunpoint. Silicon Valley itself has barely absorbed Bret Victor’s ideas so how likely is it that there will be a rush to adopt them by the world of Blair and Grieve?! These guys can’t tell the difference between courtier-fixers and people with models for truly effective action like General Groves (HERE). Not one in a thousand will read a 10,000 word blog on the intersection of management and technology and the few who do will dismiss it as the babbling of a deluded fool, they won’t learn any more than they learned from the 2004 referendum or from Vote Leave. And if I’m wrong? Great. Things will improve fast and a second referendum based on both sides applying lessons from Bret Victor would be dynamite.

NB. Bret Victor’s project, Dynamic Land, is a non-profit. For an amount of money that a government department like the Department for Education loses weekly without any minister realising it’s lost (in the millions per week in my experience because the quality of financial control is so bad), it could provide crucial funding for Victor and help itself. Of course, any minister who proposed such a thing would be told by officials ‘this is illegal under EU procurement law and remember minister that we must obey EU procurement law forever regardless of Brexit’ — something I know from experience officials say to ministers whether it is legal or not when they don’t like something. And after all, ministers meekly accepted the Kafka-esque order from Heywood to prioritise duties of goodwill to the EU under A50 over preparations to leave A50, so habituated had Cameron’s children become to obeying the real deputy prime minister…

Below are 4 sections:

  1. The value found in intersections of fields
  2. Some ideas of Bret Victor
  3. Some ideas of Michael Nielsen
  4. A summary

*

1. Extreme value is often found in the intersection of fields

The legendary Colonel Boyd (he of the ‘OODA loop’) would shout at audiences ‘People, ideas, machines — in that order.’ Fundamental political problems we face require large improvements in the quality of all three and, harder, systems to integrate all three. Such improvements require looking carefully at the intersection of roughly five entangled areas of study. Extreme value is often found at such intersections.

  • Explore what we know about the selection, education and training of people for high performance (individual/team/organisation) in different fields. We should be selecting people much deeper in the tails of the ability curve — people who are +3 (~1:1,000) or +4 (~1:30,000) standard deviations above average on intelligence, relentless effort, operational ability and so on (now practically entirely absent from the ‘50 most powerful people in Britain’). We should train them in the general art of ‘thinking rationally’ and making decisions amid uncertainty (e.g Munger/Tetlock-style checklists, exercises on SlateStarCodex blog). We should train them in the practical reasons for normal ‘mega-project failure’ and case studies such as the Manhattan Project (General Groves), ICBMs (Bernard Schriever), Apollo (George Mueller), ARPA-PARC (Robert Taylor) that illustrate how the ‘unrecognised simplicities’ of high performance bring extreme success and make them work on such projects before they are responsible for billions rather than putting people like Cameron in charge (after no experience other than bluffing through PPE then PR). NB. China’s leaders have studied these episodes intensely while American and British institutions have actively ‘unlearned’ these lessons.
  • Explore the frontiers of the science of prediction across different fields from physics to weather forecasting to finance and epidemiology. For example, ideas from physics about early warning systems in physical systems have application in many fields, including questions like: to what extent is it possible to predict which news will persist over different timescales, or predict wars from news and social media? There is interesting work combining game theory, machine learning, and Red Teams to predict security threats and improve penetration testing (physical and cyber). The Tetlock/IARPA project showed dramatic performance improvements in political forecasting are possible, contra what people such as Kahneman had thought possible. A recent Nature article by Duncan Watts explained fundamental problems with the way normal social science treats prediction and suggested new approaches — which have been almost entirely ignored by mainstream economists/social scientists. There is vast scope for applying ideas and tools from the physical sciences and data science/AI — largely ignored by mainstream social science, political parties, government bureaucracies and media — to social/political/government problems (as Vote Leave showed in the referendum, though this has been almost totally obscured by all the fake news: clue — it was not ‘microtargeting’).
  • Explore technology and tools. For example, Bret Victor’s work and Michael Nielsen’s work on cognitive technologies. The edge of performance in politics/government will be defined by teams that can combine the ancient ‘unrecognised simplicities of high performance’ with edge-of-the-art technology. No10 is decades behind the pace in old technologies like TV, doesn’t understand simple tools like checklists, and is nowhere with advanced technologies.
  • Explore the frontiers of communication (e.g crisis management, applied psychology). Technology enables people to improve communication with unprecedented speed, scale and iterative testing. It also allows people to wreak chaos with high leverage. The technologies are already beyond the ability of traditional government centralised bureaucracies to cope with. They will develop rapidly such that most such centralised bureaucracies lose more and more control while a few high performance governments use the leverage they bring (cf. China’s combination of mass surveillance, AI, genetic identification, cellphone tracking etc as they desperately scramble to keep control). The better educated think that psychological manipulation is something that happens to ‘the uneducated masses’ but they are extremely deluded — in many ways people like FT pundits are much easier to manipulate, their education actually makes them more susceptible to manipulation, and historically they are the ones who fall for things like Russian fake news (cf. the Guardian and New York Times on Stalin/terror/famine in the 1930s) just as now they fall for fake news about fake news. Despite the centrality of communication to politics it is remarkable how little attention Insiders pay to what works — never mind the question ‘what could work much better?’. The fact that so much of the media believes total rubbish about social media and Brexit shows that the media is incapable of analysing the intersection of politics and technology but, although it is obviously bad that the media disinforms the public, the only rational planning assumption is that this problem will continue and even get worse. The media cannot explain either the use of TV or traditional polling well (though both have been extremely important for over 70 years), and there is no trend towards improvement, so a sound planning assumption is surely that the media will do even worse with new technologies and data science. This will provide large opportunities for good and evil. A new approach able to adapt to the environment an order of magnitude faster than now would disorient political opponents (desperately scrolling through Twitter) to such a degree — in Boyd’s terms it would ‘collapse their OODA loops’ — that it could create crucial political space for focus on the extremely hard process of rewiring government institutions, which now seems impossible for Insiders to focus on given their psychological/operational immersion in the hysteria of 24-hour rolling news and the constant crises generated by dysfunctional bureaucracies.
  • Explore how to re-program political/government institutions at the apex of decision-making authority so that a) people are more incentivised to optimise things we want them to optimise, like error-correction and predictive accuracy, and less incentivised to optimise bureaucratic process, prestige, and signalling as our institutions now do; b) institutions are incentivised to build high performance teams rather than make this practically illegal at the apex of government; and c) we have ‘immune systems’ based on decentralisation and distributed control to minimise the inevitable failures of even the best people and teams.

Example 1: Red Teams and pre-mortems can combat groupthink and normal cognitive biases but they are practically nowhere in the formal structure of governments. There is huge scope for a small, extremely elite, Parliament-mandated Red Team operating next to, and in some senses above, the Cabinet Office to ensure diversity of opinions, fight groupthink and other standard biases, make sure lessons are learned and so on. Cost: a few million that it would recoup within weeks by stopping blunders.

Example 2: prediction tournaments/markets could improve policy and project management, with people able to ‘short’ official delivery timetables — imagine being able to short Grayling’s transport announcements, for example. In many areas new markets could help — e.g markets to allow shorting of house prices to dampen bubbles, as Chris Dillow and others have suggested. The way in which the IARPA/Tetlock work has been ignored in SW1 is proof that MPs and civil servants are not actually interested in — or incentivised to be interested in — who is right, who is actually an ‘expert’, and so on. There are tools available if new people do want to take these things seriously. Cost: a few million at most, possibly thousands, that it would recoup within a year by stopping blunders.
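
The Tetlock/IARPA tournaments made ‘who is actually right?’ a measurable track record by scoring probabilistic forecasts with the Brier score. Below is a minimal Python sketch of that scoring; the questions, forecasts and the two forecasters are invented for illustration.

```python
# Minimal sketch: scoring forecasters with the Brier score (the scoring rule
# used in the Tetlock/IARPA tournaments). All forecasts and outcomes below are
# invented for illustration. Lower is better; always saying 50% scores 0.25.

def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical track records on the same five yes/no questions (1 = it happened).
outcomes     = [1, 0, 0, 1, 1]
forecaster_a = [0.9, 0.2, 0.1, 0.8, 0.7]   # confident and well calibrated
forecaster_b = [0.6, 0.5, 0.4, 0.5, 0.6]   # hedges everything near 50%

print("A:", brier_score(forecaster_a, outcomes))   # ≈ 0.038
print("B:", brier_score(forecaster_b, outcomes))   # ≈ 0.196
```

Run over hundreds of questions, scores like these make expertise an empirical question rather than a matter of status.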

Example 3: we need to consider projects that could bootstrap new international institutions that help solve more general coordination problems such as the risk of accidental nuclear war. The most obvious example of a project like this I can think of is a manned international lunar base which would be useful for a) basic science, b) the practical purposes of building urgently needed near-Earth infrastructure for space industrialisation, and c) forcing the creation of new practical international institutions for cooperation between Great Powers. George Mueller’s team that put man on the moon in 1969 developed a plan to do this that would have been built by now if their plans had not been tragically abandoned in the 1970s. Jeff Bezos is explicitly trying to revive the Mueller vision and Britain should be helping him do it much faster. The old institutions like the UN and EU — built on early 20th Century assumptions about the performance of centralised bureaucracies — are incapable of solving global coordination problems. It seems to me that institutions with the qualities we need are much more likely to emerge out of solving big problems than out of think tank papers about reforming existing institutions. Cost = 10s/100s of billions, return = trillions, or near infinite if shifting our industrial/psychological frontiers into space drastically reduces the chances of widespread destruction.

A) Some fields have fantastic predictive models and there is a huge amount of high quality research, though there is a lot of low-hanging fruit in bringing methods from one field to another.

B) We know a lot about high performance including ‘systems management’ for complex projects but very few organisations use this knowledge and government institutions overwhelmingly try to ignore and suppress the knowledge we have.

C) Some fields have amazing tools for prediction and visualisation but very few organisations use these tools and almost nobody in government does (where colour photocopying is a major challenge).

D) We know a lot about successful communication but very few organisations use this knowledge and most base action on false ideas. E.g political parties spend millions on spreading ideas but almost nothing on thinking about whether the messages are psychologically compelling or their methods/distribution work, and TV companies spend billions on news but almost nothing on understanding what science says about how to convey complex ideas — hence you see massively overpaid presenters like Evan Davis babbling metaphors like ‘economic takeoff’ in front of an airport while his crew films a plane ‘taking off’, or ‘the economy down the plughole’ with pictures of — a plughole.

E) Many thousands worldwide are thinking about all sorts of big government issues but very few can bring them together into coherent plans that a government can deliver and there is almost no application of things like Red Teams and prediction markets. E.g it is impossible to describe the extent to which politicians in Britain do not even consider ‘the timetable and process for turning announcement X into reality’ as something to think about — for people like Cameron and Blair the announcement IS the only reality and ‘management’ is a dirty word for junior people to think about while they focus on ‘strategy’. As I have pointed out elsewhere, it is fascinating that elite business schools have been collecting billions in fees to teach their students WRONGLY that operational excellence is NOT a source of competitive advantage, so it is no surprise that politicians and bureaucrats get this wrong.

But I can see almost nobody integrating the very best knowledge we have about A+B+C+D with E and I strongly suspect there are trillion dollar bills lying on the ground that could be grabbed for trivial cost — trillion dollar bills that people with power are not thinking about and are incentivised not to think about. I might be wrong but I would remind readers that Vote Leave was itself a bet on this proposition being right and I think its success should make people update their beliefs on the competence of elite political institutions and the possibilities for improvement.

Here I want to explore one set of intersections — the ideas of Bret Victor and Michael Nielsen.

*

2. Bret Victor: Cognitive technologies, dynamic tools, interactive quantitative models, Seeing Rooms — making it as easy to insert facts, data, and models in political discussion as it is to insert emoji 

In the 1960s visionaries such as Joseph Licklider, Robert Taylor and Doug Engelbart developed a vision of networked interactive computing that provided the foundation not just for new technologies (the internet, PC etc) but for whole new industries. Licklider, Sutherland, Taylor et al provided a model (ARPA) for how science funding can work. Taylor provided a model (PARC) of how to manage a team of extremely talented people who turned a profound vision into reality. The original motivation for the vision of networked interactive computing was to help humans make good decisions in a complex world — or, ‘augmenting human intelligence’ and ‘man-machine symbiosis’. This story shows how to make big improvements in the world with very few resources if they are structured right: PARC involved ~25 key people and tens of millions over roughly a decade and generated trillions of dollars in value. If interested in the history and the super-productive processes behind the success of ARPA-PARC, read THIS.

It’s fascinating that in many ways the original 1960s Licklider vision has still not been implemented. The Silicon Valley ecosystem developed parts of the vision but not others for complex reasons I don’t understand (cf. The Future of Programming). One of those who is trying to implement parts of the vision that have not been implemented is Bret Victor. Bret Victor is a rare thing: a genuine visionary in the computing world, according to some of those ‘present at the creation’ of ARPA-PARC such as Alan Kay. His ideas lie at critical intersections between fields sketched above. Watch talks such as Inventing on Principle and Media for Thinking the Unthinkable and explore his current project, Dynamic Land, in Berkeley.

Victor has described, and now demonstrates in Dynamic Land, how existing tools fail and what is possible. His core principle is that creators need an immediate connection to what they are creating. Current programming languages and tools are mostly based on very old ideas before computers even had screens and there was essentially no interactivity — they date from the era of punched cards. They do not allow users to interact dynamically. New dynamic tools enable us to think previously unthinkable thoughts and allow us to see and interact with complex systems: to see inside, see across time, and see across possibilities.

I strongly recommend spending a few days exploring his whole website but I will summarise below his ideas on two things:

  1. His ideas about how to build new dynamic tools for working with data and interactive models.
  2. His ideas about transforming the physical spaces in which teams work so that dynamic tools are embedded in their environment — people work inside a tool.

Applying these ideas would radically improve how people make decisions in government and how the media reports politics/government.

Language and writing were cognitive technologies created thousands of years ago which enabled us to think previously unthinkable thoughts. Mathematical notation did the same over the past 1,000 years. For example, take a mathematics problem described by the 9th Century mathematician al-Khwarizmi (who gave us the word algorithm):

[Image: al-Khwarizmi’s statement of the problem in words, roughly ‘a square and ten roots of the same amount to thirty-nine’.]

Once modern notation was invented, this could be written instead as:

x² + 10x = 39
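
To see what the notation buys, here is the same problem carried through in modern symbols by completing the square (essentially the method al-Khwarizmi described geometrically):

```latex
x^2 + 10x = 39
\;\Longrightarrow\; x^2 + 10x + 25 = 64
\;\Longrightarrow\; (x + 5)^2 = 64
\;\Longrightarrow\; x = 3 \quad \text{(taking the positive root)}
```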

Michael Nielsen uses a similar analogy. Descartes and Fermat demonstrated that equations can be represented on a diagram and a diagram can be represented as an equation. This was a new cognitive technology, a new way of seeing and thinking: algebraic geometry. Changes to the ‘user interface’ of mathematics were critical to its evolution and allowed us to think unthinkable thoughts (Using Artificial Intelligence to Augment Human Intelligence, see below).


Similarly, in the 18th Century data graphics were created to show trade figures. Before this, people could only read huge tables. This is the first data graphic:


The Jedi of data visualisation, Edward Tufte, describes this extraordinary graphic of Napoleon’s invasion of Russia as ‘probably the best statistical graphic ever drawn’. It shows the losses of Napoleon’s army: starting from the Polish-Russian border, the thick band shows the size of the army at each position; the dark lower band shows the path of Napoleon’s winter retreat from Moscow and is tied to temperature and time scales (you can see some of the disastrous icy river crossings famously described by Tolstoy). NB. The Cabinet makes life-and-death decisions now with far inferior technology to this from the 19th Century (see below).


If we look at contemporary scientific papers, they represent extremely compressed information conveyed through a very old-fashioned medium, the scientific journal. Printed journals are centuries old but the ‘modern’ internet versions are usually similarly static. They do not show the behaviour of systems in a visual interactive way so we can see the connections between changing values in the models and changes in behaviour of the system. There is no immediate connection. Everything is pretty much the same as a paper-and-pencil version of a paper. In Media for Thinking the Unthinkable, Victor shows how dynamic tools can transform normal static representations so systems can be explored with immediate feedback. This dramatically shows how much more richly and deeply ideas can be explored. With Victor’s tools we can interact with the systems described and immediately grasp important ideas that are hidden in normal media.

Picture: the very dense writing of a famous paper (by chance the paper itself is at the intersection of politics/technology and Watts has written excellent stuff on fake news but has been ignored because it does not fit what ‘the educated’ want to believe)


Picture: the same information presented differently. Victor’s tools make the information less compressed so there’s less work for the brain to do ‘decompressing’. They not only provide visualisations but also little ‘sliders’ over the graphics that you can drag to interact with the data, so you see the connection between changing data and changing model. A dynamic tool transforms a scientific paper from ‘pencil and paper’ technology to modern interactive technology.


Victor’s essay on climate change

Victor explains in detail how policy analysis and public debate of climate change could be transformed. Leave aside the subject matter — of course it’s extremely important, anybody interested in the issue will gain from reading the whole thing, and it would be great material for a school to use for an integrated science / economics / programming / politics project, but my focus here is on his ideas about tools and thinking, not the specific subject.

Climate change is a great example to consider because it involves a) a lot of deep scientific knowledge, b) complex computer modelling which is understood in detail by a tiny fraction of 1% (and almost none of the social science trained ‘experts’ who are largely responsible for interpreting such models for politicians/journalists, cf HERE for the science of this), c) many complex political, economic, cultural issues, d) very tricky questions about how policy is discussed in mainstream culture, and e) the problem of how governments try to think about and act on important, complex, and long-term problems. Scientific knowledge is crucial but it cannot by itself answer the question: what to do? The ideas BV describes to transform the debate on climate change apply generally to how we approach all important political issues.

In the section Languages for technical computing, BV describes his overall philosophy (if you look at the original you will see dynamic graphics to help make each point but I can’t make them play on my blog — a good example of the failure of normal tools!):

‘The goal of my own research has been tools where scientists see what they’re doing in realtime, with immediate visual feedback and interactive exploration. I deeply believe that a sea change in invention and discovery is possible, once technologists are working in environments designed around:

  • ubiquitous visualization and in-context manipulation of the system being studied;
  • actively exploring system behavior across multiple levels of abstraction in parallel;
  • visually investigating system behavior by transforming, measuring, searching, abstracting;
  • seeing the values of all system variables, all at once, in context;
  • dynamic notations that embed simulation, and show the effects of parameter changes;
  • visually improvising special-purpose dynamic visualizations as needed.’

He then describes how the community of programming language developers has failed to create appropriate languages for scientists, which I won’t go into but which is fascinating.

He then describes the problem of how someone can usefully get to grips with a complex policy area involving technological elements.

‘How can an eager technologist find their way to sub-problems within other people’s projects where they might have a relevant idea? How can they be exposed to process problems common across many projects?… She wishes she could simply click on “gas turbines”, and explore the space:

  • What are open problems in the field?
  • Who’s working on which projects?
  • What are the fringe ideas?
  • What are the process bottlenecks?
  • What dominates cost? What limits adoption?
  • Why make improvements here? How would the world benefit?

‘None of this information is at her fingertips. Most isn’t even openly available — companies boast about successes, not roadblocks. For each topic, she would have to spend weeks tracking down and meeting with industry insiders. What she’d like is a tool that lets her skim across entire fields, browsing problems and discovering where she could be most useful…

‘Suppose my friend uncovers an interesting problem in gas turbines, and comes up with an idea for an improvement. Now what?

  • Is the improvement significant?
  • Is the solution technically feasible?
  • How much would the solution cost to produce?
  • How much would it need to cost to be viable?
  • Who would use it? What are their needs?
  • What metrics are even relevant?

‘Again, none of this information is at her fingertips, or even accessible. She’d have to spend weeks doing an analysis, tracking down relevant data, getting price quotes, talking to industry insiders.

‘What she’d like are tools for quickly estimating the answers to these questions, so she can fluidly explore the space of possibilities and identify ideas that have some hope of being important, feasible, and viable.

‘Consider the Plethora on-demand manufacturing service, which shows the mechanical designer an instant price quote, directly inside the CAD software, as they design a part in real-time. In what other ways could inventors be given rapid feedback while exploring ideas?’

Victor then describes a public debate over a policy proposal. Ideas were put forward. Everybody argued.

‘Who to believe? The real question is — why are readers and decision-makers forced to “believe” anything at all? Many claims made during the debate offered no numbers to back them up. Claims with numbers rarely provided context to interpret those numbers. And never — never! — were readers shown the calculations behind any numbers. Readers had to make up their minds on the basis of hand-waving, rhetoric, bombast.’

And there was no progress because nobody could really learn from the debate or even just be clear about exactly what was being proposed. Sound familiar?!! This is absolutely normal and Victor’s description applies to over 99% of public policy debates.

Victor then describes how you can take the policy argument he had sketched and change its nature. Instead of discussing words and stories, DISCUSS INTERACTIVE MODELS. 

Here you need to click to the original to understand the power of what he is talking about as he programs a simple example.

‘The reader can explore alternative scenarios, understand the tradeoffs involved, and come to an informed conclusion about whether any such proposal could be a good decision.

‘This is possible because the author is not just publishing words. The author has provided a model — a set of formulas and algorithms that calculate the consequences of a given scenario… Notice how the model’s assumptions are clearly visible, and can even be adjusted by the reader.

‘Readers are thus encouraged to examine and critique the model. If they disagree, they can modify it into a competing model with their own preferred assumptions, and use it to argue for their position. Model-driven material can be used as grounds for an informed debate about assumptions and tradeoffs.

‘Modeling leads naturally from the particular to the general. Instead of seeing an individual proposal as “right or wrong”, “bad or good”, people can see it as one point in a large space of possibilities. By exploring the model, they come to understand the landscape of that space, and are in a position to invent better ideas for all the proposals to come. Model-driven material can serve as a kind of enhanced imagination.’
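
Victor’s worked example embeds a small energy model in the document. As a crude sketch of the same structure in ordinary code (the policy, the numbers and the parameter names below are invented, not taken from Victor’s essay), a ‘model-driven’ claim is just explicit assumptions plus a function the reader can re-run with their own values:

```python
# Crude sketch of a 'model-driven' claim: the assumptions are explicit, named
# parameters and the conclusion is computed, not asserted. The policy and all
# figures are invented placeholders; the point is the structure, not the numbers.

def households_reached(budget_gbp, cost_per_install_gbp, takeup_rate):
    """How many households a hypothetical insulation subsidy would reach."""
    funded_installs = budget_gbp / cost_per_install_gbp
    return int(funded_installs * takeup_rate)

# The author's assumptions, visible and adjustable by the reader:
assumptions = {"budget_gbp": 50_000_000, "cost_per_install_gbp": 2_500, "takeup_rate": 0.6}
print(households_reached(**assumptions))    # 12000

# A sceptical reader disagrees with the take-up assumption and re-runs the model:
assumptions["takeup_rate"] = 0.3
print(households_reached(**assumptions))    # 6000
```

The toy model is beside the point; what matters is that a disagreement becomes a disagreement about a named, inspectable assumption rather than about rhetoric.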

Victor then looks at some standard materials from those encouraging people to take personal action on climate change and concludes:

‘These are lists of proverbs. Little action items, mostly dequantified, entirely decontextualized. How significant is it to “eat wisely” and “trim your waste”? How does it compare to other sources of harm? How does it fit into the big picture? How many people would have to participate in order for there to be appreciable impact? How do you know that these aren’t token actions to assuage guilt?

‘And why trust them? Their rhetoric is catchy, but so is the horrific “denialist” rhetoric from the Cato Institute and similar. When the discussion is at the level of “trust me, I’m a scientist” and “look at the poor polar bears”, it becomes a matter of emotional appeal and faith, a form of religion.

‘Climate change is too important for us to operate on faith. Citizens need and deserve reading material which shows context — how significant suggested actions are in the big picture — and which embeds models — formulas and algorithms which calculate that significance, for different scenarios, from primary-source data and explicit assumptions.’

Even the supposed ‘pros’ — Insiders at the top of research fields in politically relevant areas — have to scramble around typing words into search engines, crawling around government websites, and scrolling through PDFs. Reliable data takes ages to find. Reliable models are even harder to find. Vast amounts of useful data and models exist but they cannot be found and used effectively because we lack the tools.

‘Authoring tools designed for arguing from evidence’

Why don’t we conduct public debates in the way his toy example does with interactive models? Why aren’t paragraphs in supposedly serious online newspapers written like this? Partly because of the culture, including the education of those who run governments and media organisations, but also because the resources for creating this sort of material don’t exist.

‘In order for model-driven material to become the norm, authors will need data, models, tools, and standards…

‘Suppose there were good access to good data and good models. How would an author write a document incorporating them? Today, even the most modern writing tools are designed around typing in words, not facts. These tools are suitable for promoting preconceived ideas, but provide no help in ensuring that words reflect reality, or any plausible model of reality. They encourage authors to fool themselves, and fool others.

‘Imagine an authoring tool designed for arguing from evidence. I don’t mean merely juxtaposing a document and reference material, but literally “autocompleting” sourced facts directly into the document. Perhaps the tool would have built-in connections to fact databases and model repositories, not unlike the built-in spelling dictionary. What if it were as easy to insert facts, data, and models as it is to insert emoji and cat photos?

‘Furthermore, the point of embedding a model is that the reader can explore scenarios within the context of the document. This requires tools for authoring “dynamic documents” — documents whose contents change as the reader explores the model. Such tools are pretty much non-existent.’
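
Victor leaves the tooling deliberately open. As a minimal sketch of the ‘autocompleting sourced facts’ idea (the fact database, its single entry and the function below are hypothetical, standing in for connections to curated statistical sources), the key move is that a claim and its citation are inserted as one unit rather than free-typed:

```python
# Minimal sketch of 'autocompleting' a sourced fact into a document. The fact
# database, its entry and this API are hypothetical; a real tool would sit on
# curated statistical sources, not an in-memory dict.

FACT_DB = {
    "uk population 2018": {"value": "66.4 million", "source": "ONS mid-year estimate"},
}

def insert_fact(query):
    """Return a fact with its citation attached, or flag the claim as unsourced."""
    fact = FACT_DB.get(query.lower())
    if fact is None:
        return f"[UNSOURCED CLAIM: {query}]"
    return f"{fact['value']} [{fact['source']}]"

print(f"The UK's population is {insert_fact('UK population 2018')}.")
# -> The UK's population is 66.4 million [ONS mid-year estimate].
print(f"Heavy industry employs {insert_fact('UK heavy industry employment')}.")
# -> Heavy industry employs [UNSOURCED CLAIM: UK heavy industry employment].
```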

These sorts of tools for authoring dynamic documents should be seen as foundational technology like the integrated circuit or the internet.

‘Foundational technology appears essential only in retrospect. Looking forward, these things have the character of “unknown unknowns” — they are rarely sought out (or funded!) as a solution to any specific problem. They appear out of the blue, initially seem niche, and eventually become relevant to everything.

‘They may be hard to predict, but they have some common characteristics. One is that they scale well. Integrated circuits and the internet both scaled their “basic idea” from a dozen elements to a billion. Another is that they are purpose-agnostic. They are “material” or “infrastructure”, not applications.’

Victor ends with a very potent comment — that much of what we observe is ‘rearranging app icons on the deck of the Titanic’. Commercial incentives drive people towards trying to create ‘the next Facebook’ — not fixing big social problems. I will address this below.

If you are an arts graduate interested in these subjects but not expert (like me), here is an example that will be more familiar… If you look at any big historical subject, such as ‘why/how did World War I start?’ and examine leading scholarship carefully, you will see that all the leading books on such subjects provide false chronologies and mix facts with errors such that it is impossible for a careful reader to be sure about crucial things. It is routine for famous historians to write that ‘X happened because Y’ when Y happened after X. Part of the problem is culture but this could potentially be improved by tools. A very crude example: why doesn’t Kindle make it possible for readers to log factual errors, with users’ reliability ranked by others, so authors can easily check potential errors and fix them in online versions of books? Even better, this could be part of a larger system to develop gold standard chronologies with each ‘fact’ linked to original sources and so on. This would improve the reliability of historical analysis and it would create an ‘anti-entropy’ ratchet — now, entropy means that errors spread across all books on a subject and there is no mechanism to reverse this…
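
As a minimal sketch of what the crude Kindle-errata idea might look like as a data structure (the reports, the smoothing and the threshold below are all invented for illustration), the essential pieces are error reports tied to locations in the text and a reliability score for each reporter derived from how often other readers uphold their reports:

```python
# Minimal sketch of crowd-sourced errata with reader-reliability weighting.
# The reports, the smoothing and the 0.75 threshold are invented for illustration;
# this is not a real Kindle feature, just the shape of the idea.

reports = [
    # (reporter, location, claim, upheld_votes, rejected_votes)
    ("reader_a", "ch.3 p.41", "Treaty date given as 1907; cited primary source says 1906", 14, 1),
    ("reader_b", "ch.5 p.88", "Order described as following the declaration; sources say it preceded it", 9, 0),
    ("reader_c", "ch.2 p.17", "Disputes the author's interpretation (not a factual error)", 2, 11),
]

def reliability(upheld, rejected):
    """Crude score: fraction of a reporter's claims upheld, smoothed so newcomers start at 0.5."""
    return (upheld + 1) / (upheld + rejected + 2)

# Surface for the author only the reports that other readers have largely upheld.
for reporter, location, claim, upheld, rejected in reports:
    if reliability(upheld, rejected) > 0.75:
        print(f"{location}: {claim} (reported by {reporter})")
```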

 

‘Seeing Rooms’: macro-tools to help make decisions

Victor also discusses another fundamental issue: the rooms/spaces in which most modern work and thinking occurs are not well-suited to the problems being tackled and we could do much better. Victor is addressing advanced manufacturing and robotics but his argument applies just as powerfully, perhaps more powerfully, to government analysis and decision-making.

Now, ‘software based tools are trapped in tiny rectangles’. We have very sophisticated tools but they all sit on computer screens on desks, just as you are reading this blog.

In contrast, ‘Real-world tools are in rooms where workers think with their bodies.’ Traditional crafts occur in spatial environments designed for that purpose. Workers walk around, use their hands, and think spatially. ‘The room becomes a macro-tool they’re embedded inside, an extension of the body.’ These rooms act like tools to help them understand their problems in detail and make good decisions.

Picture: rooms designed for the problems being tackled


The wave of 3D printing has produced ‘maker rooms’ and ‘Fab Labs’ where people work with a set of tools that are too expensive for an individual. The room is itself a network of tools. This approach is revolutionising manufacturing.

Why is this useful?

‘Modern projects have complex behavior… Understanding requires seeing and the best seeing tools are rooms.’ This is obviously particularly true of politics and government.

Here is a photo of a recent NASA mission control room. The room is set up so that all relevant people can see relevant data and models at different scales and preserve a common picture of what is important. NASA pioneered thinking about such rooms and the technology and tools needed in the 1960s.


Here are pictures of two control rooms for power grids.


Here is a panoramic photo of the unified control centre for the Large Hadron Collider – the biggest of ‘big data’ projects. Notice details like how they have removed all pillars so nothing interrupts visual communication between teams.


Now contrast these rooms with rooms from politics.

Here is the Cabinet room. I have been in this room. There are effectively no tools. In the 19th Century at least Lord Salisbury used the fireplace as a tool. He would walk around the table, gather sensitive papers, and burn them at the end of meetings. The fire is now blocked. The only other tool, the clock, did not work when I was last there. Over a century, the physical space in which politicians make decisions affecting potentially billions of lives has deteriorated.

British Cabinet room practically as it was July 1914


Here are JFK and EXCOM making decisions during the Cuban Missile Crisis that moved much faster than July 1914, compressing decisions leading to the destruction of global civilisation potentially into just minutes.


Here is the only photo in the public domain of the room known as ‘COBRA’ (Cabinet Office Briefing Room) where a shifting set of characters at the apex of power in Britain meet to discuss crises.


Notice how poor it is compared to NASA, the LHC etc. There has clearly been no attempt to learn from our best examples about how to use the room as a tool. The screens at the end are a late add-on to a room that is essentially indistinguishable from the room in which Prime Minister Asquith sat in July 1914 while doodling notes to his girlfriend as he got bored. I would be surprised if the video technology used is as good as what is commercially available for less (the justification will be ‘security’), and I would bet that many of the decisions about the operation of this room would not survive scrutiny from experts in how to construct such rooms.

I have not attended a COBRA meeting but I’ve spoken to many who have. The meetings, as you would expect looking at this room, are often normal political meetings. That is:

  • aims are unclear,
  • assumptions are not made explicit,
  • there is no use of advanced tools,
  • there is no use of quantitative models,
  • discussions are often dominated by lawyers so many actions are deemed ‘unlawful’ without proper scrutiny (and this device is routinely used by officials to stop discussion of options they dislike for non-legal reasons),
  • there is constant confusion between policy, politics and PR, then the cast disperses without clarity about what was discussed and agreed.

Here is a photo of the American equivalent – the Situation Room.


It has a few more screens but the picture is essentially the same: there are no interactive tools beyond the ability to speak to and see someone at a distance, which was invented back in the 1950s/1960s in the pioneering programs of SAGE (automated air defence) and Apollo (man on the moon). Tools to help thinking in powerful ways are not taken seriously. The room, and the way decisions are made in it, are largely the same as in the Cuban Missile Crisis. In some ways the use of technology now makes management worse as it encourages Presidents and their staff to try to micromanage things they should not be managing, often in response to, or fear of, the media.

Individual ministers’ offices are also hopeless. The computers are old and rubbish. Even colour printing is often a battle. Walls are for kids’ pictures. In the DfE officials resented even giving us paper maps of where schools were and only did it when bullied by the private office. It was impossible for officials to work on interactive documents. They had no technology even for sharing documents in a way that was then (2011) normal even in low-performing organisations. Using GoogleDocs was ‘against the rules’. (I’m told this has slightly improved.) The whole structure of ‘submissions’ and ‘red boxes’ is hopeless. It is extremely bureaucratic and slow. It prevents serious analysis of quantitative models. It reinforces the lack of proper scientific thinking in policy analysis. It guarantees confusion as ministers scribble notes and private offices interpret rushed comments by exhausted ministers after dinner instead of having proper face-to-face meetings that get to the heart of problems and resolve conflicts quickly. The whole approach reinforces the abject failure of the senior civil service to think about high performance project management.

Of course, most of the problems with the standards of policy and management in the civil service are low or no-tech problems — they involve the ‘unrecognised simplicities’ that are independent of, and prior to, the use of technology — but all these things negatively reinforce each other. Anybody who wants to do things much better is scuppered by Whitehall’s entangled disaster zone of personnel, training, management, incentives and tools.

*

Dynamic Land: ‘amazing’

I won’t go into this in detail. Dynamic Land is in a building in Berkeley. I visited last year. It is Victor’s attempt to turn the ideas above into a sort of living laboratory. It is a large connected set of rooms that have computing embedded in surfaces. For example, you can scribble equations on a bit of paper, cameras in the ceiling read your scribbles automatically, turn them into code, and execute them — for example, by producing graphics. You can then physically interact with models that appear on the table or wall while the cameras watch your hands and instantly turn gestures into new code and change the graphics or whatever you are doing. Victor has put these cutting edge tools into a space and made it open to the Berkeley community. This is all hard to explain/understand because you haven’t seen anything like it even in sci-fi films (it’s telling that the media still uses the 15-year-old Minority Report as its sci-fi illustration for such things).

This video gives a little taste. I visited with a physicist who works on the cutting edge of data science/AI. I was amazed but I know nothing about such things — I was interested to see his reaction as he scribbled gravitational equations on paper and watched the cameras turn them into models on the table in real-time, then he changed parameters and watched the graphics change in real-time on the table (projected from the ceiling): ‘Ohmygod, this is just obviously the future, absolutely amazing.’ The thought immediately struck us: imagine the implications of having policy discussions with such tools instead of the usual terrible meetings. Imagine discussing HS2 budgets or possible post-Brexit trading arrangements with the models running like this for decision-makers to interact with.

Video of Dynamic Land: the bits of coloured paper are ‘code’, graphics are projected from the ceiling

 


*

3. Michael Nielsen and cognitive technologies

Connected to Victor’s ideas are those of the brilliant physicist Michael Nielsen. Nielsen wrote the textbook on quantum computation and a great book, Reinventing Discovery, on the evolution of the scientific method. Reinventing Discovery explores, for example, how instead of waiting for the coincidence of Grossmann helping out Einstein with some crucial maths, new tools could create a sort of ‘designed serendipity’ to help potential collaborators find each other.

In his essay Thought as a Technology, Nielsen describes the feedback between thought and interfaces:

‘In extreme cases, to use such an interface is to enter a new world, containing objects and actions unlike any you’ve previously seen. At first these elements seem strange. But as they become familiar, you internalize the elements of this world. Eventually, you become fluent, discovering powerful and surprising idioms, emergent patterns hidden within the interface. You begin to think with the interface, learning patterns of thought that would formerly have seemed strange, but which become second nature. The interface begins to disappear, becoming part of your consciousness. You have been, in some measure, transformed.’

He describes how normal language and computer interfaces are cognitive technologies:

‘Language is an example of a cognitive technology: an external artifact, designed by humans, which can be internalized, and used as a substrate for cognition. That technology is made up of many individual pieces – words and phrases, in the case of language – which become basic elements of cognition. These elements of cognition are things we can think with…

‘In a similar way to language, maps etc, a computer interface can be a cognitive technology. To master an interface requires internalizing the objects and operations in the interface; they become elements of cognition. A sufficiently imaginative interface designer can invent entirely new elements of cognition… In general, what makes an interface transformational is when it introduces new elements of cognition that enable new modes of thought. More concretely, such an interface makes it easy to have insights or make discoveries that were formerly difficult or impossible. At the highest level, it will enable discoveries (or other forms of creativity) that go beyond all previous human achievement.’

Nielsen describes how powerful ways of thinking among mathematicians and physicists are hidden from view and not part of textbooks and normal teaching.

‘The reason is that traditional media are poorly adapted to working with such representations… If experts often develop their own representations, why do they sometimes not share those representations? To answer that question, suppose you think hard about a subject for several years… Eventually you push up against the limits of existing representations. If you’re strongly motivated – perhaps by the desire to solve a research problem – you may begin inventing new representations, to provide insights difficult through conventional means. You are effectively acting as your own interface designer. But the new representations you develop may be held entirely in your mind, and so are not constrained by traditional static media forms. Or even if based on static media, they may break social norms about what is an “acceptable” argument. Whatever the reason, they may be difficult to communicate using traditional media. And so they remain private, or are only discussed informally with expert colleagues.’

If we can create interfaces that reify deep principles, then ‘mastering the subject begins to coincide with mastering the interface.’ He gives the example of Photoshop which builds in many deep principles of image manipulation.

‘As you master interface elements such as layers, the clone stamp, and brushes, you’re well along the way to becoming an expert in image manipulation… By contrast, the interface to Microsoft Word contains few deep principles about writing, and as a result it is possible to master Word‘s interface without becoming a passable writer. This isn’t so much a criticism of Word, as it is a reflection of the fact that we have relatively few really strong and precise ideas about how to write well.’

He then describes what he calls ‘the cognitive outsourcing model’: ‘we specify a problem, send it to our device, which solves the problem, perhaps in a way we-the-user don’t understand, and sends back a solution.’ E.g we ask Google a question and Google sends us an answer.

This is how most of us think about the idea of augmenting the human intellect but it is not the best approach. ‘Rather than just solving problems expressed in terms we already understand, the goal is to change the thoughts we can think.’

‘One challenge in such work is that the outcomes are so difficult to imagine. What new elements of cognition can we invent? How will they affect the way human beings think? We cannot know until they’ve been invented.

‘As an analogy, compare today’s attempts to go to Mars with the exploration of the oceans during the great age of discovery. These appear similar, but while going to Mars is a specific, concrete goal, the seafarers of the 15th through 18th centuries didn’t know what they would find. They set out in flimsy boats, with vague plans, hoping to find something worth the risks. In that sense, it was even more difficult than today’s attempts on Mars.

‘Something similar is going on with intelligence augmentation. There are many worthwhile goals in technology, with very specific ends in mind. Things like artificial intelligence and life extension are solid, concrete goals. By contrast, new elements of cognition are harder to imagine, and seem vague by comparison. By definition, they’re ways of thinking which haven’t yet been invented. There’s no omniscient problem-solving box or life-extension pill to imagine. We cannot say a priori what new elements of cognition will look like, or what they will bring. But what we can do is ask good questions, and explore boldly.’

In another essay, Using Artificial Intelligence to Augment Human Intelligence, Nielsen points out that breakthroughs in creating powerful new cognitive technologies such as musical notation or Descartes’ invention of algebraic geometry are rare but ‘modern computers are a meta-medium enabling the rapid invention of many new cognitive technologies‘ and, further, AI will help us ‘invent new cognitive technologies which transform the way we think.’

Further, historically, powerful new cognitive technologies such as ‘Feynman diagrams’ have often appeared strange at first. We should not assume that new interfaces should be ‘user friendly’. Powerful interfaces that repay mastery may require sacrifices.

‘The purpose of the best interfaces isn’t to be user-friendly in some shallow sense. It’s to be user-friendly in a much stronger sense, reifying deep principles about the world, making them the working conditions in which users live and create. At that point what once appeared strange can instead become comfortable and familiar, part of the pattern of thought…

‘Unfortunately, many in the AI community greatly underestimate the depth of interface design, often regarding it as a simple problem, mostly about making things pretty or easy-to-use. In this view, interface design is a problem to be handed off to others, while the hard work is to train some machine learning system.

‘This view is incorrect. At its deepest, interface design means developing the fundamental primitives human beings think and create with. This is a problem whose intellectual genesis goes back to the inventors of the alphabet, of cartography, and of musical notation, as well as modern giants such as Descartes, Playfair, Feynman, Engelbart, and Kay. It is one of the hardest, most important and most fundamental problems humanity grapples with.

‘As discussed earlier, in one common view of AI our computers will continue to get better at solving problems, but human beings will remain largely unchanged. In a second common view, human beings will be modified at the hardware level, perhaps directly through neural interfaces, or indirectly through whole brain emulation.

‘We’ve described a third view, in which AIs actually change humanity, helping us invent new cognitive technologies, which expand the range of human thought. Perhaps one day those cognitive technologies will, in turn, speed up the development of AI, in a virtuous feedback cycle:

[Diagram: the virtuous feedback cycle — AI helps humans invent new cognitive technologies, which in turn speed up the development of AI]

‘It would not be a Singularity in machines. Rather, it would be a Singularity in humanity’s range of thought… The long-term test of success will be the development of tools which are widely used by creators. Are artists using these tools to develop remarkable new styles? Are scientists in other fields using them to develop understanding in ways not otherwise possible?’

I would add: are governments using these tools to help them think in ways we already know are more powerful and to explore new ways of making decisions and shaping the complex systems on which we rely?

Nielsen also wrote this fascinating essay ‘Augmenting long-term memory’. This involves a computer tool (Anki) to aid long-term memory using ‘spaced repetition’ — i.e testing yourself at intervals which is shown to counter the normal (for most people) process of forgetting. This allows humans to turn memory into a choice so we can decide what to remember and achieve it systematically (without a ‘weird/extreme gift’ which is how memory is normally treated). (It’s fascinating that educated Greeks 2,500 years ago could build sophisticated mnemonic systems allowing them to remember vast amounts while almost all educated people now have no idea about such techniques.)
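To make the mechanism concrete, here is a toy sketch (in Python) of the kind of scheduling rule spaced-repetition tools use. Anki uses a variant of the SM-2 algorithm; the numbers below are purely illustrative, not Anki’s actual parameters.

```
from datetime import date, timedelta

def next_review(interval_days, ease, remembered):
    """Toy spaced-repetition rule: recall a card and the gap before the next
    test grows multiplicatively; forget it and the card comes back tomorrow
    with a slightly lower 'ease'. (Illustrative only -- not Anki's real numbers.)"""
    if remembered:
        interval_days = max(1.0, interval_days) * ease       # e.g. 1 -> 2.5 -> 6.25 -> ...
    else:
        interval_days, ease = 1.0, max(1.3, ease - 0.2)      # start the card again soon
    return interval_days, ease

interval, ease, day = 0.0, 2.5, date(2019, 2, 4)
for remembered in (True, True, True, False, True, True):
    interval, ease = next_review(interval, ease, remembered)
    day += timedelta(days=round(interval))
    print(f"recalled={remembered}  next review in {interval:5.1f} days  ({day})")
```

The point is simply that the intervals stretch out as a card becomes reliable, so the ongoing cost of remembering something indefinitely falls to a few seconds per month.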

Connected to this, Nielsen also recently wrote an essay teaching fundamentals of quantum mechanics and quantum computers — but it is an essay with a twist:

‘[It] incorporates new user interface ideas to help you remember what you read… this essay isn’t just a conventional essay, it’s also a new medium, a mnemonic medium which integrates spaced-repetition testing. The medium itself makes memory a choice. This essay will likely take you an hour or two to read. In a conventional essay, you’d forget most of what you learned over the next few weeks, perhaps retaining a handful of ideas. But with spaced-repetition testing built into the medium, a small additional commitment of time means you will remember all the core material of the essay. Doing this won’t be difficult, it will be easier than the initial read. Furthermore, you’ll be able to read other material which builds on these ideas; it will open up an entire world…

‘Mastering new subjects requires internalizing the basic terminology and ideas of the subject. The mnemonic medium should radically speed up this memory step, converting it from a challenging obstruction into a routine step. Frankly, I believe it would accelerate human progress if all the deepest ideas of our civilization were available in a form like this.’

This obviously has very important implications for education policy. It also shows how computers could be used to improve learning — something that has generally been a failure since the great hopes at PARC in the 1970s. I have used Anki since reading Nielsen’s blog and I can feel it making a big difference to my mind/thoughts — how often is this true of things you read? DOWNLOAD ANKI NOW AND USE IT!

We need similarly creative experiments with new mediums that are designed to improve standards of high-stakes decision-making.

*

4. Summary

We could create systems for those making decisions about m/billions of lives and b/trillions of dollars, such as Downing Street or The White House, that integrate inter alia:

  • Cognitive toolkits compressing already existing useful knowledge such as checklists for rational thinking developed by the likes of Tetlock, Munger, Yudkowsky et al.
  • A Nielsen/Victor research program on ‘Seeing Rooms’, interface design, authoring tools, and cognitive technologies. Start with bunging a few million to Victor immediately in return for allowing some people to study what he is doing and apply it in Whitehall, then grow from there.
  • An alpha data science/AI operation — tapping into the world’s best minds including having someone like David Deutsch or Tim Gowers as a sort of ‘chief rationalist’ in the Cabinet (with Scott Alexander as deputy!) — to support rational decision-making where this is possible and explain when it is not possible (just as useful).
  • Tetlock/Hanson prediction tournaments could easily and cheaply be extended to consider ‘clusters’ of issues around themes like Brexit to improve policy and project management (forecasts scored with standard rules such as the Brier score — see the sketch after this list).
  • Groves/Mueller style ‘systems management’ integrated with the data science team.
  • Legally entrenched Red Teams where incentives are aligned to overcoming groupthink and error-correction of the most powerful. Warren Buffett points out that public companies considering an acquisition should employ a Red Team whose fees are dependent on the deal NOT going ahead. This is the sort of idea we need in No10.
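A note on the prediction tournaments above (the sketch referred to in that bullet): Tetlock-style tournaments score forecasters with proper scoring rules such as the Brier score, which rewards calibration rather than confident bluster. A minimal sketch, with invented forecasters and numbers:

```
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and what actually happened
    (0 is perfect; always saying 50% scores 0.25)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Five hypothetical yes/no questions (1 = the event happened) and two forecasters.
outcomes   = [1, 0, 1, 1, 0]
pundit     = [0.9, 0.8, 0.9, 0.2, 0.6]   # confident, often wrong
forecaster = [0.7, 0.2, 0.8, 0.6, 0.3]   # less dramatic, better calibrated
print(f"pundit:     {brier_score(pundit, outcomes):.3f}")      # ~0.332
print(f"forecaster: {brier_score(forecaster, outcomes):.3f}")  # ~0.084
```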

Researchers could see the real operating environment of decision-makers at the apex of power, the sort of problems they need to solve under pressure, and the constraints of existing centralised systems. They could start with the safe level of ‘tools that we already know work really well’ — i.e things like cognitive toolkits and Red Teams — while experimenting with new tools and new ways of thinking.

Hedge funds like Bridgewater and some other interesting organisations think about such ideas though without the sophistication of Victor’s approach. The world of MPs, officials, the Institute for Government (a cheerleader for ‘carry on failing’), and pundits will not engage with these ideas if left to their own devices.

This is not the place to go into how to change this. We know that the normal approach is doomed to produce the normal results, and normal results applied to things like repeated WMD crises mean disaster sooner or later. As Buffett points out, ‘If there is only one chance in thirty of an event occurring in a given year, the likelihood of it occurring at least once in a century is 96.6%.’ It is not necessary to hope in order to persevere: optimism of the will, pessimism of the intellect…
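(To spell out the arithmetic behind Buffett’s figure: the chance of the event not happening in any given year is 29/30, so the chance of it never happening across 100 independent years is (29/30)^100 ≈ 3.4%, which means it happens at least once with probability ≈ 96.6%.)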

*

A final thought…

A very interesting comment that I have heard from some of the most important scientists involved in the creation of advanced technologies is that ‘artists see things first’ — that is, artists glimpse possibilities before most technologists and long before most businessmen and politicians.

Pixar came from a weird combination of George Lucas getting divorced and the visionary Alan Kay suggesting to Steve Jobs that he buy a tiny special effects unit from Lucas, which Jobs did with completely wrong expectations about what would happen. For unexpected reasons this tiny unit turned into a huge success — as Jobs put it later, he was ‘sort of snookered’ into creating Pixar. Now Alan Kay says he struggles to get tech billionaires to understand the importance of Victor’s ideas.

The same story repeats: genuinely new ideas that could create huge value always seem so odd that almost all people in almost all organisations cannot see new possibilities. If this is true in Silicon Valley, how much more true is it in Whitehall or Washington… 

If one were setting up a new party in Britain, one could incorporate some of these ideas. This would of course also require recruiting very different types of people to the norm in politics. The closed nature of Westminster/Whitehall combined with first-past-the-post means it is very hard to solve the coordination problem of how to break into this system with a new way of doing things. Even those interested in principle don’t want to commit to a 10-year (?) project that might get them blasted on the front pages. Vote Leave hacked the referendum but such opportunities are much rarer than VC-funded ‘unicorns’. On the other hand, arguably what is happening now is a once-in-50-or-100-year crisis and such crises are also the waves that can be ridden to change things normally unchangeable. A second referendum in 2020 is quite possible (or two referendums under PM Corbyn, propped up by the SNP?) and might be the ideal launchpad for a completely new sort of entity, not least because if it happens the Conservative Party may well not exist in any meaningful sense (whether there is or isn’t another referendum). It’s very hard to create a wave and it’s much easier to ride one. It’s more likely in a few years you will see some of the above ideas in novels or movies or video games than in government — their pickup in places like hedge funds and intelligence services will be discreet — but you never know…

*

Ps. While I have talked to Michael Nielsen and Bret Victor about their ideas, in no way should this blog be taken as implying their involvement in anything to do with my ideas or plans, or their agreement with anything written above. I did not show this to them or even tell them I was writing about their work, and we do not work together in any way; I have just read and listened to their work over a few years and thought about how their ideas could improve government.

Further Reading

If interested in how to make things work much better, read this (lessons for government from the Apollo project) and this (lessons for government from ARPA-PARC’s creation of the internet and PC).

Links to recent reports on AI/ML.

On the referendum #24L: Fake news from the fake news committee, Carole, and a rematch against the public

[Update. More fake news — claims we kept advertising during the ‘pause’ after Jo Cox’s murder. Wrong. The spreadsheet data from Facebook reflects when the ads were created, not when they were shown. AIQ was putting stuff into the system during the pause, not running ads. Again this false meme is already around the world. Alastair ‘45 minutes’ Campbell is ranting about moral cesspits. But yet again it’s fake news.

Incidentally, I opposed any pause at the time. I think the right way to deal with terrorism is to carry on with normal life, like Britain used to when it was a more serious country. (I hated the way Cameron would tweet in response to ISIS, giving them just what they want. I hated the way Cameron and Blair read out in Parliament names of people killed which had the same effect. I thought it also reflected SW1’s basic ignorance about how to deal with information operations against terrorist groups in many ways more sophisticated about communication than traditional institutions — eg. Hezbollah often does TV better than the Tory Party.) But I was outvoted by MPs who downed tools and headed back to London, giving Osborne/Dre the chance to use the news as they wished. But they botched it — in a classic case study of people fooling themselves, they thought that the country reflected the mood of Inner London. They started tweeting broken hearts and ‘we love our MP’ at each other. They therefore blew their last chance to recover from strategic misjudgements. Those who would run Remain in a second referendum remain disconnected from reality and on current form would botch a second referendum which anyway would be held in circumstances much more favourable to Leave on almost every dimension.

Also NB. Carole Cadwalladr has commented below and I will answer shortly.

Also NB. contra some reports, I was not sent the report by the Committee. I’m told they did send it to ‘witnesses’ but that did not include me. I was sent it by someone in Parliament fed up with Collins’ dishonesty and blatant use of Carole’s conspiracies for his own end of overturning the referendum result.]

A few thoughts on the last 24 hours of conspiracy theories plus a copy of the DCMS Select Committee report on fake news. They gave it to Carole for Sunday, obviously, but someone appalled at their dishonesty leaked it to me so I publish it below. It is, in keeping with their general behaviour, itself fake news.

Most of SW1 has suffered a psychological and operational implosion because of the referendum. 

Many MPs, hacks and charlatan-pundits on both sides have responded to the result by retreating to psychologically appealing parallel worlds rather than face reality — ‘the frogs before the storm’ prefer the comforting Oblonsky mental fug of groupthink.

A subset of the ERG, for example, welcomed the December agreement on the Irish backstop that actually spelled doom for their central ideas about how the negotiations were being conducted. Bernard Jenkin was so confident that he and Cash understood what was happening he cheerfully wrote that he had not needed to read it before welcoming it. This is the same group now ranting about Chequers — which was programmed by the December agreement, as are the imminent further surrenders in the autumn on Free Movement and everything else! This is the same group that tells everyone that people like me who say that serious preparations are needed to leave the EU are ‘like those peddling the Millennium Bug’. Their ideas on preparations are as accurate as their ideas on the December agreement were and of course in order to avoid facing their tragi-comic blunders of judgement over two years they are constructing parallel worlds for their minds to live in.

Hardcore Remainers are similar. They want a second referendum and this requires de-legitimising the first. They therefore hysterically spread false memes while shouting ‘liars’ at Leavers. Cash and Carole have a lot in common.

The last 24 hours has illustrated again how the entire story about Vote Leave / data / digital communications has become a great case study in contemporary politics: ubiquitous accusations of lying by people who either lie or are entirely reckless about the truth, almost nobody figuring out reality before babbling all over social media setting off cascades of false information, MPs clueless about basic legal issues also spreading false memes and so on. 

A few simple points about the new wave of fake news.

Carole has spread countless factual errors for over a year. When I explained how we had followed best practice to safeguard personal data by quickly deleting the VL electoral database, containing data on tens of millions of people, after the referendum, she turned this professional and ethical behaviour (not copied by the Remain campaign, which kept it all) into accusations of me ‘destroying evidence’ and perverting the course of justice. This sort of thing has happened repeatedly.

Over the past 24 hours she has constructed new fake memes now spreading across the world. 

1. The latest astonishing ‘crimes’ according to Carole et al are that the VL ads did not have ‘imprints’ and were ‘dark’, unethical and illegal. She has tweeted dozens of times along the lines of:

‘[Vote Leave] DELIBERATELY BROKE THE LAW by leaving off who paid for it… No wonder Dominic Cummings wouldn’t come to parliament. No wonder @facebook didn’t want to release this shit. This is truly toxic, dark, absolutely undemocratic shit at the heart of the biggest election we’ll ever see… Look at this stuff. Fake fake fake news. It’s not an ad. It’s not labelled as an ad. It doesn’t say who placed it or who paid for it or who it was targeted at or way. This is the fakiest of fake news. And until today we had no idea about any of this’. 

This is totally wrong and reflects deep misunderstandings. 

a. The campaigns were NOT legally required to carry imprints in the same way as printed material. Carole is factually wrong about the law again.

b. This is in any case irrelevant because the VL ads that Carole claims were ‘dark’ and criminal for having ‘no imprint’ were in fact clearly labelled as VL. The images she is pulling from the FB data dump are raw images — they are not images of the actual ads themselves. The images sat within a ‘frame’ which everyone seeing them on Facebook would see; this frame included ‘Vote Leave’, other text, and a weblink.

E.g Carole posts this as new evidence that I should be locked up — an image ‘without imprint’:

[Image: one of the raw ad images from the Facebook data dump — the image alone, without the surrounding Facebook frame]

 

(By the way, you CANNOT trust David ‘pave the road from Ankara’ Cameron on Turkey! Don’t believe me? Watch this!)

 

This is how ads actually appear on Facebook:

[Image: the same ad as it actually appeared on Facebook — inside a frame labelled ‘Vote Leave’, with other text and a weblink]

Thousands of people are now spreading Carole’s memes across the internet. They are shocked and appalled — surely the criminal Cummings will finally be jailed etc. 

2. Amid the data dump of Facebook ads, there are claims that VL promoted BeLeave ads. This is a misunderstanding and the BBC has corrected their story. These ads appear in the 0-999 impressions box in the FB spreadsheet because the actual number of impressions was ZERO. They never ran. This issue is related to AIQ’s recent explanation of an error they made with loading audiences for BeLeave. It is detailed and technical and I won’t go into it here but in a nutshell: VL did not promote BeLeave ads. Remain, however, did do this but of course nobody cares. (It is more forgivable to make mistakes about this as it is a tricky niche issue.)

3. Another criminal conspiracy Carole is spreading across the internet concerns Brexit Central. This was set up after the vote (not by me). Grimes went to work on it and merged the BeLeave page into the BC page hence FB labels them confusingly as ‘Brexit Central/BeLeave’. Without asking anybody what it means, Carole and others have screamed ‘aha this organisation secretly existed before the vote and was illegally advertising, LOCK UP CRIMINAL CUMMINGS.’ Wrong again.

There are many other false memes spreading but there’s no point going into all of them.

Also NB. I asked months ago for Facebook to publish everything in the interests of transparency. Will Will Straw do as I did and ask Facebook to publish EVERYTHING they have about the Remain campaign? I’m not holding my breath.

HERE IS THE DCMS REPORT ON FAKE NEWS. IT IS… FAKE NEWS

The report knowingly/incompetently makes false claims regarding Vote Leave, AIQ and BeLeave. Despite nobody ever producing any evidence for Carole’s original loony conspiracy theory that I was secretly coordinating with Arron Banks, Bannon and Robert Mercer, the Committee also asks for yet another inquiry into this, and of course they want the police involved to give credibility to their fantasies and legitimise their campaign for a second referendum. The MPs know Facebook has explained to them that VL COULD NOT HAVE used the notorious Facebook data acquired by Cambridge Analytica, yet they try to lend credibility to these conspiracy theories.

Further, these MPs have littered their report with errors and misunderstandings about the legal framework for elections, thus spreading further confusion. They haven’t even bothered to understand GDPR, which they mis-explain badly. Collins et al have shown no interest in the truth. Now MPs publish a document after months of supposed work that makes basic errors about electoral law which will debase public debate even further.

NB. I HAVE SUGGESTED TO MPS THAT I COME AND GIVE EVIDENCE AND WE ALL OPERATE UNDER OATH. NOT A SQUEAK FROM THEM.

JUST LIKE THE ELECTORAL COMMISSION REFUSED TO SPEAK TO ME OR ANYBODY ELSE FROM VOTE LEAVE OVER TWO YEARS AND THREE INQUIRIES.

WHY?

AS JACK NICHOLSON SAYS, ‘THE TRUTH? YOU CAN’T HANDLE THE TRUTH!’

If the MPs really want to get to the bottom of this, all they have to do is promise to tell the truth. Come on guys, step up to the plate…

If SW1 put 1% of the effort it’s put into spreading fake news about Vote Leave into FIXING THE LAWS as I suggested BEFORE Carole’s conspiracy theories got traction, we would be in a much healthier state. But SW1 is rotten…

Hugo Rifkind says ‘Whatever you think of the referendum result, we can’t ever let there be a campaign like this again.’

Tough luck Hugo — if your side gets its way and there is another referendum, Vote Leave 2 will be much much worse for your side than VL1 was. VL2 will win by more than VL1 and the logical corollary will be to morph into a new party and fight the next election ‘to implement the promises we made in the referendum because the MPs have proved they can’t be trusted’. At a minimum VL2 will win the referendum and destroy the strategic foundations of both main parties. The Tories will be destroyed and maybe Labour too. The rotten civil service system will be replaced and the performance of government will be transformed for the better. Investment in basic science research will flow. Long-term funding for the NHS guaranteed by law. MORE high skilled immigrants, FEWER low-skilled. An agenda that could not be described as Left or Right. The public will love it. Insiders will hate it but they will have slit their own throats and have no moral credibility. Few careers will survive.

Is there enough self-awareness and self-interest among MPs to realise the consequences? Hard to say. I’m more critical of SW1 than almost any Insider and even I have been surprised by the rottenness. It will be no surprise if they slit their own throats.

So far the MPs have botched things on an epic scale but it’s hard to break into the Westminster system — they rig the rules to stop competition. Vote Leave 1 needed Cameron’s help to hack the system. If you guys want to run with Adonis and create another wave, be careful what you wish for. ‘Unda fert nec regitur’ and VL2 will ride that wave right at — and through — the gates of Parliament.

Ps. One hack who does actually pay attention to facts on this subject is Jim Waterson. It can’t be comfortable pointing out facts at the Guardian on this story so double credit to him.

[Pps. Sorry for mis-remembering Tom Cruise/Jack Nicholson to those who messaged.]

On the referendum #24J: Collins, grandstanding, empty threats & the plan for a rematch against the public

The DCMS Select Committee has just sent me the following letter.

[Image: the letter from the DCMS Select Committee]

Here is my official reply…

Dear Damian et al

As you know I agreed to give evidence.

In April, I told you I could not do the date you suggested. On 12 April I suggested July.

You ignored this for weeks.

On 3 May you asked again if I could do a date I’d already said I could not do.

I replied that, as I’d told you weeks earlier, I could not.

You then threatened me with a Summons.

On 10 May, Collins wrote:

Dear Dominic

We have offered you different dates, and as I said previously we are not prepared to wait until July for you to give evidence to the committee. We have also discussed this with the Electoral Commission who have no objection to you giving evidence to us.

We are asking you to give evidence to the committee following evidence we have received that relates to the work of Vote Leave. We have extended a similar invitation to Arron Banks and Andy Wigmore, to respond to evidence we have received about Leave.EU, and they have both agreed to attend.

The committee will be sending you a summons to appear and I hope that you are able to respond positively to this

best wishes

I replied:

The EC has NOT told me this.

Sending a summons is the behaviour of people looking for PR, not people looking to get to the bottom of this affair.

A summons will have ZERO positive impact on my decision and is likely only to mean I withdraw my offer of friendly cooperation, given you will have shown greater interest in grandstanding than truth-seeking, which is one of the curses of the committee system.

I hope you reconsider and put truth-seeking first.

Best wishes

d

You replied starting this charade.

 

You talk of ‘contempt of Parliament’.

You seem unaware that most of the country feels contempt for Parliament and this contempt is growing.

  • You have failed miserably over Brexit. You have not even bothered to educate yourselves on the basics of ‘what the Single Market is’, as Ivan Rogers explained in detail yesterday.
  • We want £350 million a week for the NHS plus long-term consistent funding and learning from the best systems in the world and instead you funnel our money to appalling companies like the parasites that dominate defence procurement.
  • We want action on unskilled immigration and you give us bullshit promises of ‘tens of thousands’ that you don’t even believe yourselves plus, literally, free movement for murderers, then you wonder why we don’t trust you.
  • We want a country MORE friendly to scientists and people from around the world with skills to offer and you give us ignorant persecution that is making our country a bad joke.
  • We want you to take money away from corporate looters (who fund your party) and fund science research so we can ‘create the future’, and you give us Carillion and joke aircraft carriers.
  • We want to open government to the best people and ideas in the world and you keep it a closed dysfunctional shambles that steals our money and keeps power locked within two useless parties and a closed bureaucracy that excludes ~100% of the most talented people. We want real expertise and you don’t even think about what that means.
  • You spend your time on this sort of grandstanding instead of serving millions of people less fortunate than you and who rely on you.

If you had wanted my evidence you would have cooperated over dates.

You actually wanted to issue threats, watch me give in, then get higher audiences for your grandstanding.

I’m calling your bluff. Your threats are as empty as those from May/Hammond/DD to the EU. Say what you like, I will not come to your committee regardless of how many letters you send or whether you send characters in fancy dress to hand me papers.

If another Committee behaves reasonably and I can give evidence without compromising various legal actions then I will consider it. Once these legal actions have finished, presumably this year, it will be easy to arrange if someone else wants to do it.

Further, I’m told many of your committee support the Adonis/Mandelson/Campbell/Grieve/Goldman Sachs/FT/CBI campaign for a rematch against the country.

Do you know what Vote Leave 2 would feel like for the MPs who vote for that (and donors who fund it)?

It would feel like having Lawrence Taylor chasing you and smashing you into the ground over and over and over again.

Vote Leave 2 would not involve me — nobody will make that mistake again — but I know what it would feel like for every MP who votes for a rematch against the public.

Lawrence Taylor: relentless 

So far you guys have botched things on an epic scale but it’s hard to break into the Westminster system — you rig the rules to stop competition. Vote Leave 1 needed Cameron’s help to hack the system. If you guys want to run with Adonis and create another wave, be careful what you wish for. ‘Unda fert nec regitur’ and VL2 would ride that wave right at the gates of Westminster.

A second referendum would be bad for the country and I hope it doesn’t happen but if you force the issue, then Vote Leave 2 would try to create out of the smoking wreck in SW1 something that can deliver what the public wants. Imagine Amazon-style obsession on customer satisfaction (not competitor and media obsession which is what you guys know) with Silicon Valley technology/scaling and Mueller-style ‘systems politics’ combined with the wave upon wave of emotion you will have created. Here’s some free political advice: when someone’s inside your OODA loop, it feels to them like you are working for them. If you go for a rematch, then this is what you will be doing for people like me. 350m would just be the starter.

‘Mixed emotions, Buddy, like Larry Wildman going off a cliff — in my new Maserati.’

I will happily discuss this with your colleagues on a different committee if they are interested, after the legal issues are finished…

 

Best wishes

Dominic

Ps. If you’re running an inquiry on fake news, it would be better to stop spreading fake news yourselves and to correct your errors when made aware of them. If you’re running an inquiry on issues entangled with technologies, it would be better to provide yourself with technological expertise so you avoid spreading false memes. E.g your recent letter to Facebook asked them to explain to you the operational decision-making of Vote Leave. This is a meaningless question which it is impossible for Facebook to answer and could only be asked by people who do not understand the technology they are investigating.

On the referendum #24I: new research on Facebook & ‘psychographic’ microtargeting

Summary: a short blog on a new paper casting doubt on claims re microtargeting using Facebook.

The audience for conspiracy theories about microtargeting, Facebook and Brexit is large and includes a big subset of SW1 and a wider group (but much smaller than it thinks it is) that wants a rematch against the public. The audience for facts, evidence and research about microtargeting, Facebook and Brexit is tiny. If you are part of this tiny audience…

I wrote a few days ago about good evidence on microtargeting in general and Cambridge Analytica’s claims on ‘psychographics’ in particular (see HERE).

Nutshell: the evidence and science re ‘microtargeting’ does not match the story you read in the media or the conspiracy theories about the referendum, and Vote Leave did not do microtargeting in any normal sense of the term.

Another interesting paper on this subject was published a few days ago.

Background…

One of the most influential researchers cited by the media since Brexit/Trump is Michal Kosinski who wrote a widely cited 2015 paper on predicting Big 5 personality traits from Facebook ‘likes’: Computer-based personality judgments are more accurate than those made by humans.

Duncan Watts, one of the leading scholars in computational sociology, pointed out:

‘All it shows is that algorithmic predictions of Big 5 traits are about as accurate as human predictions, which is to say only about 50 percent accurate. If all you had to do to change someone’s opinion was guess their openness or political attitude, then even really noisy predictions might be worrying at scale. But predicting attributes is much easier than persuading people.’

Kosinski published another paper recently: Psychological targeting as an effective approach to digital mass persuasion (November 2017). The core claim was:

‘In three field experiments that reached over 3.5 million individuals with psychologically tailored advertising, we find that matching the content of persuasive appeals to individuals’ psychological characteristics significantly altered their behavior as measured by clicks and purchases. Persuasive appeals that were matched to people’s extraversion or openness-to-experience level resulted in up to 40% more clicks and up to 50% more purchases than their mismatching or unpersonalized counterparts. Our findings suggest that the application of psychological targeting makes it possible to influence the behavior of large groups of people by tailoring persuasive appeals to the psychological needs of the target audiences.’

If this claim were true it would be a big deal in the advertising world. Further, Kosinski claimed that ‘The assumption is that the same effects can be observed in political messages.’ That would be an even bigger deal.

I was sceptical when I read the 2017 paper, mainly given the large amount of evidence in books like Hacking the Electorate that I touched on in the previous blog, but I didn’t have the time or expertise to investigate. I did read this Wired piece on that paper in which Watts commented:

‘Watts says that the 2017 paper didn’t convince him the technique could work, either. The results barely improve click-through rates, he says — a far cry from predicting political behavior. And more than that, Kosinski’s mistargeted openness ads — that is, the ads tailored for the opposite personality characteristic — far outperformed the targeted extraversion ads. Watts says that suggests other, uncontrolled factors are having unknown effects. “So again,” he says, “I would question how meaningful these effects are in practice.”‘

Another leading researcher, David Lazer, commented:

‘On the psychographic stuff, I haven’t see any science that really aligns with their [CA/Kosinski] claims.’

Another leading researcher, Alex Pentland at MIT (who also won a DARPA project to solve a geolocation intelligence problem), was also sceptical:

‘Everybody talks about Google and Facebook, but the things that people say online are not nearly as predictive as, say, what your telephone company knows about you. Or your credit card company. Fortunately telephone companies, banks, things like that are very highly regulated companies. So we have a fair amount of time. It may never happen that the data gets loose.’

I’ve just been sent this paper (preprint link): Field studies of psychologically targeted ads face threats to internal validity (2018). It is an analysis of Kosinski’s 2017 experiments. It argues that the Kosinski experiment is NOT RANDOMISED and points out statistical and other flaws that undermine Kosinski’s claims:

‘The paper [Kosinski 2017] uses Facebook’s standard ad platform to compare how different versions of ads perform. However, this process does not create a randomized experiment: users are not randomly assigned to different ads, and individuals may even receive multiple ad types (e.g., both extroverted and introverted ads). Furthermore, ad platforms like Facebook optimize campaign performance by showing ads to users whom the platform expects are more likely to fulfill the campaign’s objective… This optimization generates differences in the set of users exposed to each ad type, so that differences in responses across ads do not by themselves indicate a causal effect.’ (Emphasis added.)

Kosinski et al reply here. They admit that the optimisation of Facebook’s ad algorithms could affect their results though they defend their work. (Campaigns face similar operational problems in figuring out ways to run experiments on Facebook without FB’s algorithms distorting them.)
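To illustrate the non-randomisation point, here is a toy simulation (all numbers invented, nothing to do with either paper’s data): both ads have an identical true effect on every user, but because the platform’s delivery optimisation preferentially shows the ‘matched’ ad to users it predicts are click-prone, the ‘matched’ ad still shows a higher click-through rate.

```
import random
random.seed(0)

N = 100_000
# Each user has a latent click propensity; crucially, BOTH ads have the same true effect.
users = [random.betavariate(2, 50) for _ in range(N)]   # mean propensity ~4%

shown = {"matched": 0, "mismatched": 0}
clicks = {"matched": 0, "mismatched": 0}

for p in users:
    # The platform's optimiser: more likely to serve the 'matched' ad to users it
    # predicts will click (proxied here, unrealistically simply, by p itself).
    ad = "matched" if random.random() < min(1.0, 10 * p) else "mismatched"
    shown[ad] += 1
    if random.random() < p:            # identical click model for both ads
        clicks[ad] += 1

for ad in ("matched", "mismatched"):
    print(f"{ad}: CTR = {clicks[ad] / shown[ad]:.2%}")
# The 'matched' ad wins on CTR even though neither ad persuades anyone more than the
# other -- the difference is pure selection by the delivery algorithm.
```

A randomised design would assign users to ads independently of anything the platform predicts about them, which is exactly what the critique says did not happen.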

I am not remotely competent to judge the conflicting claims and haven’t yet asked anybody who is, though I have a (mostly worthless) hunch that the criticisms will stack up. I’ll add an update in the future when this is resolved.

Big claims require good evidence and good science — not what Feynman called ‘cargo cult science’ which accounts for a lot of social science research. Most claims you read about psychological manipulation are rubbish. There are interesting possibilities for applying advanced technology, as I wrote in my last blog, but a) almost everything you read about is not in this class and b) I am sceptical in general that ideas in published work on using Big 5 personality traits could add anything more than a very small boost to political campaigns at best and it can also easily blow up in your face, as Hersh’s evidence to the Senate shows. I strongly suspect that usually the ‘gains’ are less than the fees of the consultants flogging the snake oil — i.e a net loss for campaigns.

If you believe, like the Observer, that the US/UK military and/or intelligence services have access to technological methods of psychological manipulation that greatly exceed what is done commercially, you misunderstand their real capabilities. For example, look at how the commander of US classified special forces (JSOC), Stanley McChrystal, recruited civilians for his propaganda operations in Afghanistan because the military did not know what to do. The evidence since 9/11 is of general failure in the UK/USA viz propaganda / ‘information war’ / ‘hybrid war’ etc. Further, if you want expertise on things like Facebook and Google, the place to look is Silicon Valley, not the Pentagon. Look at how recent UK Prime Ministers have behaved. Look at how Cameron tweeted about rushing back from Chequers in the middle of the night to deal with ISIS beheadings. Look at how Blair, Brown and Cameron foolishly read out the names of people killed in the Commons. Of course it is impossible from the outside to know how much of this is because Downing Street mangles advice and operations and how much is failure elsewhere. I assume there are lots of good people in the system but, like elsewhere in modern Whitehall, expertise is suppressed by centralised hierarchies (as with Brexit).

On campaigns and in government, figuring out the answers to a few deep questions is much more important than practically anything you read about technology issues like microtargeting. But focus and priorities are very hard for big organisations including parties and governments, because they are mostly dominated by seniority, groupthink, signalling, distorted incentives and so on. A lack of focus means they spread intelligent effort too widely and don’t think enough about deep questions that overwhelmingly determine their fate.

Of course, it is possible to use technology to enhance campaigns and it is possible to devise messages that have game-changing effects but the media focus on microtargeting is almost completely misguided and the Select Committee’s inquiry into fake news has mostly spread fake news. There has been zero scrutiny, as far as I have seen, on the evidence from reputable scholars like Duncan Watts or Eitan Hersh on the facts and evidence about microtargeting and fake news in relation to Trump/Brexit. Sadly they are more interested in grandstanding than truth-seeking, which is why the Committee turned down my offer to arrange a time to give evidence and instead tried to grab headlines. I offered friendly cooperation, as the government should have done with Brexit, but the Committee went for empty threats, as per May and Hammond, and this approach will be as successful as this government’s negotiating strategy.

On the referendum #24H: Facebook, data science, technology, elections, and transparency

This blog has two short parts: A) a simple point about Wednesday’s committee hearing, B) some interesting evidence from a rare expert on the subject of data and campaigns, and a simple idea to improve regulation of elections. (And a PS. on hack Jane Merrick spreading more fake news.) There is a very short UPDATE re Facebook posted the next day, highlighted in BOLD below.

A. Re Wednesday’s Select Committee and Facebook letter

Correspondence from Facebook was published and used by the Committee to suggest that Vote Leave/AIQ have lied about when they started working together.

Henry de Zoete was introduced to AIQ on 31 March 2016. (This is all clear in emails that I think have been given to the Electoral Commission — if not they easily could be.)

AIQ did zero work for VL before then and, obviously, did not have access to VL’s Facebook page before we had even spoken to them.

If Facebook is saying that AIQ was running ads for VL in February 2016, then Facebook is wrong. [UPDATE: actually, if you read Facebook’s letter carefully, they correct their own error in a table where they use the timeframe for AIQ activity of “15 April – 23 June”. “15 April” of course fits with the date of VL’s introduction to AIQ I gave in this blog, and is the first day of the official campaign. The MPs either didn’t read the letter properly or chose to use the date which gave them a news story.]

VL was running stuff on FB in February as Facebook says. But this was done by us, NOT by AIQ.

Probably Facebook has looked at the VL FB page, seen activity in February, seen AIQ doing stuff shortly after and wrongly concluded that the earlier activity was also done by AIQ. It wasn’t and any further investigations will show this.

This isn’t actually important viz the legal claims and the EC investigation but I make the point in the interests of trying to clarify FACTS — so far the fake news inquiry has spread fake news around the world and clarified little. Also note how the Committee drops correspondence on the day of the hearing to maximise their chances of creating embarrassing moments for witnesses. This is the behaviour of people happy to see false memes spread, not the behaviour of truth-seeking MPs.

The Committee is now threatening me with ‘contempt of Parliament’. Their behaviour in seeking headlines rather than cooperating with witnesses over dates for evidence is  the sort of behaviour that has increased the contempt of the public for MPs over the last 20 years, which of course contributed to the referendum result. The Committee doesn’t understand Vote Leave. We had to deal with threats from MPs every day for a year, including from the PM/Chancellor and their henchmen who could actually back up serious threats. We ignored that. Why would you think we’re going to worry about EMPTY threats? If you think I care about ‘reputational damage’, you are badly advised.

B. Rare expertise on the subject of data and elections from Eitan Hersh to US Senate

Eitan Hersh wrote a book in 2015 called Hacking the Electorate. It’s pretty much the best book I’ve seen on the use of data science in US elections and what good evidence shows works and does not work.

As I wrote after the referendum, we tried hard in Vote Leave to base decisions on the best EVIDENCE for what works in campaigns and we spent time tracking down a wide variety of studies. Usually in politics everything is done on hunches. Inevitably, the world of ‘communications’ / PR / advertising / marketing is full of charlatans flogging snake oil. It is therefore very easy to do things and spend money just because it’s conventional. Because we were such a huge underdog we had to take some big gambles and we wanted to optimise the effectiveness of our core message as much as possible — if you know the science, you can focus more effectively. The constraints of time, money, and the appalling in-fighting meant we never pushed this nearly as far as I wanted but we tried hard.

For example, one of the few things about advertising which seems logical and has good evidence to support it is — try to get your message in front of people as close to the decision point as possible. That’s why we spent almost the whole campaign testing things (via polls, focus groups, online etc) then dropped most of our marketing budget in the last few days of the campaign. Similarly, Robert Cialdini wrote one of the few very good books on persuasion — Influence — and ideas from that informed how we wrote campaign materials. We were happy to take risks and look stupid. We came across a study where researchers had used as a control a leaflet with zero branding only to find, much to their surprise if I remember right, that it worked much better than all the other examples. We therefore experimented with leaflets stripped of all branding (‘The Facts’) which unleashed another wave of attacks from SW1 (‘worst thing I’ve seen in politics, amateur hour’ etc), but sure enough in focus groups people loved it (the IN campaign clearly found the same because they started copying this).

Of course, all sorts of decisions could not be helped by reliable evidence. But it is a much healthier process to KNOW when you’re taking a punt. Most political operations — and government — don’t try to be rigorous about decision-making or force themselves to think about what they know with what confidence. They are dominated by seniority, not evidence. Our focus on evidence was connected to creating a culture in which people could say to senior people ‘you’re wrong’. This is invaluable. I made many awful mistakes but was mostly saved from the consequences because we had a culture in which people could say ‘you’re wrong’ and fix them fast.

This is relevant to Hersh’s evidence and the conspiracy theories…

Hersh’s evidence should be read by everybody interested in the general issues of data and elections and the recent conspiracy theories in particular. I won’t go into these conspiracies again.

Here are some quotes…

‘Based on the information I have seen from public reports about Cambridge Analytica, it is my opinion that its targeting practices in 2016 ought not to be a major cause for concern in terms of unduly influencing the election outcome…

‘In every election, the news media exaggerate the technological feats of political campaigns…

‘The latest technology used by the winning campaign is often a good storyline, even if it’s false. Finally, campaign consultants have a business interest in appearing to offer a special product to future clients, and so they are often eager to embellish their role in quotes to the media…

‘I found that commercial data did not turn out to be very useful to campaigns. Even while campaigns touted the hundreds or thousands of data points they had on individuals, campaigns’ predictive models did not rely very much on these fields. Relative to information like  age, gender, race, and party affiliation, commercial measures of product preferences did not add very much explanatory power about Americans’ voting behavior…

‘Many commercial fields simply are not highly correlated with political dispositions. And even those that are might not provide added information to a campaign’s predictive models…

‘Nearly everything Mr. Nix articulates here [in a video describing CA’s methods] is not new. Based on what we know from past work,  it is also likely to have been ineffective. Cambridge Analytica’s definition of a persuadable voter is someone who is likely to vote but the campaign isn’t sure who they will vote for. This is a common campaign convention for defining persuadability. It also bears virtually no relationship to which voters are actually persuadable, undecided, or cross-pressured on issues, as I discuss in Hacking the Electorate… Cambridge Analytica’s strategy of contacting likely voters who are not surely supportive of one candidate over the other but who support gun rights and who are predicted to bear a particular personality trait is likely to give them very little traction in moving voters’ opinions. And indeed, I have seen no evidence presented by the firm or by anyone suggesting the firm’s strategies were effective at doing this 

‘As many journalists have observed, building a psychological profile by connecting Facebook “likes” to survey respondents who took a personality test would lead to inaccurate predictions. Facebook “likes” might be correlated with traits like openness and neuroticism, but the correlation is likely to be weak. The weak correlation means that the prediction will have lots of false positives…

‘In campaign targeting models I have studied, predictions of which voters are black or Hispanic are wrong about 25-30% of the time. Models of traits such as issue positions or personality traits are likely to be much less accurate. They are less accurate because they are less stable and because available information like demographic correlates and Facebook “likes” are probably only weakly related to them…

‘In a series of experiments, a colleague and I found that voters penalize candidates for mis-targeting such that any gains made through a successful target are often canceled out by losses attributable to mistargets… 

I am skeptical that Cambridge Analytica manipulated voters in a way that affected the election 

[Hersh then says ‘The skepticism I offer comes with a high degree of uncertainty’ and describes some of the gaps in what we know about such things. He also calls on Facebook to make its data available to researchers.]

‘News, both real and fake, is disseminated among users because it feels good to share. The kinds of news and content that often piques our interest appeals to our basest instincts; we are drawn to extremism, provocation, and outrage.’

Transparency — two simple ideas to improve things

In the last section Hersh discusses some broad points about transparency and social media. These things are important as I said after the referendum. Sadly, the focus on conspiracy theories has diverted the media and MPs away from serious issues.

I have zero legal responsibility for Vote Leave now — I ceased to be a director as part of our desperate rearguard action during the coup that kicked off on 25 January 2016. But I wouldn’t mind if Facebook wanted to take ALL of Vote Leave’s Facebook data that may be still sitting in ad manager etc — data normally considered very sensitive and never published by campaigns — and put the whole lot on its website available for download by anybody (excluding personal data so no individuals could be identified, which presumably would be illegal).

Why?

  1. In principle I agree with Hersh and think serious academic scrutiny would be good.
  2. In the interests of the VL team, it would prove what I have been saying and prove aspects of the conspiracy theories wrong. We never saw/used/wanted the data improperly acquired by CA. We did practically no ‘microtargeting’ in the normal sense of the term and zero using so-called ‘psychographics’ for exactly the reason described above — we tried to base decisions on good evidence and the good evidence from experts like Hersh was that it was not a good use of time and money. We focused on other things.

Here is another idea.

Why not have a central platform (managed by a much-reformed and updated Electoral Commission with serious powers) and oblige all permitted participants in elections to upload samples of all digital ads to this platform (say daily?) for public inspection by anybody who wants to look? After the election, further data on buy size, audience etc could be made automatically available alongside each sample. This would add only a tiny admin burden to a campaign but it would ensure that there is a full and accurate public record of digital campaigning.
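As a sketch of how light-touch such an upload could be, here is one possible shape for each record (the field names are mine and purely illustrative — nobody has proposed this exact format):

```
from dataclasses import dataclass, asdict
from datetime import date
from typing import Optional
import json

@dataclass
class AdDisclosure:
    """One record per creative, uploaded (say) daily by each permitted participant."""
    campaign: str                      # the registered permitted participant
    platform: str                      # e.g. "Facebook", "YouTube"
    first_shown: date
    creative_url: str                  # archived copy of the ad exactly as served
    landing_page: str
    # These could be published automatically after polling day:
    spend_gbp: Optional[float] = None
    impressions: Optional[int] = None
    audience_description: Optional[str] = None

record = AdDisclosure(
    campaign="Example Campaign Ltd",
    platform="Facebook",
    first_shown=date(2016, 6, 1),
    creative_url="https://register.example/ads/0001.png",
    landing_page="https://example.org/why-vote",
)
print(json.dumps(asdict(record), default=str, indent=2))
```

A handful of fields per ad is a trivial burden for a campaign, but in aggregate it produces the complete public record described above.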

Of course, this idea highlights an obvious point — there has never been any requirement on the parties to do this with paper documents. Part of the reason for the rage against Vote Leave in SW1 is that the referendum victory was something done to SW1 and the parties, not something done by them, hence partly their scrutiny of our methods. (This is also partly why the MPs are struggling so much to get to grips with the consequences.) There are no silver bullets but this simple measure would do some good and I cannot see a reasonable objection. Professional campaigners and marketers would hate this as they profit from a lack of transparency and flogging snake oil but their concerns should be ignored. Will the parties support such transparency for themselves in future elections?

One of the many opportunities of Brexit, as I’ve said before, comes in how we regulate such things. American law massively reflects the interests of powerful companies. EU law, including GDPR, is a legal and bureaucratic nightmare. The UK has, thanks to Brexit, a chance to regulate data better than either. This principle applies to many other fields, from CRISPR and genetic engineering to artificial intelligence and autonomous vehicles, which in the EU will be controlled by the ECJ interpreting the Charter of Fundamental Rights (and be bad for Europe’s economies and democracies). MPs could usefully consider these great opportunities instead of nodding along as officials do their best to get ministers to promise to maintain every awful set of EU rules until judgement day.

The issue of data-technology-elections is going to become more and more important, fast. While the field is dominated by charlatans, it is clear that there is vast scope for non-charlatans to exploit technology and potentially do things far more effective, and potentially dangerous for democracy, than CA has claimed (wrongly) to do. Having spent some time in Silicon Valley since the referendum, it is obvious that it is/will be possible to have a decisive impact on a UK election using advanced technology. The limiting factors will be cash and a very small number of highly able people: i.e an operation to change an election could scale very effectively and stay hidden to a remarkable degree. The laws are a joke. MPs haven’t mastered the 70-year-old technology of TV. How do you think they’d cope with people using tools like Generative Adversarial Networks (GANs) — never mind what will be available within five years? The gaps in technical skill between commercial fields are extreme and getting wider as the west coast of America and coastal China suck in people with extreme skills. Old media companies already cannot compete with the likes of Google and the skill gaps — and their consequences — grow every day.

But but but — technology alone will very rarely be the decisive factor: ‘people, ideas, and machines — in that order’, shouted Colonel Boyd at audiences, and this will remain true until/unless the machines get smarter than the people. The most important thing for campaigns (and governments) to get right is how they make decisions. If you do this right, you will exploit technology successfully. If you don’t — like the Tories in 2017 who created a campaign organisation violating every principle of effective action — no advantage in technology or cash will save you. And to get this right, you should study examples from the ancient world to modern projects like ARPA-PARC and Apollo (see here).

Anyway, I urge you to read Hersh’s evidence and ponder his warnings at the end, it will only take ~15 minutes. If interested, I also urge you to read some of the work by Rand Waltzman who ran a DARPA project on technology and social media. He has mostly been ignored in Washington as far as I can see but he should not be. He would be one of the most useful people in the world for MPs and hacks interested in these issues to speak to.

https://www.eitanhersh.com/uploads/7/9/7/5/7975685/hersh_written_testimony_senate_judiciary.pdf


PS. I’ve just been sent a blog on The Times website by Jane Merrick. It includes this regarding the latest odd news about a C4 drama:

‘Yet as with Mandelson, Cummings seems to complain about everything that is ever written about him, and so his reaction from his Twitter account — @odysseanproject (don’t ask) — was this: “What’s the betting this will be a Remain love-in and dire.” Oh how humbly he does brag!

I’ve had hacks email me asking me to ‘defend’ things on that Twitter account.

1. That is not my twitter account — it is a fake account. It’s interesting how many hacks complain about fake news while spreading it themselves. If you’re going to make claims about anonymous Twitter accounts (as she does elsewhere in her blog), try not to get confused by obvious parodies.

2. She also doesn’t mention that her husband, Toby Helm, was the SW1 equivalent of the guy in Scream chasing me and Henry de Zoete around Westminster for two years with a carving knife and a scream mask. The Observer promised the lobby I’d be marched out of the DfE in handcuffs. Nothing happened. Why? Because hate clouded their judgement, they botched the facts, and their claims were bullshit. Sound familiar?

[Update: The Times has cut that passage from the blog.]

On the referendum #24E: Facebook proves central allegation in Observer/Channel 4 conspiracy theory is wrong

Facebook has provided evidence to Parliament and the ICO and Electoral Commission relevant to the recent stories about whistleblowers and the referendum.

It proves exactly what I have said about the Observer/C4 conspiracy theory — the claim that Vote Leave/I were secretly coordinating with Leave.EU/Cambridge Analytica and using the infamous Kogan/Cambridge Analytica data.

TIYDL (‘thisisyourdigitallife’) is the app Kogan used to collect the infamous data that was given to Cambridge Analytica. Facebook’s evidence states:

Use of TIYDL data – When an advertiser runs an ad campaign on Facebook one way they can target their ads is to use a list of email addresses (such as customers who signed up to their mailing list). AIQ used this method for many of their advertising campaigns during the Referendum. The data gathered through the TIYDL app did not include the email addresses of app installers or their friends. This means that AIQ could not have obtained these email addresses from the data TIYDL gathered from Facebook. AIQ must have obtained these email addresses for British voters targeted in these campaigns from a different source. We also conducted an analysis of the audiences targeted by AIQ in its Referendum-related ads, on the one hand, and UK user data potentially collected by TIYDL, on the other hand, and found very little overlap (fewer than 4% of people were common to both data sets, which is the same overlap we would find with random chance). This further suggests that the data from TIYDL was not used to build AIQ’s data sets in connection with the Referendum campaigns, although only AIQ has access to complete information about how it generated these data sets.’ [Emphasis added]

Note — this is not a statement about probabilities, it is certain: ‘AIQ could not have… AIQ must have…’ The emails used by AIQ for targeting ‘COULD NOT HAVE’ come from CA. This flatly contradicts Wylie.

Further, Facebook looked to see if there was evidence of targeting via a different route and found that the overlap with TIYDL data is ‘the same overlap we would find with random chance’. This flatly contradicts Wylie. 
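(For readers wondering what ‘the same overlap we would find with random chance’ means: if two lists of a and b people are drawn independently from the same pool of N users, the expected number appearing on both is roughly a×b/N. Facebook’s point is that the observed overlap was no larger than that baseline — i.e. no sign that the TIYDL data fed the audiences AIQ targeted.)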

The central claims of the Observer, Channel 4, Michael Crick, Jon Snow, Wylie, Shahmir et al used to support their overall conspiracy theory are factually wrong. As I said weeks ago, Wylie’s claims about VL’s use of data were obviously technically laughable. Other libellous claims by the Observer/C4 concerning the ‘destruction of evidence’ on the VL google drive will similarly be shown to be factually wrong, showing neither who nor what the Observer/C4 claimed.

Hopefully honest and professional media organisations will not repeat their conspiracy theories.

As I have said repeatedly, no reasonable person could think that the battle between Vote Leave/me and Leave.EU/Banks to control the official campaign really was a deep cover operation to hide our secret coordination over data.

There are serious issues concerning data, marketing and elections as I said before this conspiracy theory got going. It would be much better for the media to focus on these issues than persist Trump-like with claims that black = white.

Facebook evidence here

https://www.parliament.uk/documents/commons-committees/culture-media-and-sport/Written-evidence-Facebook.pdf