The most secure bio-labs routinely make errors that could cause a global pandemic & are about to re-start experiments on pathogens engineered to make them mammalian-airborne-transmissible

‘Although the institutions of our culture are so amazingly good that they have been able to manage stability in the face of rapid change for hundreds of years, the knowledge of what it takes to keep civilization stable in the face of rapidly increasing knowledge is not very widespread. In fact, severe misconceptions about several aspects of it are common among political leaders, educated people, and society at large. We’re like people on a huge, well-designed submarine, which has all sorts of lifesaving devices built in, who don’t know they’re in a submarine. They think they’re in a motorboat, and they’re going to open all the hatches because they want to have a nicer view.’ David Deutsch, the physicist who extended Alan Turing’s 1936 paper on classical computation to quantum computation. 

Experiments on viruses that could cause a global pandemic killing many millions were halted but were recently cleared to resume and will be conducted in these ‘top security’ labs.

The new issue of the Bulletin of the Atomic Scientists carries research showing that the supposedly most secure bio-labs have serious security problems and clearly present an unacceptable risk of causing a disastrous pandemic:

Incidents causing potential exposures to pathogens occur frequently in the high security laboratories often known by their acronyms, BSL3 (Biosafety Level 3) and BSL4. Lab incidents that lead to undetected or unreported laboratory-acquired infections can lead to the release of a disease into the community outside the lab; lab workers with such infections will leave work carrying the pathogen with them. If the agent involved were a potential pandemic pathogen, such a community release could lead to a worldwide pandemic with many fatalities. Of greatest concern is a release of a lab-created, mammalian-airborne-transmissible, highly pathogenic avian influenza virus, such as the airborne-transmissible H5N1 viruses created in the laboratories of Ron Fouchier in the Netherlands and Yoshihiro Kawaoka in Madison, Wisconsin.

Such releases are fairly likely over time, as there are at least 14 labs (mostly in Asia) now carrying out this research. Whatever release probability the world is gambling with, it is clearly far too high a risk to human lives. Mammal-transmissible bird flu research poses a real danger of a worldwide pandemic that could kill human beings on a vast scale.

Human error is the main cause of potential exposures of lab workers to pathogens. Statistical data from two sources show that human error was the cause of, according to my research, 67 percent and 79.3 percent of incidents leading to potential exposures in BSL3 labs…

‘A key observation is that human error in the lab is mostly independent of pathogen type and biosafety level. Analyzing the likelihood of release from laboratories researching less virulent or transmissible pathogens therefore can serve as a reasonable surrogate for how potential pandemic pathogens are handled. (We are forced to deal with surrogate data because, thank goodness, there are little data on the release of potentially pandemic agents.) Put another way, surrogate data allows us to determine with confidence the probability of release of a potentially pandemic pathogen into the community. In a 2015 publication, Fouchier describes the careful design of his BSL3+ laboratory in Rotterdam and its standard operating procedures, which he contends should increase biosafety and reduce human error. Most of Fouchier’s discussion, however, addresses mechanical systems in the laboratory.

But the high percentage of human error reported here calls into question claims that state-of-the-art design of BSL3, BSL3+ (augmented BSL3), and BSL4 labs will prevent the release of dangerous pathogens. How much lab-worker training might reduce human error and undetected or unreported laboratory acquired infections remains an open question. Given the many ways by which human error can occur, it is doubtful that Fouchier’s human-error-prevention measures can eliminate release of airborne-transmissible avian flu into the community through undetected or unreported lab infections…

‘In its 2016 study for the NIH, “Risk and Benefit Analysis of Gain of Function Research,” Gryphon Scientific looked to the transportation, chemical, and nuclear sectors to define types of human error and their probabilities. As Gryphon summarized in its findings, the three types of human error are skill-based (errors involving motor skills requiring little thought), rule-based (errors in following instructions or set procedures accidentally or purposely), and knowledge-based (errors stemming from a lack of knowledge or a wrong judgment call based on lack of experience). Gryphon claimed that “no comprehensive Human Reliability Analysis (HRA) study has yet been completed for a biological laboratory… . This lack of data required finding suitable proxies for accidents in other fields.”

‘But mandatory incident reporting to FSAP and NIH actually does provide sufficient data to quantify human error in BSL3 biocontainment labs…

[An example] A fourth release in 2014 from the CDC labs occurred when “Scientists inadvertently switched samples designated for live Ebola virus studies with samples intended for studies with inactivated material. As a result, the samples with viable Ebola virus, instead of the samples with inactivated Ebola virus, were transferred out of a BSL-4 laboratory to a laboratory with a lower safety level for additional analysis. While no one contracted Ebola virus in this instance, the consequences could have been dire for the personnel involved as there are currently no approved treatments or vaccines for this virus.”…

‘In an analysis circulated at the 2017 meeting of the Biological Weapons Convention, a conservative estimate shows that the probability is about 20 percent for a release of a mammalian-airborne-transmissible, highly pathogenic avian influenza virus into the community from at least one of 10 labs over a 10-year period of developing and researching this type of pathogen… Analysis of the FOIA NIH data gives a much higher release probability: a factor of five to 10 times higher.
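The arithmetic behind these figures can be sketched. Assuming (my assumption, not spelled out in the quoted article) that the estimate treats the scenario as 10 labs running for 10 years, i.e. 100 roughly independent lab-years, the implied per-lab-year release probability, and what a "five to 10 times higher" per-lab-year rate would mean over the same period, work out as follows:

```python
# Back-of-envelope sketch of the release probabilities quoted above.
# Assumption (mine): 10 labs x 10 years = 100 independent lab-years,
# and the 20 percent figure is P(at least one release) over that period.

labs, years = 10, 10
lab_years = labs * years

p_total = 0.20  # conservative estimate: >=1 release over 100 lab-years

# Per-lab-year probability implied by P(at least one) = 1 - (1 - p)^n
p_per_lab_year = 1 - (1 - p_total) ** (1 / lab_years)
print(f"implied per-lab-year release probability: {p_per_lab_year:.4f}")  # ~0.0022

# The FOIA NIH data reportedly implies a 5-10x higher per-lab-year rate;
# over the same 100 lab-years that compounds dramatically.
for factor in (5, 10):
    p_hi = 1 - (1 - factor * p_per_lab_year) ** lab_years
    print(f"{factor}x per-lab-year rate -> P(>=1 release) = {p_hi:.0%}")
```

The point of the compounding is that even a tiny per-lab-year rate, multiplied across many labs and years, produces an uncomfortably large cumulative probability.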

‘The avian flu virus H5N1 kills 60 percent of people who become infected from direct contact with infected birds. The mammalian-airborne-transmissible, highly pathogenic avian influenza created in the Fouchier and Kawaoka labs should be able to infect humans through the air, and the viruses could be deadly.

A release into the community of such a pathogen could seed a pandemic with a probability of perhaps 15 percent. This estimate is from an average of two very different approaches…

‘Combining release probability with the not insignificant probability that an airborne-transmissible influenza virus could seed a pandemic, we have an alarming situation…
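The combination the author describes is a simple multiplication. A minimal sketch, using the article's own illustrative figures (20 percent release probability over the 10-lab, 10-year scenario, and a perhaps 15 percent chance that a community release seeds a pandemic):

```python
# Combining the article's two estimates:
# P(pandemic) = P(release) * P(pandemic | release).
p_release = 0.20   # >=1 community release from 10 labs over 10 years (conservative)
p_seed = 0.15      # chance a community release seeds a pandemic
p_pandemic = p_release * p_seed
print(f"P(lab-seeded pandemic over the scenario) = {p_pandemic:.0%}")  # 3%
```

A roughly 3 percent chance of a lab-seeded pandemic over a decade, on the conservative release estimate, is what makes the situation "alarming"; the higher FOIA-derived release probabilities push this figure correspondingly higher.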

‘Those who support mammalian-airborne-transmissible, highly pathogenic avian influenza experiments either believe the probability of community release is infinitesimal or the benefits in preventing a pandemic are great enough to justify the risk. For this research, it would take extraordinary benefits and significant risk reduction via extraordinary biosafety measures to correct such a massive overbalance of highly uncertain benefits to too-likely risks.

Whatever probability number we are gambling with, it is clearly far too high a risk to human lives. There are experimental approaches that do not involve live mammalian-airborne-transmissible, highly pathogenic avian influenza which identify mutations involved in mammalian airborne transmission. These “safer experimental approaches are both more scientifically informative and more straightforward to translate into improved public health…” Asian bird flu virus research to develop live strains transmissible via aerosols among mammals (and perhaps some other potentially pandemic disease research as well) should for the present be restricted to special BSL4 laboratories or augmented BSL3 facilities where lab workers are not allowed to leave the facility until it is certain that they have not become infected.’

This connects to my blog last week on nuclear/AGI safety and how to turn government institutions responsible for decisions about billions of lives and trillions of dollars from hopeless to high performance. Science is ‘a blind search algorithm’. New institutions are needed that incentivise hard thinking about avoiding disasters…

As the piece above stresses and lessons from nuclear safety also show, getting the physical security right is only one hard problem. Most security failings happen because of human actions that are not envisaged when designing systems. This is why Red Teams are so vital but they cannot solve the problem of broken political institutions. Remember: Red Teams told the federal government all about the failures of airline security at the airports used by the 9/11 attackers before 9/11. Those who wrote the reports were DEMOTED and the Red Team was CLOSED: those with power did not want to hear.

The problems considered above are ‘accidents’ — what if these systems were subject to serious penetration testing by the likes of Chris Vickery? (Also consider that there is a large network of Soviet scientists that participated in the covert Soviet bio-weapons program that the West was almost completely ignorant about until post-1991. Many of these people have scattered to places unknown with who knows what.)

Pop Quiz…

A. How many MPs understand security protocols in UK facilities rated ‘most secure’?

B. Does the minister responsible? Have they ever had a meeting with experts about this? Is the responsible minister even aware of the very recent research above? Are they aware that these experiments are about to restart? When was the last time a very high level Red Team test of supposedly ‘top secret/secure’ UK facilities was conducted using teams with expertise in breaking into secure facilities by any means necessary, legal or illegal (i.e. a genuine ‘free play’ exercise, not a scripted game where the Red Team is prohibited from being too ‘extreme’)? Has this happened at all in the last 10 years? How bad were the results? Were any ministers told? Have any asked? Does any minister even know who is responsible for such things? Are officials of the calibre of those who routinely preside over procurement disasters in charge (back in SW1) of the technical people working on such issues (after all, some play senior roles in Brexit negotiations)?

C. How much coverage of the above finding has appeared in newspapers like the FT?

My answers would be: A. ~0. B. Near total general failure. C. ~0.

A hypothesis that should be tested: with a) under £1 million to play with, b) the ability to recruit a team from among special forces/intel services/specialist criminals/whoever, and c) no rules (so for example they could deploy honey traps on the head of security), a Red Team would break into the most secure UK bio-research facilities and acquire material that could be released publicly in order to cause deaths on the scale of millions. A serious test would also reveal that there is no serious attempt to incentivise the stars of Whitehall to work on such important issues or to involve extremely able people from outside Whitehall.

As I wrote last week, it was clear years ago that a smart teen could take out any world leaders using a drone in Downing Street — they can’t even install decent CCTV and audio — but we should be much more worried about bio-facilities.

3 thoughts on “The most secure bio-labs routinely make errors that could cause a global pandemic & are about to re-start experiments on pathogens engineered to make them mammalian-airborne-transmissible”

  1. This was a really excellent post. I have an interest in biosecurity as a risk worth spending far more to manage than we currently are; are there any organizations you think are doing good work to establish better policy (or at least better knowledge) in this area?


  2. “As I wrote last week, it was clear years ago that a smart teen could take out any world leaders using a drone in Downing Street — they can’t even install decent CCTV and audio — but we should be much more worried about bio-facilities.”

    Yet no world leader has been killed in a drone attack, and the only attempt (Venezuela) failed miserably. Presumably there are a fair number of people willing to sacrifice their lives to kill a major world leader (I believe ISIS had a fair few doctors within its ranks). What is the explanation for this seemingly paradoxical state of affairs?


  3. I would have thought that the relevant ministers would be the SoS for Health and the SoS for Defence, along with the Chiefs of Staff, the MHRA and the HPA.
