Synthetic Biology and the “Proactionary Principle”

by Andrew Maynard on July 11, 2013

In yesterday’s piece on the Guardian’s Political Science blog, Steve Fuller – author and professor of social epistemology at the University of Warwick –  questions whether the time has come for policy-makers and scientists to move on from the precautionary principle and adopt a proactionary principle as a means of promoting calculated risk-taking.

The proactionary principle can be summarized as:

Assessing risks and opportunities [associated with technology innovation] using an objective, open, and comprehensive, yet simple decision process based on science rather than collective emotional reactions. The principle accounts for the costs of restrictions and lost opportunities as fully as direct effects. It favors measures that are proportionate to the probability and magnitude of impacts, and that have the highest payoff relative to their costs. And it gives a high priority to people’s freedom to learn, innovate, and advance.
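
One simplistic, purely illustrative way to read that decision rule (my own gloss, not part of the principle’s formulation or Steve’s argument) is to treat each policy option as a bundle of probability-weighted benefits and harms, charge it for the cost of any restriction and for the opportunities it forecloses, and favor whichever option comes out best. Here is a minimal Python sketch with made-up names and numbers:

```python
# A toy reading of the proactionary decision rule quoted above.
# All option names and numbers are illustrative assumptions, not data.

from dataclasses import dataclass


@dataclass
class Option:
    name: str
    expected_benefit: float   # probability-weighted upside of the option
    expected_harm: float      # probability-weighted direct harms
    restriction_cost: float   # cost of imposing the restriction itself
    lost_opportunity: float   # value of opportunities forgone under this option

    def net_payoff(self) -> float:
        # Benefits minus *all* costs: direct harms, the restriction itself,
        # and the opportunities the restriction forecloses.
        return self.expected_benefit - (
            self.expected_harm + self.restriction_cost + self.lost_opportunity
        )


options = [
    Option("prohibit the research",        0.0, 0.0, 2.0, 8.0),
    Option("proceed with safeguards",     10.0, 3.0, 1.0, 0.5),
    Option("proceed without constraints", 12.0, 9.0, 0.0, 0.0),
]

best = max(options, key=Option.net_payoff)
print(f"{best.name}: net payoff {best.net_payoff():.1f}")
```

The distinctive move, relative to a purely precautionary calculation, is that restriction itself is charged for the opportunities it forecloses.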

Steve argues that, while the precautionary principle stifles progress by discouraging risk-taking and encouraging policies that play it safe, the proactionary principle “valorises calculated risk-taking as essential to human progress, where the capacity for progress is taken to define us as a species.”  He also explores the idea of “seasteading” as a way of following this principle outside of governmental constraints, by setting up research ships or floating communities outside the territorial waters of nations that adhere to the precautionary principle.

As I describe in my response to Steve’s piece on the Guardian blog, I find such a proactionary response seductive as a scientist.  But like most seductive ideas, it appeals to my personal interest rather than my social conscience.  As with the previous post in the series by Tracey Brown, I’d encourage you to read Steve’s piece in full.  But here are my edited responses to his article (see the original comments on the Guardian website):

As a scientist, there are aspects of the “proactionary principle” as outlined by Steve that resonate with me – I get excited by “what if…” questions, and would quite happily explore new ideas and avenues simply because they were intellectually stimulating to me. And I buy into the idea that progress entails a certain degree of risk – I am, after all, the Director of an academic center that explores the science of risk. But at the same time, I am aware that unfettered research, without a guiding ethic that is embedded in policy and process, is potentially highly destructive.

For example, this week the 6th International Meeting on Synthetic Biology is being held in London, where scientists from around the world are exchanging ideas on radically altering living organisms through redesigning and engineering their genetic code. At its simplest conceptualization, synthetic biology enables new code to be developed and experimented with on computers – and then “printed out” in real life and inserted into cells. It’s an incredibly powerful emerging area of science and technology – but also one that could be highly destructive if developed without constraints.
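
To make that “develop the code on a computer, then print it out” idea a little more concrete, here is a deliberately toy Python sketch (the sequence and the cut-down codon table are my own illustrative assumptions, not anything presented at the London meeting). At its simplest, designing genetic code in silico is string manipulation: compose a DNA sequence and check what protein it would encode, with the consequential step being the later synthesis of that sequence into a real molecule.

```python
# Toy illustration only: a designed DNA sequence is checked in software
# before anything is ever synthesized. The codon table is deliberately
# truncated to just the codons used below.

CODON_TABLE = {
    "ATG": "M", "TGG": "W", "TTT": "F", "AAA": "K",
    "GGC": "G", "GAA": "E", "TAA": "*",  # "*" marks a stop codon
}


def translate(dna: str) -> str:
    """Translate a DNA coding sequence into a one-letter protein string."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        codon = dna[i:i + 3]
        amino_acid = CODON_TABLE.get(codon, "?")  # "?" = codon not in toy table
        if amino_acid == "*":                     # stop translating at a stop codon
            break
        protein.append(amino_acid)
    return "".join(protein)


designed_sequence = "ATGTGGTTTAAAGGCGAATAA"  # composed entirely in silico
print(translate(designed_sequence))           # -> MWFKGE
```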

Imagine a “seastead” where researchers had no ethical or social constraints whatsoever on their synthetic biology research. Recreating deadly viruses would be relatively easy – making them even more deadly, or targeting them to specific genetic traits within individuals and populations, would not be beyond plausibility. Modifying gut bacteria so that they increase nutrient uptake from food, protect against viruses and harmful bacteria, or promote benign physiological changes that indicate the development of chronic disease wouldn’t be beyond the realms of possibility. Neither would the emergence of potentially devastating and irreversible adverse health effects arising from supposedly beneficial technologies that hadn’t been comprehensively tested. Gene therapies could be developed that provide people with short-term genetic enhancements with no thought to long-term consequences. Hybrid animals could be engineered that were intellectually interesting to create, but ethically questionable. The list of the possible in this field is a long one.

Why would researchers do this? Because they could. And because a personal ethic of what is acceptable and what is not can differ significantly from the broader social ethic within which researchers normally work.

But such “proactionary” research is not occurring – or at least is not widespread.  And that is because the global synthetic biology community recognizes that such a potentially powerful technology needs to be handled responsibly, and that precautions need to be taken to ensure that the ability and enthusiasm of researchers are channeled toward outcomes that are socially, environmentally and economically beneficial and sustainable.

I’m not sure how much talk of the precautionary principle there is at this week’s meeting in London. But I can guarantee that there is a lot of discussion surrounding the principle of precaution. And I may be wrong, but I suspect that many of these researchers at the cutting edge of technology would shy away from a “proactionary principle” – simply because it lacks the constraints necessary to ensure the long-term positive impact of a field they feel is socially important, as well as personally exciting.
