Quantitative Risk Assessment is hardly a topic that is likely to be seen trending on Twitter or going viral on YouTube anytime soon. But it is important. As I teach my students, how we assess and address human health risks affects almost every aspect of our lives. Beyond the obvious benefits to our health and well-being, risk assessment plays a role in economic growth, sustainable development, social and environmental justice, and many other facets of life in today’s complex society. The reason is that all risk-related decisions carry with them personal, social, environmental, economic and political costs and benefits. And getting things wrong can really screw up the risk-benefit math – whether considering social and environmental implications or the long-term impact on profit margins.
For nearly three decades, human health risk assessment in the US has been built on a firm foundation of science. Grounded in the framework established in the 1983 National Academy of Sciences “Red Book”, Quantitative Risk Assessment follows a process of deriving an assessment of risk associated with a given agent that is based on the data, the whole data and nothing but the data. The aim was, and remains, to provide an independent, transparent, unbiased, science-based evaluation of risk that can inform the far messier process of deciding what to do with the numbers that emerge from the analysis.
But what if these numbers themselves are flawed, or open to misinterpretation, or just plain misleading? Where does this leave us in an increasingly complex world of risk decisions?
In the September 6 edition of the journal Nature, George Gray and Josh Cohen address this question in a commentary that challenges the efficacy of the US Environmental Protection Agency’s current approach to risk assessment. Their concern stems from the need for the EPA to keep up with a growing demand for evidence-based evaluations of risk that support informed and balanced decision-making. It’s a concern supported in part by reviews from the National Academy of Sciences and the US Congress. As Gray and Cohen state in the commentary,
“[The US EPA’s] flagship Integrated Risk Information System (IRIS), which develops risk values for human chemical exposure that are used by regulators and others, is being widely criticized for being too slow and scientifically flawed”
Gray and Cohen go on to cite the sheer slowness of the EPA risk assessment process, and questions over the scientific credibility of the resulting analyses, as critical flaws in a system ripe for an overhaul.
Getting down to specifics, Gray and Cohen highlight four challenges they see in the current system:
- Prioritized reviews of previously assessed chemicals. The authors raise concerns that re-reviews of chemicals like mercury and dioxin are absorbing risk assessment bandwidth at the expense of first-time chemical evaluations.
- Slow progress leading to crucial data gaps. Gray and Cohen worry that commercial chemicals lacking an IRIS assessment are perceived as safer alternatives to those within the system, even though in practice they may not be.
- Extrapolating risk evaluations to low human exposure levels. Risk data are usually associated with the relatively high exposures needed to produce an observable effect within a reasonable time period, which makes extrapolating to human-realistic exposures complex and controversial. Gray and Cohen are concerned that the process of extrapolation, especially when using animal studies, errs too far on the side of caution, leading to socially and economically questionable risk decisions.
- Conservative risk assessments based on possibility rather than plausibility. Gray and Cohen express concern that over-cautious approaches to risk assessment that rely on the possibility of harm occurring rather than the plausibility of likely harm can lead to “overly-stringent regulation and can scramble agency priorities because the degree of precaution differs across chemicals”.
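To see why the extrapolation challenge above is so contentious, consider a toy calculation under a linear no-threshold assumption, where excess risk is simply scaled down in proportion to dose. This is a minimal sketch with entirely hypothetical numbers, not the EPA’s actual procedure, and it ignores the interspecies adjustments and uncertainty factors a real assessment would apply:

```python
# Toy illustration of linear low-dose extrapolation.
# All numbers are hypothetical and for illustration only.

def linear_extrapolated_risk(observed_dose, observed_risk, human_dose):
    """Linear no-threshold extrapolation: excess risk scales with dose."""
    slope = observed_risk / observed_dose  # excess risk per unit dose
    return slope * human_dose

# Hypothetical high-dose animal study: 10% excess risk at 50 mg/kg-day.
high_dose, high_risk = 50.0, 0.10

# Human-realistic exposure, orders of magnitude lower.
human_dose = 0.001  # mg/kg-day

risk = linear_extrapolated_risk(high_dose, high_risk, human_dose)
print(f"Extrapolated excess risk: {risk:.1e}")  # prints 2.0e-06
```

The controversy lies in that straight line: if the true dose–response curve flattens or has a threshold at low doses, a linear extrapolation overstates the risk, which is exactly the kind of built-in conservatism Gray and Cohen question.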
While there is far from universal agreement over these perceived flaws, the issues raised clearly warrant close attention if effective risk assessment is to continue to support informed decision-making. In their own response to what they see as a risk assessment approach that needs to be rethought, Gray and Cohen propose four areas that should be addressed:
- EPA should expand IRIS to include sources of information that are not currently used. These should include risk values developed by international public health bodies, other agencies, and private groups.
- EPA should integrate data from its internal programs into IRIS, such as the Provisional Peer-Reviewed Toxicity Values database.
- EPA should expedite its exploration of high throughput screening and incorporate resulting evaluations into IRIS.
- EPA should replace risk values that are built on assumptions embedded within science policy with risk values that acknowledge uncertainty.
Perhaps most significantly, Gray and Cohen propose that the US EPA
“should not see assessment as a search for scientific truth, but as a way to bring available information to bear on regulatory and public-health decisions.”
This is a deceptively strong statement, one that challenges the very basis of quantitative risk assessment within the US federal government. Rather than viewing quantitative risk assessment as an independent process that brings the authority of science to bear on risk decisions, Gray and Cohen argue that a far more pragmatic and outcomes-focused approach is needed if existing and emerging risks are to be handled effectively.
Whether such a radical re-alignment is warranted remains the subject of deep debate within expert circles. Yet given the importance of effective risk-based decisions to the lives and livelihoods of pretty much everyone, maybe it’s time the debate had a larger audience. And while Gray and Cohen’s analysis and proposals are controversial, they do open up the discussion on how best to meet new challenges and opportunities in the quest to make a rapidly developing world a safer, better place.