When Risk Communications Are Precise, Accurate and Utterly Meaningless

by Brian Zikmund-Fisher on January 30, 2013

I hope the title of this post got your attention.

It’s a statement that seems to violate the fundamental concept of risk communication.

How is it possible that risk communications could be accurate yet meaningless?

Isn’t the whole point of risk communication to help people to quantify the uncertainty in their lives? Isn’t it better to know the chance that I will get cancer or have a car accident or develop Alzheimer’s disease more and more precisely?

How could more data, better data, be less helpful?

Perhaps it will be easier if I rephrase the question:

How is it possible that risk data could be precise estimates of the likelihood of events happening yet simultaneously useless for decision making?

The answer is easy:
Risk communications can be accurate and precise representations of risk likelihoods yet meaningless when their quantitative precision is both (a) unnecessary for effective decision making and (b) distracting, thereby preventing the audience from understanding the simpler “gist” that they do need for decision making.

The February 2013 supplement to the journal Medical Care Research and Review contains a set of papers all under the broad heading: Differing Levels of Clinical Evidence: Exploring Communication Challenges in Shared Decision Making. These papers all stem from presentations at the AHRQ-funded 2011 Eisenberg Center Conference Series. (Videos of the presentations are available online through the previous link.)

My contribution to this volume is a paper titled, “The Right Tool Is What They Need, Not What We Have: A Taxonomy of Appropriate Levels of Precision in Patient Risk Communication.”

In it, I outline a taxonomy of seven levels of precision in quantitative risk communications, ranging from possibility statements (e.g., “X could happen”) through relative possibility statements (“X is more likely than Y”), categorical possibility statements (e.g., “you have a high chance of X happening”), to precise quantitative concepts like comparative probability statements (“there is an A% chance of X happening, compared to a B% chance of Y happening”) and incremental probability statements (e.g., “the risk of X will change by A% if I do Z”).
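To make the gradient concrete, here is a minimal sketch of the same risk expressed at the levels of precision named above. Only five of the taxonomy's seven levels are named in this post, so only those appear; the numbers are illustrative placeholders, not values from the paper.

```python
# Illustrative only: one underlying risk, rendered at the precision
# levels named above. The probabilities are made-up examples.
risk_x, risk_y = 0.14, 0.09  # hypothetical chances of outcomes X and Y

statements = {
    "possibility": "X could happen.",
    "relative possibility": "X is more likely than Y.",
    "categorical possibility": "You have a high chance of X happening.",
    "comparative probability": (
        f"There is a {risk_x:.0%} chance of X happening, "
        f"compared to a {risk_y:.0%} chance of Y happening."
    ),
    "incremental probability": (
        f"The risk of X will change by {risk_x - risk_y:.0%} if I do Z."
    ),
}

for level, text in statements.items():
    print(f"{level}: {text}")
```

Each statement is an accurate risk communication; they differ only in how much quantitative precision they demand of the reader.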

Why did I do this? Two reasons.

First, I think we have to acknowledge that the definition of “risk communications” encompasses a broad range of statements. Stating “it is possible that the vaccine will cause febrile seizures” is very different from saying “the risk of febrile seizures goes up by 0.001%” (or whatever the number might be) if a person receives the vaccine. We need to recognize that these messages are fundamentally different, even if both are accurate risk communications.

Second, and more importantly, we need to acknowledge that we often provide risk data to people who need risk understanding and personally relevant meaning without actually translating the former into the latter. In the paper, I use the example of “Robert” to make this point.

“Robert” goes to an online cardiovascular risk calculator, enters his information, and is told his 10-year risk of cardiovascular disease is 14.523%.

This number could be completely accurate (in that it represents the output of the best clinical algorithm known to medical science). It is certainly very precise, down to the thousandth of a percent. (BUT, see here for a paper arguing such precision undermines trust in risk calculators.)

But, here are the key questions:

Does “Robert” know if he is at higher than average risk? (NO.) Does he know whether this value should be taken as a signal to act, e.g., by seeing his doctor? (NO.) Does it accomplish the single most important goal of risk calculators, i.e., motivating behavior change among high risk users? (I really doubt it!)

Is “Robert” informed of his risk? Not in my book.

The 14.523% risk estimate lacks what Christopher Hsee termed “evaluability.” It may be numerically accurate, but it lacks sufficient context to be seen as good or bad. I, and others, have written before (e.g., here) about how a lack of evaluability can lead people to disregard many types of numerical data.
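The fix for “Robert” can be sketched in a few lines: pair the precise number with the comparison that makes it evaluable. This is a hypothetical illustration, not the paper's method; the 10% “average” benchmark is a made-up placeholder, not a clinical cutoff.

```python
# A minimal sketch of adding evaluability to a precise risk estimate.
# The average_pct benchmark is a made-up illustration, NOT a real
# clinical threshold.

def gist_label(risk_pct: float, average_pct: float = 10.0) -> str:
    """Translate a precise 10-year risk percentage into the
    categorical gist a reader can actually evaluate and act on."""
    if risk_pct > average_pct:
        return "higher than average -- consider talking to your doctor"
    return "about average or lower"

# Robert's precise estimate, now carrying personally relevant meaning:
print(gist_label(14.523))
```

The point is not the particular threshold but the transformation: the communicator, who has the contextual knowledge, does the evaluative work instead of handing the raw number to someone who cannot.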

As medical and public health professionals, we spend much of our lives in pursuit of data to inform the estimation and management of health risks. These data are incredibly important to have at our disposal, in part because our professional lives are filled with other data that provide context.

The problem is, most of the people we are trying to communicate with lack the very experience and knowledge that made the data meaningful to us in the first place.

As a result, just because we have a precise risk number does NOT mean that providing that number to the patient, policy maker, or community member is the best way to inform them about this risk.

Let me be clear: I am not arguing that risk data communications are never valuable. Far from it. In fact, the paper outlines a set of different patient needs, some of which can only be met through precise quantitative communications of risk data.

My point is that it is the responsibility of every risk communicator to have a specific purpose in mind at the time of a communication AND to select the risk format that is most congruent with the recipient’s informational needs.

Sometimes those needs can be met with simple statements that raise awareness of the existence of risk, order risks, or categorize them. Other times those needs can only be met through more precise, quantitative data regarding probabilities.

The fact that people may have multiple needs at other points in time does not change the requirement to design our communications to match our primary objective.

In the case of “Robert”, that objective is to have him understand whether his risk is “high” or “average” and to have that information motivate him to modify his behavior accordingly. Anything else is secondary.

Because, sometimes, imprecise risk information can be perfectly meaningful.

Brian J. Zikmund-Fisher is an Assistant Professor of Health Behavior & Health Education at the University of Michigan School of Public Health and a member of the University of Michigan Risk Science Center and the Center for Bioethics and Social Sciences in Medicine. He specializes in risk communication to inform health and medical decision making.
