Scientists should adopt a systematic approach to explaining what they do, and do not, know, says Baruch Fischhoff.
Science can provide the best evidence on many questions: how likely is a disease to spread? How likely is a new seed to produce greater yields? How likely is a training programme to produce better jobs?

However, that evidence is always incomplete. Indeed, scientists keep doing research because they know that they don’t know everything. The value of their work depends on how well they communicate not just their best guess about the state of the world, but also the strength of their evidence supporting it. [1]

Why does the strength of the evidence matter? There are two main, contrasting reasons. If people overestimate how much scientists know, then they risk being too bold, unwittingly gambling on uncertain strategies while paying too little attention to signs that things might be going wrong. Think of patients who expect too much from a new medical procedure or farmers who pin too much hope on a new seed.

On the other hand, if people underestimate how much science knows, then they risk being too hesitant — wasting time and resources looking for better evidence while hoping for greater certainty than science can provide. Think of people who insist on absolute proof of climate change or vaccine safety.

But scientists often struggle with how to communicate how much they know, without making unsupportable claims, on the one hand, or making science seem like guesswork, on the other. Achieving that balance begins by identifying the uncertainty that matters to their audience, and then conveying it in a credible, comprehensible way.

Causes of scientific uncertainty

The sources of scientific uncertainty are familiar. Scientists may not have studied a specific topic, creating gaps in their knowledge. Their knowledge may be undermined by a changing world — for example, how climate change might affect local rainfall patterns. Their measurements may be less precise than they would like, due to the limitations of their instruments or the resources for deploying them. Their theories may not (yet) work very well.


In some sense, people know that science is incomplete in all these ways. The challenge is to explain how these general problems emerge in specific settings. For a financial decision, that might mean conveying how much is known about the black economy and how it affects a given industry. For a medical decision, that might mean getting across how well a drug has been tested, and how confidently a patient can expect results similar to those seen in tests.

Scientists can usually explain these uncertainties to people without technical expertise if they have enough time to interact with them. They do that when they are teaching and when they discuss their work with friends and family.

Communication is tough, though, when the audience is unfamiliar and distant, which makes it hard for scientists to see how well they are doing. Here, a scientific approach to science communication can help. [2,3]

The secret is listening
Scientists, like everyone else, tend to overestimate how well they are understood and how well they understand others. As a result, effective communication requires them to create the respectful two-way conversation needed to correct any misunderstanding.

That might involve one-on-one discussions, an advisory committee or research, with behavioural scientists doing the listening through surveys or interviews. Whatever the forum, non-scientists must feel that they are being served, not tested — and that the goal is making science useful to them.

The first step in the communication process is letting people talk about the decisions that they face, until scientists can paraphrase what people say well enough to be told: “Yes, you understand us.”

The second step is stepping back to analyse the science to identify the few facts that non-scientists most need to know, from among the many facts that it would be nice to know. It wastes people’s time — and trust — to tell them things that they already know or to treat their requests for help as a ‘teachable moment’ for talking about basic science.

The third step is to consult the science communication research literature for how best to communicate the kinds of facts that people need. [4] That research finds, for example, that it is better to use numbers than words when expressing probabilities (for example, to say a “30 per cent chance of rain” rather than it “might rain”). It also finds that when small risks mount up over time, one should do the maths for people (for instance, calculating the lifetime probability of road accidents or the expected spread of invasive species).
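The advice to "do the maths for people" when small risks mount up can be sketched with a short calculation. The figures below (a 1-in-10,000 annual risk over a 50-year horizon) are purely hypothetical illustrations, not real accident statistics:

```python
def cumulative_risk(annual_risk: float, years: int) -> float:
    """Chance of at least one event across independent years:
    1 minus the chance of escaping the event every single year."""
    return 1 - (1 - annual_risk) ** years

# Hypothetical figures for illustration only: a 1-in-10,000
# annual risk, compounded over a 50-year horizon.
lifetime = cumulative_risk(1 / 10_000, 50)
print(f"Lifetime risk: about {lifetime:.2%}")
```

The point of doing this arithmetic for the audience is that a risk that sounds negligible in any single year is roughly fifty times larger over a lifetime, and few people intuit that compounding on their own.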

And it finds that people often need to be reminded of the opportunity costs of waiting for better evidence — for example, the risks of postponing action on climate change because forecasts are still uncertain.

The fourth step is to draft messages, and then ask people to think aloud as they read those drafts, making it clear that it is the messages being tested, not the readers. When problems arise, as is inevitable with initial drafts of any message, it needs to be revised and tested again — until people understand it well enough to make sound, if still uncertain, choices.

Scientists cannot fulfil their duty to inform without the two-way conversation needed to understand their audience’s information needs, and to gauge how well they are meeting them. That dialogue will keep scientists from losing faith in the public because they cannot get their message across, and keep the public from losing faith in scientists because their messages are not serving its needs.

Baruch Fischhoff is professor in the departments of Engineering and Public Policy and of Social and Decision Sciences at Carnegie Mellon University in the United States. He can be contacted at [email protected]


[1] Baruch Fischhoff and John Kadvany Risk: A very short introduction (Oxford University Press, 2011)
[2] The science of science communication (Proceedings of the National Academy of Sciences, 2013)
[3] The science of science communication II (Proceedings of the National Academy of Sciences, 2014)
[4] Baruch Fischhoff and others Communicating risks and benefits: an evidence-based user’s guide (US Food and Drug Administration, 2011)