
Science can help and hinder knowledge of sustainability so we need a better grasp of its role, says Erik Millstone.

Debates about the ecological unsustainability of industrial economies emerged because of scientific research into environmental changes — research that produced evidence of harm to natural resources, animals and people. While this illustrates that science is essential, we should recognise that it is also often problematic, especially in policy debates.

Conflicting assumptions

Few would argue against the goals of ‘sustainability’. But the different parties involved — governments, organisations or companies, for example — make many conflicting assumptions about what should be sustained and what should be modified or eradicated. These conflicting assumptions explain why organisations and individuals hold such different perspectives, and why each pursues its interests by trying to impose its own perspective, for instance by seeking to control policy and research agendas, assessments and conclusions.

Take the case of policymakers: to rein in unsustainable economic practices, they tend to establish regulations based on advice from expert scientific panels. To set drinking water pollution standards, for instance, scientists may recommend safe limits for a contaminant such as lead, using concepts like an ‘acceptable daily intake’ or ‘recommended daily allowance’. In the United States, drinking water should contain no more than 15 micrograms of lead per litre; the corresponding limit in the European Union is 10 micrograms per litre. These numbers are derived from analyses of data on possible risks to human health.
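To make the divergence between the two standards concrete, here is a minimal illustrative sketch: the function and sample values are hypothetical inventions for illustration, not taken from any regulatory toolkit; only the two limits come from the article.

```python
# The two regulatory limits cited in the article, in micrograms per litre.
US_LIMIT_UG_PER_L = 15  # United States limit for lead in drinking water
EU_LIMIT_UG_PER_L = 10  # European Union limit for lead in drinking water

def check_sample(lead_ug_per_l: float) -> dict:
    """Report whether a measured lead level meets each standard.

    A hypothetical helper: the same measurement can pass one
    jurisdiction's standard while failing another's.
    """
    return {
        "meets_us_standard": lead_ug_per_l <= US_LIMIT_UG_PER_L,
        "meets_eu_standard": lead_ug_per_l <= EU_LIMIT_UG_PER_L,
    }

# A hypothetical sample at 12 micrograms per litre passes the US
# standard but fails the stricter EU one.
print(check_sample(12.0))
```

The point of the sketch is that the ‘safe’ verdict depends on which set of non-scientific policy assumptions produced the threshold, not on the measurement alone.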

“For policy decisions to be both scientifically and politically legitimate, the contributions of scientific evidence and expertise, as well as non-scientific considerations, need to be transparent and accountable.”

Erik Millstone, University of Sussex

Officials often portray such concepts as reliable measures of natural laws, constants or phenomena. But in practice, they are socially constructed hybrids of both scientific and policy considerations. Though they masquerade as purely scientific, they always involve non-scientific assumptions.

In order to determine what level of lead pollution is acceptable, for example, scientists need to first decide what counts as a risk or a benefit, what the relevant evidence is for each and how much evidence is necessary or sufficient to warrant particular recommendations.

Where science is insufficient

Why is the science not sufficient? The answer has a lot to do with scientific uncertainties. Some of these arise when risk assessments are based not on direct studies of people or of the relevant features of our environment, but on models. Chemicals are often tested on rodents rather than people, for example, and climate change is studied using computer models rather than by experimenting on the atmosphere.

But the relevance of such models is more often assumed than it is demonstrated. In the case of climate change, some computer models of the impact of greenhouse gases on climate might usefully approximate to global realities. In the case of chemical toxicity testing, the reliability of rats and mice as models of the effects on people has yet to be established.

Such nuances are rarely acknowledged in scientific advice to policymakers. Science advisers often ignore or conceal key uncertainties when offering judgements, perhaps catering to policymakers’ preference for reassuring oversimplifications — because when expert panels highlight uncertainties, policymakers then have to take responsibility for subsequent decisions.

Secrecy protects from scrutiny

Uncertainties mean there can be no single authoritative answer, and this creates a space for a range of possible but competing scientific assertions.

In response, some stakeholders might claim a uniquely authoritative understanding of an issue based on evidence, while others might emphasise or exaggerate uncertainties. Institutions that lack transparency or clear accountability mechanisms can then impose policies that selectively acknowledge or conceal uncertainties, both their prevalence and their significance. Conducting deliberations in this way amounts to secrecy, or, as such institutions prefer to call it, ‘confidentiality’, which protects them from scrutiny.

“Uncertainties mean there can be no single authoritative answer, and this creates a space for a range of possible but competing scientific assertions”

Erik Millstone, University of Sussex

Authorities in governments, commerce and industry may then offer misleading reassurances that a product or process is entirely safe. And they may claim that sustainability concerns can be addressed uniquely well by their preferred technological solutions: genetically engineered crops as a remedy for chronic hunger, or nuclear power as an alternative to carbon-based electricity generation.

However, research into regulatory disputes has shed light on why expert advisors reach different conclusions about the safety of products and practices. It suggests that they often do so not because only one side is doing good science, or because they are reaching contrary interpretations of shared and agreed evidence — but because they are asking and answering different questions, and consequently gathering and analysing different bodies of evidence.

The tactic of invoking science

Incumbent authorities often invoke ‘science’ as if it is sufficient to decide policy. It is a familiar tactic, and it can also be hard to contest — especially for communities and governments in developing countries, where diverse independent sources of expertise may be scarce or inaccessible. But while scientific knowledge is often necessary, it can never on its own be sufficient. It can only constructively contribute to sustainability if certain social, cultural and political conditions are met.

English philosopher and scientist Francis Bacon argued that “knowledge is power”, but that was inaccurate: though knowledge is necessary for power, it is never sufficient. For policy decisions to be both scientifically and politically legitimate, the contributions of scientific evidence and expertise, as well as non-scientific considerations, need to be transparent and accountable.

The evidence, its uncertainties and the exercise of expert judgements all need to be in the public domain. It will then be clear that much of the science on particular policy-relevant issues is incomplete and uncertain, and that interpretations of the science are framed by non-scientific assumptions about, for example, what counts as a benefit or as a risk. If some of those assumptions were articulated, organisations and citizens could better understand and assess competing claims. And this would increase the chances of scientific knowledge truly contributing to sustainability.

Erik Millstone is professor of science policy at the Science Policy Research Unit of the University of Sussex. He can be contacted at [email protected]