19 March 2015

Why we need to do better on adaptation indicators

Image: A sea wall. Copyright: Jocelyn Carlin/Panos

Speed read

  • Indicators are critical to delivering climate change adaptation
  • But they can end up being unused, invalid, unfunded or even unknown
  • Use small sets of meaningful, purpose-driven and decision-relevant indicators

Let’s measure only what really counts to make progress on climate change adaptation, says Susanne Moser.

“What gets measured, gets done!” — those were the words spoken ten years ago by a municipal planner from a large Californian city at a workshop I was co-hosting on early responses to climate change. She explained in detail how the city mobilised its staff to change policy, and clinched it with simple advice on good governance, backed by a big dose of plain old experience: set a target, track progress and tell people that somebody important (the city council, say, or the mayor) is watching. The message was clear: that’s how measurement becomes an engine of action.

I’ve never forgotten this simple message. And seemingly the whole world has heard it too: donors now want to know whether their investment in adaptation is making a difference. The next question governments, foundations and NGOs ask is: how would we know we’re succeeding?

To no one’s surprise, ‘M&E’ — monitoring and evaluation — has become a crucial topic in the adaptation field. Coincidentally, the International Organization for Cooperation in Evaluation, which counts several UN agencies as partners, has declared 2015 the International Year of Evaluation. But is it really true that what gets measured gets done?

A critical look at just a handful of the many problems with adaptation indicators suggests that researchers, funders and policymakers have good reason to be sceptical.

Some inconvenient indicator truths

For starters, researchers and programme managers track many things that simply end up in databases or on shelves — the problem of unused measures. As examples from the United States and elsewhere attest, simply having an adaptation plan doesn’t necessarily mean that adaptation actions are being implemented. So merely counting how many plans have been written may, by itself, be practically meaningless.

We also measure things that do not really reflect the notion they purport to convey — the problem of invalid measures. Take the example of money spent on adaptation, a comparatively easy measure a donor, business or government might track. But what exactly would that tell us about whether we’re making progress in the right direction? By itself, it does not tell us whether those resources are used to fund interventions that are environmentally beneficial, economically helpful and socially acceptable — or ones that ultimately cause more harm than good.

“A small set of purpose-driven, decision-relevant and meaningful indicators could really matter.”

Susanne Moser

Another concern is that, even when it is agreed that long-term monitoring of systems and communities is needed, funders often don’t provide the necessary resources — the problem of unfunded measures.

The collection of decent-quality demographic and socioeconomic data to gauge social welfare and wellbeing is a good example. Such data collection is patchy even in developed nations, and riddled with huge gaps in many developing countries. These gaps are expensive to remedy, and they severely limit analysts’ ability to understand the social vulnerability and adaptive capacity of countless communities, even entire nations.

Tracking how ecosystem restoration projects perform in the long term is another example. Their funding rarely extends beyond an initial five-year monitoring period, but many ecosystems, such as coastal wetlands or dry grasslands, take far longer to be fully re-established, while mounting stresses from climate change affect that recovery. Only if changing conditions are monitored over time, alongside the effectiveness of interventions, will managers know how to adjust their adaptation strategies.

And, finally, we often don’t know how to measure the things that would be truly valuable and impactful in practice — the problem of unknown measures. Which national leader could really say whether her state is better prepared today than five years ago for the growing risks of rising sea levels or infectious diseases? The answer depends on knowing what it means to be “prepared” in different contexts — something quite different for, say, a global company such as Coca-Cola than for a farmer in Cambodia or the president of Mozambique. The uncertainties of a changing climate make this even harder.

A few principles

I began working on these perpetual conundrums with colleagues in a transdisciplinary project I have co-led on the US West Coast. Researchers from the natural, social, ecological and engineering sciences, and from the fields of law and philosophy, together with coastal practitioners from local, regional, state, federal and tribal governments and NGOs, spent two years trying to delineate key dimensions of adaptation success. A few indicator-related principles emerged.

First, no single indicator will do; what is needed are bundles of indicators. For example, tracking only adaptation planning or particular adaptive actions would never be enough for anyone to say: “We’re succeeding”. Only if several indicators are considered together can stakeholders or observers track what is truly happening, and evaluate — an inevitably subjective exercise — whether progress is positive, acceptable, efficient, effective and so on.

Second, meaningful indicators are purpose-driven. That sounds too obvious to be taken seriously, but think about it: does the same measure of success meaningfully justify a budget expenditure as well as support good governance? Rarely. For example, to justify a costly adaptation action such as building a seawall, an important measure might be its cost-effectiveness in reducing flood losses or avoiding loss of life. But cost-effectiveness would say nothing about the transparency or inclusiveness of arriving at the decision to build one. Different measures need to be used for different reasons.

Third, indicators need to be decision-relevant. The most influential measures in decision-making are those that hit a nerve. For example, if adaptive actions reduce local tax income because they fail to sustain local business activity, decision-makers would have a great incentive to take notice. Indicators that can truly move stakeholders to act should receive priority.

Finally, it’s best to stick to a small set of indicators. The potential universe of adaptation indicators is huge. But, given limited resources, do we need hundreds, or even dozens? Instead, perhaps we need to simplify by creating an indicator framework that can address critical decision-maker questions such as “are we better prepared now than before?” This would encourage aggregation, where possible, of the actions being taken at different scales and locations, and in different sectors.

In summary, a small set of purpose-driven, decision-relevant and meaningful indicators could really matter. To identify such measures, scientists and practitioners must collaborate because, separately, they may not know or agree on what matters. The most scientifically credible, socially acceptable, practically feasible and politically impactful indicators will come from collaborative research that starts from clear decision-making needs. To return to the words of my colleague from California: only what matters should get measured to ensure it gets done.

Susanne Moser is director and principal researcher of Susanne Moser Research & Consulting in California, United States, and a social science research fellow at Stanford University’s Woods Institute for the Environment. She can be contacted at [email protected]