The last thing I expected to hear at this week’s workshop on complexity science in London, United Kingdom, was that this is an approach that can lead to realistic solutions to development challenges from the bottom up. But it was one of the messages that came through from freelance science writer Philip Ball, as well as Ben Ramalingam, author of the widely acclaimed book Aid on the Edge of Chaos, which draws on complexity research to argue for its role in rethinking development aid.
Previous encounters with the ideas in Ramalingam’s book, including an editorial and commentaries in our own pages, left me thinking mostly about the relevance of complexity science to big-picture debates. For development, it challenges how aid works; and for science, it challenges the reductionist tradition that still underpins much of the research aiming to address unwieldy challenges that long ago left the safe environment of the lab.
So how did the speakers at the workshop, hosted by the UKCDS (UK Collaborative on Development Sciences), a group of 14 UK government departments and research funders working in international development, chart the path from concept-rich talk of complex adaptive systems, network analyses and models to “realistic solutions from the bottom up”? Although Ramalingam’s challenge to the narrative of smallpox eradication got to this first (more on smallpox later), it was Ball who spelled it out in his follow-up critique.
Thinking about complexity in social systems has validity and value, said Ball. Part of that value lies in its potential to recognise where existing approaches to development practice work, and where they don’t.
But, he then asked, “what happens when ‘is’ meets ‘ought’?” In other words, what happens when practitioners come face to face with the tension between the realities on the ground and the top-down goals they work towards?
In Ball’s view, there must be a compromise between how things actually happen and how they should be, and that’s where he sees a role for systems modelling. He gave the example of designing the routes of park trails by first trying to understand how people actually use the space.
Non-intuitive behaviour can be modelled, he said, and can lead to bottom-up solutions. There’s potential value in figuring out how people are naturally predisposed, rather than imposing a structure or an idea from the top down.
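Ball’s trail example can be made concrete with a toy simulation. The sketch below is my own illustration, not code from the workshop, and every name and parameter in it is invented for the purpose: simulated walkers cross a grid between random start and end points, each preferring ground that earlier walkers have already worn down, while unused ground slowly recovers. Worn “desire paths” then emerge from the bottom up, rather than being imposed by a designer.

```python
import random

SIZE = 20          # grid side length
ATTRACTION = 5.0   # how strongly walkers prefer already-worn ground
DECAY = 0.99       # per-walker regrowth of vegetation (wear decay)

def walk(wear, start, goal, rng):
    """Move one walker from start to goal, adding wear along the way."""
    x, y = start
    while (x, y) != goal:
        # Candidate steps: neighbouring cells that strictly reduce the
        # Manhattan distance to the goal (so the walk always terminates).
        steps = [(x + dx, y + dy)
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                 if (dx, dy) != (0, 0)
                 and 0 <= x + dx < SIZE and 0 <= y + dy < SIZE
                 and abs(goal[0] - (x + dx)) + abs(goal[1] - (y + dy))
                     < abs(goal[0] - x) + abs(goal[1] - y)]
        # Weight each candidate by existing wear: worn ground attracts.
        weights = [1.0 + ATTRACTION * wear[cx][cy] for cx, cy in steps]
        x, y = rng.choices(steps, weights=weights)[0]
        wear[x][y] += 1.0

def simulate(n_walkers=200, seed=0):
    """Run many walkers and return the resulting map of ground wear."""
    rng = random.Random(seed)
    wear = [[0.0] * SIZE for _ in range(SIZE)]
    for _ in range(n_walkers):
        start = (0, rng.randrange(SIZE))
        goal = (SIZE - 1, rng.randrange(SIZE))
        walk(wear, start, goal, rng)
        for row in wear:                 # vegetation slowly regrows
            for j in range(SIZE):
                row[j] *= DECAY
    return wear
```

The interplay of ATTRACTION and DECAY is the whole point: strong attraction with slow regrowth funnels walkers onto a few shared trails, while weak attraction leaves wear scattered. A planner could then pave where the wear concentrates, which is the “figure out how people are naturally predisposed” logic in miniature.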
Ramalingam got to the same point early in his talk, when he spoke of a misplaced emphasis on what led to the successful eradication of smallpox. It wasn’t so much the top-down vaccination campaigns that did it, he said, but “adapted networked experimentation” — on-the-ground surveillance and efforts to contain the disease.
The ensuing discussion with the audience meandered through a series of concerns that naturally follow from the prospect of applying complexity science to development practice: how it can be done in practice without overburdening staff on the ground, for example, or how to encourage a culture of experimentation. One of the advantages of complexity science is that it’s adaptive, said Ball: you can learn from mistakes. But that’s only possible when mistakes are seen as opportunities to learn, and not as failures. The sort of institutional change that allows this is what’s needed to take advantage of complexity science’s potential.
But to take things once again from the big picture down to the ground: leaving the workshop I got the distinct sense that what was missing from the discussion was the not-so-minor issue of data, the building blocks for basic statistics, never mind systems modelling. Missing or incomplete data have held back much less complex kinds of science from doing more for development (monitoring the Millennium Development Goals comes to mind). It seems to me that the potential of complexity science is yet another reason to support any effort that aims to strengthen data collection and statistical capabilities in the developing world.