When new technologies begin to go mainstream, many assume this leads to a narrowing of the digital divide. The obvious example is the mobile phone. But this common belief may not hold true, according to Martin Hilbert, coordinator of the Information Society Programme at the UN Economic Commission for Latin America and the Caribbean.
He told a SciDev.Net meeting on big data this week that the immediate effect of mobile phones was in fact to widen the digital divide, as the developed nations bought into them first, initially leaving developing nations in their wake (though they are now catching up).
For example, in 2001, for every piece of data-handling technology (such as a computer or mobile phone) held by people in developing nations, there were six in developed nations. By 2006 the developed world had only three times as many. So by that measure we could conclude that the gap has narrowed.
Yet this conclusion may be wrong, because “we measure the digital part wrong,” Hilbert said.
The real measure of the digital divide is what people do with that technology, not only how many items of it they have. By this measure, though, the gap is widening. Hilbert said that although the capacity to process information is rising globally, it is growing faster in the developed world. In 2001 the developed world had 10 times as much information-processing capacity as the developing world; in 2006 it had 14 times as much.
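The arithmetic behind Hilbert's point can be sketched briefly: the device gap and the capacity gap can move in opposite directions if developed-world devices gain processing power faster. The per-device figures below are invented for illustration; only the two ratios come from the article.

```python
# Illustrative sketch: the device-count gap can narrow while the
# capacity gap widens. Only the ratios below are from the article;
# per-device capacity is derived for illustration, assuming a
# developing-world device averages 1 unit of capacity.

devices_ratio = {2001: 6, 2006: 3}    # developed : developing, device counts
capacity_ratio = {2001: 10, 2006: 14}  # developed : developing, processing capacity

for year in (2001, 2006):
    # Implied average capacity advantage per developed-world device
    per_device = capacity_ratio[year] / devices_ratio[year]
    print(f"{year}: {devices_ratio[year]}x the devices, "
          f"{capacity_ratio[year]}x the capacity, "
          f"~{per_device:.1f}x more capacity per device")
```

On these numbers, the implied per-device advantage grows from roughly 1.7x in 2001 to roughly 4.7x in 2006: fewer extra devices, but each one far more capable, which is why counting devices alone understates the divide.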
This is all very sobering in the age of big data that promises to unleash a revolution for development — albeit one that may increase social control and reduce accountability if we’re not careful.
Another take-home message that emerged was that big data projects should be “hyper-localised” so they make sense to the people in developing countries who need information most. Geo-referenced data has the potential to allow this.
Brad Parks, CEO of AidData, argued that “liberating data is a departure point, not a destination” and that geo-referenced data could help prevent the “well intended but poorly designed projects” we often see in development.
Yet, he said, a “huge number of donors still don’t publish hyper-local data”.
He also highlighted the need to make big data usable and accessible, and to develop the human capacity to deal with such data.
This made me think we need something akin to UN data troops: an army of experts that governments could request to come and help them deal with big data. They could perhaps even be made part of a donor’s aid package — you get the money as long as you allow data specialists to help you ensure that any data generated is shared in a free and usable fashion.
They could perhaps also provide much-needed training to the thousands of people working in national statistical offices — the publicly funded, stable infrastructure the big data movement should build on and work with — instead of simply brushing them aside as “dinosaurs”, as the meeting heard they sometimes are.
Such offices have outdated software and their staff were often last trained in stats in the 1970s, Hilbert said. But these are smart and ambitious people. Instead of disregarding them, he said, we should harness their skills by updating their software and training them in modern techniques.