Andrew Mack describes the current dearth of data available to support “evidence-based” policymaking globally and to ascertain people’s perceptions of the local impact of peacebuilding initiatives. Despite increasing donor emphasis on the importance of evidence, very little funding is available to support data-gathering capacity – especially in developing countries affected by conflict. Agreeing indicators to measure peacebuilding impact, and ways to gather data to monitor progress, is politically contentious. But other sectors, such as health, provide workable models that peacebuilding could use, while new technology and investment in local capacity offer ways that good data could start to be collected.
Measuring peacebuilding performance: why we need a "data revolution"
Creating evidence-based peacebuilding policies
Research and policymaking on peacebuilding in war-affected states are severely hampered by the lack of the most basic data on the most relevant issues. States emerging from what are often long periods of warfare tend to have grossly inadequate administrative data, very weak statistical capacity and long outdated census data. For both donors and “fragile” state governments this means that creating peacebuilding policies and monitoring their impact based on evidence is currently extraordinarily difficult, if not impossible.
Today there is a growing push by donors, international agencies and by fragile state governments to address the huge knowledge gaps in this area and to create peacebuilding policies that are truly evidence-based. But without appropriate data, there is no real way to measure peacebuilding performance.
Data and the global security governance architecture
Over the last two decades, the number of high-intensity armed conflicts being waged around the world – those with more than 1,000 battle deaths a year – has declined by more than 50 per cent. Successive studies appear to leave little doubt that the upsurge of international activism – peacemaking, humanitarian assistance, peacekeeping and peacebuilding – has played a major role in this decline. But more precise understanding has been extraordinarily challenging, not least because of the lack of relevant data. Much more robust, timely and quantitative data are needed to measure progress towards agreed security and development goals and their associated targets and indicators.
The Millennium Development Goals (MDGs) process was an important step towards more evidence-based policy – helped by the fact that reliable survey-derived quantitative data on health, education and some other development goals were already being collected in many countries.
The launch of the International Dialogue on Peacebuilding and Statebuilding (IDPS) in 2008 was also significant and even potentially revolutionary in three ways. First, the fragile state members of the IDPS – who make up almost half the membership – have the lead role in driving the dialogue process. Second, the IDPS is the first multilateral development initiative in which security, governance and justice issues in war-affected states have been central to the main policy agenda – in the MDG process these were deemed too politically sensitive to even discuss. And third, the IDPS starts from the assumption that peacebuilding and statebuilding are quintessentially political – not technical – processes, again in contrast to the MDGs.
An important milestone in the short history of the IDPS was the New Deal for Engagement in Fragile States, endorsed by 41 countries and multilateral organisations in November 2011. Central to the New Deal was the commitment to pursue five Peacebuilding and Statebuilding Goals (PSGs). These are enhancing the legitimacy of political processes, improving security, increasing citizens’ access to justice, promoting good economic governance, and managing revenue and building the capacity to deliver services.
The PSGs are not simply aspirational. From the beginning there was a commitment by all parties to use indicators to track progress towards meeting each goal, using quantitative data where possible. This is broadly the approach used to monitor progress towards achieving the MDGs. The indicator data would also provide an evidence base that would both track and inform the dialogue process between fragile states and their Northern partners.
Throughout 2012 and into 2013 there were a series of international meetings to determine the most appropriate indicators for each of the PSG targets and how the necessary data might be collated or generated. But while the IDPS partners now had a list of agreed (preliminary) indicators for the PSGs, there were few reliable sources of data available to populate them. And some of the datasets that have been proposed – on homicides and deaths from armed conflict, for example – are not reliable enough to provide a useful guide to progress.
The data challenge: context versus commonality
As early as 2011, the question of what data sources to use to monitor progress towards the PSGs was a growing source of tension behind the scenes between the g7+ Group of Fragile and Conflict-affected States and their Northern partners. Discussions on how to monitor PSG policies led to the creation of two separate sets of indicators: country-level indicators and common indicators.
Country-level indicators are intended to track PSG goals in the security, economic and political contexts in which g7+ countries find themselves. So some country-level indicators will be unique to particular countries. Country-level indicators will be used to create national fragility assessments that will locate the assessed countries on a 5-level fragility spectrum from “crisis” to “resilience”. States’ initial location on the fragility spectrum could in principle serve as an approximate baseline against which to measure or estimate their progress towards achieving resilience.
Common indicators are common to all fragile states – like the under-five mortality rate, which was a key MDG indicator on child health. Fragile state governments have resisted common indicators, claiming they primarily reflect the interests of donors. These North/South differences have been an ongoing source of tension and have slowed the IDPS’s momentum. Part of the reason for the slow-down is political. In fragile states without robust statistical systems – the large majority – nationwide household surveys are the only means of generating the reliable data needed to populate peacebuilding and statebuilding indicators. But the transparency that perception surveys in particular generate can be politically embarrassing and even damaging to governments.
Moreover, for many fragile state governments the Northern emphasis on common indicators and cross-national surveys misses the critical point that the development and security challenges that confront fragile states are determined by their unique historical, cultural and political circumstances – a concern shared by many development researchers who rely on qualitative methods. Country-level indicators of progress can be designed to take the unique circumstances of fragile states into account; common indicators – by definition – cannot.
Suggested IDPS common indicators are similar in concept to MDG indicators. But many fragile state governments, along with aid critics in the North, are sceptical about the MDG model – and not without reason. First, the MDG monitoring process, which has relied heavily on cross-national survey data, has failed to reveal the very real developmental successes that have been achieved by sub-Saharan African states since 2000. The MDGs’ architects chose indicators that few African states could hope to achieve, while ignoring those in which they were making important gains.
Second, fragile states worry that common indicators may stigmatise them as “failures” and identify them as “poor performers” – assessments that can lead to reduced aid allocations or the imposition of harsh conditionality measures.
Third, the UN has asserted that the campaign to boost achievement of the MDGs is the “most successful global anti-poverty push in history” – a claim that appeared persuasive: economic assistance to the MDG process has doubled in value since 2000, and as aid flows increased, development outcomes improved. But correlation is not the same as causation. In 2013 a major econometric study by UN economist Howard Friedman cast serious doubt on the claim. Friedman did not question the fact that, on average, MDG development outcomes across the developing world had improved since 2000, but he pointed out that most of these indicators were already improving before 2000. The pre-2000 improvements cannot logically be attributed to aid flows that increased after 2000. This raises an obvious question: if the MDG process has not had the positive impact that its supporters claim, why assume that the very similar IDPS process would be any more successful?
Finally, there is the question of trust. Developing countries’ scepticism arises from the repeated failures of donors to honour aid pledges. In 2005, for example, the G7 – the rich countries’ club – pledged to increase aid levels to sub-Saharan Africa by $25 billion within five years. But by 2010, less than half the promised amount had actually been delivered. Donors also have concerns. They can – and do – point to rent-seeking, inappropriate aid disbursements and pervasive corruption as part of the reason why aid has had so little measurable positive impact, particularly in fragile states.
The long history of failed aid policies is well understood in both donor and recipient countries – though accounts of who is responsible for the failures, not surprisingly, differ considerably. The failures have been a major driver of the two-decade-long push by the Organisation for Economic Cooperation and Development (OECD) to improve the effectiveness of aid disbursements in both donor and recipient countries.
The need for a data revolution
The IDPS seeks to provide more effective support to peacebuilding and statebuilding policies. But this ambition confronts the disconcerting fact that even after decades of research and debate there is still no broad expert consensus about the efficacy of aid in reducing poverty, building states or preventing conflict. And with respect to peacebuilding or statebuilding programmes, evidence-based policy remains more aspiration than reality. The increasingly insistent Northern mantra that development policies should be evidence-based has not been matched by donor support for enhancing statistical capacity in fragile states. This remains far less than needed: $419 million globally between 2010 and 2012. Without robust statistical data, evidence-based policy is impossible.
Distribution of the limited donor support for building national statistical capacity is also hugely unequal between “aid darlings” and “aid orphans”: between 2010 and 2012, Afghanistan and the Central African Republic received $80 million and $60 million respectively; over the same period more than half of the g7+ states received less than $5 million a year each on average – many of them much less.
The case for improving poor country statistical capacity is widely accepted. In 2013 the Secretary-General’s High-Level Panel on the post-MDG development agenda called for “a new data revolution”, noting that, “too often, development efforts have been hampered by a lack of the most basic data”. Not surprisingly the worst data deficits have been in fragile and war-affected countries.
A data revolution is not, of course, a sufficient condition for overcoming poverty or preventing war, but it is a necessary condition for evidence-based policies that pursue such goals. Robust nationwide data – particularly when disaggregated by age, gender, income and geography – can provide critically important information for both aid donors and fragile state governments that seek to assess developmental needs and to target assistance more effectively and equitably.
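The value of disaggregation can be shown with a minimal sketch. The figures and group labels below are entirely invented for illustration; the point is only that a single national average can mask stark differences between groups:

```python
from collections import defaultdict

# Hypothetical survey responses: (region, gender, reported_access_to_justice).
# All data invented for illustration -- not real survey results.
responses = [
    ("capital", "female", True), ("capital", "male", True),
    ("capital", "female", True), ("capital", "male", True),
    ("rural", "female", False), ("rural", "male", True),
    ("rural", "female", False), ("rural", "male", False),
]

def indicator_by_group(rows, key_index):
    """Share of respondents reporting access, disaggregated by one attribute."""
    counts = defaultdict(lambda: [0, 0])  # group -> [yes, total]
    for row in rows:
        group = row[key_index]
        counts[group][1] += 1
        if row[2]:
            counts[group][0] += 1
    return {g: yes / total for g, (yes, total) in counts.items()}

# The national figure looks moderately healthy...
national = sum(r[2] for r in responses) / len(responses)  # 0.625

# ...but disaggregating by region reveals a stark urban/rural gap.
by_region = indicator_by_group(responses, 0)  # capital: 1.0, rural: 0.25
```

Here the national average of 62.5 per cent hides the fact that, in this invented sample, only a quarter of rural respondents report access to justice.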
Without reliable data, it is too easy for governments to disguise programme failures and claim progress where none exists – which can be convenient for both donors and recipients. With appropriate indicators, and robust data to populate them, developmental successes that might otherwise remain unknown – like those missed by the MDG process in sub-Saharan Africa – can be revealed and celebrated.
Data availability and transparency can uncover corruption, rights abuses and governmental incompetence and malfeasance, and track progress in combating them. Reliable data on abuses of state power can help citizens hold governments accountable and mobilise pressure for change.
Robust quantitative data can also provide important – and sometimes surprising – insights into regime legitimacy. In 2002, for example, one of East Asia Barometer’s authoritative surveys on legitimacy revealed that in China no less than 94 per cent of respondents believed that Beijing’s authoritarian form of government was “best for them”. In democratic Japan, by contrast, just 24 per cent felt the same way about the government in Tokyo.
For those who assume that democracy and respect for human rights should be major determinants of regime legitimacy such findings may well be disconcerting. But they do not represent a rejection of democracy, which is supported by two thirds of China’s population; rather they signal the greater importance that Chinese citizens attach to “performance legitimacy” – the ability of their government to “deliver the goods” in terms of jobs, educational opportunities and rising living standards. In a country where political instability, civil war and mass starvation are living memories for many citizens, this is perhaps not surprising.
How to generate the data needed for evidence-based peacebuilding policy
The IDPS has agreed on an initial list of indicators that will be used to track progress towards the PSGs, but it is still unclear how the data needed to populate the indicators will be created. Many sources of indicator data rely on citizens reporting various events to authorities, from sexual assaults to solicitation of bribes. Unfortunately this data-gathering practice is notoriously unreliable.
The UN claimed that some 13,000 women and girls had been raped in the Democratic Republic of the Congo (DRC) between June 2006 and May 2007. The estimate came from cases of rape reported to the authorities and to NGO clinics. But massive under-reporting of sexual violence is the norm in DRC, as in many developing countries. The actual rape toll was more than 30 times higher. Data from a nationwide Demographic and Health Survey (DHS) carried out in DRC in 2007 revealed that more than 400,000 women and girls were raped that year. The widely-cited UN estimate was worse than useless since it grossly under-estimated the extent of sexual violence, sending a message to policy-makers that was profoundly misleading. Only sensitively undertaken nationwide surveys can provide reasonably robust estimates of rape numbers, though even here some under-reporting is likely.
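The scale of that discrepancy can be checked with simple arithmetic, using the round figures quoted above:

```python
# Figures from the text: ~13,000 cases reported to authorities and clinics,
# versus a DHS-based estimate of more than 400,000 rapes in the same period.
reported = 13_000
survey_estimate = 400_000

# Under-reporting factor: roughly 30.8, consistent with the
# "more than 30 times higher" figure cited above.
under_reporting_factor = survey_estimate / reported
print(round(under_reporting_factor, 1))
```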
The most basic metrics of citizen security from deadly violence are UN national homicide statistics. However, as the World Development Report 2011 points out, UN data are provided by national governments and suffer from reporting biases similar to those in the DRC example above – i.e. far fewer cases being reported than occur. And no data at all are provided to the UN by more than 75 per cent of fragile states in sub-Saharan Africa. Meanwhile most World Health Organisation estimates of violent deaths in fragile states are “modelled” – which essentially makes them “guesstimates”. Determining the state of citizen security – and knowing whether or not government security policies are working – is simply impossible in such cases.
Because national statistical offices are both weak and greatly under-resourced, the only sources of robust data for most PSG indicators in war-affected states are nationwide population and perception surveys, both of which involve interviewing a representative sample of the national population. These surveys can provide data on security, justice, governance and other indicators for the PSGs.
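What “a representative sample” requires can be sketched with the standard textbook formula for sizing a simple random sample of a population proportion, n = z²·p(1−p)/e². This is a generic statistical formula, not one prescribed by the IDPS or DHS, and real nationwide surveys inflate the result with design effects for clustering and stratification:

```python
import math

def sample_size(margin_of_error: float, confidence_z: float = 1.96,
                expected_proportion: float = 0.5) -> int:
    """Minimum simple-random-sample size for estimating a proportion.

    Uses p = 0.5 as the most conservative assumption. z = 1.96
    corresponds to 95 per cent confidence.
    """
    p = expected_proportion
    n = (confidence_z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n)

# 95% confidence, +/-3 percentage points -> about 1,068 respondents,
# regardless of whether the country has one million or fifty million people.
print(sample_size(0.03))
```

The striking practical implication is that a well-designed sample of a few thousand households can yield nationally credible estimates even in large countries, which is why surveys are feasible where administrative systems are not.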
The MDG model is relevant for PSG data gathering. In fact much of the data for the key MDG country-level indicators were generated by cross-national population surveys that use common methodologies, definitions and questionnaires. In other words key country-level indicators were actually being populated by the very common indicator data that the g7+ members are so wary of. This suggests that in practice the distinction between common and country-level data is not particularly helpful and risks causing confusion.
The gold-standard DHS population surveys that have been used to collect data on child health and education outcomes for the MDGs can be adapted to collect data to track progress towards any goals, including the PSGs. DHS surveys not only generate robust country-level data, they also help build fragile states’ statistical capacity. The surveys are actually carried out by host country nationals, with technical assistance from DHS, and host governments have ownership of the data.
Tensions between the Northern and Southern IDPS partners have slowed the momentum of the IDPS process. This is not surprising given similar tensions in the much broader process to create a successor regime to the MDGs after 2015. Furthermore, the IDPS not only deals with more politically sensitive indicators, it also has a much narrower base of international political support.
But there are grounds for optimism in the longer term, not least because of the growing momentum for a data revolution for the developing world. And notwithstanding the lack of statistical support to many fragile states, overall financial support for improving statistical capacity in the developing world has increased by 125 per cent since 2008. New data collection methods using cell phones and other technology promise lower costs and more timely release of data.
A data revolution would enhance peacebuilding activism. It would provide the evidence base needed to track progress, determine the impact of policy, and challenge governments – donors as well as fragile states – that renege on their commitments. Such a revolution could also play a critical role in providing the hard evidence that peacebuilding and post-conflict development policies are succeeding – or not. In fragile states, successes in these areas are key determinants of state legitimacy, and hence of reduced risks of conflict recurrence.