CERTIFICATE IN PUBLIC HEALTH MANAGEMENT
Module 3: Metrics from the Ground Up: Evaluating the Impact of Programs, GHIs, and NGOs
What Are Data Collection Challenges at the Grassroots Level?
Creating successful health information systems in developing countries has proven extremely challenging due to obstacles such as patchy funding, inadequate infrastructure, and poor communication between independent health programs. While global health policies endorse local supervision and the integration of health information from multiple organizations, this ideal is far from the reality. In practice, national health systems usually comprise numerous independent health programs, each preserving its own, often disorganized, method of reporting. In the absence of a common system, the same data is often collected and reported multiple times through separate structures, while other significant data is lost and never reported. In developing countries, vast inequities also exist in access to the broader healthcare information infrastructure. Because of these complexities and discrepancies between separately funded programs, several strategies for expanding healthcare information infrastructures have been proposed.(1)
One initiative to improve global health metrics was the Health Metrics Network (HMN), a program launched in 2005 and supported by the WHO, the European Union, and other international agencies. The HMN focused on how accurate and reliable information can inform decision-making and thereby improve health. The initiative was founded on the idea that “information is essential for public health action: it is the foundation for policy making, planning, programming, and accountability.”(2) Because health information is sparse in developing countries, HMN was structured to increase the accessibility and application of such information. HMN worked to build a harmonized framework for country health information system development until the program was dissolved in 2013.(3)
Problems with Impact Evaluation Methodologies
A variety of impact evaluations exist for enterprise development projects. Zandniapour et al. (2004) review impact assessments for a selection of these projects and identify problems with their methodologies. Several common flaws appear across these studies, including:
Lack of a control group and time series data (both are crucial for studying change over time or comparing program participants with non-participants)
Problematic sampling (non-random, incorrectly sized, or non-representative of the population being examined)
Self-selection bias (lack of a randomized design)
Attribution of a change to the intervention without a valid/reliable reason for claiming causality
Lack of focus on outputs, outcomes, or impacts
Difficulty with implementation of complex projects
Weak monitoring and evaluation systems
Weak baseline statistics and data collection
Lack of valid assessments of cost-effectiveness, efficiency, or sustainability
All of these factors limit evaluations of a given program’s impact in developing countries, and stronger impact assessment methodologies are needed in order to attribute changes and draw decisive conclusions about a program’s effectiveness; the sketch below illustrates how one of these flaws, self-selection bias, can distort a naive comparison.(4)
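To make the self-selection problem concrete, here is a minimal Python sketch of a hypothetical program in which more motivated individuals are both likelier to enroll and likelier to do well without the program. Every variable and number in it is an illustrative assumption rather than data from any real evaluation; the point is only that a naive comparison of participants with non-participants overstates the assumed true effect.

# Hypothetical simulation of self-selection bias; all numbers are illustrative.
import random

random.seed(0)
TRUE_EFFECT = 10.0  # assumed true program effect on the outcome

population = []
for _ in range(100_000):
    motivation = random.gauss(0, 1)                  # unobserved trait
    baseline = 50 + 5 * motivation + random.gauss(0, 5)
    joins = motivation > 0                           # motivated people self-select in
    outcome = baseline + (TRUE_EFFECT if joins else 0.0)
    population.append((joins, outcome))

participants = [y for joined, y in population if joined]
non_participants = [y for joined, y in population if not joined]

naive = sum(participants) / len(participants) - sum(non_participants) / len(non_participants)
print(f"Assumed true effect: {TRUE_EFFECT:.1f}")
print(f"Naive estimate:      {naive:.1f}")  # inflated, since motivation also raises outcomes

Because motivation both raises outcomes and drives enrollment, the naive estimate lands well above the assumed effect of 10, even though the simulated program behaves exactly as specified.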
Strategies for Future Impact Studies
It is crucial to improve impact evaluations for enterprise development programs. Suggested strategies include:
Develop more systematic methodologies
Conduct impact assessments with knowledge of broader assessment frameworks, forming links between project inputs, outputs, and outcomes
Increase awareness of impacts in relation to tangible poverty reduction
Improve the dissemination of evaluation findings (5)
One rigorous impact assessment method is the randomized evaluation, which uses random assignment to determine who receives funds, programs, or policies. This method is strongly advocated by the Abdul Latif Jameel Poverty Action Lab (J-PAL). When conducting an impact evaluation, it is crucial to identify a comparison group that did not participate in the program but otherwise closely resembles the group that did. Randomized evaluations are particularly effective in creating a valid comparison group because they do not rely on modeling assumptions: random assignment itself produces a comparison group that is a statistical duplicate of the participant group. J-PAL is a worldwide network of professors who conduct impact evaluations in order to investigate and expand the success of poverty reduction programs.(6)
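As a counterpart to the earlier self-selection sketch, the hypothetical code below assigns the same kind of simulated population to treatment or comparison with a coin flip. Because assignment is independent of motivation, the two groups are statistically equivalent in expectation, and a simple difference in mean outcomes recovers the assumed effect. This is a minimal illustration of the logic of randomization, not a depiction of J-PAL’s actual procedures.

# Hypothetical randomized evaluation on a simulated population.
import random

random.seed(1)
TRUE_EFFECT = 10.0  # same illustrative program effect as before

treated, control = [], []
for _ in range(100_000):
    motivation = random.gauss(0, 1)
    baseline = 50 + 5 * motivation + random.gauss(0, 5)
    if random.random() < 0.5:                        # coin flip, independent of motivation
        treated.append(baseline + TRUE_EFFECT)
    else:
        control.append(baseline)

estimate = sum(treated) / len(treated) - sum(control) / len(control)
print(f"Randomized estimate: {estimate:.1f}")  # close to the assumed effect of 10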
Conclusion
Methodologically rigorous and systematic evaluations give organizations the opportunity to raise their impact beyond their financial contributions. Future program planning can benefit from an understanding of past successes and failures; reliable evaluations should therefore be viewed as global public goods. Successful programs can be adapted for use in other countries, while ineffective attempts should be discontinued. Encouraging and financing evaluations (for example, credible randomized evaluations)(7) of effective organizations offers feedback not only to the specific organization being evaluated, but also to other donors, governments, and NGOs.
Footnotes
(1) Braa, J., Hanseth, O., Heywood, A., Mohammed, W., & Shaw, V. (2007). Developing health information systems in developing countries: The flexible standards strategy. MIS Quarterly, 31(2), 381-402.
(2) World Health Organization. (2009). About the Health Metrics Network. http://www.who.int/healthmetrics/about/en/.
(3) Ibid. See also https://www.who.int/healthmetrics/en/.
(4) Zandniapour, L., Sebstad, J., & Snodgrass, D. (2004). Review of evaluations of selected enterprise development projects. Microenterprise Report No. 3.
(5) Ibid.
(6) Abdul Latif Jameel Poverty Action Lab. “About J-PAL.” https://www.povertyactionlab.org/about-j-pal.
(7) Duflo, E., & Kremer, M. (2003, July). Use of randomization in the evaluation of development effectiveness. Paper presented at the World Bank Operations Evaluation Department Conference on Evaluation and Development Effectiveness.