MONITORING AND EVALUATION CERTIFICATE

Module 10: Distinguishing Evaluation from Research


“The purpose of evaluation is to improve, not prove.” - D.L. Stufflebeam(1)

Research and evaluation share a common core: both begin with a question to be answered. It is nonetheless important to distinguish the two disciplines. The purpose of evaluation is essentially to improve an existing program for its target population, while research is intended to prove (or disprove) a theory or hypothesis. Although both use similar data collection and analysis methods, the two disciplines diverge again in how findings are used and disseminated. This relationship can be visualized as an hourglass: the two disciplines begin with distinct purposes, narrow into shared methods of data collection and analysis, and widen apart again in use and dissemination.

Considering these aspects of research and evaluation, there is validity in the opening quote by Stufflebeam. However, some evaluations do seek to ‘prove’ a theory: probability evaluations are designed to demonstrate that the outcomes or impact of a program are the result of program activities.(2) Therefore, although the main purpose of evaluation is to improve programs, certain circumstances call on evaluations to ‘prove.’ For example, an evaluation might demonstrate that microcredit programs for women reduce child mortality. According to the World Bank, although results from robust plausibility evaluations may be generalizable, the primary purpose of program evaluations is to benefit the specific program’s target audience.(3) In this sense, evaluation can be considered a subset of research, because it would be impossible to conduct an evaluation without incorporating basic constructs of research, such as question development and study design.

In his article on the differences between evaluation and research, Scriven (2003/2004) distinguishes the skills of evaluators from those of social science researchers. Scriven notes that it is the ability to assess the context and unexpected effects of a program that sets the evaluator apart from the researcher.(4) Whereas a researcher would seek to determine whether microcredit programs accomplish the intended goal of reducing child mortality, an evaluator would also look for side effects of the same microcredit program: How does it affect the household’s quality of life? Does the program increase spending on health care? What village-specific confounding factors also reduce child mortality? These are just a few of the questions specific to evaluators in the course of program analysis.

Research is intended to increase the body of knowledge on a particular issue; any subjective opinion limits the researcher’s credibility. Evaluators, on the other hand, must balance the need to remain objective against the expectation that they make recommendations to stakeholders. Evaluators must determine what information is valuable, which method is best for data collection, how to analyze the data, and how to relay findings to stakeholders. This requires interpretation and a degree of judgment on the evaluator’s part that is absent from the role of the traditional researcher.(5)

There is extensive debate about the differences between research and evaluation and about the practical relevance of those differences. The hourglass visualization offers one perspective on this debate. In practice, the two disciplines have different objectives, but as tools they draw on similar methods of analysis. Where evaluation and research intersect will depend on the context, but it remains important to distinguish one from the other: evaluation is conducted for the purpose of improvement, while traditional research is conducted to contribute to the knowledge base.

Footnotes

(1) Stufflebeam, D.L. (1983). The CIPP model for program evaluation. In G.F. Madaus, M. Scriven, and D.L. Stufflebeam (Eds.), Evaluation Models: Viewpoints on Educational and Human Services Evaluation. Boston: Kluwer-Nijhoff.

(2) Victora, C.G., Habicht, J.P., and Bryce, J. (2004). Evidence-based public health: Moving beyond randomized trials. American Journal of Public Health. 94(3):400-405.

(3) The World Bank. (n.d.). HIV monitoring and evaluation resource library. Global AIDS Monitoring and Evaluation Team (GAMET).

(4) Scriven, M. (2003/2004). Differences between evaluation and social science research. The Evaluation Exchange. 9(4). Harvard Family Research Project.

(5) Levin-Rozalis, M. (2003). Evaluation and research: Differences and similarities. The Canadian Journal of Program Evaluation. 18(2):1-31.

NEXT: MODULE 11: HEALTH INDICATORS