Integrating a Comparison Group Design into a Theory of Change Evaluation: The Case of the Urban Health Initiative

https://doi.org/10.1016/S1098-2140(02)00236-9

Abstract

This paper describes how we strengthened the theory of change approach to evaluating a complex social initiative by integrating it with a quasi-experimental, comparison group design. We also demonstrate the plausibility of selecting a credible comparison group through the use of cluster analysis, and describe our work in validating that analysis with additional measures. The integrated evaluation design relies on two points of comparison: (1) program theory to program experience; and (2) program cities to comparison cities. We describe how we are using this integrated design to evaluate the Robert Wood Johnson Foundation’s Urban Health Initiative (UHI), an effort that aims to improve health and safety outcomes for children and youth in five distressed urban areas through a process of citywide, multi-sector planning and changed public and private systems. We also discuss how the use of two research frameworks and multiple methods can enrich our ability to test underlying assumptions and evaluate overall program effects. Using this integrated approach has provided evidence that the earliest phases of this initiative are unfolding as the theory would predict, and that the comparison cities are not undergoing an experience similar to that of the UHI cities. Despite many remaining limitations, this integrated evaluation can provide greater confidence in assessing whether future changes in health and safety outcomes may have resulted from UHI.

Section snippets

INTRODUCTION

This paper describes how we strengthened the theory of change approach to evaluating a complex social initiative by integrating it with a quasi-experimental, comparison group design. The Urban Health Initiative (UHI), sponsored by the Robert Wood Johnson Foundation (RWJF), aims to improve the health and safety of children and youth in distressed urban areas. Several features of the initiative, including its non-prescriptive guidelines, its focus on entire cities, and its emphasis on changing …

THE URBAN HEALTH INITIATIVE

In developing and launching the UHI in 1995, the RWJF recognized that although a number of promising approaches to improving the health and safety of children had emerged, most had succeeded either in improving outcomes in a single, categorical area of concern or in improving the health and safety climate of a single neighborhood. None had been sufficiently broad-based to significantly improve the health and safety of young people, measured against a number of indicators, across an entire …

THE EVALUATION OF UHI

Researchers at New York University were enlisted to evaluate this national initiative from the outset. The purposes of the national evaluation were to determine whether and how the UHI sites were able to effect change on a range of health and safety outcomes for young people over the course of the initiative. The evaluation design is guided by three research questions:

  • To what extent, and in what ways, can a foundation-sponsored initiative serve as a catalyst for a cross-sector collaborative …

USING CLUSTER ANALYSIS TO IDENTIFY A COMPARISON GROUP

In order to blend theory-driven and comparison group approaches, we believed it essential that some of the baseline features of the cities participating in UHI be integrated into the selection of the comparison cities. The small sample of UHI cities (5) necessitated the use of a matched comparison group; had the UHI and comparison cities been chosen at random from the top 100 cities, the program cities and comparison cities would likely have differed considerably. For example, …
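The matching idea described above can be sketched in code: cluster the candidate cities on standardized baseline indicators, then treat non-UHI cities that fall in the same cluster as the UHI cities as plausible comparison candidates. The city names, indicator values, and the tiny k-means routine below are all hypothetical stand-ins for the fuller cluster analysis the evaluation actually used.

```python
# Hypothetical sketch of comparison-city selection via cluster analysis.
# City names and indicator values are invented; the paper's analysis
# used a richer set of baseline indicators and a fuller clustering method.

import random

# indicators per city: (child poverty rate %, infant mortality per 1,000,
#                       violent crime per 100,000)
cities = {
    "UHI-A":  (38.0, 13.1, 2200.0),
    "UHI-B":  (35.5, 12.4, 1900.0),
    "Comp-1": (36.2, 12.8, 2050.0),
    "Comp-2": (14.0,  6.1,  600.0),
    "Comp-3": (34.1, 11.9, 1750.0),
}

def standardize(data):
    """z-score each indicator so no single scale dominates the distance."""
    names = list(data)
    cols = list(zip(*data.values()))
    means = [sum(c) / len(c) for c in cols]
    sds = [(sum((x - m) ** 2 for x in c) / len(c)) ** 0.5
           for c, m in zip(cols, means)]
    return {n: tuple((v - m) / s for v, m, s in zip(data[n], means, sds))
            for n in names}

def kmeans(points, k=2, iters=50, seed=0):
    """Minimal k-means; returns {city: cluster_id}."""
    rng = random.Random(seed)
    names = list(points)
    centers = [points[n] for n in rng.sample(names, k)]
    assign = {}
    for _ in range(iters):
        for n in names:
            assign[n] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2
                                  for a, b in zip(points[n], centers[c])))
        for c in range(k):
            members = [points[n] for n in names if assign[n] == c]
            if members:
                centers[c] = tuple(sum(col) / len(col)
                                   for col in zip(*members))
    return assign

z = standardize(cities)
clusters = kmeans(z, k=2)
uhi_cluster = clusters["UHI-A"]
# non-UHI cities sharing a cluster with a UHI city are comparison candidates
candidates = [n for n, c in clusters.items()
              if c == uhi_cluster and not n.startswith("UHI")]
print(candidates)
```

The standardization step matters: without it, the crime indicator (in the thousands) would swamp the infant mortality indicator (around ten) in the Euclidean distance.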

CHECKING THE VALIDITY OF THE COMPARISON GROUP

In response to the previously described criticisms made by Hollister and Hill (1995), we chose to check the validity of our selection in regard to a number of additional program-relevant measures. We used documents from, and our observations of, the RWJF selection process to provide some insight into their ideas of the “target group”—that is, the cities to which the findings of this evaluation might be generalizable. RWJF chose cities with demonstrable levels of distress among families and …
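One simple way to operationalize such a validity check is to compare the UHI and comparison cities on an additional program-relevant measure and ask whether the groups are balanced. The sketch below uses a standardized mean difference (SMD), a common balance diagnostic; the measure, the values, and the choice of statistic are illustrative assumptions, not necessarily what the authors used.

```python
# Hypothetical balance check on an additional program-relevant measure
# (e.g., teen birth rate per 1,000); all values are invented.

uhi = [62.0, 58.5, 70.1, 65.3, 61.2]
comparison = [60.4, 66.0, 59.8, 63.7, 68.9]

def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):
    """Sample standard deviation (n - 1 denominator)."""
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

def smd(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    pooled = ((sd(a) ** 2 + sd(b) ** 2) / 2) ** 0.5
    return (mean(a) - mean(b)) / pooled

d = smd(uhi, comparison)
# a common rule of thumb treats |SMD| < 0.25 as acceptable balance
print(f"SMD = {d:.2f}")
```

A small |SMD| on measures that were *not* used in the cluster analysis is the reassuring case: it suggests the matching generalizes beyond the indicators it was built on.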

USING AN INTEGRATED APPROACH: PRELIMINARY FINDINGS

As already noted, evaluators need to consider the counterfactual, not only at baseline and with regard to the outcomes of an intervention, but also throughout the progress of the intervention itself. Efforts like the UHI have, as their premise, a vision of altering complicated social, political, and economic arrangements. Such arrangements are fluid and dynamic, and evaluators seeking to understand interventions aimed at altering these arrangements face very difficult problems of attribution.

We …

DISCUSSION AND CONCLUSION

Evaluation research is at the mercy of the counterfactual. Even in a theory of change evaluation, the need to ask whether the documented changes would have occurred in the absence of the program begs for comparison. At the same time, with initiatives as complex as the UHI, it is imperative to understand the expectations of program planners and implementers and to test their assumptions as part of the evaluation design. The two approaches must be integrated, so that evaluators can build the case …

Acknowledgements

This work is funded by a grant from the Robert Wood Johnson Foundation.

References (22)

  • The Brookings Institution. (1998). Learning what works: Evaluating complex social interventions. Washington, DC: The...
  • Chen, H. (1990). Theory-driven evaluations. London: Sage...
  • Cook, T. D. (2000). The false choice between theory-based evaluation and experimentation. Program theory evaluation:...
  • Granger, R. C. (1998). Establishing causality in evaluations of Comprehensive Community Initiatives. In K....
  • Gruenewald, P. J., Treno, A. J., Taff, G., & Klitzner, M. (1997). Measuring community indicators: A systems approach to...
  • Hacsi, T. A. (2000). Using program theory to replicate successful programs. Program theory evaluation: Challenges and...
  • Hatry, H., Winnie, R. E., & Fisk, D. (1981). Program analysis for state and local governments (2nd ed.). Washington,...
  • Hauser, R. M., Brown, B. V., & Prosser, W. R. (Eds.). (1997). Indicators of children’s well-being. New York: Russell...
  • Hollister, R. G., & Hill, J. (1995). Problems in the evaluation of community-wide initiatives. In J. Connell, A....
  • Kubisch, A. C., Fullbright-Anderson, K., & Connell, J. P. (1998). Evaluating Community Initiatives: A progress report....
  • Kubisch, A., Weiss, C., Schorr, L., & Connell, J. (1995). Introduction. In J. Connell, A. Kubisch, L. Schorr, & C....