ACCOUNTABILITY! ACCOUNTABILITY! That is the mantra of our current era of fiscal challenges. It is worth asking, then: how will viable programs continue to thrive with diminished resources? An effective strategic tool to guide this decision process is program evaluation. The United States Government Accountability Office (GAO, Designing Evaluations, 2012) defines program evaluation as "a systematic study using research methods to collect and analyze data to assess how well a program is working and why."
In reading this definition, some of you may feel that program evaluation is a daunting endeavor! Take heart, however: designing a complex research study with experimental and control groups is not a key element of all program evaluation efforts. In fact, although program evaluation borrows techniques from research to ensure rigor, there are some differences that make it manageable for strategic planning. These differences are highlighted below:
a) Program evaluation focuses on specific programs RATHER than generalizable findings.
The primary purpose of research studies is to produce findings that can be generalized to a defined population. In contrast, program evaluations focus on individual programs and often operate under more realistic timelines and resources than research studies.
b) The intent of program evaluation is to improve NOT prove
Research studies often focus on establishing cause-and-effect relationships. In contrast, the purpose of program evaluation is to improve programs by examining all program features, such as objectives, activities, resources, and context, that can lead to successful outcomes. As lessons are learned, programs can be modified accordingly. Thus, program evaluation is an iterative process.
c) Program evaluation seeks to assign a value to programs RATHER than being value-free.
Michael J. Scriven, a recognized expert in the field of program evaluation, explains that "evaluation determines the merit, worth, or value of things" by using established criteria to draw conclusions. Research, unlike program evaluation, is value-free. That is, research draws conclusions solely from factual results based on "observed, measured, or calculated data." Those factual results are not integrated with established criteria to reach conclusions. Program evaluation goes a step further with the empirical data by comparing program findings to performance benchmarks, always with the recurring goal of improving programs.
Thus, program evaluation, as characterized by the three major features discussed, holds great potential for guiding programmatic efforts when used appropriately. Additionally, keep in mind that we all use evaluation informally in everyday life to make decisions ranging from selecting a great family vacation to choosing the best preschool. As clinicians, you also engage in informal evaluation through the assessments you use to select a treatment plan and to monitor patients' progress. Consider going a step further by asking, "How else can I make use of program evaluation to inform clinical decisions or other programmatic interests?" Brainstorm with colleagues or a program evaluation consultant, and begin to explore resources such as the American Evaluation Association (http://www.eval.org/). You will soon realize that program evaluation is indeed a relevant tool for documenting the best practices of a program and/or other clinical activities.
In fact, here at the CDP, program evaluation is the cornerstone of our strategic efforts. We are mindful of evaluating our training programs to ensure that we use the lessons learned to expand our efforts as well as to target those areas that need improvement or refinement.
United States Government Accountability Office. (2012). Designing evaluations (GAO-12-208G, Applied Research Methods, 2012 revision). Retrieved February 19, 2013, from http://www.gao.gov/assets/590/588146.pdf
Coffman, J. (2003). Ask the expert: Michael Scriven on the differences between evaluation and social science research. The Evaluation Exchange, 9(4). Retrieved February 19, 2013, from
Dr. Beda Jean-François is a Research Psychologist with the Center for Deployment Psychology. She works collaboratively with the CDP faculty to develop program evaluation protocols for monitoring and assessing the progress and effectiveness of evidence-based psychotherapy (EBP) trainings and other program initiatives offered by the CDP.