

Evaluation Overview

Program managers can use a variety of methods for evaluating programs, which yield different results and have different uses and purposes. Evaluation design and scope dictate the required resources. Several common types of evaluation are:

Process evaluations, which assess the performance or completion of steps taken to achieve desired program outcomes. Process evaluation can occur throughout the project cycle and can guide managers in making changes to maximize effectiveness. Examples of process measures are the number of ads shown in a media campaign or the number of community partners.

Output measures, which are commonly used in process evaluations, help gauge a program's processes; they describe a program's activities (e.g., how many older adults participated in a walk or how many classes were convened), rather than the ultimate effect of the program (e.g., changes in health). Output measures allow program managers to plan appropriately for clients or classes. Program planners can also use outputs to identify a need to better tailor programs to a target population (for example, if older adults are not joining a walking group) or monitor changes in program outputs (fewer older adults in the walking group than before).
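As a purely illustrative sketch (the weekly counts and planning target below are hypothetical, not drawn from any program described here), a few lines of Python show how raw output counts might be turned into a simple monitoring report:

    # Illustrative only: hypothetical weekly attendance counts for a walking group,
    # with weeks flagged when participation falls below a planning target.
    weekly_attendance = {"week 1": 24, "week 2": 26, "week 3": 18, "week 4": 11}
    planning_target = 20  # hypothetical minimum needed to justify two group leaders

    for week, count in weekly_attendance.items():
        status = "OK" if count >= planning_target else "below target -- review outreach"
        print(f"{week}: {count} walkers ({status})")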

Outcome evaluations, which consider program goals to determine whether desired changes in attitudes, behavior, or knowledge have been attained as a result of the intervention. Outcome metrics are usually measured at the beginning and end of a project cycle or program. Examples of outcomes include positive changes in health status or a quantifiable increase in the number of seniors who walk.
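For example, the pre/post comparison behind many outcome evaluations can be sketched in a few lines of Python; the values below are hypothetical rather than data from any program mentioned here:

    # Illustrative only: hypothetical pre- and post-program values (minutes walked
    # per week) for the same participants, summarized as a simple outcome measure.
    pre  = [60, 45, 30, 90, 20]   # baseline, one value per participant
    post = [90, 60, 55, 95, 40]   # the same participants after the program

    changes = [after - before for before, after in zip(pre, post)]
    mean_change = sum(changes) / len(changes)
    print(f"Mean change: {mean_change:+.1f} minutes per week")
    print(f"Participants who improved: {sum(c > 0 for c in changes)} of {len(changes)}")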

...Community outcomes are not visions or goals, but specific changes or benefits that involved organizations hold themselves accountable for influencing.

- Achieving and Measuring Community Outcomes:
Challenges, Issues, and Some Approaches.
United Way of America, April 1999.

Impact evaluations, which seek to isolate a program's impact on participants and communities while filtering out effects from other potential sources (e.g., weather, other programs).1 Although impact evaluations require a higher level of technical expertise, they are considered the "gold standard" of evaluation. Impact evaluations (known as "experimental" or "quasi-experimental" studies) compare a group receiving services against one that is not.


Process Evaluation

In Sacramento, California, program planners measured the number of participants in walking groups as well as walkers' satisfaction. Although not a formal evaluation, this basic assessment of program processes provided managers with information indicating a need to modify the program. Walkers told them that the group meeting place affected their willingness to participate.

Sacramento's process evaluation could be formalized to include questionnaires, structured interviews, or surveys so that each participant is assessed in the same manner and data can be compared across individuals and groups.

Source: Partnership for Prevention (2002). From the Field: Four Communities Implement Active Aging Programs. www.prevent.org/activeaging.htm


Who Conducts Program Evaluations?

Depending on the scope, design, and purpose of program evaluations, a range of staff and researchers can conduct them. For process-oriented, day-to-day evaluation, program staff can collect and monitor information on various program features and collaborate with managers to analyze and interpret data. This type of evaluation, sometimes referred to as an internal evaluation, is conducted on a routine basis to review objective aspects of a program.

For in-depth evaluations, the Centers for Disease Control and Prevention (CDC) recommends a team approach.2 The team may include technical experts, such as statisticians or epidemiologists; program staff and management; stakeholders; and trusted members of the community with no vested interest in the outcome of the evaluation. Participation from outside the program provides fresh insight and increases the credibility of the evaluation. These external evaluations often focus on the outcomes or impact of a program.

Outcome Evaluation

Largo, Florida's active aging program managers will measure blood pressure and pulse before and after the intervention to determine the quantifiable effects of their program on senior health. Largo is also building new urban trails and could measure the number of people walking in the community before and after the trails are built. Managers could survey or count people using trails. Businesses along the trails may measure increases in sales or customer traffic before and after the trails are built. Because bus routes will be connected to the trail system and every bus is equipped with a bike rack, bus drivers could survey passengers going to urban trails or using bike racks. Evaluation should be done before and after the trails' completion.

These trends should be measured over time to assess whether more outreach is needed, whether the trails are functioning well, or whether other program changes are necessary.

Source: Partnership for Prevention (2002). From the Field: Four Communities Implement Active Aging Programs. www.prevent.org/activeaging.htm


Summary

Program managers can choose from an array of evaluation designs, methods, and evaluators. If it is not feasible for a program to conduct an external evaluation, program managers and staff can learn a great deal from regular program assessments. Conducting any evaluation of a program - judging the satisfaction of participants, the number of classes held, or the impact - is better than no evaluation at all. Program managers who do not assess the direction, methods, potential impact, and outcomes may have a limited understanding of their program and may lack the data to justify the program to funding agencies.


Impact Evaluation

An impact evaluation of the Wheeling, West Virginia media campaign, Wheeling Walks, led program managers to document a 30 percent increase in walking in the community as a result of the campaign. The evaluation compared walking rates in Wheeling before and after the campaign to rates in a similar community without the intervention.

Source: Partnership for Prevention (2002). From the Field: Four Communities Implement Active Aging Programs. www.prevent.org/activeaging.htm.


Program Evaluation Framework

The CDC recommends a series of steps for program evaluation (Figure 1).3 This framework was developed for community initiatives. The steps are as follows:

Engage stakeholders: During this step, evaluators ask partners and stakeholders to provide input into evaluation design and data analysis. Stakeholders can include program managers, collaborators, the population served by the program, and decision makers. In addition to informing program and evaluation efforts, stakeholders help ensure that the program meets the needs of the community and can flag issues that require consideration.

Describe the program: Evaluators next describe in detail the mission, goals, objectives, and strategies of the program. The description explains the needs addressed by the program, the expected outcomes of program activities and strategies, and the available resources. The program description provides the basis for a "logic model" (described below). The description also notes the program's developmental stage – whether it is a new or established program – which can affect the type of measures considered. A newer program that has been in the community for a short time will not have discernible long-term effects, and evaluation measures should reflect this.

Finally, evaluators consider external factors that can affect program success. Creating Communities for Active Aging lists external factors that commonly influence older adults' walking practices.

Focus the evaluation design: Managers must choose an evaluation methodology and measures that accurately assess the process or outcomes of the program while minimizing cost and time. To focus the evaluation design, managers should articulate the purpose of the evaluation, such as improving the program's functioning (process evaluation) or assessing the effectiveness of the intervention (outcome or impact evaluation). Managers should also define the ultimate users (audiences) of the evaluation, and methods should be directly connected to the planned use of the data. The next step is to design the evaluation methodology. Methods can include questionnaires or surveys, quasi-experimental studies, and structured qualitative interviews.

 

 


What Are Logic Models? How Are They Used?

Program planners and managers use logic models to outline the steps of a program. The models begin with the problem or opportunity in question and examine the critical steps the program will undertake to bring about a desired change. Logic models also identify the external influences at work in the community that could potentially affect the outcome as well as the resources required to change the outcome.

Program managers can use the logic model to identify measures that track a program's progress toward its goals at each step, as in the sketch below.
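The following sketch shows one way a logic model and its associated measures might be captured as a simple data structure; the program details and measures are hypothetical, used only to illustrate the idea of attaching a measure to each step:

    # Illustrative only: a hypothetical logic model stored as a simple data
    # structure, with a measure attached to each step so progress can be tracked.
    logic_model = {
        "problem":    {"description": "Low physical activity among older adults"},
        "inputs":     {"description": "Staff, volunteers, trail funding",
                       "measure": "Budget and volunteer hours committed"},
        "activities": {"description": "Organize walking groups, build trails",
                       "measure": "Number of group walks held"},
        "outputs":    {"description": "Older adults join the walking groups",
                       "measure": "Number of participants per walk"},
        "outcomes":   {"description": "Participants walk more; blood pressure falls",
                       "measure": "Pre/post minutes walked and blood pressure"},
        "external_factors": {"description": "Weather, other community programs"},
    }

    for step, details in logic_model.items():
        print(f"{step}: {details['description']}")
        if "measure" in details:
            print(f"  measure: {details['measure']}")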

  • Questionnaires or surveys can be distributed to program participants or other stakeholders before, during, and after the program to assess changes in attitudes, behaviors, or knowledge (such as use of new sidewalks or experiences in the program). Managers of a walking initiative in Nashville, Tennessee, distributed a "Walkability Checklist" after all community walking events.4

  • Quasi-experimental studies compare changes in a population or community that participates in a program with changes in a similar group or community without the program (see the sketch after this list). Managers of Wheeling Walks used a quasi-experimental design to demonstrate increases in walking rates as a result of their media campaign.

  • Qualitative interviews are useful for collecting in-depth information. Through structured interviews, evaluators ask program participants or other stakeholders open-ended questions to obtain a solid understanding of impressions or changes in attitudes, behavior, or knowledge.
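As noted above, the comparison logic behind a quasi-experimental design can be sketched in a few lines; the walking rates below are hypothetical and do not represent the actual Wheeling Walks analysis or data:

    # Illustrative only: the comparison at the core of a quasi-experimental design,
    # using hypothetical percentages of adults who walk regularly.
    intervention = {"before": 30.0, "after": 39.0}   # community with the program
    comparison   = {"before": 31.0, "after": 32.0}   # similar community without it

    change_intervention = intervention["after"] - intervention["before"]
    change_comparison   = comparison["after"] - comparison["before"]

    # The estimated program effect is the change beyond what the comparison
    # community experienced (a simple difference-in-differences).
    estimated_effect = change_intervention - change_comparison
    print(f"Estimated program effect: {estimated_effect:.1f} percentage points")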

Regardless of the chosen approach, the methodology should be well researched and adhere to the highest standards of science. All evaluations should actively ensure participants' confidentiality, and data should be reliable and accurate.


Effective Evaluation Standards

Evaluations should balance the following standards to ensure evaluation effectiveness:

Utility: The evaluation should be meaningful and useful.

Feasibility: The evaluation should provide a practical analysis of the program and should be cost effective for the program.

Propriety: The evaluation should consider confidentiality of participants, in addition to the legal and ethical implications for those affected by the results.

Accuracy: The evaluation should provide a truthful representation of the program and should communicate technically accurate information.

For more information on these standards, please see: Joint Committee on Standards for Educational Evaluation.5



Confidentiality

Because community programs may have a small number of participants and key stakeholders may be easily identified, evaluators must ensure that all information is anonymous or that the identities of those who provided information are withheld from external parties. Doing so protects participants and maintains the integrity of the data.
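A minimal sketch of one common de-identification step is shown below: names are replaced with non-reversible codes and direct identifiers are dropped before data are shared. The field names and records are hypothetical.

    # Illustrative only: replace names with non-reversible codes and drop direct
    # identifiers before evaluation data are shared. Records are hypothetical.
    import hashlib

    raw_records = [
        {"name": "J. Smith", "age": 72, "weekly_walks": 3},
        {"name": "M. Garcia", "age": 68, "weekly_walks": 5},
    ]

    def de_identify(record, salt="program-evaluation"):
        coded_id = hashlib.sha256((salt + record["name"]).encode()).hexdigest()[:8]
        return {"participant_id": coded_id,
                "age": record["age"],
                "weekly_walks": record["weekly_walks"]}

    shareable = [de_identify(r) for r in raw_records]
    print(shareable)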

After the method for data collection is determined, evaluators should plan for data analysis using accepted statistical and research methods. The type of analysis chosen will depend on the desired uses of the information. For a complex analysis, program planners may tap local experts for assistance.
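For instance, assuming the SciPy library is available, a pre-planned paired t-test on hypothetical pre/post blood pressure readings might look like the sketch below; an actual analysis should follow whatever methods were chosen during evaluation planning, with expert help as needed.

    # Illustrative only: a pre-planned paired t-test on hypothetical systolic
    # blood pressure readings taken before and after a program (requires SciPy).
    from scipy import stats

    systolic_pre  = [142, 150, 138, 160, 147, 155]   # baseline readings
    systolic_post = [138, 144, 136, 151, 140, 149]   # same participants afterward

    t_stat, p_value = stats.ttest_rel(systolic_pre, systolic_post)
    print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")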

Gather credible evidence: Evaluators obtain data from various sources, depending on the type of evaluation. Program participants, stakeholders, and administrative records can all be data sources. Evaluators should determine in advance all necessary aspects of data collection, including logistics.

Involving stakeholders in the design of both the program and the evaluation aids credibility by ensuring that all points of view are considered, the program will address the population’s needs, and the data will be meaningful to users. This involvement helps ensure that primary audiences will consider the resulting data credible.

Measures should assess discrete aspects of a program that relate to program goals and the time frame for achieving them. Established during the planning stages, measures quantify progress toward a desired goal and should be clearly linked to the program's logic model.

Justify conclusions: After data collection, evaluators synthesize, analyze, and interpret the data using the previously determined methods. Standards against which to compare the data – such as a baseline measurement, comparisons to previous years or studies, or measures from comparable communities or the United States – help determine whether the program is functioning well or achieving the desired outcomes. Program planners, with stakeholder involvement, can then use the evaluation data to recommend program changes and/or create other programs.

Ensure use and share lessons learned: Program managers should determine in advance how the evaluation results will be presented to users and stakeholders. Throughout the process, managers should share information with all parties to solicit feedback and respond to any concerns raised. Following the evaluation’s completion, managers should disseminate findings in a format that is easily understood and accurately depicts the information and analysis.



Evaluation Challenges and Strategies for Addressing Challenges

Although many program managers appreciate the utility of evaluations, many programs evaluate neither process nor outcomes due to the perceived challenges associated with evaluation. The following are examples of common evaluation challenges and suggested strategies for meeting these challenges:

The Cost Challenge: Program evaluation can be expensive. A rigorous evaluation can cost more than a program has allotted for research purposes.
In addition, given the choice between research and provision of services, many managers choose to provide direct services to the community.

Strategies: Evaluations can be built into program plans without major expense. By following the steps above, managers can design informative, quality evaluations. Managers also can seek external funding for evaluation purposes, such as from local or national foundations.

The Time Challenge: Evaluation efforts may be time consuming and could divert staff from the day-to-day program functioning.

Strategy: Managers can address this challenge by planning small and incorporating effective evaluation components into the program's functioning.

The Expertise Challenge: Most evaluation efforts require at least a minimal level of expertise, and complicated analyses or study designs require training in research and statistics. Even so, many meaningful lessons can be learned from simple process evaluations.

Strategies: In some cases, in-kind assistance may be provided by asking local entities to contribute to the program. Expertise may be tapped from local colleges, universities (especially schools of public health), hospitals, and health departments. National resources, such as the Robert Wood Johnson Foundation's Communities for Active Living program offices or the CDC, can provide technical assistance for the evaluation of active aging programs.

The Robert Wood Johnson Foundation

The Robert Wood Johnson Foundation (RWJF), a philanthropic organization dedicated to improving health and health care in the US, currently funds several program evaluation projects. A sampling of evaluation projects from its website shows expenditures ranging from $32,000 for eight months to $671,000 over four years. Source: www.rwjf.org, accessed on August 12, 2002.


The Measurement Challenge: Any evaluation must ensure that the performance measures will answer the questions that will lead to an understanding of a program's effects.

Strategy: Careful planning from the program's outset - including the development of program goals, objectives, logic models, and measures - helps ensure that the information gathered is useful.

