
CHAPTER SIX

CLOSING COMMENTS

We want to leave you with four final thoughts on program evaluation.

  1. It doesn’t have to be hard!

    If you start out with the intention of keeping your evaluation as simple and straightforward as possible, you are much more likely to end up with usable results. Resist anyone who tries to expand the focus or complicate the design. Keep the level of evaluation consistent with the size of the program and the objectives you are trying to meet.

  2. It doesn’t have to be expensive!

    First, reread #1 and keep your design as simple as you can. Second, take advantage of the resources that exist in your community. You might be able to convince a university professor to take your evaluation on as a master's thesis project. Maybe you can hire an evaluator and recruit volunteer data collectors from local citizens' organizations. Work with your evaluator to identify activities on which you can economize and areas where spending a little extra is worthwhile.

  3. Investing in evaluation can save you time and dollars over the long haul!

    With the information you learn from a worthwhile evaluation, you can focus your resources on the most critical problems and the most effective countermeasures. You will also be able to adjust programs midstream to improve their effectiveness. Most importantly, you will be much more likely to convince your funding sources that their dollars have been well spent, which makes you a good investment for the future.

  4. It’s never too late to start!

    We have spent a lot of time stressing that evaluation should be built into a project right from the start, not left until the final act of your program's performance. However, if you are in the middle of a project right now and are eager to try out your new evaluation mentality, go right ahead. You can certainly check whether implementation is going as planned and how resources are being spent. An evaluator should be able to help you review what baseline data exist and develop some simple performance measures that you can use to assess how the program did in meeting its objectives. It is not even too late to write some SMART objectives to clarify for everyone what you expect the outcomes to be.

The purpose of this Guide was to convince you that evaluation does not have to be a scary thing. You will only truly be convinced when you apply the information you have read here to evaluate a program of your very own. What are you waiting for?

Once upon a time there was a project manager who was faced with a problem. The head of her department informed her that two new projects were being planned as part of a national effort to reduce nighttime collisions. Two county supervisors each had their own favorite solution. However, the funding source informed the department that the money it was providing could go toward only one new initiative. The department head refused to choose one project over the other without empirical proof to justify her decision. So the responsibility of pilot testing each approach and recommending one over the other fell on the shoulders of the beleaguered manager. What a dilemma!

Remembering her training in evaluation management, the manager decided to approach this problem with an evaluation mentality. She was determined to save herself as much wasted time and effort as possible, so she decided to build evaluation procedures into each of the projects right from the start. With the assistance of a carefully selected professional evaluator, she asked five essential questions to put herself in the right frame of mind: “What do I know about the safety problems involved in night driving? What is the objective of each of these projects? How would I measure results? How can I collect the data I need? What are my criteria for success?”

Feeling that they had a firm grasp of each project, the manager and evaluator settled on reasonable objectives for each pilot test according to the SMART guidelines and created a plan for measuring results. They hired assistants to collect appropriate baseline data according to each project’s focus. Next, the pilot programs were implemented according to the carefully outlined schedule. In the following weeks, the collected data were analyzed and the report was carefully drawn up. “Hey,” the manager said to the evaluator, “with your help, this wasn’t as hard as I thought.”

The big day arrived. In the conference room gathered the department head, the two supervisors, and the funding representative, all anxious to hear the results. Calmly and confidently, the manager presented her findings. While one approach showed modest success, she explained, the other program clearly surpassed it, raising safe night-driving behaviors by 50%. Impressed by the convincing results, the funding representative heartily agreed to fund the successful project for three years. The department head recommended the manager for a long-overdue promotion. The victorious supervisor patted himself on the back for having thought of such a brilliant idea. And even the not-so-triumphant supervisor took the news well, reassured that the outcomes had resulted from an impartial and professional study. Breathing a sigh of relief, the manager thanked her lucky stars that she had used her evaluation training.

And they all lived happily ever after…