Cooperative Evaluation

What is the role of an evaluator? Said differently, what is the goal of program evaluation? On complex projects and programs, the role is certainly not to sit, white-coated, judging programmers and curricula with the final word on whether things “worked” or not. To judge without knowledge of best practices, or to keep recommendations private, is to betray the very expertise that validated evaluation as a necessary component of any new policy or program.

Proponents of strict, cold, and static evaluation argue that providing continuous feedback to programmers will affect the results of the program. This, they say, threatens the internal validity of the evaluation.

To this argument I say “so be it.” Final judgment will always rest with the evaluation team and, if the team is properly insulated from outside influences, that judgment can remain fair and scientific. Giving updates and feedback to programmers can, in this way, be framed as both detached and cooperative, discerning and helpful.

What if the application of the program is improved because of the newly acquired information? Does this evince a breach of protocol by the evaluator? No, it does not. While a feedback loop between external evaluators and programmers is acceptable, a short circuit is not. That is to say, direct involvement by the evaluator in program application should not be condoned. For example, coaching respondents’ answers on a written survey, or otherwise moving into a programming role, would clearly affect results.

So how can a cooperative evaluation framework thrive in a climate of complex programming, funding renewal applications, and friendships forged along the way? To answer this question, a distinction must be drawn between academic and managerial recommendations. The former should be encouraged, while the latter should be reserved for programmers alone. An academic recommendation is based on scientific findings, formalized and submitted to programmers. By contrast, managerial recommendations could involve friendly reminders, disciplinary warnings, and any changes to programming based on efficiency or economic concerns; they are not necessarily based on research or best practices.

While the evaluator should not ‘care’ whether a program succeeds or not, they should be encouraged to help programmers at each turn. This may come in the simple form of providing ongoing feedback as data become available and the necessary analyses are completed. That is to say, evaluators should place no value on the success or failure of a program, only on whether data were collected in a timely fashion, analyzed properly, and submitted with prudence.

To work outside of the cooperative evaluation framework invites hostile feelings on both sides of such studies, breeding resentment. Evaluation has become so ingrained in social policy and programming, so why should it be seen as an obstacle or hurdle, rather than as a team of collaborators with specialized knowledge? The time has come to work together to achieve best practices.

2 thoughts on “Cooperative Evaluation”

  1. This is such a great and thoughtful piece. As a researcher who works regularly with communities in the context of applied, “action oriented” research, I struggle with many of these questions on a near daily basis. Working with local communities and practitioners in the “real world” context is one of the most rewarding and challenging types of scholarly activity. These tensions are real, but I think the conclusions are valid. Our role is not to judge, but to work collaboratively to help find real world solutions.
