Bias is prejudice in favor of or against something, usually in a way considered unfair.
People have been talking about bias in evaluation—and research—since the beginning. The idea is that if evaluators are favorable toward a program, or perhaps want to receive future contracts from it, they are more likely to adjust their evaluations in ways that lead to findings that are invalid, unreliable, and lacking in credibility.
Understandably, some people have thought this level of bias should be avoided at all costs, and propose maximizing the distance between the evaluator and the program to achieve this goal. These methods to maximize the distance include goal-free evaluation, non-participatory evaluation designs, independent funding (rather than the program funding the evaluation), designs that minimize threats to internal validity (à la Campbell, including randomized controlled trials), and peer review and meta-evaluation.
However, some of these approaches to evaluation—where contact with the program and its stakeholders should be minimized to avoid contaminating the evaluator—alienate many evaluators, particularly those who report being in an internal role. Scriven and Campbell, two of the major proponents of minimizing bias, may not share the pragmatic or constructivist epistemologies that many evaluators hold. Thus, Scriven and Campbell argue for controlling for bias, rather than acknowledging and recognizing bias, as a constructivist would argue for.
Furthermore, this approach to evaluation alienates our stakeholders. Proponents of collaborative, empowerment, and participatory evaluations argue, and have found much evidence for, the benefits of such an approach for stakeholders, the program, and the evaluation. These benefits include giving participants ownership over the evaluation and results; building capacity to understand, use, and conduct evaluations; and improving program performance, learning, and growth.
Minimizing or controlling for bias may alienate evaluators and stakeholders who want a more participatory approach.
There is one type of bias that some seem to consider the worst: advocacy. As soon as evaluators put on their advocacy hat, they are no longer value-neutral toward the program. Rakesh Mohan wrote a wonderful piece on Sheila Robinson’s website titled “Why are evaluators so tentative about the advocacy aspect of our profession?” In it, he argues that “it is the fear of politics that makes many evaluators tentative about advocacy.”
He further argues in his related AJE article that advocacy while maintaining independence is a difficult and risky endeavor. Credibility is important in our profession, particularly outside of Canada, where there is no credentialing or professionalization system. As such, “the loss of credibility could adversely affect the evaluator’s professional reputation among peers and could negatively affect his or her livelihood. It is the fear of losing one’s credibility that keeps many evaluators away from engaging in advocacy activities” (Mohan, 2014, p. 400).
Many are fearful of losing credibility if they are viewed as biased in any regard.
I think this loss of credibility—in the eyes of peer evaluators, stakeholders, and the outside community—is what most people think of, and fear, when they think of bias. And I am not saying it is wrong to fear it, or necessarily wrong to avoid it. However, I think we need to balance credibility with participatory (or empowerment or collaborative) evaluations, whereby we advocate for our programs when it is ethical to do so. We are often working with impoverished or disenfranchised communities, or with programs that have immense political implications. Through advocacy—or even just by maintaining closeness to these communities in our evaluation—we can help raise their voices in this highly political world.
Rakesh’s blog and article are both focused primarily on advocating for evaluation, not necessarily for the programs. However, I feel his arguments are relevant to both cases.
Mohan, R. (2014). Evaluator advocacy: It is all in a day’s work. American Journal of Evaluation, 35(3), 397–403.