One question I have when reading education research reports is the motivation of the researchers. Were they interested in finding out whether a particular intervention had an effect on student achievement, or in showing that it did? An argument can be made that motivation doesn’t matter as long as researchers follow the correct protocols (presumably this is why the What Works Clearinghouse does not consider the sponsor of the research it reports on), but I would hold that so many judgment calls remain that researcher bias can still affect the results.
Often there is a chicken-and-egg problem: does the researcher believe x causes y because that is what the research showed, or did the researcher look for evidence to demonstrate that x causes y? Even if the researcher started out completely neutral, what about follow-on research? Does the researcher now have a vested interest in confirming his or her previous findings, particularly if they were published and attracted some attention?
The credibility that comes from being perceived as having no vested interest in the results is a valuable asset, and I am surprised how often researchers throw it away. A recent example is Economists for Romney, which has a web site with a statement and the names of economists who presumably endorse it. One can question whether individuals who sign a partisan statement are able to dispassionately analyze policies associated with either candidate.
By contrast, the Chicago Booth business school has a panel of about 40 economists from major research universities. Every week panelists are asked whether they strongly agree, agree, are uncertain, disagree, or strongly disagree with a statement on an economic issue, and to rate their confidence in their answer. This panel appears to represent a reasonable cross-section of economists at major universities, so its answers offer a gauge of where there is consensus among economists and where there is none.
Comparing the assertions in the Romney economists’ statement with the issues the panel was asked to respond to, I found one reasonably close match. The Romney statement complains that Obama “relied on short-term ‘stimulus’ programs, which provided little sustainable lift to the economy….” In February 2012, the panel was asked to respond to: “Because of the American Recovery and Reinvestment Act of 2009, the U.S. unemployment rate was lower at the end of 2010 than it would have been without the stimulus bill.” All but three of the responding economists agreed or strongly agreed with this statement.
One of the two who disagreed was the only panel member who signed the Romney statement. So again, one is left wondering whether his opposition to Obama influenced his conclusion that the ARRA did not reduce unemployment.
Finally, I looked through the Romney list for names I recognized as players in education policy. I found only one, but then most education researchers are probably not economists. This individual strikes me as knowledgeable, with a good grasp of the tools of statistics, but his results always fall on what is considered the conservative side of an issue. In my experience, research on education policy issues is messier than he makes it out to be.
Unfortunately, there is no simple answer to how to separate agenda-driven research from research aimed at understanding how things work. One step would be for researchers themselves to distinguish more clearly between when they are acting as advocates and when they are setting their own beliefs aside.