Commentary

‘Policy-Based Evidence’ Doesn’t Always Get It Backward

The oft-lamented flip side of ‘evidence-based policy’ can help guide policymakers — when used correctly

Edgar Argo

If you support the use of evidence and research in government, you’ve probably heard the lament that instead of “evidence-based policy,” we often end up with “policy-based evidence.” I’ve repeated that line quite a few times myself.

While there’s a valid point behind the lament, there’s another sense in which a fundamental goal of Arnold Ventures is to seek “policy-based evidence” — as the flip side of “evidence-based policy.”

Bad Policy-Based Evidence

Let’s start with where the complaint about “policy-based evidence” is valid.

Far too often, policymakers who gesture toward evidence aren’t actually interested in following the evidence. Instead, they are more motivated by something else — their own beliefs and desires, their political party, donations from special interest groups, the need to compromise and offer a deal to someone else, budgetary constraints, and more.

When these motivations are driving a decision, policymakers who cite evidence may be:

  • Selectively mentioning evidence that supports a favored policy while ignoring or dismissing contrary evidence (this happens all the time with, say, charter schools); or,
  • Generating evidence to support a policy decision that was already made.

Indeed, even a well-intentioned policy can distort the subsequent collection of evidence. In one example, Kenyan school authorities were rewarded financially for increasing school enrollment, but the result was that data on school enrollment became corrupted by over-reporting. As Sandefur and Glassman put it, while people often repeat the famous quote that “what gets measured gets managed,” it often seems that “what gets managed gets systematically mis-measured.”

So far, I agree. Policymakers shouldn’t cherry-pick evidence to support a preordained conclusion, nor should they enact policies that then distort evidence. If they do so, the “policy-based evidence” they create is not as useful (to say the least) as evidence that is more rigorous and neutral.

But there’s another sense in which “policy-based evidence” is exactly what we’re looking for.

Good Policy-Based Evidence

The good kind of “policy-based evidence” means evidence that begins with the need to resolve a key uncertainty in a policy area — one where policymakers could actually make different decisions based on what the evidence shows. Then, good policy-based evidence reduces that uncertainty so that policymakers have better information.

What policymakers actually do is another question, of course (there are many open or even unstudied questions as to how to connect policymakers to evidence). Still, this sort of policy-based evidence is much more likely to create evidence-based policy than anything else.

Think of the opposite: non-policy-based evidence. There are famous examples, such as the Chicago economist Steven Levitt’s study on cheating in Japanese sumo wrestling, which seemed to be motivated mainly by the chance to use a clever research technique. After all, not even Chicago economists would call cheating in sumo wrestling an urgent policy question.

Beyond the obvious examples, far too much evidence is generated not because of a pressing policy question, but because someone had access to a convenient source of data, or a convenient opportunity to run an RCT, and thought, “What can I do here that is most likely to lead to funding or a publication (or both)?”

Think of the flood of education policy articles over the past several years about the best way to measure the “value-added” of teachers in K‑12 education. I would suggest that this vast area of scholarship arose not because assessing teachers is the single most important question in education policy — although it’s a good question — but because states started creating large, longitudinal datasets on student test scores. In those datasets, it is much easier to analyze teachers’ performance than to look at questions that might be more important but that are more difficult to study (such as curriculum, which relatively few people study because there’s hardly any good data on what curricula schools are using in the first place).

This is not meant to criticize academics; the main way they can get a job and keep it is to publish, publish, publish. That incentive wonderfully concentrates the mind (as Samuel Johnson would say) on publishing at all costs. It’s no surprise that relatively few academics prioritize policy-relevant questions over whatever is publishable.

Still, it’s difficult to move toward “evidence-based policy” if so much evidence is generated in a process akin to the old joke about a drunk looking for his lost keys under the streetlight, not because he lost them there, but because that’s where the light is.

In short, scholars who come up with the good kind of policy-based evidence don’t define their question or hypothesis solely based on what can be answered with a convenient source of data, or based on what is trendy or publishable in a given scholarly field, or based on what can be studied with particularly rigorous methods (such as RCTs).

Instead, good policy-based evidence arises when people ask, “What are the most pressing sources of uncertainty in this field? What new information would reduce uncertainty and shed light on possible reforms?” If such questions can be answered with existing data or with RCTs, all the better, but often the most pressing uncertainty is due to the lack of the most basic descriptive statistics (e.g., how many people are jailed in a given year).

Only when evidence is motivated by the most urgent policy uncertainties can we realize the full potential of evidence-based policy.