Connecting the Dots Between Research and Government

Arnold Ventures’ John Scianimanico explains how embedding research and evaluation into government programs can have a huge payoff for society.

The Texas Capitol
(Eric Gay/The Associated Press)

Arnold Ventures’ John Scianimanico, Results-Driven Government Policy Lab Manager, spoke at a recent J‑PAL State and Local Innovation Initiative event about using rigorous research — particularly RCTs — when rolling out new government programs. Below are his remarks.

I’m excited to be here with you all today because like J‑PAL, Arnold Ventures cares about fighting poverty and improving the world, and we are guided by a fundamental belief in the power of rigorous research and reliable data.

That belief dates to the start of our organization. When the Arnolds founded their philanthropy in 2010, they began with the idea that they could fund programs shown by research to reduce crime and improve graduation rates. But they quickly ran into a problem: They had a hard time finding programs that would fit the bill. 

That’s mainly because there weren’t many interventions that research pointed to as having a significant and positive impact on people’s lives. 

But there were other problems, too. When programs did cite research, many misinterpreted what the research was saying. Or worse, they conflated correlation with causation, mistakenly crediting the program under study for outcomes that were actually driven by some other, unobserved factor.

And even if the program provider did accurately interpret the research results, the research itself was often inherently flawed or misleading. 

What the Arnolds quickly realized was that many sectors lacked foundational evidence to help us move forward as a society.


Laura Arnold on the Importance of Rigorous Research

In her TEDx Talk, she calls for policymakers and philanthropists to make decisions based on rigorous research rather than anecdote or ideology.

As a result, in the last eight years they have made significant investments in high-quality research — the kind they believe will help us better support our communities. Because if we want to make progress on key issues, we don’t just need more solutions — we need more solutions that we know can actually work.

Right now, policymakers are looking for “moonshots,” or big ideas that will lead to transformational change. The ones that get traction are usually the ones pushed by the leading political party. But I believe the success of the ideas being put forward will depend less on ideological purity and more on the research that underlies them.

The type of research I’m talking about is rigorous, honest, and objective. I’m talking about research that can give us the confidence that what we are doing is actually having an impact. Of course, I am talking about randomized controlled trials, or RCTs.

In the social science world, RCTs have long been considered the gold standard because they can make claims about causality. In the political world, however, policymakers haven’t always embraced them because they’re thought to be expensive, time-consuming, and not always policy-relevant.

This used to be true in many cases. But research has evolved significantly over the past 10 years, thanks in large part to the work of organizations like J‑PAL and many, many others.

For example, today you can run an RCT for less than $100,000. Some experiments can be completed within one to two years instead of four to five, which gets you results on a faster and more policy-friendly timeline. And you can conduct RCTs on programs and policies that were created to tackle the biggest issues of our time — from charter schools in low-income neighborhoods to state-funded earned income tax credits.

Body-worn camera
Photo by Matt Rourke/The Associated Press

I want to share a story about one of those programs because I think it’s important that we start connecting the dots between research and government.

In 2015, Washington, D.C. became one of the largest cities to require its police force to wear body-worn cameras following a string of deadly police shootings.

However, instead of rolling out the cameras immediately to every officer, D.C. Mayor Muriel Bowser directed the police force to randomize assignment of the cameras, meaning some officers received the camera while others were assigned to the control group. Then she asked the Lab @ D.C., a research team embedded in the city government, to conduct an RCT to test whether the cameras had any impact on use of force and civilian complaints.

The study was unique in several ways. For one, it was the largest evaluation of a body-worn camera program, consisting of more than 2,200 police officers. It was the first time that randomization occurred at the officer level rather than at the squad level. And perhaps most importantly, the study included one of the most prominent uses of a pre-analysis plan, which the Lab developed to lock in what questions would be asked and what outcomes would be measured. This pre-analysis plan enhanced the credibility of the project, created political buy-in among government and constituent stakeholders, and buffered against criticism about the study’s independence when the results were released.
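The officer-level randomization described above can be sketched in a few lines. This is a hypothetical illustration, not the Lab @ DC’s actual procedure (which involved stratification and a staggered rollout); it simply shows the core idea of shuffling a roster and splitting it into treatment and control groups, with a fixed seed so the assignment is reproducible and auditable.

```python
import random

def assign_cameras(officer_ids, seed=2015):
    """Illustrative officer-level randomization: shuffle the roster and
    split it in half into a treatment (camera) group and a control group.
    The fixed seed makes the assignment reproducible for later audits."""
    rng = random.Random(seed)
    ids = list(officer_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"camera": set(ids[:half]), "control": set(ids[half:])}

# Roughly the scale of the D.C. study: more than 2,200 officers.
groups = assign_cameras(range(2224))
```

Because assignment happens at the individual-officer level rather than by squad, outcome differences between the two groups can be attributed to the cameras themselves rather than to squad-level factors.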

At the end, the results of the study showed no discernible differences in outcomes between those who had the camera and those who did not. In other words, the cameras had no significant impact on police behavior when measured in terms of use of force, civilian complaints, and other judicial outcomes.

This could have been a big problem for Mayor Bowser because many people thought the cameras would improve policing. But instead of burying the study, the researchers and city leaders held multiple community events to discuss the findings and what they meant for police relations. They shared the findings with reporters so that other cities that were considering replicating the program could read about the report. And they committed to doing more research in order to address the problem driving the civil unrest.

The study had an important ripple effect as well. It demonstrated to the country how embedding evaluation into a rollout of a government program can be a smart and cost-effective way of injecting science into policy. For those of you thinking about rolling out a new program in your own jurisdiction, you may want to follow D.C.’s approach and do it in a randomized way. In addition to the benefits I mentioned before, the city was also able to bring costs down by deploying the cameras over a period of one-and-a-half years rather than all at once.


Now, of course, finding the political will to embed evaluation into the roll-out of a program like body-worn cameras is critical. As we know, it’s difficult to predict what the research will find, and political challenges can arise when research finds that a program doesn’t work — and that makes it risky to do things like what Mayor Bowser did. But the upside for conducting more studies — and the potential benefit to society — can be huge.

To prove that point, I want to tell you one more story. This time, about the progress that can be made when we evaluate an existing program.

Every year, hundreds of people fall victim to gun violence in Chicago neighborhoods. But due to the lack of rigorous studies on gun violence prevention programs, very little is actually known about which interventions are effective in reducing violence in communities.

To help save lives, a local provider of outcomes-driven social programs launched a new effort focused on working with young men from disadvantaged communities. They called it Becoming a Man, or BAM. The program targets at-risk male students in Chicago public schools and uses weekly group sessions to teach them to be more conscious of their decision-making.

The University of Chicago’s Crime Lab rigorously evaluated the program on two separate occasions, and in their most recent evaluation, more than 2,000 ninth- and tenth-graders participated in the study over the course of two years.

Both studies showed striking evidence of the program’s effectiveness, including up to a 50 percent reduction in violent crime arrests and an increase in graduation rates by up to nearly 20 percent. Moreover, at a cost of less than $2,000 per participant, these studies found that the program’s impact on reducing crime alone could yield a benefit-cost ratio of up to 30-to‑1, and likely even higher when accounting for the effects of higher graduation rates.
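The arithmetic behind that benefit-cost claim is worth making explicit. Using the upper-bound figures cited above (a cost of roughly $2,000 per participant and a ratio of up to 30-to-1 from crime reduction alone), the implied crime-reduction benefit per participant works out as follows; the numbers are the talk’s stated bounds, not precise study estimates.

```python
# Upper-bound figures cited in the talk (not precise study estimates).
cost_per_participant = 2_000   # dollars, "less than $2,000 per participant"
benefit_cost_ratio = 30        # "up to 30-to-1" from crime reduction alone

# Implied crime-reduction benefit per participant at those bounds.
implied_benefit = cost_per_participant * benefit_cost_ratio
print(implied_benefit)  # 60000
```

In other words, at those bounds each $2,000 invested would return on the order of $60,000 in crime-reduction benefits, before counting any gains from higher graduation rates.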

Reduction in violent crime as a result of the Becoming a Man program
Increase in graduation rates as a result of the Becoming a Man program

Without these evaluations, this program would likely have gone unnoticed by the city and local philanthropy. In its most recent budget, the city of Chicago granted $1.4 million to expand the program, allowing BAM to grow to serve more than 6,000 young men in 105 schools across Chicago. And in 2017, Boston Mayor Marty Walsh and philanthropic partners brought the program to Boston, making it the first district outside Chicago where BAM expanded.

Imagine if we had more research like this, research that showed us the answers to questions like, “How do we reduce gun deaths?”; research that did not take years to unfold; research that was intellectually honest and a part of the political dialogue.

The good news is that we can do more research by working with partners like J‑PAL’s State and Local Innovation Initiative.

What makes the approach of J‑PAL and similar organizations so special is that they place equal emphasis on rigor and policy relevance — there is no trading off one for the other.

The result is clear, accurate, and useful information to help people like yourselves execute on your policy agenda.

Here are their secrets to success:

  • First, they consult with you, the leaders who have invaluable local knowledge about what is happening in your communities, what problem needs to be solved, and what questions need to be asked.
  • Second, they work step-by-step with you on implementation – to set up a fair and thorough randomization design, to implement the experiment with fidelity and transparency, to communicate frequently any interim results to you and other stakeholders, and to mitigate any political or project-related risks. 
  • And finally, when the project concludes and the results are in, they present them in an easy-to-read format that gives actionable takeaways upfront, without having to find them in a 40-page report.

As I mentioned earlier, academic research and research-practice partnerships have evolved quite a bit over the past decade. Today, there is a greater emphasis on how to make research more practical, more useable, and more impactful. I think this trend is still in its early stages, but several tailwinds are driving it forward. Here are some of them:

  • First, universities like Brown, MIT, and the University of California system have generously supported new research-practice partnerships. For example, the University of California just awarded a $1.2 million grant to the California Policy Lab to provide research expertise for projects that California agencies value.
  • Second, government leaders have pushed academia not to be afraid of making recommendations to government, and not to stop at “yes, this program works” or “no, this program does not work,” but to interrogate why it works, for whom it works, and under what conditions.
  • Third, I would be remiss if I did not make a shameless plug for the funders in the room, and the role that philanthropy has played in seeding new partnerships, betting on promising people, and facilitating the match-making process between researchers and government.
  • And lastly, speaking of promising people, it’s people like yourselves who we have to thank. People like David Yokum, who ran the body-worn camera evaluation in D.C. and is now leading the Brown Policy Center in Rhode Island. People like Day Manoli, who is using IRS data to conduct long-term follow-up evaluations of major federal social programs, including National Job Corps and Career Academies. And finally, people like Mary Ann Bates and Julia Chabrier, who oversee J‑PAL North America and its State and Local Innovation Initiative, and without whose hard work we would not be here today.

If we want to truly commit to making social progress, then we need more Davids and Days, more Mary Anns and Julias doing this work today. It’s my hope that one day, philanthropy will go out of the research-practice partnership business because government and universities will make rigorous and policy-relevant research part of their “business as usual.”

There are still headwinds that keep this hope at bay, including misaligned interests between researchers and policymakers, an underwhelming number of successful and proven social programs, and challenges adapting and implementing these successful programs from one jurisdiction to another.

But these challenges are not insurmountable. After all, the medical community has been using RCTs for more than 75 years. And RCTs in medicine have led to major breakthroughs in common illnesses, ranging from diabetes to heart disease.

It’s time for social policy to catch up. Twenty years from now, I want to be able to point to the same type of progress that we’ve seen in medicine. By giving rigorous research equal weight in our political debates and by working with organizations like J‑PAL, we can make it happen.