Scope Conditions Podcast

Randomizing Together (Part 1), with Tara Slough and Graeme Blair

December 09, 2021 | Season 2, Episode 3 | Hosted by Alan Jacobs and Yang-Yang Zhou

Show Notes

The last two decades have seen an explosion of field experimentation in political science and economics. Field experiments are often seen as the gold standard for policy evaluation. If you want to know if an intervention will work, run a randomized controlled trial, and do it in a natural setting. Field experiments offer up a powerful mix of credible causal identification and real-world relevance.

But there’s a catch: if you’ve seen one field experiment, you’ve seen one field experiment. A field experiment is essentially a case study with strong causal evidence. So you now know something about the effects of foreign aid or canvassing or social contact in one corner of the real world – but will those interventions have the same effect in other contexts? 

And if someone else runs their own experiment on the same intervention in some other setting, they’ll probably do it in their own way, shaped by their own pet theory, the demands of their funder, or the interests of their local partner. So, at the end of the day, how will we combine or compare the results? How can learning cumulate if everyone’s doing their own thing?

One promising answer to these questions is the metaketa framework, pioneered by EGAP, the Evidence in Governance and Politics research network. In a metaketa, several teams of researchers coordinate on a harmonized cluster of randomized trials carried out across disparate contexts. So far, EGAP teams have run or planned metaketas on topics such as the role of information in democratic accountability, taxation, and women’s participation in public service advocacy. The idea is that, by running parallel experiments across diverse settings, we’ll learn something about the generalizability of effects.

Our guests today have just finished running two metaketas and join us to reflect on the promise and challenges of learning from coordinated field experiments. Dr. Tara Slough, an Assistant Professor of Politics at NYU, co-led, with Daniel Rubenson, a metaketa on the governance of natural resources that was published this year in PNAS. Dr. Graeme Blair, an Assistant Professor of Political Science at UCLA, co-led a metaketa with Fotini Christia and Jeremy Weinstein testing the effects of community policing. The main paper from that project was just published last month in Science.

We had such a wonderful, in-depth conversation with Tara and Graeme that we’re dividing it into two parts. In today’s episode, we hear about the projects themselves: the interventions they were evaluating, how they were set up, and what they found. We also talk about the difficulties of choosing and designing a treatment that can be implemented across radically different contexts, and about the analytical subtleties of aggregating estimates across those studies. In Part 2, we’ll get into a set of broader issues surrounding the metaketa strategy, including what coordinated trials can tell us about external validity and the practical challenges of running simultaneous experiments around the world.
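
One thread in that conversation is how to aggregate estimates across coordinated studies. As a rough, purely illustrative sketch of the simplest version of that idea (not the procedure used in either metaketa), the snippet below pools hypothetical site-level treatment effects with a fixed-effect, inverse-variance-weighted average; all site names, numbers, and modeling choices here are invented for illustration.

import math

# Hypothetical (made-up) site-level treatment effect estimates and standard errors
sites = {
    "Site A": (0.12, 0.05),
    "Site B": (0.04, 0.06),
    "Site C": (-0.02, 0.04),
}

# Inverse-variance (precision) weights: more precisely estimated sites count for more
weights = {name: 1 / se ** 2 for name, (_, se) in sites.items()}
total_weight = sum(weights.values())

# Precision-weighted pooled estimate and its standard error
pooled = sum(weights[name] * est for name, (est, _) in sites.items()) / total_weight
pooled_se = math.sqrt(1 / total_weight)

print(f"Pooled estimate: {pooled:.3f} (SE {pooled_se:.3f})")

The metaketa teams pre-register how their site-level estimates will be combined; this sketch is only meant to convey the basic logic of turning several studies' results into one pooled effect.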

For references to all the academic works discussed in this episode, visit the episode webpage at www.scopeconditionspodcast.com/episodes/episode-23-randomizing-together-part-1-with-tara-slough-and-graeme-blair