Please note: This course will be taught in hybrid mode. Hybrid delivery includes synchronous live sessions during which on-campus and online students are taught simultaneously.

 

Ryan T. Moore is Associate Professor of Government at American University, Senior Social Scientist at The Lab @ DC, and Fellow at the US Office of Evaluation Sciences. Ryan’s research interests centre around statistical political methodology, with applications in American social policy. He develops and implements methods for political experiments, ecological data, missing data, causal inference, and geolocated data. Among his publications are sixteen peer-reviewed journal articles, a book chapter, and several software packages and applications. His work has appeared in Political Analysis, The Lancet, Nature Human Behaviour, the Journal of Public Policy, and the Journal of Policy Analysis and Management, among other outlets. Ryan received his Ph.D. in government and social policy, along with his A.M. in statistics, from Harvard University.

http://www.ryantmoore.org

 

Course Content
Do campaign messages actually affect public opinion? Does a refugee’s religion affect support for her asylum application? Do legislators respond when made aware of district preferences? This course develops a framework and a set of tools for answering causal questions such as these. We lay foundations in the potential outcomes model, which allows us to define and identify causal effects. We discuss why we might conduct field, survey, and laboratory experiments, best practices for designing and registering experiments, how to overcome common problems, and how to analyse experimental data. We will also address special topics such as interference and mediation. Using experiments as a foundation, we will examine and apply methods for causal inference from observational data, such as matching, regression adjustment, instrumental variables, and discontinuity designs.
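As a signpost for the framework, the core quantities of the potential outcomes model can be written as follows (standard notation; this summary is ours, not the assigned texts’):

```latex
% Unit-level causal effect for unit i, with potential outcomes
% Y_i(1) under treatment and Y_i(0) under control:
\tau_i = Y_i(1) - Y_i(0)
% The average treatment effect (ATE):
\tau = \mathbb{E}\left[ Y_i(1) - Y_i(0) \right]
% Under random assignment of treatment Z_i, the ATE is identified by
% the contrast of observed means:
\tau = \mathbb{E}\left[ Y_i \mid Z_i = 1 \right] - \mathbb{E}\left[ Y_i \mid Z_i = 0 \right]
```

Only one of $Y_i(1)$ and $Y_i(0)$ is ever observed for a given unit, which is why randomization, or the observational designs of Day 9, is needed for identification.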

Course Objectives
Participants will gain an understanding of the potential outcomes model, and of how and why we often register and conduct experiments for causal inference. Participants will apply this understanding to experimental design, and will analyse experimental and observational data with attention to causal questions. Throughout, participants will learn application through the R statistical language. This course is suitable for participants at a variety of levels, including exceptional undergraduates, master’s and Ph.D. students, and those holding a Ph.D.

Course prerequisites
Students should have encountered conventional topics in introductory statistics, such as null hypothesis significance tests, confidence intervals, and linear regression; we will reintroduce such topics as needed. Students should have some familiarity with processing data in R or Stata, or be willing to learn.

Representative Background Reading
Freedman, Pisani, and Purves. Statistics. 4th edition. W. W. Norton, 2007. Chapter 1, pages 3–11.

Required Text – this text will be provided by ESS:

Gerber and Green. Field Experiments: Design, Analysis, and Interpretation. WW Norton. ISBN: 978-0393979954.

Background knowledge required

Maths

Linear Regression = elementary

Statistics

OLS = elementary

Computer Background

R or Stata = elementary


 

Please note: Recordings will be available to online students only for the duration of the course, for the purpose of catching up on missed content.

Day 1:

Introduction to causal inference.
The potential outcomes framework. Estimands.
Introduction to computing environments.
Lab: Introduction to R
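To preview the lab, the logic of randomization and estimation can be sketched in a few lines of R (an illustrative sketch of ours, not course material; all names are our own):

```r
# Simulate a small randomized experiment and estimate the ATE
# by a difference in means.
set.seed(42)
n  <- 1000
y0 <- rnorm(n, mean = 10)         # potential outcome under control
y1 <- y0 + 2                      # potential outcome under treatment (true ATE = 2)
z  <- sample(rep(c(0, 1), n / 2)) # complete random assignment, 500 per arm
y  <- ifelse(z == 1, y1, y0)      # only one potential outcome is observed
ate_hat <- mean(y[z == 1]) - mean(y[z == 0])
ate_hat                           # close to the true ATE of 2
```

Under random assignment, the difference in means is an unbiased estimator of the average treatment effect; the lab builds up this machinery from first principles.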

Day 2:

Randomized experiments I: Motivation, inference, testing.
Submit PS1: Gerber and Green 2.1, 2.10, 2.12
Chapters 2 and 3 of Alan S. Gerber and Donald P. Green. Field Experiments: Design, Analysis, and Interpretation. WW Norton, New York, NY, 2012.

Day 3:

Randomized experiments II: Covariates, blocked designs.
Submit PS2: 3.1, 3.5.a, 3.5.b
Chapter 4 of Gerber and Green
Moore, Ryan T. “Multivariate Continuous Blocking to Improve Political Science Experiments”. Political Analysis, 20(4):460–479, Autumn 2012.
Moore, Ryan T. and Sally A. Moore. “Blocking for Sequential Political Experiments”. Political Analysis, 21(4):507–523, 2013.
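The idea of a matched-pair blocked design can be sketched in base R (a hedged illustration of ours; the blockTools package discussed in the Moore readings automates and generalises this):

```r
# Matched-pair blocking: pair units that are adjacent on a covariate,
# then randomize treatment within each pair.
set.seed(7)
n <- 20
x <- rnorm(n)                                    # pre-treatment covariate
pair <- integer(n)
pair[order(x)] <- rep(seq_len(n / 2), each = 2)  # neighbours in x share a pair
z <- ave(rep(0, n), pair,
         FUN = function(v) sample(c(0, 1)))      # one treated unit per pair
table(pair, z)                                   # each pair: one treated, one control
```

Randomizing within pairs balances the blocking covariate by construction, which typically reduces the sampling variability of the estimated treatment effect.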

Day 4:

Regression and Experiments. Heterogeneous treatment effects.
Submit PS3: 3.5.c, 3.5.d, 3.5.e
Chapter 9 of Gerber and Green

Day 5:

Survey experiments. Conjoints, item counts, lists.
Submit PS4:
– Exercise 10 from 03-r-intro.R
– Exercise in heterogeneous effects from notes/04-linear-exp.pdf, slide 73
Paul M. Sniderman. Some advances in the design of survey experiments. Annual Review of Political Science, 21:259–275, May 2018.
Jens Hainmueller, Daniel J. Hopkins, and Teppei Yamamoto. Causal inference in conjoint analysis: Understanding multidimensional choices via stated preference experiments. Political Analysis, 22(1):1–30, 2014.
Yusaku Horiuchi, Daniel M. Smith, and Teppei Yamamoto. Measuring voters’ multidimensional policy preferences with conjoint analysis: Application to Japan’s 2014 election. Political Analysis, 26(2):190–209, 2018.
Graeme Blair and Kosuke Imai. Statistical analysis of list experiments. Political Analysis, 20(1):47–77, Winter 2012.
Graeme Blair, Kosuke Imai, and Jason Lyall. Comparing and combining list and endorsement experiments: Evidence from Afghanistan. American Journal of Political Science, 58(4):1043–1063, 2014.

Day 6:

Multi-armed bandits.
Molly Offer-Westort, Alexander Coppock, and Donald P. Green. Adaptive experimental design: Prospects and applications in political science. Manuscript, 2018. http://j.mp/2FsHlKr.
Volodymyr Kuleshov and Doina Precup. Algorithms for multi-armed bandit problems. CoRR, abs/1402.6028, 2014.
Neha Gupta, Ole-Christoffer Granmo, and Ashok Agrawala. Thompson sampling for dynamic multi-armed bandits. In 2011 10th International Conference on Machine Learning and Applications Workshops, pages 484–489. IEEE, 2011.
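As a concrete flavour of the adaptive designs above, a minimal Thompson-sampling sketch for a two-armed Bernoulli bandit (our own illustration, not drawn from the readings):

```r
# Thompson sampling: play each arm in proportion to its posterior
# probability of being the best arm.
set.seed(3)
true_p <- c(0.3, 0.6)                  # unknown arm success probabilities
succ <- fail <- c(0, 0)                # Beta(1, 1) priors on each arm
for (t in 1:2000) {
  draw <- rbeta(2, succ + 1, fail + 1) # one posterior draw per arm
  arm  <- which.max(draw)              # play the arm with the larger draw
  r    <- rbinom(1, 1, true_p[arm])    # observe a Bernoulli reward
  succ[arm] <- succ[arm] + r
  fail[arm] <- fail[arm] + 1 - r
}
succ + fail                            # pulls concentrate on the better arm
```

The design explores both arms early but shifts assignment toward the better-performing arm as evidence accumulates, which is what makes bandit allocation attractive for sequential experiments.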

Day 7:

Submit PS5 (.pdf available on GitHub)
Review three articles on eye-tracking and virtual reality (VR)
Lab Experiments. ESSEXLab visit.
• Interactive incentivised experiments
• Biometric demonstration in conjoint experiments
• zTree/oTree programming introduction

Day 8: 

Mediation.
Chapter 10 of Gerber and Green
Kosuke Imai, Luke Keele, Dustin Tingley, and Teppei Yamamoto. Unpacking the black box of causality: Learning about causal mechanisms from experimental and observational studies. American Political Science Review, 105(4):765–789, November 2011.
John G. Bullock, Donald P. Green, and Shang E. Ha. Yes, but what’s the mechanism? (don’t expect an easy answer). Journal of Personality and Social Psychology, 98(4):550–558, 2010.
Kosuke Imai, Luke Keele, and Teppei Yamamoto. Identification, Inference, and Sensitivity Analysis for Causal Mediation Effects. Statistical Science, 25(1):51–71, February 2010.
Interference. Time-varying treatments and covariates.
Submit PS6 (.pdf available on GitHub)
Chapter 8 of Gerber and Green
Michael G. Hudgens and M. Elizabeth Halloran. Toward Causal Inference With Interference. Journal of the American Statistical Association, 103(482):832–842, June 2008.
Paul R. Rosenbaum. Interference Between Units in Randomized Experiments. Journal of the American Statistical Association, 102(477):191–200, 2007.
Michael E. Sobel. What do randomized studies of housing mobility demonstrate?: Causal inference in the face of interference. Journal of the American Statistical Association, 101(476):1398–1407, 2006.
Matthew Blackwell. A framework for dynamic causal inference in political science. American Journal of Political Science, 57(2):504–520, 2013.

Day 9:

Observational Designs for Causal Inference: Matching, Matching + Regression, Regression Discontinuity Designs
Donald B. Rubin. The design versus the analysis of observational studies for causal effects: parallels with the design of randomized trials. Statistics in Medicine, 26(1):20–36, 2007.
Daniel Ho, Kosuke Imai, Gary King, and Elizabeth Stuart. Matching as Nonparametric Preprocessing for Reducing Model Dependence in Parametric Causal Inference. Political Analysis, 15:199–236, 2007.
Paul R. Rosenbaum and Donald B. Rubin. “The Central Role of the Propensity Score in Observational Studies for Causal Effects”. Biometrika, 70(1):41–55, 1983.
Stefano M. Iacus, Gary King, and Giuseppe Porro. Causal inference without balance checking: Coarsened exact matching. Political Analysis, 20(1):1–24, Winter 2012.
Devin Caughey and Jasjeet S. Sekhon. Elections and the Regression Discontinuity Design: Lessons from Close U.S. House Races, 1942–2008. Political Analysis, 19(4):385–408, 2011.
Guido W. Imbens and Thomas Lemieux. Regression discontinuity designs: A guide to practice. Journal of Econometrics, 142:615–635, 2008.
Course in-class exam in the evening.
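The logic of the matching designs above can be previewed in a short base-R sketch of nearest-neighbour propensity score matching (illustrative only; dedicated packages such as MatchIt implement this properly):

```r
# Nearest-neighbour matching (with replacement) on an estimated
# propensity score, in simulated data with a known treatment effect.
set.seed(1)
n <- 400
x <- rnorm(n)                                  # confounder
z <- rbinom(n, 1, plogis(x))                   # treatment depends on x
y <- 1 + 2 * z + x + rnorm(n)                  # true effect of z is 2
ps <- fitted(glm(z ~ x, family = binomial))    # estimated propensity score
treated <- which(z == 1)
control <- which(z == 0)
nearest <- sapply(treated, function(i) {
  control[which.min(abs(ps[control] - ps[i]))] # closest control in ps
})
att_hat <- mean(y[treated] - y[nearest])       # matched estimate of the ATT
```

The raw difference in means is confounded by x here; matching compares each treated unit to an observably similar control, in the spirit of the Rubin and Ho et al. readings.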

Day 10:

Registration, Replication, Declaration.
Macartan Humphreys, Raul Sanchez de la Sierra, and Peter van der Windt. Fishing, commitment, and communication: A proposal for comprehensive nonbinding research registration. Political Analysis, 21(1):1–20, 2013.
James E. Monogan. A case for registering studies of political outcomes: An application in the 2010 house elections. Political Analysis, 21(1):21–37, 2013.
Macartan Humphreys. Reflections on the ethics of social experimentation. Journal of Globalization and Development, 6(1):87–112, 2015.
Graeme Blair, Jasper Cooper, Alexander Coppock, and Macartan Humphreys. Declaring and diagnosing research designs. American Political Science Review, pages 1–22, 2018.