Please note: This course will be taught online only. In-person study is not available for this course.

Rosario Aguilar is Senior Lecturer in Comparative Politics at Newcastle University. She holds a Ph.D. in Political Science from the University of Michigan and was previously Associate Professor in the Division of Political Studies at the Centro de Investigación y Docencia Económicas (CIDE) in Mexico City. Her research focuses on the political psychology of citizens’ behavior, with an emphasis on experimental methods. She is a member of the EGAP network and teaches experimental methods every summer in Latin America. Her work has been published in journals including Comparative Political Studies, Comparative Politics, Political Behavior, and Electoral Studies.

Course Content

This course introduces the core concepts and tools of modern causal inference and their application to impact assessment in the social sciences. We start from the fundamental problem of causality and the logic of randomisation, and then develop a design-based approach to estimating causal effects using field experiments and observational data. Topics include randomised experiments and covariance adjustment, instrumental variables, matching methods, regression discontinuity designs, interference between units, difference-in-differences, sensitivity analysis, and synthetic control methods.

Throughout, we emphasise clear identification strategies, research design, and the assumptions required for credible causal claims. Readings are drawn from econometrics, statistics, and political science, and students will engage critically with both foundational texts and recent applications. By the end of the course, students will be able to assess the strengths and weaknesses of different causal designs, interpret and critique empirical research, and outline their own impact evaluation strategies for substantive questions in politics and public policy.
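As a preview of the design-based logic described above, consider the simplest case: a randomised experiment analysed with a difference in means, where uncertainty comes from the randomisation itself rather than from a sampling model. The sketch below is a hypothetical illustration in Python with simulated numbers (the course’s own analyses use R); it computes the observed difference and a p-value by re-shuffling the treatment labels under the sharp null of no effect.

```python
import random

random.seed(42)

# Hypothetical outcomes for 6 treated and 6 control units (simulated data).
treated = [12.1, 9.8, 11.5, 13.0, 10.9, 12.4]
control = [9.2, 10.1, 8.7, 9.9, 10.4, 8.5]

def diff_in_means(t, c):
    return sum(t) / len(t) - sum(c) / len(c)

observed = diff_in_means(treated, control)

# Randomisation test: under the sharp null of no effect, all outcomes are
# fixed and only the assignment varies, so we re-draw the labels many times
# and ask how often a difference at least as large arises by chance.
pooled = treated + control
n_treat = len(treated)
reps = 10_000
count = 0
for _ in range(reps):
    shuffled = random.sample(pooled, len(pooled))
    sim = diff_in_means(shuffled[:n_treat], shuffled[n_treat:])
    if abs(sim) >= abs(observed):
        count += 1

p_value = count / reps
print(f"observed difference: {observed:.2f}, p-value: {p_value:.4f}")
```

The same logic underlies the Fisher-style randomisation inference discussed in the first weeks of the course; the only change in real applications is the experimental design and the test statistic.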

Course Objectives

1. Students will be able to design advanced policy evaluations using quantitative methods. In particular, they will be able to correctly apply (i) regression discontinuity designs, (ii) instrumental variable designs, and (iii) difference-in-differences designs.
2. Students should be able to critically evaluate the empirical soundness of existing policy evaluations.
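To preview objective 1(iii): in its simplest two-group, two-period form, a difference-in-differences estimate is just a difference of before–after changes. A minimal sketch with made-up group means (the course works through real applications in R):

```python
# Hypothetical mean outcomes (e.g. employment rates) for a treated region
# and a comparison region, before and after a policy change (simulated).
treated_pre, treated_post = 20.0, 24.0
control_pre, control_post = 21.0, 22.5

# Parallel-trends assumption: absent the policy, the treated group's outcome
# would have moved by the same amount as the comparison group's, so the
# comparison group's change estimates the treated group's counterfactual trend.
did = (treated_post - treated_pre) - (control_post - control_pre)
print(f"difference-in-differences estimate: {did:.1f}")  # (4.0 - 1.5) = 2.5
```

The credibility of the estimate rests entirely on the parallel-trends assumption, which is exactly the kind of identifying assumption students learn to evaluate in objective 2.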

Course Prerequisites

Students are expected to have taken at least one graduate-level course in statistics; only a basic familiarity with probability, hypothesis testing, and regression is assumed.
The course will conduct statistical analyses using the R programming language. No prior experience in computer coding is necessary: the instructor and/or teaching assistant will hold special sessions to introduce R, and students will primarily be responsible for running pre-written code on pre-assembled datasets rather than writing their own.

Books provided by ESS:

Paul R. Rosenbaum. Design of Observational Studies. New York, NY: Springer, 2010.
Alan S. Gerber and Donald P. Green. Field Experiments: Design, Analysis, and Interpretation. New York, NY: W. W. Norton, 2012.

Background knowledge required

Maths

Linear Regression – elementary

Statistics

OLS – elementary

Software

Stata – elementary

R – elementary

 

  • Fundamental problem of causality and randomisation

Angrist, J. D. and J. S. Pischke. 2009. Mostly Harmless Econometrics: An Empiricist’s Companion. Princeton, NJ: Princeton University Press, chapters 1 & 2.

Cunningham, Scott. 2023. Causal Inference: The Mixtape. https://www.scunning.com/mixtape.htm, chapters 4.0 and 4.1.

Kinder, D. R. and T. R. Palfrey. 1993. “On Behalf of an Experimental Political Science.” Experimental Foundations of Political Science, pp. 1–39.

Ravallion, M. 2001. “The Mystery of the Vanishing Benefits: An Introduction to Impact Evaluation.” The World Bank Economic Review 15(1): 115–140.

Recommended:

Fisher, Ronald A. 1935. “Introduction.” The Design of Experiments. Edinburgh: Oliver and Boyd.

Fisher Box, Joan. 1978. R. A. Fisher, the Life of a Scientist. New York, NY: Wiley, pp. 131–135.

Holland, Paul W. 1986. “Statistics and Causal Inference.” Journal of the American Statistical Association 81(396): 945–960, sections 1–4.

Bowers, Jake and Thomas Leavitt. 2020. “Causality and Design-Based Inference.” In The SAGE Handbook of Research Methods in Political Science and International Relations, ed. Luigi Curini and Robert Franzese. Vol. 2. Thousand Oaks, CA: SAGE Publications, pp. 769–804.

  • Inference & random assignment

Arceneaux, Kevin. 2005. “Using Cluster Randomized Field Experiments to Study Voting Behavior.” The Annals of the American Academy of Political and Social Science 601(1): 169–179.

Rosenbaum, Paul R. 2017. Observation and Experiment: An Introduction to Causal Inference. Cambridge, MA: Harvard University Press, chapter 3.

Gerber, A. S. and Green, D. P. 2012. Field Experiments: Design, Analysis, and Interpretation. New York, NY: W.W. Norton, chapters 2 & 3.

Recommended:

Fisher, R. A. 1935. The Design of Experiments. Edinburgh: Oliver and Boyd, chapter 2.

Neyman, J. 1990. “On the Application of Probability Theory to Agricultural Experiments. Essay on Principles. Section 9 (1923).” Reprint, translated by Dabrowska and Speed. Statistical Science 5: 463–480.

  • Covariance Adjustments in Randomised Experiments

Gerber, A. S. and Green, D. P. 2012. Field Experiments: Design, Analysis, and Interpretation. New York, NY: W.W. Norton, chapter 4.

Rosenbaum, Paul R. 2002. “Covariance Adjustment in Randomized Experiments and Observational Studies.” Statistical Science 17(3): 286–327.

Recommended:

Lin, Winston. 2013. “Agnostic Notes on Regression Adjustments to Experimental Data: Reexamining Freedman’s Critique.” The Annals of Applied Statistics 7(1): 295–318.

Freedman, David A. 2008. “On Regression Adjustments to Experimental Data.” Advances in Applied Mathematics 40(2): 180–193.

Freedman, David A. 2008. “Randomization Does Not Justify Logistic Regression.” Statistical Science 23(2): 237–249.

Freedman, David A. 2008. “On Regression Adjustments in Experiments with Several Treatments.” The Annals of Applied Statistics 2(1): 176–196.

Aronow, Peter M. and Cyrus Samii. 2016. “Does Regression Produce Representative Estimates of Causal Effects?” American Journal of Political Science 60(1): 250–267.

  • Instrumental Variables

Cunningham, Scott. 2023. Causal Inference: The Mixtape. https://www.scunning.com/mixtape.htm, chapter 7.

Gerber, A. S. and Green, D. P. 2012. Field Experiments: Design, Analysis, and Interpretation. New York, NY: W.W. Norton, chapters 5 & 6.

Rosenbaum, Paul R. 2010. Design of Observational Studies. New York, NY: Springer, Section 5.3.

Rosenbaum, Paul R. 1996. “Identification of Causal Effects Using Instrumental Variables: Comment.” Journal of the American Statistical Association 91(434): 465–468.

  • Matching

Cunningham, Scott. 2023. Causal Inference: The Mixtape. https://www.scunning.com/mixtape.htm, chapter 5.

Rosenbaum, Paul R. 2017. Observation and Experiment: An Introduction to Causal Inference. Cambridge, MA: Harvard University Press, pp. 65–90.

Rosenbaum, Paul R. 2010. Design of Observational Studies. New York, NY: Springer, chapters 7-8.

Recommended:

Rosenbaum, Paul R. 2017. Observation and Experiment: An Introduction to Causal Inference. Cambridge, MA: Harvard University Press, chapter 11.

  • Regression Discontinuity

Cattaneo, Matias D., Rocío Titiunik, and Gonzalo Vazquez-Bare. 2020. “The Regression Discontinuity Design.” In The SAGE Handbook of Research Methods in Political Science and International Relations, ed. Luigi Curini and Robert Franzese. Vol. 2. Thousand Oaks, CA: SAGE Publications.

Caughey, Devin and Jasjeet S. Sekhon. 2011. “Elections and the Regression Discontinuity Design: Lessons from Close US House Races, 1942–2008.” Political Analysis 19(4): 385–408.

Cunningham, Scott. 2023. Causal Inference: The Mixtape. https://www.scunning.com/mixtape.htm, chapter 6.

Sales, Adam and Ben B. Hansen. 2020. “Limitless Regression Discontinuity.” Journal of Educational and Behavioral Statistics 45(2): 143–174.

  • Interference

Bowers, Jake, Mark Fredrickson, and Costas Panagopoulos. 2013. “Reasoning about Interference Between Units: A General Framework.” Political Analysis 21(1): 97–124.

Rosenbaum, Paul R. 2007. “Interference Between Units in Randomized Experiments.” Journal of the American Statistical Association 102(477): 191–200.

  • Difference-in-Differences

Angrist, J. D. and J. S. Pischke. 2009. Mostly Harmless Econometrics: An Empiricist’s Companion. Princeton, NJ: Princeton University Press, chapter 5.

Cunningham, Scott. 2023. Causal Inference: The Mixtape. https://www.scunning.com/mixtape.htm, chapter 9.

Gerber, A. S. and Green, D. P. 2012. Field Experiments: Design, Analysis, and Interpretation. New York, NY: W.W. Norton, section 4.1

Rosenbaum, Paul R. 2017. Observation and Experiment: An Introduction to Causal Inference. Cambridge, MA: Harvard University Press, pp. 162–167.

  • Sensitivity analysis

Rosenbaum, Paul R. 2004. “Design Sensitivity in Observational Studies.” Biometrika 91(1): 153–164.

Rosenbaum, Paul R. 2017. Observation and Experiment: An Introduction to Causal Inference. Cambridge, MA: Harvard University Press, chapter 10

  • Synthetic control

Abadie, Alberto, Alexis Diamond, and Jens Hainmueller. 2010. “Synthetic Control Methods for Comparative Case Studies: Estimating the Effect of California’s Tobacco Control Program.” Journal of the American Statistical Association 105(490): 493–505.

Abadie, Alberto, Alexis Diamond, and Jens Hainmueller. 2015. “Comparative Politics and the Synthetic Control Method.” American Journal of Political Science 59(2): 495–510.

Ben-Michael, Eli, Avi Feller, and Jesse Rothstein. 2021. “The Augmented Synthetic Control Method.” Journal of the American Statistical Association.