Chris Fariss is an Assistant Professor in the Department of Political Science at the University of Michigan. Prior to beginning this appointment, he was the Jeffrey L. Hyde and Sharon D. Hyde and Political Science Board of Visitors Early Career Professor in Political Science in the Department of Political Science at Penn State University. In June 2013, he graduated with a Ph.D. in political science from the University of California, San Diego. He also studied at the University of North Texas, where he graduated with an M.S. in political science (2007), a B.F.A. in drawing and painting (2005), and a B.A. in political science (2005). His core research focuses on the politics and measurement of human rights, discrimination, violence, and repression. Chris uses computational methods to understand why governments around the world torture, maim, and kill individuals within their jurisdiction and the processes monitors use to observe and document these abuses. Other projects cover a broad array of themes but share a focus on computationally intensive methods and research design. These methodological tools, essential for analyzing data at massive scale, open up new insights into the micro-foundations of state repression and the politics of measurement. Below you will find links to his publications, working papers, teaching material, a Dataverse archive where you can access replication data, and links to human rights data generated from several measurement projects.

 


Course Content
This class will provide graduate students with an introduction to the scientific method and an overview of how to apply it to the study of politics. Students will learn the fundamentals of the scientific method and, through research design, how to improve both causal inference and the measurement of political phenomena.

Course Objectives
Students will learn how to recognize and engage with the research design tools used to generate valid causal inferences.

Course Prerequisites
I assume no prior exposure to the research design concepts covered in this class.

Course Details
• I will begin each class day with a lecture on the class material (approximately 90 minutes).
• After each lecture, students will discuss two or three articles as they relate to the research design topics of that lecture (approximately 30-45 minutes for each article).
• Finally, we will practice implementing the day's design using an applied example in the R programming language.

Course Books
1. Trochim and Donnelly — Trochim, William and James P. Donnelly. 2007. The Research Methods Knowledge Base, 3rd Edition. Cincinnati, OH: Atomic Dog Publishing. http://www.socialresearchmethods.net/kb/
2. Additional articles and chapters are listed below. Copies of these readings will be provided by the instructor.

 

Types of Course Readings 

  • Lecture Readings provide the core content for each course topic. You should read each of these readings closely prior to the class day. 
  • Discussion Readings illustrate the use of the research design tools discussed for each course topic. You should read each of these readings closely prior to the class day. 
  • Supplementary Readings provide additional detail about the research design tools discussed in the course. You are not required to read these before class, but I recommend reviewing them once the course is completed. These readings may also help you when choosing among the more advanced topics covered in the summer school.
  • Related Essex Summer School Courses lists other summer school courses that cover the topics from this course in much greater depth. Note that not every summer school course is included in these lists. In the reading list below, author names in bold are current or former Essex Summer School instructors.

 

Day 1: Designing Validity for Experimental and Observational Studies 

Lecture Readings: 

  1. Trochim and Donnelly. Ch 6: “Design.”
  2. Trochim and Donnelly. Ch 7: “Experimental Design.”
  3. Rubin, Donald B. 2008. “For Objective Causal Inference, Design Trumps Analysis.” Annals of Applied Statistics 2(3):808-840. http://dx.doi.org/10.1214/08-AOAS187

Discussion Readings: 

  1. Flynn, D.J. and Yanna Krupnikov. 2018. “Motivations and Misinformation: Why People Retain Some Errors but Quickly Dismiss Others.” Journal of Experimental Political Science 6(1):5-15. https://doi.org/10.1017/XPS.2018.12
  2. Slough, Tara and Christopher J. Fariss. “Misgovernance and Human Rights: Experimental Evidence of Illegal Detention without Intent” (Forthcoming). 

Supplementary Readings: 

  1. Abadie, Alberto and Matias D. Cattaneo. 2018. “Econometric Methods for Program Evaluation.” Annual Review of Economics 10:465-503. https://www.annualreviews.org/doi/10.1146/annurev-economics-080217-053402
  2. Haavelmo, Trygve. 1944. “The Probability Approach in Econometrics.” Econometrica 12:1-115. https://doi.org/10.2307/1906935
  3. Lin, Winston, Donald P. Green, and Alexander Coppock. “Standard operating procedures for Don Green’s lab at Columbia.” Version 1.05: June 7, 2016. https://github.com/acoppock/Green-Lab-SOP
  4. Imai, Kosuke, Luke J. Keele, Dustin Tingley, and Teppei Yamamoto. 2011. “Unpacking the Black Box of Causality: Learning about Causal Mechanisms from Experimental and Observational Studies.” American Political Science Review 105(4):765-789. https://doi.org/10.1017/S0003055411000414
  5. Sinclair, Betsy, Margaret McConnell, and Donald P. Green. 2012. “Detecting Spillover Effects: Design and Analysis of Multilevel Experiments.” American Journal of Political Science 56(4):1055-1069. https://doi.org/10.1111/j.1540-5907.2012.00592.x
  6. Shadish, William R. 2010. “Campbell and Rubin: A Primer and Comparison of Their Approaches to Causal Inference in Field Settings.” Psychological Methods 15(1):3-17. https://doi.org/10.1037/a0015916
  7. Shmueli, Galit. 2010. “To Explain or to Predict?” Statistical Science 25(3):289-310. https://projecteuclid.org/euclid.ss/1294167961
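
Illustrative R Sketch:

The applied portion of each class day uses the R programming language. As a minimal sketch only (simulated data and hypothetical variable names; not an assigned reading or exercise), the snippet below shows how random assignment lets a simple difference in means recover the average treatment effect.

# Simulated randomized experiment (illustrative sketch with made-up data)
set.seed(123)

n <- 1000
true_effect <- 0.5

y0 <- rnorm(n)                              # potential outcome under control
y1 <- y0 + true_effect                      # potential outcome under treatment
treat <- rbinom(n, size = 1, prob = 0.5)    # random assignment to treatment
y_obs <- ifelse(treat == 1, y1, y0)         # only one potential outcome is observed

# Difference-in-means estimate of the average treatment effect;
# it should be close to the true effect of 0.5
mean(y_obs[treat == 1]) - mean(y_obs[treat == 0])

# The same estimate as a regression with conventional standard errors
summary(lm(y_obs ~ treat))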

 

Day 2: Case Studies, Case Selection, and Qualitative Evidence 

Lecture Readings: 

  1. Glynn, Adam N. and Nahomi Ichino. 2015. “Using Qualitative Information to Improve Causal Inference.” American Journal of Political Science 59(4):1055-1071. https://doi.org/10.1111/ajps.12154
  2. Seawright, Jason. 2016. “The Case for Selecting Cases That Are Deviant or Extreme on the Independent Variable.” Sociological Methods & Research 45(3):493-525. https://doi.org/10.1177%2F0049124116643556

 

Discussion/Applied Readings: 

  1. Eck, Kristine and Christopher J. Fariss. 2018. “Ill Treatment and Torture in Sweden: A Critique of Cross-Case Comparisons.” Human Rights Quarterly 40(3):591-604. https://doi.org/10.1353/hrq.2018.0033
  2. Geddes, Barbara. 1990. “How the Cases You Choose Affect the Answers You Get.” Political Analysis 2:131-150. https://doi.org/10.1093/pan/2.1.131

Supplementary Readings: 

  1. Abadie, Alberto, Alexis Diamond and Jens Hainmueller. 2014. “Comparative Politics and the Synthetic Control Method.” American Journal of Political Science 59(2):495-510. https://doi.org/10.1111/ajps.12116
  2. Lijphart, Arend. 1971. “Comparative Politics and the Comparative Method.” American Political Science Review 65(3):682-693. https://doi.org/10.2307/1955513
  3. Lustick, Ian S. 1996. “History, Historiography, and Political Science: Multiple Historical Records and the Problem of Selection Bias.” American Political Science Review 90(3):605-618. https://doi.org/10.2307/2082612
  4. Mosley, Layna. 2013. “‘Just Talk to People’? Interviews in Contemporary Political Science.” In Interview Research in Political Science, edited by Layna Mosley. Cornell University Press.
  5. Nielsen, Richard. 2016. “Case Selection via Matching.” Sociological Methods and Research 45(3):569-597. https://doi.org/10.1177%2F0049124114547054
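
Illustrative R Sketch:

As a minimal sketch of the case-selection logic in the readings above (simulated data and hypothetical variable names; not an assigned exercise), the snippet below selects cases that are extreme on the independent variable and cases that are deviant relative to a simple bivariate regression.

# Algorithmic case selection on simulated cross-sectional data
set.seed(42)

n <- 50
dat <- data.frame(
  case = paste0("case_", seq_len(n)),
  x = rnorm(n)                              # hypothetical independent variable
)
dat$y <- 0.8 * dat$x + rnorm(n, sd = 0.7)   # hypothetical outcome

# Extreme-on-x cases: largest absolute deviations from the mean of x
extreme_cases <- dat[order(abs(dat$x - mean(dat$x)), decreasing = TRUE), ][1:3, ]

# Deviant cases: largest absolute residuals from a bivariate regression
fit <- lm(y ~ x, data = dat)
dat$abs_resid <- abs(resid(fit))
deviant_cases <- dat[order(dat$abs_resid, decreasing = TRUE), ][1:3, ]

extreme_cases
deviant_cases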

 

 

Day 3: Quasi-Experimental Designs: Non-Equivalent Group Designs 

Lecture Readings: 

  1. Trochim and Donnelly. Ch 9: “Quasi-Experimental Design” 
  2. Trochim and Donnelly. Ch 10: “Advanced Design Topics.” 

 

Discussion Readings: 

  1. Card, David, and Alan B. Krueger. 1994. “Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania.” American Economic Review 84(4):772-793. https://doi.org/10.3386/w4509
  2. Lyall, Jason. 2010. “Are Co-Ethnics More Effective Counter-Insurgents? Evidence from the Second Chechen War.” American Political Science Review 104(1):1-20. https://doi.org/10.1017/S0003055409990323 

 

Supplementary Readings: 

  1. Campbell, Donald T. and H. Laurence Ross. 1968. “Analysis of Data on the Connecticut Speeding Crackdown as a Time-Series Quasi-Experiment.” Law and Society Review 3(1):55-76. https://doi.org/10.2307/3052794 
  2. LaLonde, Robert. 1986. “Evaluating the Econometric Evaluations of Training Programs.” American Economic Review 76:604-620.
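
Illustrative R Sketch:

As a minimal sketch of a non-equivalent group design (simulated data, not Card and Krueger's study; variable names are hypothetical), the snippet below computes a two-group, two-period difference-in-differences estimate by hand and with a regression interaction.

# Simulated two-group, two-period difference-in-differences
set.seed(7)

d <- expand.grid(unit = 1:200,
                 group = c(0, 1),   # 1 = non-equivalent "treated" group
                 post  = c(0, 1))   # 1 = period after the policy change

true_effect <- 2
d$y <- 10 + 3 * d$group + 1 * d$post +
  true_effect * d$group * d$post + rnorm(nrow(d), sd = 2)

# Difference-in-differences by hand:
# (treated post - treated pre) - (control post - control pre)
m <- with(d, tapply(y, list(group, post), mean))
(m["1", "1"] - m["1", "0"]) - (m["0", "1"] - m["0", "0"])

# Equivalent regression: the coefficient on group:post is the estimate
summary(lm(y ~ group * post, data = d))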

 

Day 4: Quasi-Experimental Designs: Regression Discontinuity and Instrumental Variables 

Lecture Readings: 

  1. Imbens, Guido and Thomas Lemieux. 2008. “Regression Discontinuity Designs: A Guide to Practice.” Journal of Econometrics 142:615-635. https://doi.org/10.3386/w13039 
  2. Sovey, Allison J., and Donald P. Green. 2010. “Instrumental Variables Estimation in Political Science: A Reader’s Guide.” American Journal of Political Science 55(1):188-200. https://doi.org/10.1111/j.1540-5907.2010.00477.x 

 

Discussion Readings: 

  1. Conrad, Courtenay R., and Emily Hencken Ritter. 2016. “Preventing and Responding to Dissent: The Observational Challenges of Explaining Strategic Repression.” American Political Science Review 110(1):85-99. https://doi.org/10.1017/S0003055415000623 
  2. Snyder, James M., Olle Folke, and Shigeo Hirano. 2015. “Partisan Imbalance in Regression Discontinuity Studies Based on Electoral Thresholds.” Political Science Research and Methods 3(2):169-186. https://doi.org/10.1017/psrm.2014.31

 

Supplementary Readings: 

  1. Lee, David S. and Thomas Lemieux. 2010. “Regression Discontinuity Designs in Economics.” Journal of Economic Literature 48(2):281-355.
  2. Keele, Luke J. and Rocio Titiunik. 2015. “Geographic Boundaries as Regression Discontinuities.” Political Analysis 23(1):127-155. https://doi.org/10.1093/pan/mpu014
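
Illustrative R Sketch:

As a minimal sketch of a sharp regression discontinuity design (simulated data; variable names are hypothetical, and real applications should use data-driven bandwidths and robust inference as described in the readings above), the snippet below estimates the jump at the cutoff with a local linear regression.

# Simulated sharp regression discontinuity design
set.seed(99)

n <- 2000
running <- runif(n, -1, 1)                  # running (forcing) variable
cutoff <- 0
treat <- as.numeric(running >= cutoff)      # deterministic treatment rule

true_jump <- 1.5
y <- 0.5 * running + true_jump * treat + rnorm(n, sd = 1)

# Keep only observations within a fixed bandwidth of the cutoff
bw <- 0.25
keep <- abs(running - cutoff) <= bw
d <- data.frame(y = y[keep],
                treat = treat[keep],
                centered = running[keep] - cutoff)

# Local linear regression with separate slopes on each side of the cutoff;
# the coefficient on treat estimates the discontinuity (about 1.5 here)
summary(lm(y ~ treat * centered, data = d))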

 

Day 5: Measurement Designs: Data, Validity, and Reliability 

Lecture Readings: 

  1. Trochim and Donnelly. Ch 3: “The Theory of Measurement.” 
  2. Borsboom, Denny, Gideon J. Mellenbergh, and Jaap van Heerden. 2003. “The Theoretical Status of Latent Variables” Psychological Review 110(2):203-219. https://doi.org/10.1037/0033-295X.110.2.203 

 

Discussion Readings: 

  1. Baerg, Nicole and Will Lowe. Forthcoming. “A textual Taylor rule: estimating central bank preferences combining topic and scaling methods” Political Science Research and Methods. https://doi.org/10.1017/psrm.2018.31 
  2. Lo, James, Sven-Oliver Proksch and Jonathan B. Slapin. 2016. “Ideological Clarity in Multiparty Competition: A New Measure and Test Using Election Manifestos” British Journal of Political Science 46(3):591-610. https://doi.org/10.1017/S0007123414000192 

 

Supplementary Readings: 

  1. Adcock, Robert, and David Collier. 2001. “Measurement Validity: A Shared Standard for Qualitative and Quantitative Research.” American Political Science Review 95(3):529-546. https://doi.org/10.1017/S0003055401003100
  2. Borsboom, Denny. 2005. Measuring the Mind. Cambridge: Cambridge University Press. Ch 1: “Introduction”, Ch 3: “Latent variables”, and Ch 6: “The concept of validity”.
  3. Fariss, Christopher J. 2019. “Yes, Human Rights Practices Are Improving Over Time.” American Political Science Review 113(3):868-881. https://doi.org/10.1017/S000305541900025X
  4. Hand, D. J. 1996. “Statistics and the Theory of Measurement.” Journal of the Royal Statistical Society, Series A (Statistics in Society) 159(3):445-492. https://doi.org/10.2307/2983326
  5. Grimmer, Justin and Brandon M. Stewart. 2013. “Text as Data: The Promise and Pitfalls of Automatic Content Analysis Methods for Political Texts.” Political Analysis 21(3):267-297. https://doi.org/10.1093/pan/mps028
  6. Jackman, Simon. 2008. “Measurement.” In The Oxford Handbook of Political Methodology, edited by Janet M. Box-Steffensmeier, Henry E. Brady, and David Collier. Oxford University Press.
  7. Poole, Keith T. and Howard Rosenthal. 1991. “Patterns of Congressional Voting.” American Journal of Political Science 35(1):228-278. https://doi.org/10.2307/2111445
  8. Stevens, S. S. 1946. “On the Theory of Scales of Measurement.” Science 103(2684):677-680. https://doi.org/10.1126/science.103.2684.677
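
Illustrative R Sketch:

As a minimal sketch of measurement reliability and validity (simulated data; variable names are hypothetical), the snippet below generates several noisy indicators of one latent trait, computes Cronbach's alpha by hand as a reliability check, and, because the latent trait is known here by construction, correlates the summed scale with it as a crude validity check.

# Simulated indicators of a single latent trait
set.seed(2019)

n <- 500
k <- 5
theta <- rnorm(n)                           # latent trait (unobserved in practice)

# Each indicator = latent trait + item-specific measurement error
items <- sapply(seq_len(k), function(j) theta + rnorm(n, sd = 1))
colnames(items) <- paste0("item_", seq_len(k))

# Cronbach's alpha from item variances and the total-score variance
item_var <- apply(items, 2, var)
total_var <- var(rowSums(items))
alpha <- (k / (k - 1)) * (1 - sum(item_var) / total_var)
alpha

# Crude validity check: how well the summed scale recovers the latent trait
cor(rowSums(items), theta)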