Chris Fariss is an Assistant Professor in the Department of Political Science at the University of Michigan. Prior to beginning this appointment, he was the Jeffrey L. Hyde and Sharon D. Hyde and Political Science Board of Visitors Early Career Professor in the Department of Political Science at Penn State University. In June 2013, he graduated with a Ph.D. in political science from the University of California, San Diego. He also studied at the University of North Texas, where he earned an M.S. in political science (2007), a B.F.A. in drawing and painting (2005), and a B.A. in political science (2005).
His core research focuses on the politics and measurement of human rights, discrimination, violence, and repression. Chris uses computational methods to understand why governments around the world torture, maim, and kill individuals within their jurisdiction and the processes monitors use to observe and document these abuses.
Other projects cover a broad array of themes but share a focus on computationally intensive methods and research design. These methodological tools, essential for analyzing data at massive scale, open up new insights into the micro-foundations of state repression and the politics of measurement. Below you will find links to his publications, working papers, and teaching material, a Dataverse archive where you can access replication data, and links to human rights data generated from several measurement projects.
This course focuses on the research designs and computational tools used to explore and understand social data. The fundamentals of research design are the same throughout the social sciences; however, the topical focus of this class is on computationally intensive data-generating processes and the research designs used to understand and manipulate such data at scale. By massive or large scale, I mean that (1) there are many subjects/connections/units/rows in the data (e.g., social network data like the kind available from Facebook or Twitter); (2) there are many variables/items/columns in the data (e.g., text data with many thousands of columns representing the words in a document corpus); (3) the selected analytical tool is a computationally complex algorithm (e.g., a Bayesian simulation for modeling a latent variable or a random forest model for exploratory data analysis); or some combination of these three. The course will provide students with the tools to design observational studies and experimental interventions into large and unstructured social media datasets at increasingly massive scales and at varying degrees of computational complexity.
Students will learn how to design studies that take advantage of the wealth of information contained in new massive-scale online datasets, such as data available from Facebook, Twitter, and the many newly digitized document corpora now available online. The focus of the course is on designing studies in such a way as to maximize the validity of inferences obtained from these complex datasets.
Students should have some familiarity with concepts from research design and statistics. Generally, exposure to these concepts occurs during the first-year course sequence of a typical Ph.D. program in political science. Students should also have at least some exposure to the R computing environment; the more familiarity with R, the better.
Required Reading Material
1. Matloff, Norman. 2011. The Art of R Programming: A Tour of Statistical Software Design. No Starch Press.
2. Hastie, Trevor, Robert Tibshirani, and Jerome Friedman. 2009. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. 2nd ed. Springer Series in Statistics.
Background knowledge required
OLS = m
Maximum Likelihood = m
R = m
(Key: e = elementary, m = moderate, s = strong)