Please note: This course will be taught in hybrid mode. Hybrid delivery of courses will include synchronous live sessions during which on campus and online students will be taught simultaneously.

Rob Johns is Professor of Politics in the Department of Politics & International Relations at the University of Southampton. He has taught at the Essex Summer School since 2012. Rob’s research is in the fields of public opinion, political psychology and questionnaire design. He has run several major survey projects and published numerous articles and books based on analyses of public opinion data. Current research explores the connections between mental health and political attitudes, citizens’ understanding of and reactions to the notion of human rights, the drivers of support for Scottish independence, and public perceptions of the distinction between facts and opinions.

Course Content

This course is a guide to designing, implementing and analysing a survey. It covers all the key decisions that survey researchers have to make: how to measure complex concepts, how to reduce bias in question design, how to choose a sample and access respondents, and how to adjust for non-response. We will pay particular attention to the ways in which real-world surveys often have to deviate from the pure methodology of the textbook, and use participants' own data collection projects to illustrate how this need not be a major problem.
This is largely a classroom-based course, but there will be some lab work during which participants will use the market-leading survey platform Qualtrics, and some group activity during which participants will design their own questionnaires and evaluate each other's.

Course Objectives

The straightforward objective of this course is to equip participants with the theoretical insights and practical knowledge needed to run their own survey. By understanding both the ideals of unbiased data collection and the ways in which real-world surveys tend to deviate from that ideal, participants will also become adept at evaluating their own and others' survey designs and analyses. These skills are greatly prized by employers, given the omnipresence of surveys, and are essential for those planning surveys for their own academic or other research. Participants will also acquire a detailed working knowledge of Qualtrics, one of the more versatile and popular platforms for designing and fielding online surveys. In addition to boosting participants' current skills, this course serves as a springboard for the course in Advanced Survey Data Analysis and Survey Experiments typically available in Session 2 of the Essex Summer School.

Course Prerequisites

This is an introductory course. Participants are not assumed to have taken any previous class in survey methodology, and there will be a full introduction to Qualtrics, the survey design platform we use.

Nor are participants required to do any prior reading. Those who would like a sneak preview of the topics and debates covered could look over one of the following (or one of the numerous other survey methodology texts available):

Eichhorn, J. (2022). Survey Research and Sampling. London: SAGE Publications.

De Vaus, D. (2014). Surveys in Social Research (6th edn.). New York: Routledge. (This text will be provided by ESS.)

Rea, L., and Parker, R. (2014). Designing and Conducting Survey Research: A Comprehensive Guide (4th edn.). San Francisco: Jossey-Bass.

Weisberg, H. (2005). The Total Survey Error Approach. Chicago: University of Chicago Press.

Day 1: Surveys, research designs and research questions

Course admin; introductions; defining and delimiting surveys; surveys and research designs; surveys and the quant-qual distinction; surveys and research questions, including participants’ own agendas; surveys and causal inference


Day 2: How people answer survey questions

Lecture: cognitive process of survey response; cognitive interviewing; practical exercises; optimising vs. satisficing; respondent ability; respondent motivation; task difficulty; the effort-validity relationship; knowledge, ignorance and survey response; nonattitudes

Qualtrics lab: Intro to Qualtrics; question types; skip/display logic


Day 3: Designing questions measuring behaviour

Lecture: evaluating question designs; reliability and validity; measuring the effects of question wording; asking for factual information; measuring knowledge; recall of behaviour; open vs. closed questions; designing response scales; social desirability; measuring future behaviour

Qualtrics lab: non-response options; filtering; randomisation; survey flow


Day 4: Designing questions measuring attitudes

Lecture: reliability, validity and attitudes; nonattitudes and don’t know options; open vs. closed questions; rankings vs. ratings; designing response scales; neutral options; balanced questions and acquiescence; response order effects; question order effects; social desirability and attitude questions; wording effects; true attitudes vs. attitudes as constructions

Practical session: first drafts of individual/group survey questions


Day 5: Recoding and scaling variables

Lecture: levels of measurement; reasons for recoding; defining missing data; coding open-ended questions; multiple-item measurement; conceptual and operational definitions; developing indicators; Likert scales; factor analysis and dimensionality; reliability and validity

Qualtrics lab: frequency tables and graphs; coding and recoding; correlations


Day 6: Survey experiments

Lecture: defining survey experiments; typology of survey experiments; examples of each type; external validity; pre-treatment; sampling; between-subject vs. within-subject designs; designing treatments; causality issues; embedding experiments and survey order; analysing experimental data

Qualtrics lab: programming experiments in Qualtrics; graphing experimental results


Day 7: Sampling in theory and practice

Lecture: sampling: what it is and why we do it; the sampling distribution; standard errors and confidence intervals; inferential statistics; sampling variance vs. sampling bias; probability vs. non-probability samples; sampling frames; sample sizes

Practical session: pre-testing individual/group questionnaires


Day 8: Mode of data collection

Lecture: the basics: face-to-face, online panels, telephone, mail; representativeness of samples; response rates and cost per respondent; speed of data collection; respondent motivation; flexibility of design; social desirability; maximising attentiveness across modes; maximising response rates

Practical session: peer-reviewing questionnaires


Day 9: Non-response, weighting and imputation

Lecture: estimating and presenting response rates; unit non-response: causes and consequences; measuring non-response bias; principles of weighting; weighting vs. post-stratification; sampling and population-based weighting; weighting variables; when weights become too heavy; alternative forms of weighting; item non-response: causes and consequences; imputation; common methods of imputation; arguments for and against imputation

Lab: weighting in Qualtrics; weighting in major surveys