Cartoon of Mwizi (thief) manipulating election results (matokeo ya urais).
We regularly hear that during elections in semi-democratic regimes, incumbents exploit massive advantages by fueling private goods networks with public money. Whether buying votes, engaging in other forms of clientelism, stuffing ballot boxes, or even using troops to keep voters away from the polls, these sorts of activities damage electoral integrity and democratic legitimacy. Problematically, they also prove subject to substantial response bias, as individuals have incentives to misreport when asked about such sensitive topics and behaviors.
In this piece in the official newsletter of the APSA Experimental Section, we offer practical advice for carrying out list experiments in developing countries, drawing upon our shared experience of implementing eleven list experiments in Kenya and Tanzania from 2009 to 2012. The piece briefly summarizes results that we expand upon in the paper discussed immediately below.
"Reducing Response Bias in Developing Countries: Improving List Experiment Performance for Measuring Vote-Buying and Political Violence," in preparation for submission for review (with Eric Kramon)
List experiments offer a promising way to reduce response bias and are increasingly popular in developing country settings for studying political violence and clientelism. The technique provides greater respondent anonymity by placing the sensitive item among a list of non-sensitive items and asking respondents to report only the total number of applicable items. The cost of this additional anonymity, however, is greater complexity than conventional question formats. In this paper, we explore the empirical properties of the list experiment in Tanzania and Kenya. We begin by presenting evidence that respondents engage in satisficing, a response effect in which individuals do not put forth the full effort required to evaluate a survey question and instead simply report a response they deem reasonable or satisfactory. We then test two modifications of the list experiment that reduce the effort required to respond fully to list experiment questions: (1) providing respondents with laminated copies of the lists and a dry erase marker, and (2) providing respondents with cartoon handouts corresponding to the items on the lists. We find that both techniques effectively reduce the prevalence of satisficing.
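The estimator behind the technique described above is a simple difference in means: the treatment group sees the list with the sensitive item included, the control group sees only the non-sensitive items, and the gap between the groups' average item counts estimates the prevalence of the sensitive behavior. A minimal sketch, using illustrative made-up responses rather than data from the paper:

```python
def list_experiment_estimate(treatment_counts, control_counts):
    """Difference-in-means estimate of sensitive-item prevalence.

    treatment_counts: item totals from respondents shown the list
                      that includes the sensitive item.
    control_counts:   item totals from respondents shown only the
                      non-sensitive items.
    """
    mean_t = sum(treatment_counts) / len(treatment_counts)
    mean_c = sum(control_counts) / len(control_counts)
    return mean_t - mean_c

# Illustrative responses (number of applicable items per respondent)
control = [1, 2, 2, 3, 1, 2]    # list of 3 non-sensitive items
treatment = [2, 2, 3, 3, 2, 3]  # same list plus the sensitive item

print(round(list_experiment_estimate(treatment, control), 3))  # 0.667
```

Here the treatment mean (2.5) exceeds the control mean (about 1.83) by roughly 0.67, suggesting about two-thirds of respondents hold the sensitive item, while no individual response reveals any one respondent's answer.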
This paper presents results from a classroom experiment carried out at the University of Dar es Salaam. The study aimed to establish a baseline of sensitivity biases on political topics in Tanzania. Using a split-thirds design, I compare estimates from direct questions about topics like political violence to two alternative question formats designed to increase respondent honesty by presenting sensitive items in less obtrusive ways: the list experiment and the randomized response technique. The paper finds that even for university-educated students, the complexity of the randomized response technique may be too high. The list experiment, however, masked individuals' responses and thus reduced the downward bias present in direct question formats. The technique yielded higher self-reported support for political violence, stronger beliefs that incumbents misuse state repressive forces, and more widespread views that elections are manipulated by the party in power---all by substantively and statistically significant amounts.
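The complexity the paper attributes to the randomized response technique comes from its randomizing device: the respondent privately applies a chance mechanism (a coin flip, say) that dictates whether to answer truthfully or give a forced answer, so the interviewer never knows which kind of answer any individual gave. A minimal sketch of the estimator for a forced-response variant, with illustrative parameters that are not the paper's exact protocol:

```python
def forced_response_estimate(yes_rate, p_truth=0.5, p_forced_yes=1.0):
    """Recover true prevalence from the observed 'yes' rate.

    Under a forced-response design, each respondent answers
    truthfully with probability p_truth; otherwise they are forced
    to say 'yes' with probability p_forced_yes. The observed rate is
        yes_rate = p_truth * pi + (1 - p_truth) * p_forced_yes,
    which we invert to solve for the true prevalence pi.
    """
    return (yes_rate - (1 - p_truth) * p_forced_yes) / p_truth

# Illustrative: 65% say 'yes' under a fair-coin, forced-'yes' design
print(round(forced_response_estimate(0.65), 2))  # 0.3
```

The design protects individual respondents (any single "yes" may be a forced answer), but as the paper suggests, respondents must understand and trust the mechanism for the protection to translate into honest answers, which is where the technique's cognitive burden bites.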