Cartoon of Mwizi ("thief") manipulating presidential election results (matokeo ya urais).
We regularly hear that during elections in semi-democratic regimes, incumbents exploit massive advantages by fueling private goods networks with public money. Whether buying votes, engaging in other forms of clientelism, stuffing ballot boxes, or even using troops to keep voters away from the polls, these sorts of activities damage electoral integrity and democratic legitimacy. Problematically, they also prove subject to substantial response biases, as individuals have incentives to misreport information on questions about such sensitive topics and behaviors. Further, data about electoral politics are commonly collected cross-sectionally, and the timing of data collection frequently covaries with many other important political variables that fluctuate with the election calendar.
List experiments (LEs) are an increasingly popular survey research tool for measuring sensitive attitudes and behaviors. However, there is evidence that list experiments sometimes produce unreasonable estimates. Why do list experiments “fail,” and how can their performance be improved? Using evidence from Kenya, we hypothesize that the length and complexity of the LE format make it costlier for respondents to complete and thus prone to comprehension and reporting errors. First, we show that list experiments encounter difficulties even with simple, nonsensitive lists about food consumption and daily activities: over 40 percent of respondents provide inconsistent responses between list experiment and direct question formats. These errors are concentrated among less numerate and less educated respondents, offering evidence that they are driven by the complexity and difficulty of list experiments. Second, we examine list experiments measuring attitudes about political violence. The standard list experiment reveals lower rates of support for political violence than simply asking directly about this sensitive attitude, which we interpret as list experiment breakdown. We evaluate two modifications designed to reduce the list experiment's complexity: private tabulation and cartoon visual aids. Both greatly enhance list experiment performance, especially among the respondent subgroups where the standard procedure is most problematic. The paper makes two key contributions: (1) showing that techniques such as the list experiment, which hold promise for reducing response bias, can introduce different forms of error associated with question complexity and difficulty; and (2) demonstrating the effectiveness of easy-to-implement solutions to the problem.
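To make the mechanics concrete, the standard list experiment estimate is a simple difference in mean item counts between a treatment group (which sees the sensitive item added to a baseline list) and a control group (which sees only the baseline list). The sketch below simulates this under assumed parameters (the item count `J`, the baseline endorsement rate, and `TRUE_PREVALENCE` are all hypothetical, not values from the paper):

```python
# Minimal sketch of the standard list experiment (difference-in-means)
# estimator under assumed parameters; not the paper's actual data or design.
import random

random.seed(0)

J = 4                    # number of nonsensitive baseline items (assumed)
TRUE_PREVALENCE = 0.30   # assumed share holding the sensitive attitude

def simulate_respondent(treated: bool) -> int:
    """Return a respondent's reported total item count."""
    # Each baseline item is endorsed independently with probability 0.5.
    baseline = sum(random.random() < 0.5 for _ in range(J))
    # Only treatment-group respondents can add the sensitive item.
    sensitive = treated and (random.random() < TRUE_PREVALENCE)
    return baseline + int(sensitive)

control = [simulate_respondent(False) for _ in range(2000)]
treatment = [simulate_respondent(True) for _ in range(2000)]

# Prevalence estimate: mean count (treatment) minus mean count (control).
estimate = sum(treatment) / len(treatment) - sum(control) / len(control)
print(f"Estimated prevalence: {estimate:.3f}")
```

Because no respondent reports which specific items they endorse, the design masks individual answers while still recovering the aggregate rate; the comprehension burden the paper identifies comes from respondents having to tabulate a total count in their heads.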
"Panel Survey Attrition in sub-Saharan Africa: The Promise of the Mobile Revolution" Preparing for Submission (with Sterling Roop)
Longitudinal research on citizen-level outcomes is critical to studying development in sub-Saharan Africa. However, panel research designs face attrition concerns that are particularly acute there, where poor infrastructure presents significant logistical and financial barriers to recontacting and tracing respondents. The explosion of cellular access across the sub-continent in the last decade makes mobile phones a promising alternative to face-to-face panel interviews, but they may also introduce unexpected sources of attrition. In this paper, we study attrition in Wasemavyo Wazanzibari (WWz), a 30-round mobile-based panel survey carried out from 2013 to 2016 in Zanzibar. We assess three strategies we implemented to address attrition: (1) distributing mobile phones and solar chargers, (2) having an elected respondent leader in each enumeration area, and (3) compensation that varied over time. We find that distributing phones and solar chargers offsets logistical barriers to locating respondents, though not for respondents who already owned a phone and used it frequently. Our results also show that respondent-leader and other community-level attributes, as well as the level of compensation, drive higher rates of participation.
In this piece in the official newsletter of the APSA Experimental Section, we offer practical advice for carrying out list experiments in developing countries, drawing upon our shared experience implementing eleven list experiments in Kenya and Tanzania from 2009 to 2012. The piece briefly summarizes the results we expand upon in our paper discussed above ("(Mis)Measuring.."), which is under review.
This paper presents results from a classroom experiment carried out at the University of Dar es Salaam. The study aimed to establish a baseline of sensitivity biases on political topics in Tanzania. Using a split-thirds design, I compare estimates from direct questions about topics like political violence to two alternative question formats designed to increase respondent honesty by presenting the sensitive items in less obtrusive ways: the list experiment and the randomized response technique. The paper finds that even for university-educated students, the complexity of the randomized response technique may be too high. The list experiment, however, masked individuals' responses and thus reduced the downward bias present with direct question formats: it yielded higher self-reported support for political violence, stronger beliefs that incumbents misuse state repressive forces, and more widespread views that elections are manipulated by the party in power---all by substantively and statistically significant amounts.
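For intuition about why the randomized response technique demands more of respondents, consider the classic Warner (1965) variant, in which a randomizing device directs each respondent to affirm or deny either the sensitive statement or its negation. The sketch below is a hypothetical illustration of that estimator under assumed parameters; the paper's exact RRT design and values are not specified here:

```python
# Hypothetical sketch of Warner's randomized response estimator; the
# probabilities and prevalence below are assumptions for illustration only.
import random

random.seed(1)

p = 0.7                 # assumed probability the device selects the sensitive statement
TRUE_PREVALENCE = 0.25  # assumed true rate of the sensitive attitude

def rrt_answer() -> int:
    """Return 1 ("yes") or 0 ("no") under the Warner design."""
    holds = random.random() < TRUE_PREVALENCE
    asked_sensitive = random.random() < p
    # Respondent truthfully answers the statement the device selected:
    # the sensitive statement with probability p, its negation otherwise.
    return int(holds if asked_sensitive else not holds)

answers = [rrt_answer() for _ in range(5000)]
lam = sum(answers) / len(answers)        # observed "yes" rate
# Warner estimator: lam = p*pi + (1 - p)*(1 - pi), solved for pi.
pi_hat = (lam - (1 - p)) / (2 * p - 1)
print(f"Estimated prevalence: {pi_hat:.3f}")
```

No individual answer reveals which statement was posed, so the respondent is protected; but correctly following the randomizing device and answering the negated statement is cognitively demanding, which is consistent with the paper's finding that RRT complexity may be too high even for university-educated respondents.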