Cartoon of Mwizi (thief) manipulating presidential election results (matokeo ya urais).
We regularly hear that during elections in semi-democratic regimes, incumbents exploit massive advantages by fueling private-goods networks with public money. Whether buying votes, engaging in other forms of clientelism, stuffing ballot boxes, or even using troops to keep voters away from the polls, these sorts of activities damage electoral integrity and democratic legitimacy. Problematically, they are also subject to substantial response biases, as individuals have incentives to misreport information when asked about such sensitive topics and behaviors. Further, data about electoral politics are commonly collected cross-sectionally, and the timing of data collection frequently covaries with many other important political variables that fluctuate with the election calendar.
List experiments offer a promising way to reduce response bias and are increasingly popular in developing-country settings for studying political violence and clientelism. The technique provides greater respondent anonymity by placing the sensitive item amongst a list of non-sensitive items and asking respondents to report only the total number of applicable items. The cost of this additional anonymity, however, is greater complexity than conventional question formats. In this paper, we explore the empirical properties of the list experiment in Tanzania and Kenya. We begin by presenting evidence that respondents engage in satisficing, a response effect in which individuals do not put forth the full effort required to evaluate a survey question and instead simply report a response they deem reasonable or satisfactory. We then test two modifications of the list experiment that reduce the effort required to fully respond to list experiment questions: (1) providing respondents with laminated copies of the lists and a dry-erase marker, and (2) providing respondents with cartoon handouts corresponding to the items included on the lists. We find that both techniques effectively reduce the prevalence of satisficing.
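Because respondents report only a total count, the prevalence of the sensitive item is recovered as the difference in mean counts between a treatment group (whose list includes the sensitive item) and a control group (whose list does not). A minimal simulation sketch of that difference-in-means estimator, with hypothetical parameters (three non-sensitive items, 30% true prevalence) chosen purely for illustration:

```python
import random

random.seed(0)

def simulate_list_experiment(n=10_000, true_prevalence=0.3, n_control_items=3):
    """Simulate a list experiment: the control group sees only non-sensitive
    items; the treatment group sees the same items plus the sensitive one.
    Each respondent reports only the count of items that apply."""
    control, treatment = [], []
    for _ in range(n):
        # Control respondent: count of applicable non-sensitive items.
        control.append(sum(random.random() < 0.5 for _ in range(n_control_items)))
        # Treatment respondent: same baseline, plus 1 if the sensitive
        # item applies to them.
        base = sum(random.random() < 0.5 for _ in range(n_control_items))
        holds_sensitive = random.random() < true_prevalence
        treatment.append(base + holds_sensitive)
    # Difference in mean counts estimates the sensitive item's prevalence.
    return sum(treatment) / n - sum(control) / n

est = simulate_list_experiment()
print(f"estimated prevalence: {est:.3f}")  # close to the true 0.3
```

The sketch also illustrates the technique's cost: the estimator is unbiased but noisy relative to a direct question, since the non-sensitive items add variance to each reported count.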
"Panel Survey Attrition in sub-Saharan Africa: The Promise of the Mobile Revolution" Preparing for Submission (with Sterling Roop)
Longitudinal research on citizen-level outcomes is critical to studying development in sub-Saharan Africa. However, panel research designs face attrition concerns that are particularly acute there, where poor infrastructure presents significant logistical and financial barriers to recontacting and tracing respondents. The explosion of cellular access across the sub-continent in the last decade makes mobile phones a promising alternative to face-to-face panel interviews, but one that may also introduce unexpected sources of attrition. In this paper, we study attrition in Wasemavyo Wazanzibari (WWz), a 30-round mobile-based panel survey carried out from 2013-2016 in Zanzibar. We assess three strategies we implemented to address attrition: (1) distributing mobile phones and solar chargers, (2) electing a respondent leader from each enumeration area, and (3) varying compensation over time. We find that distributing mobile phones and solar chargers offsets logistical barriers to locating respondents, but not for respondents who already owned a phone and used it frequently. Our results also show that respondent-leader and other community-level attributes, as well as the level of compensation for participation, drive higher rates of participation.
In this piece in the official newsletter of the APSA Experimental Section, we offer practical advice for carrying out list experiments in developing countries, drawing upon our shared experience implementing eleven list experiments in Kenya and Tanzania from 2009-2012. The piece briefly summarizes the results that we expand upon in our paper discussed above ("(Mis)Measuring.."), which is under review.
This paper presents results from a classroom experiment carried out at the University of Dar es Salaam. The study aimed to establish a baseline of sensitivity biases regarding political topics in Tanzania. Using a split-thirds design, I compare estimates from direct questions about topics like political violence to two alternative question formats designed to increase respondent honesty by presenting the sensitive items less obtrusively: the list experiment and the randomized response technique. The paper finds that even for university-educated students, the complexity of the randomized response technique may be too high. However, the list experiment masked individuals' responses and thus reduced the downward bias present with direct question formats. The technique yielded higher self-reported support for political violence, stronger beliefs that incumbents misuse state repressive forces, and greater agreement that elections are manipulated by the party in power---all by substantively and statistically significant amounts.
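The paper does not specify here which randomized response variant was fielded; as one common illustration, the forced-response design has a randomizing device (e.g., a die) instruct each respondent to answer truthfully, to say "yes", or to say "no", and the analyst backs out prevalence from the known device probabilities. A simulation sketch under those assumed (hypothetical) parameters:

```python
import random

random.seed(1)

def simulate_forced_response(n=10_000, true_prevalence=0.25,
                             p_truth=1/3, p_forced_yes=1/3):
    """Forced-response randomized response technique: a randomizing device
    privately tells each respondent to answer truthfully (prob p_truth),
    say 'yes' (prob p_forced_yes), or say 'no' (remaining probability)."""
    yes = 0
    for _ in range(n):
        truly_holds = random.random() < true_prevalence
        r = random.random()
        if r < p_truth:
            yes += truly_holds          # honest answer
        elif r < p_truth + p_forced_yes:
            yes += 1                    # forced 'yes'
        # else: forced 'no'
    p_yes = yes / n
    # P(yes) = p_truth * prevalence + p_forced_yes, so invert:
    return (p_yes - p_forced_yes) / p_truth

est = simulate_forced_response()
print(f"estimated prevalence: {est:.3f}")  # close to the true 0.25
```

The inversion step is simple arithmetic, but the procedure a respondent must follow (roll, remember the rule, answer accordingly) is exactly the kind of cognitive burden the paper finds too high even for university-educated respondents.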