Interdisciplinary Workshops on Politics and Policy Archive 2020
About the workshops
Interdisciplinary Workshops on Politics and Policy are weekly seminars hosted by the Center for Political Studies. Speakers present current research on a wide range of topics. Abstracts of past workshops are archived below.
Data Science, History, and US Politics
September 23, 2020 | Noon to 1:00 PM EDT
David Shor, Director of Political Data Science at Future Forward USA
Shor draws on over one million survey responses and machine learning to examine what happened in the 2016 and 2018 elections, why nobody saw Trump coming, and how data science is being used in the 2020 election.
Testing Cannot Tell Whether Ballot-Marking Devices Alter Election Outcomes
September 30, 2020 | Noon to 1:00 PM EDT
Philip Stark, University of California, Berkeley
Like all computerized systems, ballot-marking devices (BMDs) can be hacked, misprogrammed, and misconfigured. BMD printout might not reflect what the BMD screen or audio conveyed to the voter. If voters complain that BMDs misbehaved, officials have no way to tell whether the BMDs malfunctioned, the voters erred, or the voters are attempting to cast doubt on the election.

Several approaches to testing BMDs have been proposed. In pre-election logic and accuracy (L&A) tests, trusted agents input known test patterns into the BMD and check whether the printout matches. In parallel or live testing, trusted agents use the BMDs on election day, emulating voters. In passive testing, trusted agents monitor the rate at which voters “spoil” ballots and request another opportunity to mark a ballot: an anomalously high rate might result from BMD malfunctions.

In practice, none of these methods can protect against outcome-altering problems. L&A testing is ineffective against malware in part because BMDs “know” the time and date of the test and of the election. Neither L&A nor parallel testing can probe even a small fraction of the combinations of voter preferences, device settings, ballot language, duration of voter interaction, input and output interfaces, and other variables that could affect enough votes to change outcomes. Under mild assumptions, developing a model of voter interactions with BMDs accurate enough for parallel tests to reliably detect changes to 5% of the votes (which could change margins by 10% or more) would require monitoring the behavior of more than a million voters per jurisdiction in minute detail, yet the median turnout by jurisdiction in the U.S. is under 3,000 voters, and two-thirds of U.S. jurisdictions have fewer than 43,000 active voters. Moreover, all voter privacy would be lost. Even given an accurate model of voter behavior, the number of tests required is still larger than the turnout in a typical U.S. jurisdiction.
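The combinatorial point can be made concrete with a back-of-the-envelope count. The per-factor numbers below are invented for illustration, not figures from the talk; even modest option counts multiply into a test matrix far larger than any L&A or parallel-testing budget.

```python
from math import prod

# Invented option counts for the test factors the abstract lists.
factors = {
    "voter preference patterns": 1000,  # distinct vote combinations on a long ballot
    "device settings": 8,
    "ballot languages": 5,
    "interaction duration": 4,          # e.g., slow / normal / fast / with corrections
    "input/output interfaces": 3,       # touchscreen, audio-tactile, sip-and-puff
}

# Each full test exercises one combination of all factors.
combinations = prod(factors.values())
print(combinations)  # 480000 distinct test conditions under these assumptions
```

Even under these conservative assumptions, exhaustively testing every condition would take hundreds of thousands of test sessions per device type, which is why L&A and parallel testing can only sample a sliver of the space.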
Even if less testing sufficed, it would require extra BMDs, new infrastructure for creating test interactions and reporting test results, additional polling-place staff, and more training. Under optimistic assumptions, passive testing that has a 99% chance of detecting a 1% change to the margin with a 1% false alarm rate is impossible in jurisdictions with fewer than about 1 million voters, even if the “normal” spoiled ballot rate were known exactly and did not vary from election to election and place to place. Passive testing would also require training and infrastructure to monitor the spoiled ballot rate in real time. And if parallel or passive testing discovers a problem, the only remedy is a new election: there is no way to reconstruct the correct election result from an untrustworthy paper trail. Minimizing the number of votes cast using BMDs is prudent election administration.
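The passive-testing arithmetic can be sketched with a standard normal-approximation sample-size formula for detecting an increase in a proportion. The baseline spoil rate, the fraction of affected voters who actually notice and spoil, and the resulting alternative rate are all illustrative assumptions, not numbers from the talk.

```python
from statistics import NormalDist

def min_sample_size(p0: float, p1: float, alpha: float = 0.01, power: float = 0.99) -> float:
    """Normal-approximation sample size for a one-sided test that a
    proportion has risen from p0 to p1, with false-alarm rate alpha
    and detection probability `power`."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha)  # critical value for a 1% false-alarm rate
    z_beta = z(power)       # value giving 99% detection probability
    num = z_alpha * (p0 * (1 - p0)) ** 0.5 + z_beta * (p1 * (1 - p1)) ** 0.5
    return (num / (p1 - p0)) ** 2

# Hypothetical inputs: a "normal" spoil rate of 1%, malfunctions that alter
# 0.5% of votes (a 1% change to the margin), and only one in ten affected
# voters noticing and spoiling their ballot.
p0 = 0.010
p1 = p0 + 0.005 * 0.10
n = min_sample_size(p0, p1)  # on the order of a million voters
```

With these inputs the required sample is in the high hundreds of thousands, consistent with the abstract's conclusion that such passive testing is infeasible in jurisdictions with fewer than about 1 million voters, and that is before accounting for spoil rates that vary across elections and polling places.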