Abstract
This article builds a theoretical framework with which to confront the racializing capabilities of artificial intelligence (AI)-powered real-time Event Detection and Alert Creation (EDAC) software when used for protest detection. It is well known that many AI-powered systems exacerbate social inequalities by racializing certain groups and individuals. We propose the feminist concept of performativity, as defined by Judith Butler and Karen Barad, as a more comprehensive way to expose and contest the harms wrought by EDAC than that offered by other “de-biasing” mechanisms. Our use of performativity differs from and complements other Science and Technology Studies (STS) work because of its rigorous approach to how iterative, citational, and material practices produce the effect of race. We focus on Geofeedia and Dataminr, two EDAC companies that claim to be able to “predict” and “recognize” the emergence of dangerous protests, and show how their EDAC tools performatively produce the phenomena they are supposed to observe. Specifically, we argue that this occurs because these companies and their stakeholders dictate the thresholds of (un)intelligibility, (ab)normality, and (un)certainty by which these tools operate, and that this process is oriented toward the production of commercially actionable information.
Original language | English
---|---
Journal | Science, Technology, & Human Values (ST&HV): journal of the Society for Social Studies of Science
Publication status | Published - 27 Mar 2023
Keywords
- AI bias
- racist AI
- performativity
- predictive policing
- protest detection