Abstract
This article builds a theoretical framework with which to confront the
racialising capabilities of AI-powered real-time Event Detection and Alert
Creation (EDAC) software when used for protest detection. It is well-known that many AI-powered systems exacerbate social inequalities by
racialising certain groups and individuals. We propose the feminist
concept of performativity, as defined by Judith Butler and Karen Barad,
as a more comprehensive way to expose and contest the harms wrought
by EDAC than other ‘de-biasing’ mechanisms. We explain how our use of
performativity differs from and complements other STS work because of
its rigorous approach to how iterative, citational, and material practices
produce the effect of race. We focus on Geofeedia and Dataminr, two
EDAC companies that claim to be able to ‘predict’ and ‘recognise’ the
emergence of dangerous protests, to show how their tools performatively
produce the phenomena which they are supposed to observe.
Specifically, we argue that this occurs because these companies and
their stakeholders dictate the thresholds of (un)intelligibility,
(ab)normality and (un)certainty by which these tools operate, and that
this process is oriented towards the production of commercially
actionable information.
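To make the threshold argument concrete, the sketch below illustrates the kind of decision logic an EDAC pipeline embodies: a vendor-chosen confidence cutoff determines which social-media signals are ever surfaced as a 'dangerous protest' at all. This is a hypothetical illustration, not Geofeedia's or Dataminr's actual implementation; the names, scores, and threshold value are all assumptions.

```python
from dataclasses import dataclass

# Hypothetical sketch only: shows how a vendor-set confidence threshold
# decides which signals become an "event" at all -- the boundary-drawing
# the abstract describes as performative.

@dataclass
class Post:
    text: str
    protest_score: float  # assumed model confidence that this signals a "dangerous protest"

# Assumed, vendor-chosen cutoff: posts scoring below it are never surfaced,
# so analysts only ever "observe" what the threshold lets through.
ALERT_THRESHOLD = 0.7

def create_alerts(posts: list[Post], threshold: float = ALERT_THRESHOLD) -> list[str]:
    """Return alert strings for posts scored at or above the cutoff."""
    return [
        f"ALERT ({p.protest_score:.2f}): {p.text}"
        for p in posts
        if p.protest_score >= threshold
    ]

if __name__ == "__main__":
    stream = [
        Post("March downtown tonight, bring signs", 0.82),
        Post("Community vigil in the park", 0.55),
    ]
    for alert in create_alerts(stream):
        print(alert)
    # Only the first post becomes an "event"; lowering the threshold would
    # make the second one "exist" for the client as well.
```

The point of the sketch is that the cutoff is a commercial and editorial choice rather than a neutral observation: moving it redraws what counts as a detectable event.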
| Original language | English |
| --- | --- |
| Journal | Science, Technology, & Human Values (ST&HV): journal of the Society for Social Studies of Science |
| Publication status | Published - 27 Mar 2023 |
Keywords
- AI bias
- racist AI
- performativity
- predictive policing
- protest detection