bias is a red herring.

It is now widely established that racial, class, disability, and other biases are baked into carceral AI systems, and much of the existing conversation about these technologies has centered on the ‘fairness’ of algorithmic predictions. For example, facial recognition technology has been found to have significantly higher error rates for Black faces, and for the faces of Black women in particular, errors that have led police to falsely accuse and arrest Black people for crimes they had nothing to do with. These disparities have fueled efforts to improve the accuracy of facial recognition and to implement fairness benchmarks.
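To make concrete what such a benchmark measures, here is a minimal sketch, assuming entirely hypothetical similarity scores, labels, and group names, that disaggregates a face-matching system’s false match rate by demographic group. It illustrates the kind of audit described above, not any vendor’s actual evaluation pipeline.

```python
# A minimal sketch of a per-group error-rate audit for a face-matching system.
# All scores, labels, and group names are hypothetical; real audits use large
# curated benchmarks rather than a handful of comparisons.

# Each tuple: (similarity score, whether the pair truly shows the same person, group)
comparisons = [
    (0.91, True,  "group_a"), (0.42, False, "group_a"),
    (0.88, True,  "group_a"), (0.67, False, "group_a"),
    (0.73, True,  "group_b"), (0.35, False, "group_b"),
    (0.80, False, "group_b"), (0.59, False, "group_b"),
]

THRESHOLD = 0.6  # scores above this cutoff are declared a "match"

for group in ("group_a", "group_b"):
    rows = [(score, same) for score, same, g in comparisons if g == group]
    # False match rate: how often two *different* people are declared the same person.
    non_match_scores = [score for score, same in rows if not same]
    false_matches = [score for score in non_match_scores if score > THRESHOLD]
    fmr = len(false_matches) / len(non_match_scores)
    print(f"{group}: false match rate = {fmr:.2f}")
```

A gap between those per-group rates is exactly the kind of disparity fairness benchmarks aim to narrow; the rest of this section argues why narrowing it cannot be the whole conversation.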


While minimizing algorithmic bias can sometimes be a useful harm-reduction strategy, too narrow a focus on making carceral AI ‘unbiased’ can work against abolitionist aims and risks obscuring the ways in which carceral technologies are inherently political. For example, according to a report by O Panóptico, the adoption of facial recognition by police in the Brazilian state of Bahia has had no significant effect on public safety despite the technology’s steep price tag, while basic quality-of-life needs like sanitation receive comparatively little investment. Moreover, with few data protections in place, there are troubling questions about how this mass surveillance data will be used in the future and who will have access to it.

 

In this case, framing the issue around ‘bias’ does not permit us to reject carceral AI on the basis of its societal harms; thus ‘bias’ becomes a red herring that elides more substantive critique.

 

A narrow focus on algorithmic fairness without sufficient theoretical grounding also rests on some false assumptions. First, it sometimes assumes that non-biased software is possible. All software contains biases, and the impulse to “eliminate bias” is really a move towards creating software that is aligned with the values of its often-powerful creators, and that therefore merely appears unbiased. “Unbiased software” is simply software with biases that we either do not see or do not find problematic. A better alternative is to design software to be explicitly biased towards our values; we write about this more in point 3 of our recommendations.
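One small, concrete version of this point: even choosing a single decision threshold encodes a judgment about whose errors matter most. The sketch below, using invented scores and invented cost weights, simply makes that judgment explicit rather than hiding it behind a default.

```python
# A sketch of making a system's values explicit: the cost weights below are a
# deliberate, visible choice, not a neutral default. All numbers are invented.
scores        = [0.20, 0.40, 0.55, 0.60, 0.70, 0.90]
is_true_match = [False, False, True, False, True, True]

# Our stated value: falsely implicating someone (a false positive) is treated
# as five times worse than missing a genuine match (a false negative).
COST_FALSE_POSITIVE = 5.0
COST_FALSE_NEGATIVE = 1.0

def weighted_cost(threshold):
    cost = 0.0
    for score, truth in zip(scores, is_true_match):
        predicted = score > threshold
        if predicted and not truth:
            cost += COST_FALSE_POSITIVE   # false accusation
        elif not predicted and truth:
            cost += COST_FALSE_NEGATIVE   # missed match
    return cost

# The "best" threshold depends entirely on the costs chosen above.
for t in (0.30, 0.50, 0.65, 0.80):
    print(f"threshold {t:.2f}: weighted cost = {weighted_cost(t)}")
```

A different set of weights yields a different “best” threshold; there is no setting of this dial that is value-free.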

 

Second, it often assumes that technological advances, if properly calibrated, will improve on human decision-making. Algorithms are often introduced as part of an “objectivity campaign” that paints algorithmic decisions as more impartial, scientific, and reliable than human judgment. This assumption, however, ignores that people are always involved in deciding how to use and interpret algorithmic outputs. Research shows that police departments use technology to distance themselves from accusations of racial bias, that judges use risk assessments differently for Black versus white defendants, and that DNA analysts alternately emphasize or minimize the software’s role in order to maintain their own authority.


Third, it assumes that the variables used by carceral AI are good proxies for complex phenomena like crime. For example, police reports are often used as a proxy for criminal activity, when they in fact measure where police have gotten involved and filed reports. A heavier police presence in Black neighborhoods makes police more likely to observe violations in those neighborhoods. ShotSpotter, a controversial gunshot audio-detection technology, is used in many municipalities to dispatch police to areas with suspected recent gunfire, but its microphones are disproportionately installed in communities of color and its accuracy is low, leading to more frequent, and sometimes fatal, encounters with police. Additionally, variables such as zip code are highly correlated with race. Using such proxies can lead to a self-fulfilling prophecy, where past bias produces future bias.
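To make that feedback loop concrete, here is a minimal simulation, with invented neighborhoods, rates, patrol counts, and an invented allocation rule: two areas have identical underlying violation rates but unequal starting patrols, and because a report requires an officer present to file it, the “data-driven” reallocation simply reproduces the initial skew.

```python
# A sketch of the proxy problem: reports track police presence, not just behavior.
# Neighborhood names, rates, patrol counts, and the allocation rule are invented.
import random

random.seed(0)

TRUE_RATE = 0.10                               # identical underlying violation rate in both areas
patrols = {"north_side": 5, "south_side": 15}  # but patrols start unevenly allocated
TOTAL_PATROLS = sum(patrols.values())
ENCOUNTERS_PER_PATROL = 100                    # potential observations per patrol per year

for year in range(1, 6):
    reports = {}
    for hood, n in patrols.items():
        # A report requires both a violation AND an officer nearby to record it,
        # so the area with more patrols generates more reports at the same rate.
        reports[hood] = sum(
            1 for _ in range(n * ENCOUNTERS_PER_PATROL) if random.random() < TRUE_RATE
        )
    # "Data-driven" reallocation: next year's patrols follow this year's reports,
    # so the initial skew looks empirically justified and reproduces itself.
    total_reports = sum(reports.values())
    patrols = {
        hood: max(1, round(TOTAL_PATROLS * count / total_reports))
        for hood, count in reports.items()
    }
    print(f"year {year}: reports={reports}, next year's patrols={patrols}")
```

Because violations that no officer is present to see never enter the data, the record never reveals that the two areas behave identically; this is the sense in which past bias produces future bias.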


Focusing the conversation on algorithmic fairness accepts that carceral AI will be used and simply needs to be improved. As a result, it limits the scope of the questions we ask to how to improve the accuracy of a tool’s output, obscuring questions such as: what are the goals and underlying values of a tool like this? Is it working to create the future we want?


Suggested Readings:

  • Gerchick, M., & Cagle, M. (2024). When it Comes to Facial Recognition, There is No Such Thing as a Magic Number. ACLU News and Commentary. https://www.aclu.org/news/privacy-technology/when-it-comes-to-facial-recognition-there-is-no-such-thing-as-a-magic-number

  • Greene, D., Hoffmann, A. L., & Stark, L. (2019). Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. In Proceedings of the 52nd Hawaii International Conference on System Sciences.

  • Keyes, O., Hutson, J., & Durbin, M. (2019, May). A mulching proposal: Analysing and improving an algorithmic system for turning the elderly into high-nutrient slurry. In Extended abstracts of the 2019 CHI conference on human factors in computing systems (pp. 1-11).

  • Nunes, P., Lima, T. G. L., & Cruz, T. G. (2023). The hinterland will turn into sea: Facial recognition expansion in Bahia. Rio de Janeiro: CESeC, O Panóptico. https://drive.google.com/file/d/1eP_M11C_P5TFGu-b9wisEQgJVSEiSNha/view

  • Pruss, D. (2021). Mechanical jurisprudence and domain distortion: How predictive algorithms warp the law. Philosophy of Science, 88(5), 1101-1112.
