Carceral AI is used in inconsistent, discretionary, and unpredictable ways.

Carceral AI is often promoted as an “evidence-based” reform that removes subjectivity from human decision-making, but the use of such technology does not remove human input so much as obfuscate it. In practice, human decision-makers such as judges, police officers, and probation officers vary widely in how closely they follow algorithmic recommendations, and their adherence shifts across contexts, creating new sources of conflict and muddying accountability.


While practitioners point to the objective backing an algorithm supposedly provides, researchers have found that algorithmic recommendations are often applied selectively, and in ways that can increase discrimination in practice. For example, judges using a sentencing risk assessment tool in Virginia gave harsher sentences to Black defendants than to white defendants who received the same risk score. In other cases, practitioners exhibit “algorithm aversion” and resist engaging with carceral AI at all. Police officers have been found to ignore predictive policing hotspots in favor of their own judgment in an effort to reaffirm their decision-making authority, and judges and pretrial officers have been observed ignoring or overriding risk assessment recommendations. These powerful decision-makers resent how the technology threatens their authority, and their resistance can at times render its implementation effectively moot. While technology with minimal impact is better than technology with negative impact, a better alternative would be low-tech decarceration efforts that do not depend on individual decision-makers’ alignment with policy goals.


Carceral AI may also be used beyond its initial purpose through a process known as “function creep,” such as being repurposed to surveil workers. Police agencies collect massive amounts of dragnet surveillance data without a clear plan for how it will be used or shared later on. Unequal access to technology can also reproduce existing power imbalances: prosecutors have greater access to technology providers, data, and the experts conducting analysis than public defenders do, exacerbating an already prevalent imbalance in the use of surveillance technology, risk assessment instruments, and DNA profiling.


Finally, while the exact responses to technological intervention vary, its use consistently adds a layer of opacity to the decision-making process and obscures accountability for the ultimate judgment. The “black box” nature of algorithms, particularly proprietary software whose source code is hidden from public scrutiny, makes it more difficult for defendants to contest life-altering decisions or pinpoint potential sources of error.


Suggested readings:

  • Albright, A. (2023). The hidden effects of algorithmic recommendations. https://apalbright.github.io/pdfs/Algo_Recs_July_2023.pdf

  • Brayne, S., & Christin, A. (2021). Technologies of crime prediction: The reception of algorithms in policing and criminal courts. Social Problems, 68(3), 608-624.

  • Pruss, D. (2023). Ghosting the machine: Judicial resistance to a recidivism risk assessment instrument. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (pp. 312-323).

  • Pullen-Blasnik, H., Eyal, G., & Weissenbach, A. (2024). ‘Is your accuser me, or is it the software?’ Ambiguity and contested expertise in probabilistic DNA profiling. Social Studies of Science, 54(1), 30-58.

  • Riley, S. (2024). Overriding (in)justice: Pretrial risk assessment administration on the frontlines. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (pp. 480-488).

  • Stevenson, M. (2018). Assessing risk assessment in action. Minnesota Law Review, 103, 303.
