what is carceral AI?

This report uses the term carceral AI to refer to a growing class of algorithmic and data-driven practices designed to police, incarcerate, surveil, and control people. Carceral AI often reinforces or masks existing structural injustices, expands the reach of carceral systems under the guise of scientific rigor, and interacts in complicated ways with existing legal systems, which are ill-prepared to handle the changes introduced by such technology. 


Consider a pair of programs the Los Angeles Police Department (LAPD) developed to mine massive amounts of police data and rate both people and places as sources of future crime. Named PredPol (short for "predictive policing") and LASER ("Los Angeles Strategic Extraction and Restoration"), the programs were sold as reforms that would increase police accountability and efficiency. Both were built in close collaboration with university researchers and the software company Palantir, leveraging LAPD's massive budget, which over the past decade has consumed roughly half of the city's annual discretionary spending.

 

Despite their high price tag and rhetoric of reform, in practice, carceral AI systems cause the most harm to those our society tends to criminalize: communities of color, immigrants, people with disabilities, and poor people.

 

An analysis by the Stop LAPD Spying Coalition found that one in five people the LASER program labeled "chronic offenders" had zero prior arrests or police contact. In a city where Black people make up less than a tenth of the population, nearly half of those targeted were Black. Numerous businesses, residences, and community gathering places were marked "crime generators," and officers were deployed with vague profiles of whom to target. LAPD killed 21 people in 2016, the year these programs expanded citywide, including six killings in LAPD's so-called "LASER zones" within a single six-month period. All of the men and boys killed were Black or Latino; four were shot in the back; four were teenagers, two of them under the age of 18.

Although the use of carceral AI systems like predictive policing is widespread and growing, so is resistance to it. Activists, academic researchers, community groups, policymakers, and artists have stood up to oppose the use of carceral AI systems in their communities. One such group of critical voices recently gathered in Pittsburgh, Pennsylvania, to exchange their knowledge of carceral AI systems and craft this report.


In Prediction and Punishment, we discuss a range of carceral AI systems, including predictive policing tools like PredPol and LASER, facial recognition technology, recidivism risk assessment instruments, automatic license plate readers, border surveillance drones, biometric databases, electronic monitoring, and audio gunshot locators (see the glossary below for definitions of these terms). We advocate against carceral technologies and urge the public, policymakers, and researchers to be wary of these so-called 'smart', 'evidence-based', or 'data-driven' reforms.

Learn more about the state of carceral AI and our recommendations in the sections below.

part i: state of carceral AI

part ii: recommendations and paths forward

To mitigate the use and expansion of carceral AI, we recommend:

1. divesting from carceral technology and investing in communities

2. blocking the rebranding of scrapped carceral AI systems under new names

3. expanding how we think about ‘evidence-based’ policy

4. increasing public access to information about carceral AI systems

5. building technology that intentionally centers our values

6. building community to resist carceral AI

glossary of terms

  • Predictive policing: software used by police departments to forecast where (or by whom) crime is likely to occur, based on past data such as arrests, crime reports, and calls for service, and to target the resulting areas for patrols. These systems are known for producing feedback loops in which predictions become self-fulfilling prophecies: more patrols generate more recorded incidents, which in turn justify more patrols (a toy simulation of this loop appears after the glossary).
     

  • Risk assessment instruments: tools used to estimate the likelihood, or "risk," of an (often negative) outcome based on known information. They are used throughout the criminal legal system to aid decision-making: in the courts at pretrial to set bail and assess the likelihood of appearing, at sentencing to determine sentence length, at prison intake to assign programming, and at parole to estimate recidivism risk, as well as in adjacent systems such as child welfare. These tools can be predictive models that take in information about past behavior, schooling, family relations, and other available data, or they may be as technologically simple as a checklist (see the checklist sketch after the glossary).
     

  • Automatic license plate readers (ALPRs): cameras that automatically capture the license plate information of passing cars. These cameras can be mounted on poles, streetlights, overpasses, or police patrol cars. They upload the plate photos, along with the location and time, to a central database. That information can then be used to identify all cars at a location of interest or to track cars of interest across locations (see the database sketch after the glossary).
     

  • Border surveillance systems: a broad class of surveillance and migration-control technologies, including aerial drones, AI-powered surveillance towers, robodogs, biometric data collection, and other tools.
     

  • Electronic monitoring systems: devices, usually attached to the ankle or wrist, that track a person's location and, in some cases, their breath or blood alcohol levels. They are most often used as a form of digital incarceration: the individual is confined to their house or a small radius and risks incarceration if they leave without advance permission (a simple geofence check is sketched after the glossary). Their use has become increasingly common before trial, after conviction, and after release from prison, and they are also used by ICE and in drug rehabilitation programs.
     

  • Audio gunshot locators: networks of audio sensors triggered by loud noises such as gunshots. The sensors are typically mounted on streetlights or buildings and, when triggered, triangulate the location of the noise and send an alert with the location and time (a triangulation sketch appears after the glossary). Example: ShotSpotter.
     

  • Facial recognition: a method of identifying individuals based on facial features in a photo, video, or in person. These systems rely on large databases of face images to train the algorithm. They can then be used for face verification, which confirms that a face matches a single claimed identity, or face identification, which estimates the likelihood that an unknown face matches each of many known profiles (both modes are sketched after the glossary). Facial recognition is used by police and private companies to identify someone in surveillance footage, verify someone's identity against their ID, or find someone in a crowd.
     

  • Probabilistic genotyping/probabilistic DNA profiling: software used to examine recovered DNA mixtures and assess the likelihood that a known individual contributed DNA to the sample. While DNA is unique to each individual, in practice it is often found in very small amounts, mixed with DNA from other people, and only as partial profiles (with some information missing). The software weighs the probability of the evidence under competing hypotheses about who contributed (a toy likelihood-ratio sketch appears after the glossary). DNA profiles are recorded and stored in databases, from the local to the federal level, that can be queried for comparison.
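
To make the feedback loop in the predictive policing entry concrete, here is a minimal toy simulation in Python. All numbers are invented and drawn from no real deployment: two neighborhoods with identical true incident rates, patrols allocated in proportion to recorded incidents, and new records that scale with patrol presence.

    import random

    random.seed(0)

    true_rate = [0.5, 0.5]   # both neighborhoods have the same underlying rate
    recorded = [12, 8]       # historical records start slightly skewed
    patrols = 10             # patrol units to allocate each round

    for day in range(30):
        total = sum(recorded)
        # Patrols are allocated by recorded (not true) crime.
        alloc = [round(patrols * r / total) for r in recorded]
        for i in range(2):
            # Officers mostly record what they are present to observe,
            # so new records scale with patrol presence.
            new = sum(random.random() < true_rate[i] for _ in range(alloc[i]))
            recorded[i] += new

    print(recorded)  # the initially over-recorded neighborhood pulls further ahead

Even though the two areas are identical by construction, the initial skew in past records compounds: more patrols produce more records, which justify more patrols.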
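
The risk assessment entry notes that such tools can be as simple as a checklist. Here is a minimal sketch of that idea; the items, weights, and cut points are invented for illustration and do not come from any real instrument.

    # Hypothetical checklist-style risk instrument (invented items and weights).
    ITEMS = {
        "prior_arrests_over_2": 2,
        "age_under_25": 1,
        "unemployed": 1,
        "prior_failure_to_appear": 2,
    }

    def risk_label(person: dict) -> str:
        # Add up the weights of every item that applies, then bin the total.
        score = sum(w for item, w in ITEMS.items() if person.get(item))
        return "high" if score >= 4 else "medium" if score >= 2 else "low"

    print(risk_label({"prior_arrests_over_2": True, "age_under_25": True}))  # medium

Note that every design decision here, which items count, how much each weighs, and where the cut points sit, is a policy choice dressed up as arithmetic.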
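
The ALPR entry describes two standard query patterns against a database of plate reads. Here is a minimal sketch using SQLite; the plates, camera names, and timestamps are invented.

    import sqlite3

    # Minimal plate-read database: each row is one camera sighting.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE reads (plate TEXT, camera TEXT, ts TEXT)")
    db.executemany("INSERT INTO reads VALUES (?, ?, ?)", [
        ("ABC123", "cam_5th_and_main", "2024-01-01T10:02"),
        ("XYZ789", "cam_5th_and_main", "2024-01-01T10:05"),
        ("ABC123", "cam_bridge_north", "2024-01-01T10:40"),
    ])

    # Query 1: every car seen at a location of interest.
    print(db.execute("SELECT plate, ts FROM reads WHERE camera = ?",
                     ("cam_5th_and_main",)).fetchall())

    # Query 2: one car of interest tracked across locations.
    print(db.execute("SELECT camera, ts FROM reads WHERE plate = ? ORDER BY ts",
                     ("ABC123",)).fetchall())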
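
The electronic monitoring entry describes confinement to a small radius. Here is a minimal sketch of the underlying geofence check; the coordinates, radius, and GPS fix are invented example values.

    import math

    HOME = (40.4406, -79.9959)   # home latitude/longitude (invented values)
    RADIUS_M = 150.0             # allowed radius in meters

    def meters_between(p, q):
        # Equirectangular approximation; adequate at neighborhood scale.
        lat = math.radians((p[0] + q[0]) / 2)
        dx = math.radians(q[1] - p[1]) * math.cos(lat) * 6_371_000
        dy = math.radians(q[0] - p[0]) * 6_371_000
        return math.hypot(dx, dy)

    fix = (40.4421, -79.9959)    # latest GPS fix reported by the device
    print(meters_between(HOME, fix) > RADIUS_M)  # True -> an alert would fire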
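
The audio gunshot locator entry describes triangulating a noise from sensor alerts. Here is a minimal sketch of one common approach, time-difference-of-arrival multilateration, solved by brute-force grid search; the sensor and source positions are invented.

    import math

    SPEED = 343.0  # speed of sound in m/s
    sensors = [(0.0, 0.0), (400.0, 0.0), (0.0, 400.0)]  # sensor x, y in meters
    source = (120.0, 250.0)  # true source; unknown to the system in practice

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # Observed arrival times; the unknown emission time cancels in differences.
    times = [dist(source, s) / SPEED for s in sensors]
    obs = [t - times[0] for t in times]

    # Search for the grid point whose predicted time differences best match.
    best, best_err = None, float("inf")
    for x in range(0, 401, 5):
        for y in range(0, 401, 5):
            t = [dist((x, y), s) / SPEED for s in sensors]
            err = sum((ti - t[0] - o) ** 2 for ti, o in zip(t, obs))
            if err < best_err:
                best, best_err = (x, y), err

    print(best)  # (120, 250): recovers the source location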
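
The facial recognition entry distinguishes verification from identification. Here is a minimal sketch of both modes using toy embedding vectors; real systems derive such vectors with a neural network, and these numbers and the 0.95 threshold are invented.

    import math

    def cosine(a, b):
        # Cosine similarity between two embedding vectors.
        return sum(x * y for x, y in zip(a, b)) / (math.hypot(*a) * math.hypot(*b))

    gallery = {                  # known profiles -> stored embedding vectors
        "person_a": [0.9, 0.1, 0.3],
        "person_b": [0.2, 0.8, 0.5],
    }
    probe = [0.88, 0.15, 0.28]   # embedding extracted from a new face image

    # Verification (1:1): does the probe match one claimed identity?
    print(cosine(probe, gallery["person_a"]) > 0.95)   # True

    # Identification (1:N): which known profile is closest to the probe?
    print(max(gallery, key=lambda name: cosine(probe, gallery[name])))  # person_a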
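
The probabilistic genotyping entry describes weighing competing hypotheses about a DNA mixture. Here is a deliberately simplified single-locus likelihood-ratio sketch; the allele frequencies are invented, and real software additionally models dropout, degradation, noise, and many loci at once.

    # Toy likelihood ratio at a single genetic locus (illustration only).
    freq = {"A": 0.10, "B": 0.25}   # invented population allele frequencies

    mixture = {"A", "B"}            # alleles detected in the found sample
    suspect = ("A", "B")            # suspect's known genotype

    # H1: the suspect is the contributor -> observed alleles fully expected.
    p_h1 = 1.0 if set(suspect) <= mixture else 0.0

    # H2: an unknown, unrelated person is the contributor -> probability that
    # a random heterozygous A/B genotype occurs (simplified to this one case).
    p_h2 = 2 * freq["A"] * freq["B"]

    print(p_h1 / p_h2)  # likelihood ratio = 20.0

The higher the ratio, the more the evidence favors the hypothesis that the known individual contributed to the mixture; how such ratios are computed and reported is exactly where these tools' contested modeling assumptions live.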
