recommendations and paths forward

i. invest in communities, not carceral ai.

 


In the first part of the report, we described how AI systems have expanded the reach of the carceral system beyond the prison to our neighborhoods, our borders, and our homes. Mass surveillance and the sorting of people for punishment have threatened our civil liberties and caused clear harms that fall disproportionately on marginalized communities. Consequently, our first recommendation about carceral AI is quite basic: it should not be developed, funded, or used. Technology is not the solution to our crises, which are themselves the result of deliberate discriminatory policy choices like the legacy of the US war on crime and drugs. Instead, we advocate for reducing the size and scope of the carceral system through low-tech, community-oriented interventions.

We do not need additional surveillance or data to know that we need policy interventions that center decarceration and care-based approaches. These approaches should always be prioritized over technological innovation that replaces, expands, or reaffirms parts of the current system. To this end, we urge researchers to be cautious when pursuing technology-based harm reduction or “AI for social good” approaches, such as recidivism and crime prediction, as these strategies may implicitly reaffirm the normalcy of the current dehumanizing system and engage in extractivist and colonial research practices. Instead, we advocate for researchers to focus their attention on technologies that center participatory and liberatory values (2.3) and to engage in data activism (2.5).

We urge policymakers and the public to divest from carceral AI systems. Resources should instead be directed toward models of decarceration, community care, housing programs, universal healthcare, and education. Formal models for violence and crime reduction include Community Violence Intervention, Public Health-Based Approaches, and Mentorship and Workplace Programs. For example, reentry support can assist formerly incarcerated individuals through peer mentorship, workforce development training, transitional housing, and discussion groups. Directing money toward reducing the systemic barriers that people caught up in the criminal legal system face, such as low employment, lack of healthcare, limited educational opportunities, and insecure housing, addresses the root causes of incarceration rather than further entrenching carceral systems in our communities. Locally based initiatives require material resources to operate and are a proven alternative to investing in expensive, unproven, and potentially harmful technologies.

Suggested reading:
Green, B. (2019). “Good” isn’t good enough. In Proceedings of the AI for Social Good workshop at NeurIPS (Vol. 17).
Peirce, J., Bailey, M., Kajeepeta, S., & Crutchfield, C. (2021). A Toolkit for Jail Decarceration in Your Community. Vera Institute. https://www.vera.org/a-toolkit-for-jail-decarceration-in-your-community
Mijente (2019). Take Back Tech: How to expose and fight surveillance tech in your city. https://mijente.net/wp-content/uploads/2019/07/Tech-Policy-Report_v4LNX.pdf
Teng, S., & Nuñez, S. (2019). Measuring Love in the Journey for Justice: A Brown Paper. https://latinocf.org/wp-content/uploads/2019/07/Shiree-Teng-Measuring-Love.pdf


ii. anticipate and preemptively block new brands of carceral ai. 

 

As we saw in the first part of the report, carceral AI systems and the companies that produce them often rebrand themselves in response to public criticism, reemerging under new names and appropriating the language of public calls for accountability. Recall the rebranding of PredPol and Project LASER under the guise of “community policing,” and the rebranding of the discredited gunshot audio detection system ShotSpotter as SoundThinking. It is essential to see past the marketing of carceral AI systems as innocuously improving our social problems and to see them for what they are: technologies that reaffirm and amplify harmful carceral logics.

To better understand how to identify and resist new brands of carceral AI, we can learn from past successful organizing efforts. From 2014 to 2019 in Ramsey County, Minnesota (home to the capital city, St. Paul), the county attorney’s office sought to implement an early-warning predictive algorithm to identify “at-risk students before they turn to crime.” The county and the school district signed a data-sharing agreement that would allow the risk assessment algorithm to be trained on school data and juvenile justice records. When parents and other community members learned about these plans in 2018, they organized to oppose the data-sharing agreement, worried that the algorithm would exacerbate the well-documented school-to-prison pipeline and justify further surveillance and criminalization of Black, Brown, and Indigenous youth in the district. The county attorney’s office acquiesced and ended its plan to build the algorithm in 2019.

We highlight this example first to emphasize the community’s strategy of targeting not the algorithm itself, but the data-sharing agreement between the school district and the county attorney’s office. The most effective strategy to contest new forms of carceral AI is to identify and block the conditions that allow it to exist, before these harmful technologies have the chance to be developed. We encourage advocates and organizers to preempt overzealous data-sharing practices with carceral institutions to prevent the development of new carceral technologies from collated data.

Furthermore, the Ramsey County example illustrates the rhetorical strategies used to market and repurpose carceral AI for new contexts. Although the design of the algorithm proposed in Minnesota was nearly identical to the recidivism risk assessment algorithms used to judge people throughout the adult criminal legal system, the county attorney’s office claimed that the new algorithm would identify which children and families “get support and services” to prevent children from turning to crime. Thus, the government attempted to rebrand carceral AI as benevolently improving preventive and supportive measures for “at risk” groups. We see this pattern recurring throughout domains like housing, public benefits, child welfare, and healthcare. Los Angeles County, for instance, is in the process of creating an algorithm to help child welfare workers decide which families to investigate. This algorithm is similar to other child screening algorithms, like the better-known Allegheny Family Screening Tool; however, Los Angeles County claims that its algorithm will be used to identify families who “might benefit from additional engagement and support,” thereby framing carceral AI as supporting the people who are already scrutinized and surveilled by the child welfare system.
Suggested reading:
Carceral Tech Resistance Network and Inter-Faith Peace and Action Collaborative (2023). on the testing + procurement of a gunfire detection surveillance system. https://static1.squarespace.com/static/5d7edafcd15c7f734412daf2/t/641c9727c03d2c595c2729a5/1679595324529/2023-01-29+CTRN+x+IPAC+x+StopShotSpotter.pdf
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin's Press.
Stop the Cradle to Prison Algorithm Coalition. Improving Outcomes for Kids & Families: Beyond Predictive Analytics & Data Sharing (last accessed April 7, 2024). https://www.tciamn.org/cpa-journey
Stop LAPD Spying Coalition. Predictive Policing Archives (last accessed April 7, 2024). https://stoplapdspying.org/action/our-fights/data-driven-policing/predictive-policing/


iii. build technology that intentionally centers values important to us.

 

As we discussed in the section 'bias is a red herring', all software is biased, at all levels: the data are biased, the methods of software creation are biased, and of course the results are biased. The path forward is not to seek to eliminate bias, but to be explicit about what our biases are. We can use the notion of bias to think about what we might want our software to be oriented toward: can we build software that is biased toward justice? Toward equity? Toward eliminating systemic harms? How do we start by asking better questions that might lead to technology oriented to different problems?

Of course, building software based on our values is not a straightforward task, and there are real barriers to doing so. Many of us have to work jobs that do not align with our values. We might lack the technical skills to execute what we envision. We also might look around us and not see examples of others doing work that we want to emulate. Nevertheless, we believe that it is possible to build technology that does not reproduce existing harms, or that perhaps even sits outside of the harm-based framework.

A common question at this point is: “can technology be part of a non-reformist reform?” Those who say no might insist that computing technology is grounded in capitalism and white supremacy, and that rather than recover the tools we have, we need to envision new tools. However, if we are frozen by the history or foundations of a tool, we will overlook many ways forward in search of something that is “purely good.” We resist purity and instead think with Ruth Wilson Gilmore’s invocation of Audre Lorde’s famous quote: the issue is not the “master’s tools,” but the apostrophe-s in master’s. That is, the issue is the ownership and control of the tool, not the tool itself. Gilmore and others argue that we must be attentive to issues of ownership and control.

What might this look like in practice? It could look like beginning with a theory of change or another framework that resonates with one’s community, values, or goals. For example, one might start with activist and community organizer adrienne maree brown’s emergent strategy framework and use it to guide each step of the development process. Another approach would be to use a more “traditional” academic framework for technology and data ethics; for example, one might follow the seven principles of data feminism and apply them as recommended in that work. Regardless of which framework one begins with, it is important to decenter the technology itself. That is, we do not recommend an approach like “we want to build an AI system, we just need to find a problem to solve.” Instead, we believe that any community situation that would be improved by technology will be identified by that community, and that the technology should be built in collaboration with that community. Participatory approaches to building technology have been in use for decades, and while they are imperfect (power imbalances are especially acute between software makers and community software users), they are a good starting point for thinking about how to engage community activists in technology development. We outline ways to rethink what gets considered ‘evidence’ in this space in section 2.4 and ways to engage in data activism in section 2.5: sousveillance, script flipping, and counterdata generation.

Suggested reading:
Costanza-Chock, S. (2020). Design justice: Community-led practices to build the worlds we need. The MIT Press.
D'Ignazio, C., & Klein, L. F. (2023). Data feminism. MIT Press.
Lewis et al. (2020). Indigenous Protocol and Artificial Intelligence Position Paper. https://www.indigenous-ai.net/position-paper/
On participatory design and participatory technology creation processes:
Delgado, F., Barocas, S., & Levy, K. (2022). An Uncommon Task: Participatory Design in Legal AI. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW1), 51:1-51:23. https://doi.org/10.1145/3512898
Delgado, F., Yang, S., Madaio, M., & Yang, Q. (2023). The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice. Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, 1–23. https://doi.org/10.1145/3617694.3623261


iv. update what counts as ‘evidence’ in ‘evidence-based’ policy.

 

As we discussed in the first part of the report, carceral AI systems are often sold as ‘smart’ or ‘evidence-based’ reforms, referring to their reliance on large datasets or new technologies. However, the ‘evidence-based’ label is often a misnomer: it helps states capitalize on the scientific authority associated with the supposed objectivity of data and algorithmic systems while ignoring empirical and community-sourced evidence of the technologies’ on-the-ground impacts. Evidence showing that carceral AI systems are ineffective, harmful, misguided, and unwanted by marginalized communities gets sidelined in evidence-based policy. In the context of ‘evidence-based’ sentencing, for instance, empirical evidence about the impacts of risk assessment instruments shows that the tools have minimal effects on sentencing, and that those effects are distributed in arbitrary ways and tend to wash out over time. Critical voices from system-impacted communities most clearly call out the harms of carceral AI systems, but their concerns rarely get taken seriously as evidence. We maintain that in order for policy around carceral AI systems to be ‘evidence-based’, the following updates should be made to how ‘evidence’ gets generated and what counts as evidence in the first place.

Embracing “small” and local data. Evidence-based policy must think beyond “big data” and speculative lab-based studies toward data on the impacts carceral AI has on the ground in real situations, organizations, and cities, with attention to interactions between people and technology. Qualitative context is important, and first-hand testimony and data gathered by local community members must play a central role.

Decentering the expertise of academic researchers. Academic researchers can play a crucial role in generating data on carceral AI systems, but their voices can easily drown out the expertise of system-impacted communities or reaffirm the systems that community organizations fight to dismantle. Because most academic researchers in the global north come from privileged race, class, and able-bodied backgrounds and often have no personal relationship to incarceration, they may be blind to the default dominant assumptions that pervade academic research in this space, leading too often to extractivist relationships with communities. All experts are chosen: academic researchers without lived experience must embrace humility, accept that they are not the definitive experts, and adopt the role of learners and amplifiers, using their power strategically to make room for impacted communities to set the terms of research and to conduct it themselves wherever possible.

Impacted communities must be at the helm of research on carceral AI. Participatory action research is a methodology in which participants conduct research themselves to challenge inequalities and answer questions important to them, producing knowledge in collaboration with researchers. Participants are treated as experts in their own stories and experiences who willingly share this data with academic researchers, rather than being treated as test subjects. Citizen science is another kind of participatory research, in which members of the public collaborate to gather evidence about something happening in their community.

Suggested reading:
Dillahunt, T. R., Lu, A. J., & Velazquez, J. (2023, July). Eliciting alternative economic futures with working-class Detroiters: Centering afrofuturism in speculative design. In Proceedings of the 2023 ACM Designing Interactive Systems Conference (pp. 957-977).
Farrell, L., Young, B., Willison, J. B., & Fine, M. (2021). Participatory Research in Prisons. https://www.urban.org/sites/default/files/publication/104153/participatory-research-in-prisons_0.pdf
Molnar, P. (2024). “Nothing about us without us”: People on the move interrogate border tech with the Migration and Technology Monitor. https://www.openglobalrights.org/people-on-move-interrogate-border-tech-migration-technology-monitor/
Prison Journalism Project. https://prisonjournalismproject.org/


v. engage in data activism.

 

Having emphasized caution about the limits of academic researchers’ expertise in the context of carceral AI, we offer several concrete recommendations for how academic researchers can contribute their particular skill sets.

Resisting disparities in access to knowledge. It is common for people in different positions of power to have unequal access to information: consider the informational imbalances between public defenders and prosecutors, between city residents and police officers, and between undocumented people and border enforcement agents. Academic researchers have a responsibility to resist these imbalances by ensuring that their work is open access (that is, not behind a paywall), such as by using Creative Commons licenses, and by engaging in translational work to ensure that their work is publicly legible. The latter may require academics to work in interdisciplinary partnerships; no single academic needs to have all of these skills individually.

Public pedagogy. One of the ways that academics can have a positive impact is through teaching and disseminating knowledge. Academic researchers can use their privileged positions to promote work by scholars from marginalized backgrounds and to engage in decentralized sites of knowledge sharing outside of standard academic publications, such as community workshops, public teach-ins, zines, websites, white papers, and policy briefs. Academics should promote decentralized networks of knowledge production and sharing and normalize the value of community engagement and relationships.

Alternate data sources and analyses. Counterdata are an alternative to data officially reported by the state and can fill in what is otherwise missing from the public record. These alternate data sources can be used to challenge official definitions, measurements, and analyses, calling out state inaction, mobilizing public attention, and promoting policies to help repair communities. For example, in her book Counting Feminicide: Data Feminism in Action, Catherine D’Ignazio details how activists gather counterdata on feminicide to challenge piecemeal and biased state narratives throughout Latin America. Below is a list of recommended approaches and alternate data sources that academic researchers can engage with to actively push back on data and narratives that uphold the status quo.

Sousveillance. The term, coined by engineer Steve Mann, comes from the French word ‘sous’, meaning ‘below’, as opposed to ‘sur’, meaning ‘above’. Unlike surveillance, in which authorities or institutions observe individuals, sousveillance inverts this power dynamic: members of the public observe authorities, such as by recording a police interaction on their phone. Sousveillance is thus a powerful strategy for creating counterdata. An example of sousveillance is copwatching, in which community members collectively observe and document police abuses, a practice with roots in the Black Panthers’ armed watches of police. The Watch the Watchers project by Stop LAPD Spying is an excellent example of the power of leveraging publicly accessible data for copwatching.

Script flipping. Instead of studying marginalized populations in low positions of power, researchers may engage in what anthropologist Laura Nader (1972) calls “studying up”: gathering data on “the most powerful strata” of society in order to “understand those who shape attitudes and actually control institutional structures.” For instance, instead of making predictions about criminal defendants, such as predictions of recidivism or failure to appear in court, academics may turn the gaze upward to build algorithms that predict the risk the carceral system poses to the individual, a judge’s risk of failing to adhere to the law, or the city blocks where financial crimes are likely to occur. (A minimal illustrative sketch of this idea appears at the end of this section.)

Freedom of information laws. Many cities, states, and countries have laws requiring government transparency through the release of meeting minutes, collected datasets, voting records, and so on. These data can in principle be requested by the general public but may be inaccessible in practice. Academics are in positions of power relative to most members of the public and can use their resources to request and work with these data. For example, FOIA requests allowed researchers at the University of Illinois Urbana-Champaign to map the usage of automatic license plate readers around the county.

“New” and social media. While social media platforms like Instagram, TikTok, and X (formerly Twitter) can be sites of surveillance and mis- and disinformation, they can also allow for the spread of alternative narratives, counterdata, and peer-to-peer storytelling. Academics can participate in these data communication efforts themselves or support those who do. Prison TikTok is one example, in which incarcerated and formerly incarcerated people post content online, answering questions, dispelling myths, and participating in social media trends and challenges. This breaks down boundaries between the general public and prison life, enabling viewers to see into a realm of society that is normally closed off. MigrantTok is another, with people on the move sharing their experiences and journeys while crossing borders and navigating immigration systems.

Suggested reading:
Entries on Counterdata and Missing data in Keywords of the Datafied State. https://datasociety.net/library/keywords-of-the-datafied-state/
D'Ignazio, C., & Klein, L. F. (2023). Data feminism. MIT Press.
Costanza-Chock, S. (2020). Design justice: Community-led practices to build the worlds we need. The MIT Press.
Meng, A., DiSalvo, C., & Zegura, E. (2019). Collaborative data work towards a caring democracy. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1-23.
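To make the “script flipping” idea above concrete, here is a minimal sketch of how the same off-the-shelf modeling workflow used for recidivism prediction could instead be pointed at institutional behavior. Everything in it is hypothetical: the column names, the outcome definition, and the synthetic data are invented for illustration only, and any real effort of this kind should be scoped and governed by impacted communities rather than by the modeler.

```python
# A minimal, hypothetical sketch of "script flipping": pointing a standard
# risk-prediction workflow at the carceral system itself rather than at
# defendants. All column names, outcomes, and data are invented for
# illustration; this is not a recipe for a deployable tool.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000

# Hypothetical case-level records describing institutional behavior,
# not characteristics of the person being judged.
cases = pd.DataFrame({
    "days_detained_pretrial": rng.integers(0, 120, n),
    "counsel_appointed_on_time": rng.integers(0, 2, n),
    "jail_overcrowding_pct": rng.uniform(80, 140, n),
})

# Hypothetical flipped target: did the system fail the person, e.g.,
# detention past a statutory limit without timely counsel?
# (Synthetic label constructed purely for demonstration.)
y = (
    (cases["days_detained_pretrial"] > 60)
    & (cases["counsel_appointed_on_time"] == 0)
).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    cases, y, test_size=0.25, random_state=0
)

# The modeling step is identical to a conventional "risk assessment";
# only the target of prediction has been turned toward the institution.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

The point of the sketch is that nothing in the modeling machinery changes; what changes is whose behavior is measured and predicted, which is precisely the power relation that “studying up” asks researchers to invert.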


vi. build community and escape from disciplinary silos.

 

Finally, studying and contesting carceral AI requires interdisciplinary and cross-community collaboration. Any particular method offers, by its nature, only a partial view of the carceral AI landscape, and people have access to different information streams depending on their positionality and approach. It is impossible to collectively reimagine alternatives to existing carceral AI systems when we are all isolated in our respective bubbles. The interdisciplinarity of the Pittsburgh carceral AI workshop was thus its major strength, revealing how researchers and activists approach and communicate about carceral AI systems in a variety of ways while converging on a common set of findings and understandings. We want to emphasize the importance of building and strengthening bridges between affected communities, academics, activists, policymakers, artists, and coders/makers. This can take the form of future events following the format of the carceral AI workshop in Pittsburgh, community events, art exhibits, teach-ins, and virtual spaces like listservs and social media sites. We encourage anyone interested in organizing or participating in a future event like this to reach out to us at carceral.ai@gmail.com.
