AI in Criminal Justice: Reasons for Concern and Recommendations for the Future

Artificial intelligence (AI), automation powered by AI, advanced algorithms, and other emerging technologies are now being used in criminal courts across the United States, raising serious concerns about civil rights and the trustworthiness of these tools. (For a basic primer on AI, see item 1 in the Notes section.[1])

Mapping Pretrial Injustice, a website created by the Movement Alliance Project and MediaJustice, notes that “most states” now use at least one risk assessment tool (RAT), including various forms of artificial intelligence, to “help judges and magistrates decide everything from bail and pretrial release or supervision to sentencing and gravity of parole or probation supervision.”[2] At present, AI systems in America do not make judicial decisions on their own; a judge must approve any recommendation these tools produce.

Mapping Pretrial Injustice further explains,

RATs are based on aggregate data. Designers create RATs by analyzing a dataset, which is often a large sample of historic information about a group of people, and finding factors consistent with the results they are trying to predict. This means that a RAT’s outcome tries to predict what someone with certain characteristics, such as where someone is from, how many times they were arrested or convicted, or how old they are, might do. They are not individualized for each person.[3]
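
To make the quoted point concrete, the sketch below shows, in highly simplified form, how a tool of this kind might turn a handful of group-level characteristics into a single “risk score.” The feature names, weights, and thresholds are invented purely for illustration; they are not drawn from any actual risk assessment instrument.

```python
# Illustrative sketch only: a toy "risk score" computed from group-level
# factors, in the spirit of the description quoted above. The features and
# weights are invented for demonstration and do not come from any real tool.

def toy_risk_score(age: int, prior_arrests: int, prior_convictions: int) -> float:
    """Return a score between 0 and 1; higher means 'higher predicted risk.'

    In a real tool, the weights below would be fit to a large historical
    dataset, so the score reflects how people with similar characteristics
    behaved in the past, not anything specific to the person being scored.
    """
    # Hypothetical weights, standing in for coefficients learned from
    # aggregate historical data.
    score = 0.0
    score += 0.05 * prior_arrests
    score += 0.10 * prior_convictions
    score += 0.15 if age < 25 else 0.0  # younger defendants scored as higher risk
    return min(score, 1.0)


if __name__ == "__main__":
    # Two different people with identical group-level characteristics
    # receive identical scores, illustrating the "not individualized" point.
    print(toy_risk_score(age=22, prior_arrests=3, prior_convictions=1))  # 0.40
    print(toy_risk_score(age=22, prior_arrests=3, prior_convictions=1))  # 0.40
```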

Although most states utilize algorithms in at least one jurisdiction, their use varies widely. In Alaska, Arizona, California, Hawaii, Utah, Nevada, Minnesota, Indiana, Kentucky, Ohio, Virginia, West Virginia, Vermont, Connecticut, Delaware, and Rhode Island, most jurisdictions use at least one risk assessment tool.[4]

In states such as Colorado and Florida, the use of AI and algorithms in criminal justice systems is popular, but many jurisdictions still refuse to adopt them. In other states, including Texas and Georgia, only a small number of jurisdictions allow courts to use algorithms or AI.[5]

The Debate

Supporters of risk assessment tools, including those that provide judges with sentencing recommendations, argue that algorithms and artificial intelligence improve public safety and reduce recidivism and bias in the criminal justice system.

Critics of risk assessment tools, especially those aligned closely with left-wing organizations and movements, argue that most RATs are immoral and should be avoided entirely, because the datasets used to train and operate them have been embedded with past and present social and racial biases. Critics claim that by utilizing RATs, courts are continuing or even exacerbating past injustices.[6]

Due to these and other concerns, in July 2018 “more than 100 civil rights and community-based organizations, including the ACLU and the NAACP, signed a statement urging against the use of risk assessment,” according to a report by the MIT Technology Review.[7]

Can We Trust AI?

Regardless of whether you accept the views of organizations like the NAACP and ACLU, there are other significant reasons supporters of limited government and constitutional values should be worried about the growing use of AI and algorithms in state criminal justice systems.

First, research shows judges do not adhere to RAT recommendations at equal rates for all demographic groups. For example, at least one study shows judges in districts that use risk assessment tools are on average more lenient with female defendants than they are with males.[8]

Second, risk assessment tools allow judges to hide behind AI and algorithms, shifting the blame to unaccountable machines when legal decisions prove to be too lenient or harsh.

Third, some of the most popular risk assessment tools have been developed by organizations that espouse left-wing values or receive funding from left-wing special interests. Although this fact doesn’t necessarily indicate that risk assessment tools promote leftist ideals, it should, at the very least, be a cause for concern.

For instance, the widely used Public Safety Assessment tool was created by the Laura and John Arnold Foundation in 2013.[9] (The organization has since been renamed Arnold Ventures.) Arnold Ventures has a long track record of funding liberal organizations and causes, including the Center for American Progress, Environmental Defense Action Fund, and Planned Parenthood, among many others.[10],[11]

Fourth, AI and algorithm designers are under a tremendous amount of pressure from federal officials to alter their systems so that they can be used to promote progressive goals.

In October 2022, the White House released its “Blueprint for an AI Bill of Rights.” The AI Bill of Rights includes a troubling section titled “Algorithmic Discrimination Protections,” which reads in part, “Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race” and other factors.[12] In other words, if AI isn’t producing equitable outcomes, then the White House claims it’s guilty of “discrimination.”

Among other things, the Biden administration wants AI designers to use “proactive equity assessments as part of the system design,” a move that would deliberately bias AI to achieve social or economic goals favored by President Biden.[13]

Fifth, researchers studying more than 56,000 cases in Virginia found “AI recommendations significantly increase the probability of offering alternative punishments, lower the probability of incarceration, and shorten the length of imprisonment.”[14] More research is required to know whether AI’s comparatively lenient sentencing recommendations are prudent, as well as to determine if other states are experiencing similar results.

Policy Recommendations

There are benefits to using artificial intelligence systems to assist courts in criminal cases, but the risks remain too great to justify their use.

In jurisdictions across the United States, judges have been elected by voters or appointed by voters’ representatives. They alone should use their judgment when determining how to treat citizens appearing in the criminal justice system. They alone should be held accountable for the decisions made by courts.

Using AI might reduce bias and poor decision-making in some cases, but ultimately it poses a grave threat to the trustworthiness and reliability of the judicial system. It moves the system toward greater centralization and puts courts at higher risk of being controlled or manipulated by a handful of nonprofit and for-profit organizations, raising important questions about bias and accountability.

Ultimately, defendants and citizens convicted of crimes should be judged as individuals, not with a one-size-fits-all approach, and their criminal actions and past behavior—good and bad—should ordinarily be the only considerations taken into account. Personal traits, income, living conditions, race, and most other factors should not be used.

[1] Justin Haskins, “AI and ESG: How Artificial Intelligence Is Being Designed to Advance Left-Wing Goals,” Legislative Tip Sheet, The Heartland Institute and Henry Dearborn Institute, October 2023.

[2] “Risk Assessment Tools,” Mapping Pretrial Injustice, pretrialrisk.com, last accessed September 21, 2023, https://pretrialrisk.com/the-basics/risk-assessment-algorithms

[3] “Risk Assessment Tools,” Mapping Pretrial Injustice.

[4] “Where Are Risk Assessments Being Used?” Mapping Pretrial Injustice, pretrialrisk.com, last accessed September 21, 2023, https://pretrialrisk.com/national-landscape/where-are-prai-being-used

[5] “Where Are Risk Assessments Being Used?” Mapping Pretrial Injustice.

[6] For example, see “The Danger,” Mapping Pretrial Injustice, pretrialrisk.com, last accessed September 21, 2023, https://pretrialrisk.com/the-danger

[7] Karen Hao, “AI is sending people to jail—and getting it wrong,” MIT Technology Review, technologyreview.com, January 21, 2019, https://www.technologyreview.com/2019/01/21/137783/algorithms-criminal-justice-ai

[8] Yi-Jen Ho, Wael Jabr, and Yifan Zhang, “AI Enforcement: Examining the Impact of AI on Judicial Fairness and Public Safety,” SSRN, ssrn.com, August 6, 2023, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4533047

[9] “Common Pretrial Risk Assessments,” Mapping Pretrial Injustice, pretrialrisk.com, last accessed September 21, 2023, https://pretrialrisk.com/the-basics/common-prai

[10] “Our Supporters,” Center for American Progress, americanprogress.org, last accessed September 21, 2023, https://www.americanprogress.org/c3-our-supporters

[11] “Grants,” Arnold Ventures, arnoldventures.org, last accessed September 21, 2023, https://www.arnoldventures.org/grants-search

[12] “Algorithmic Discrimination Protections,” Blueprint for an AI Bill of Rights, WhiteHouse.gov, U.S. Office of Science and Technology Policy, last accessed August 29, 2023, https://www.whitehouse.gov/ostp/ai-bill-of-rights/algorithmic-discrimination-protections-2

[13] “Algorithmic Discrimination Protections,” Blueprint for an AI Bill of Rights, WhiteHouse.gov, U.S. Office of Science and Technology Policy.

[14] Yi-Jen Ho, Wael Jabr, and Yifan Zhang, “AI Enforcement: Examining the Impact of AI on Judicial Fairness and Public Safety.”