Can AI Used in Criminal Justice Lead to Wrongful Convictions? 

As AI technology evolves, its role as a useful tool in many fields is expanding. From e-commerce to healthcare and finance, many industries are finding ways to use AI to speed up processes. The criminal justice system is no exception. 

In criminal justice, AI-powered tools excel at analyzing large amounts of data, powering applications from facial recognition to predictive policing and streamlining investigations and legal processes. While useful, reliance on AI raises the concern that false positives could occur. AI can make legal systems faster and more efficient, but could innocent individuals be mistakenly identified or implicated because of these tools? This is a growing concern for legal professionals, including Concord criminal defense lawyers and their counterparts in other Massachusetts cities, who may find themselves challenging evidence gathered by these systems.

As the justice system embraces these technological innovations, the question arises of how significant AI errors can be and whether the benefits outweigh the risks. 

Exploring the Role of AI in the Justice System

The use of AI in criminal justice has only begun to take shape. With applications in facial recognition, predictive policing, and crime analysis, it is a game changer. AI has the potential to shorten lengthy processes; facial recognition software, for example, can identify suspects in record time. With these advances, however, comes the concern that accuracy is not guaranteed. 

The same efficiency that leads to quicker resolutions and reduced manual work can also raise the potential for unchecked errors. While the intention to reduce time spent on certain tasks is positive, mistakes in identification or crime prediction can be life-altering. Law enforcement agencies must be aware of AI's potential to make mistakes and of the need for human oversight to keep those consequences in check. 

Facial Recognition Software

As AI systems become more widely used, the risk of AI misidentifying someone becomes a growing concern. When AI makes a mistake in identifying individuals or interpreting evidence, the result can be a wrongful conviction. 

In the case of facial recognition software, law enforcement can use AI to streamline suspect identification. Agencies may use facial recognition technology to search through billions of images to locate an unidentified suspect from a crime scene. The benefits of speeding up a manual process and getting closer to identifying a suspect are alluring, but this technology's tendency to misidentify people should be considered. Tests have shown facial recognition systems to be less accurate at identifying people of color, particularly black women. These systems also often fail to account for low image quality, which can lead to misidentification. Given these concerns, it is vital to implement strict oversight and continuously refine AI technologies to prevent the devastating consequences of wrongful convictions.
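
To see why searching an enormous gallery magnifies this risk, consider the arithmetic of false matches. The sketch below is a rough back-of-the-envelope illustration; the gallery size and per-comparison false match rate are assumed figures chosen for the example, not measurements of any real system.

```python
# Back-of-the-envelope estimate of expected false matches when a probe
# image is compared against a very large gallery. Both numbers below are
# assumed illustrative figures, not measured benchmarks.

GALLERY_SIZE = 1_000_000_000   # e.g., a gallery of one billion images
FALSE_MATCH_RATE = 1e-6        # assumed chance a single non-match clears the threshold

expected_false_matches = GALLERY_SIZE * FALSE_MATCH_RATE
print(f"Expected false matches per search: {expected_false_matches:,.0f}")
# Even a one-in-a-million error rate yields roughly 1,000 spurious
# "hits" per search against a billion-image gallery.
```

Even at a seemingly tiny per-comparison error rate, a search against a billion-image gallery can surface hundreds or thousands of innocent look-alikes, which is why a "match" alone should never be treated as proof of identity.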

Predictive Policing Algorithms 

Predictive policing algorithms use AI and data science to help law enforcement anticipate criminal activity and determine where to allocate resources. The most common type of predictive policing algorithm is location-based: it uses existing crime data to identify the areas and times at higher risk of crime so that police departments can concentrate resources there. These technologies have been shown to be biased against communities of color. A 2018 study showed that if the most common predictive policing algorithm were applied to Indianapolis, minority communities would experience up to 400% greater patrol presence than white communities. Critics of these algorithms also argue that they are expensive and have not been shown to actually reduce crime across a decade of use. 
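
To make the feedback concern concrete, here is a minimal toy simulation in Python. The grid cells and incident counts are entirely hypothetical, and this is not any vendor's actual algorithm; the sketch only shows how sending patrols to the cells with the most recorded incidents can compound a small initial skew.

```python
# Toy simulation of the feedback loop in location-based predictive
# policing. All cell names and counts are hypothetical.

# Historical recorded incidents per neighborhood grid cell.
recorded_incidents = {"cell_A": 50, "cell_B": 48, "cell_C": 45, "cell_D": 44}

for round_num in range(1, 6):
    # Allocate patrols to the two cells with the most recorded incidents.
    hotspots = sorted(recorded_incidents, key=recorded_incidents.get, reverse=True)[:2]
    # More patrol presence means more recorded incidents in those cells,
    # even if underlying crime is similar everywhere.
    for cell in hotspots:
        recorded_incidents[cell] += 10  # extra records from added patrols
    for cell in recorded_incidents:
        recorded_incidents[cell] += 5   # baseline records everywhere
    print(round_num, recorded_incidents)
```

Because the system trains on records of where police looked rather than where crime actually occurred, the small initial gap between cells widens every round, and the same neighborhoods keep getting flagged.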

As AI becomes a more common law enforcement tool, a balance should be struck so that agencies neither become over-reliant on the technology nor assume it is accurate. Acknowledging its flaws and incorporating more human oversight could prevent mistakes that lead to false imprisonment. 

The Threat of Wrongful Convictions

AI technology has demonstrated significant flaws in criminal investigations, leading to real-life consequences. Nijeer Parks, for instance, was wrongfully jailed for ten days due to a mistaken identification made by facial recognition software, despite physical evidence suggesting otherwise. 

Similarly, the Chicago Police Department’s “heat list,” a predictive policing program launched in 2012, aimed to identify individuals likely to be involved in gun violence. However, an analysis by the RAND Corporation revealed the program’s inefficacy and found that it disproportionately targeted communities of color by relying heavily on arrest records. 

These cases highlight how AI, when improperly implemented, can exacerbate bias and lead to wrongful convictions. As AI tools become more integrated into law enforcement, careful oversight and reform are critical to prevent similar injustices in the future.

Could Performance Scores Be a Solution?

Policymakers have suggested that requiring a minimum performance score for AI technology could reduce the risk of false positives. The ACLU, however, argues that performance scores are a poor way to measure false positives in AI detection tools. The ACLU notes that AI facial recognition tools are used to generate lineups and are counted as a “success” whenever they generate a match. But after the tool is used, there is no guarantee the actual suspect will be selected from the lineup, making performance scores difficult to interpret. 

In NIST testing, many algorithms have exhibited a pattern of producing higher false positive rates for black men and even higher rates for black women. The ACLU’s position is that while an overall performance score may look positive, it is a general score that obscures the technology’s disparities across groups.  
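
A small, entirely hypothetical example makes the aggregation problem concrete. The counts below are invented to illustrate the arithmetic and are not NIST figures.

```python
# Hypothetical subgroup false positive counts, invented purely to show
# how a single aggregate score can hide per-group disparities.
# (These are NOT NIST figures.)

groups = {
    # group: (non-match comparisons, false positives)
    "group_1": (900_000, 900),    # 0.10% false positive rate
    "group_2": (100_000, 1_100),  # 1.10% false positive rate
}

total_comparisons = sum(n for n, _ in groups.values())
total_false_pos = sum(fp for _, fp in groups.values())

print(f"Aggregate FPR: {total_false_pos / total_comparisons:.2%}")  # 0.20%
for name, (n, fp) in groups.items():
    print(f"{name} FPR: {fp / n:.2%}")
```

The aggregate false positive rate of 0.20% looks reassuring, yet the smaller group experiences a rate eleven times higher than the larger one, which is exactly the kind of disparity a single performance score can conceal.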

With the risk of a false positive being so life-altering for real people, the possibility of mistakes by AI technology should be fully considered as these tools are incorporated into the criminal justice system. 

Addressing the Flaws of AI

There has been growing concern over the possibility of AI replacing human work in many fields, such as the customer service industry. However, the conversation around AI’s limitations becomes even more critical when applied to more sensitive sectors like criminal justice.

While AI systems can speed up processes, particularly in the case of DNA analysis, it is important for the people using them to have a realistic perception of what the technologies can do. AI should not fully replace human judgment and should not be treated as foolproof, especially in sensitive, life-altering areas like criminal justice. The stakes of a false positive are high, so human oversight should be in place to mitigate the risk of those errors. By addressing the weak points of these technologies, we can also develop approaches that help criminal justice AI produce fewer errors. 

To understand how these technologies might affect you, take note of your community’s laws and policies on AI in criminal justice. There are 13 states and 23 local jurisdictions with laws addressing how AI can be used by law enforcement. While many states have warnings and policies stating that facial recognition technology should not be used as evidence, biases in the technology’s accuracy still lead to wrongful arrests. 

As AI continues to be integrated into the criminal justice system, it is essential to recognize both its potential benefits and the risks it poses. While AI can streamline processes and assist in data analysis, it is not infallible. Errors, especially in facial recognition and predictive policing, can lead to wrongful convictions, disproportionately impacting communities of color. Striking a balance between technology and human oversight is crucial to minimizing these risks.
