
The Problem With Crime-Stopping AI

Brian Wallace


According to Nuria Oliver, Data-Pop Alliance’s Chief Data Scientist, “There’s a massive opportunity for using big data to have positive social impact… But at the same time, we need to be aware of its limitations and be honest in terms of its performance.” Crime-stopping artificial intelligence (AI), also known as predictive policing, is on the rise. At least five major cities use real-time facial recognition software, and more than 50 police departments across the U.S. use forecasting software to predict future hotspots for minor crimes. Despite its growing use, is crime-stopping AI effective? You be the judge.

In a test of Amazon’s facial recognition software, 28 members of Congress were falsely identified as criminals. More broadly, African Americans are more likely to be included in facial recognition databases due to the over-policing of black communities. Returning to Amazon’s facial recognition test: only 20% of members of Congress are people of color, yet 39% of the false matches were people of color. Errors like this abound in facial recognition software, and the phenomenon is well documented. Despite these shortcomings, AI facial recognition is already in use around the world.
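A quick back-of-the-envelope calculation makes the disparity in those test results concrete: if people of color are 20% of the tested population but 39% of the false matches, they are nearly twice as likely as expected to be misidentified.

```python
# Disparity in the reported Amazon facial recognition test results:
# people of color are 20% of Congress but 39% of the false matches.
congress_poc_share = 0.20
false_match_poc_share = 0.39

# How over-represented is this group among false matches,
# relative to its share of the tested population?
disparity_ratio = false_match_poc_share / congress_poc_share
print(f"Over-representation factor: {disparity_ratio:.2f}x")  # 1.95x
```

A system with no demographic skew would show a factor close to 1.0; anything well above that signals the kind of bias described here.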

Predictive policing encompasses several kinds of algorithms. Take PredPol, for example: AI software developed with the LAPD back in 2008 to forecast where minor crimes such as theft and vandalism will take place. This technology selects its targets based on recent police reports and can direct patrols to areas as small as 500 square feet. More generally, crime prediction software is built from pre-existing AI models and historical crime data. That output is then used to step up police presence in areas where crime is predicted to happen, based on the belief that crime begets more crime.
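At its simplest, the hotspot idea can be sketched in a few lines. This is not PredPol’s actual model (which is far more sophisticated), just an illustrative toy: snap past incidents to patrol-sized grid cells, count them, and flag the busiest cells. The incident data here is invented for illustration.

```python
from collections import Counter

# Each incident is recorded as (x, y) coordinates snapped to a
# patrol-sized grid cell; the data below is purely illustrative.
historical_incidents = [(0, 0), (0, 0), (0, 1), (2, 3), (2, 3), (2, 3), (1, 1)]

def predict_hotspots(incidents, top_k=2):
    """Rank grid cells by historical incident count and return the top k."""
    counts = Counter(incidents)
    return [cell for cell, _ in counts.most_common(top_k)]

print(predict_hotspots(historical_incidents))  # [(2, 3), (0, 0)]
```

Even this trivial version exposes the core assumption: the "prediction" is entirely a function of where crime was recorded before, which is exactly what the next section takes issue with.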

When AI is built upon historical crime data, predictions may become self-fulfilling, and existing bias becomes a core component of its predictive algorithms. Moreover, the results ignore crimes that go unreported. In 2018, the Bureau of Justice Statistics reported that only 43% of violent crime and 34% of property crime was reported to the police, largely because people are less likely to report crimes they believe will go unsolved. This further undermines the accuracy of AI crime prediction, since the forecasts rest on incomplete and skewed information.
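The self-fulfilling dynamic can be demonstrated with a toy simulation (not any real system, and with made-up numbers): two districts have identical underlying crime, but the one that starts with slightly more recorded incidents gets the patrols, and patrolled areas record a larger share of the crime that occurs, so its lead compounds.

```python
import random

random.seed(0)
TRUE_RATE = 10                  # both districts have the same true crime rate
recorded = {"A": 12, "B": 10}   # district A starts with slightly more reports

for year in range(20):
    # Send patrols to the historical "hotspot"...
    patrolled = max(recorded, key=recorded.get)
    for district in recorded:
        # ...where a much larger share of actual crime gets recorded.
        detection_rate = 0.9 if district == patrolled else 0.4
        recorded[district] += sum(random.random() < detection_rate
                                  for _ in range(TRUE_RATE))

print(recorded)  # district A's recorded total pulls far ahead of B's
```

Despite equal underlying crime, the recorded data "confirms" that district A is the hotspot, which is precisely the bias loop critics describe.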

Andrew Ferguson, a law professor at the University of the District of Columbia and author of The Rise of Big Data Policing, believes, “There’s a real danger, with any kind of data-driven policing, to forget that there are human beings on both sides of the equation.” With this in mind, the technology behind tools such as Ford’s self-driving police car, the Knightscope K5, and even the Domain Awareness System must be called into question, especially since more police agencies are rumored to be using such tech without public disclosure.

AI isn’t new, but its role in criminal justice is. Can this technology be trusted to deliver better community policing outcomes, or should we stick to more traditional methods? Keep reading for more information on the spread of crime-stopping AI.


Brian Wallace is the Founder and President of NowSourcing, an industry-leading infographic design agency based in Louisville, KY, and Cincinnati, OH, which works with companies ranging from startups to Fortune 500s. Brian also runs #LinkedInLocal events nationwide, hosts the Next Action Podcast, has been named a Google Small Business Advisor from 2016 to the present, and joined the SXSW Advisory Board in 2019. Follow Brian Wallace on LinkedIn as well as Twitter.
