
Stanford Researchers Warn U.S. — Cops Already Using AI to Stop Crimes BEFORE They Happen

Pre-crime, a term coined by science fiction author Philip K. Dick and now loosely used to describe the deployment of artificial intelligence to detect and stop crime before it happens, has become a terrifying reality, and will likely be business-as-usual for police in just 15 years.

“Cities have already begun to deploy AI technologies for public safety and security,” a team of academic researchers wrote in a new report titled Artificial Intelligence and Life in 2030. “By 2030, the typical North American city will rely heavily upon them. These include cameras for surveillance that can detect anomalies pointing to a possible crime, drones, and predictive policing applications.”

The first in an ongoing series for the Stanford University-hosted One Hundred Year Study on Artificial Intelligence (AI 100), the report is intended to spark debate on the benefits and detriments of AI’s growing presence in society. In areas like law enforcement, removing the human factor won’t necessarily end well.

As the academics point out, for example, AI already scans and analyzes Twitter and other social media platforms to identify individuals prone to radicalization by the Islamic State, but even that seemingly well-intentioned use has expanded drastically.

“Law enforcement agencies are increasingly interested in trying to detect plans for disruptive events from social media, and also to monitor activity at large gatherings of people to analyze security,” the report notes. “There is significant work on crowd simulations to determine how crowds can be controlled. At the same time, legitimate concerns have been raised about the potential for law enforcement agencies to overreach and use such tools to violate people’s privacy.”

Police predicting crimes before they’re committed presents obvious risks to more than just people’s privacy. Indeed, the report warns of the possibility artificial intelligence could cause law enforcement to become “overbearing and pervasive in some contexts,” particularly as technology advances and is applied in different fields.

While “AI techniques — vision, speech analysis, and gait analysis — can aid interviewers, interrogators, and security guards in detecting possible deception and criminal behavior,” their possible application to law enforcement monitoring by surveillance camera, for instance, presents a remarkable capacity for abuse.

Imagine police CCTV cameras zeroing in on an individual who appears out of place in a certain neighborhood: AI might conclude they intend to burglarize a business or residence and trigger the deployment of officers to the scene, even if that person has simply lost their way or gone for a walk in a new area. Were we not currently in the midst of an epidemic of violence perpetrated by law enforcement, that error wouldn’t be life-threatening; but the potential for police brutality must be weighed whenever the human element is removed from pre-crime policing.

Besides restricting freedom of movement and potentially escalating a non-criminal situation into a deadly one, the assumptions made about a person’s presence in an area can have deleterious effects on both that person and the neighborhood.

As police anti-militarization advocate and author Radley Balko reported for the Washington Post in December, several cities have begun sending letters to people simply for having visited neighborhoods the police consider, though no court has established them to be, high-prostitution areas. Such letters embarrass and alienate the innocent and legally guilty alike, and further stereotype whole neighborhoods and their residents rather than addressing the issue of prostitution itself.

“Machine learning significantly enhances the ability to predict where and when crimes are likely to happen and who may commit them,” the report states. “As dramatized in the movie Minority Report, predictive policing tools raise the specter of innocent people being unjustifiably targeted. But well-deployed AI prediction tools have the potential to actually remove or reduce human bias, rather than enforcing it, and research and resources should be directed toward ensuring this effect.”

As positive as that sounds, the removal of human bias and judgment is a rather pronounced double-edged sword. While bias undoubtedly stands at the core of increasing police violence, machine-assisted preconception sends officers to address a situation under the assumption a criminal act is imminent, regardless of that assumption’s veracity.

However sunny a picture the academics paint of artificial intelligence in law enforcement, one of the largest experiments in AI-assisted policing in the United States has already proved an astonishing failure.

Beginning in 2013, the Chicago Police Department partnered with the Illinois Institute of Technology to implement the Strategic Subjects List, which “uses an algorithm to rank and identify people most likely to be perpetrators or victims of gun violence based on data points like prior narcotics arrests, gang affiliation, and age at the time of last arrest,” Mic reported in December 2015. “An experiment in what is known as ‘predictive policing,’ the algorithm initially identified 426 people whom police say they’ve targeted with preventative social services.”
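To make the approach concrete: the department has never published its model or weights, but a risk-ranking tool of this kind can be sketched, purely hypothetically, as a weighted score over the data points Mic lists (prior narcotics arrests, gang affiliation, age at last arrest). The names, weights, and numbers below are illustrative assumptions, not the actual Strategic Subjects List.

```python
# Hypothetical illustration only: the real Strategic Subjects List model and its
# weights were never made public. This sketch shows how a simple linear risk
# score over the data points named in the article could rank individuals.
from dataclasses import dataclass

@dataclass
class Subject:
    name: str
    narcotics_arrests: int    # count of prior narcotics arrests
    gang_affiliated: bool     # known gang affiliation flag
    age_at_last_arrest: int   # age in years at most recent arrest

def risk_score(s: Subject) -> float:
    """Toy linear score: more arrests, gang ties, and youth raise the score."""
    score = 2.0 * s.narcotics_arrests
    score += 5.0 if s.gang_affiliated else 0.0
    score += max(0, 30 - s.age_at_last_arrest) * 0.5  # younger -> higher score
    return score

subjects = [
    Subject("A", narcotics_arrests=3, gang_affiliated=True, age_at_last_arrest=19),
    Subject("B", narcotics_arrests=0, gang_affiliated=False, age_at_last_arrest=42),
    Subject("C", narcotics_arrests=1, gang_affiliated=False, age_at_last_arrest=24),
]

# Rank highest-risk first: the kind of ordered "hot list" the article describes.
for s in sorted(subjects, key=risk_score, reverse=True):
    print(f"{s.name}: {risk_score(s):.1f}")
```

The point of the sketch is how little it takes to end up on such a list: a handful of static data points, combined by fixed weights, with no human judgment about context.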

But rather than proving efficacy in preventing violent crime, the experiment failed miserably.

As the American Civil Liberties Union has criticized, Chicago Police have been less than transparent about who ends up on the list and how it is actually being used. And despite the claim that social services would be deployed to address underlying issues thought to predict future criminal activity, that has not been the case.

Indeed, the RAND Corporation’s study of the Strategic Subjects List found that those unfortunate enough to be identified by the algorithm were simply arrested more often. Although the study’s authors couldn’t conclude precisely why, it appears human bias, as mentioned above, plays a predictably major role.

“It sounded, at least in some cases, that when there was a shooting and investigators went out to understand it, they would look at list subjects in the area and start from there,” lead author Jessica Saunders told Mic.

Chicago Police had implemented a newer version of the list by the time RAND’s study was published, but several issues had yet to be addressed — among them, the lack of guidance given to officers on how to interact with listees, including which social services to deploy. Generally, the study discovered, police simply increased their interaction with target subjects — a factor known to contribute to police violence and curtailment of civil rights and liberties.

“It is not at all evident that contacting people at greater risk of being involved in violence — especially without further guidance on what to say to them or otherwise how to follow up — is the relevant strategy to reduce violence,” the study stated, as cited by Mic.

But issues with AI prediction aren’t confined to the government’s executive branch. Criminal courts across the country have been using a risk-assessment algorithm developed by Northpointe, “designed to predict an offender’s likelihood to commit another crime in the future,” but its application, like Chicago’s, hasn’t gone smoothly.

Gawker reported in May this year [emphasis added]:

ProPublica published an investigation into Northpointe’s effectiveness in predicting recidivism … and found that, after controlling for variables such as gender and criminal history, black people were 77 percent more likely to be predicted to commit a future violent crime and 45 percent more likely to be predicted to commit a crime of any kind. The study, which looked at 7,000 so-called risk scores issued in Florida’s Broward County, also found that Northpointe isn’t a particularly effective predictor in general, regardless of race: only 20 percent of people it predicted to commit a violent crime in the future ended up doing so.
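For readers wondering how figures like these are derived, the core of the analysis is straightforward counting: of everyone the tool flagged as high risk, how many actually went on to reoffend, and how do the error rates compare across groups? The sketch below uses made-up toy data and is not ProPublica’s code or dataset; it only illustrates the type of calculation behind the 20 percent figure and the per-group false positive comparison.

```python
# Illustrative only: not ProPublica's actual code or data. It shows the kind of
# check the investigation describes -- of everyone flagged as high risk for
# future violence, how many reoffended, and does the error rate differ by group?
from collections import defaultdict

# Each record: (group, predicted_high_risk, reoffended) -- hypothetical toy data.
records = [
    ("group_1", True, False), ("group_1", True, True), ("group_1", True, False),
    ("group_2", True, True),  ("group_2", False, False), ("group_2", True, False),
]

# Overall precision: share of flagged people who actually reoffended.
flagged = [r for r in records if r[1]]
precision = sum(1 for r in flagged if r[2]) / len(flagged)
print(f"Share of flagged people who reoffended: {precision:.0%}")

# False positive rate per group: flagged but did not reoffend, among non-reoffenders.
stats = defaultdict(lambda: [0, 0])  # group -> [false positives, non-reoffenders]
for group, predicted, reoffended in records:
    if not reoffended:
        stats[group][1] += 1
        if predicted:
            stats[group][0] += 1
for group, (fp, total) in stats.items():
    print(f"{group} false positive rate: {fp}/{total} = {fp/total:.0%}")
```

ProPublica’s actual analysis additionally controlled for variables such as gender and criminal history using regression, but the underlying question it answers is the one counted above.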

Whatever hopes the Stanford report glowingly offers for the potential uses of artificial intelligence, policing (and the criminal justice system in general) would benefit from further advances and research prior to more widespread implementation. Hastily applied science, when it has not been thoroughly tested or its possible repercussions exhaustively debated, has a penchant for egregious unintended consequences down the line.

Although the report notes “the technologies emerging from the field could profoundly transform society for the better in the coming decades,” it’s imperative to realize that the transformation could just as easily be for the worse.

Claire Bernish
Born in North Carolina on the first of March in a year not so long ago, Bernish currently resides in San Diego, California. Educated at University of Cincinnati and School of the Art Institute of Chicago, she finds interest in thwarting war propaganda through education, the refugee crisis & related issues, 1st Amendment concerns, ending police brutality, and general government & corporate accountability.
http://thefreethoughtproject.com/author/clairebernish/
