The most recent application of artificial intelligence in the criminal justice system might shock you: AI is being used to inform criminal sentencing decisions. Municipal and state courts increasingly rely on it to predict how defendants will behave in the future. AI might be harmless in lower-stakes settings, but using it to make life-changing decisions that follow a person permanently is a different story. Because of these concerns, this relatively new use of the technology is proving controversial, to say the least.

Algorithms

We encounter algorithms in our lives every day. Much of our online experience is guided by recommendation algorithms powered by artificial intelligence: what we buy on Amazon, what we watch next on Netflix, even which friends we add on social media. But using this technology to suggest products and acquaintances is a far cry from using it to make judicial decisions.

Determining the likelihood of future actions

One important task in the judicial system is assessing the risk of recidivism: the chance that a charged person will reoffend after completing their sentence. Unfortunately, reoffending is more common than you may think. Before the rise of algorithms and artificial intelligence, assessing this risk was left entirely to the judge, who would often estimate the likelihood of reoffending based on little more than experience and gut feeling.
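To see what an algorithmic risk assessment does in principle, consider a toy example. Commercial tools keep their models proprietary, so the features and weights below are purely hypothetical: a minimal sketch of a weighted-score approach, not any real instrument.

```python
# Toy recidivism risk score: a weighted sum of hypothetical features.
# Real tools keep their models secret; every weight here is invented
# purely for illustration.

def risk_score(prior_convictions: int, age: int, failed_appearances: int) -> float:
    """Return a score in [0, 1]; higher means higher predicted risk."""
    score = (
        0.15 * min(prior_convictions, 10)    # prior record, capped at 10
        + 0.10 * max(0, (30 - age) / 30)     # younger defendants score higher
        + 0.20 * min(failed_appearances, 3)  # missed court dates
    )
    return min(score, 1.0)

# A 24-year-old with 2 priors and 1 missed court date scores about 0.52.
print(risk_score(prior_convictions=2, age=24, failed_appearances=1))
```

Even in this toy version, the central question is visible: whoever chooses the features and the weights decides who counts as "high risk."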

Benefits

AI leaves less room for human inconsistency. Although the criminal justice system is supposed to be completely neutral, this is rarely the case in practice. Factors like media coverage, public opinion, and even the judge's mood can all affect the outcome of a trial. Studies have shown that judges are more likely to rule favorably early in the morning or right after a scheduled break. If you happen to catch the judge just before lunch, for example, you may not be as lucky as the person who faces the same judge right after lunch.

AI can decrease the risk of wrongful convictions

If an individual matches the description of a wanted criminal and lives in the area, for instance, investigators may be tempted to close the case as quickly as possible. Left to the mercy of the judge and thin evidence, an innocent person could easily go to prison. With AI, however, the system compiles a file based on the individual's history and predicted future behavior. With that document on the table, the judge may instead see a defendant with a low flight risk and a low predicted chance of recidivism.

Quicker sentencing

Many of those sitting in jail between arrest and trial are there because they cannot afford bail, and not all of them are guilty of the crimes they are accused of. Artificial intelligence can assist law enforcement officials in their search for evidence: the software processes information far faster than a human mind ever could, then prepares case files and produces an assessment based on the information given, without human bias. Because of this technology, judges can see more cases and work more efficiently. In theory, a computer program strips out the racial, gender, and class bias that pervades human judgment when convicting criminals. At least, that is the idea. It doesn't always work out that way.

Risks

Unfortunately, neither the input data nor the creators of a program are guaranteed to be fair. One algorithm, COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), is widely used by criminal justice officials and courts. A 2016 ProPublica investigation found that the program exhibited a distinct racial bias: it flagged Black defendants as far more likely to reoffend than they actually were, while making the opposite error for white defendants. This sets a dangerous precedent for the many people who pass through the criminal justice system and pushes society backward on social justice.
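The core of that finding was a gap in error rates between groups. The sketch below shows how such a gap is measured; the records are invented for illustration (the actual analysis used Broward County, Florida court data), but the false positive rate calculation is the standard one.

```python
# Measuring disparate error rates, the core of the COMPAS critique.
# These six records are made up; the real study analyzed thousands
# of actual court records.

def false_positive_rate(records):
    """Share of people who did NOT reoffend but were labeled high risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["high_risk"]]
    return len(flagged) / len(non_reoffenders)

records = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": True},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": True,  "reoffended": True},
    {"group": "B", "high_risk": False, "reoffended": False},
]

for group in ("A", "B"):
    subset = [r for r in records if r["group"] == group]
    print(group, false_positive_rate(subset))
# Group A: 0.5 (1 of 2 non-reoffenders flagged); Group B: 0.0
```

A tool can be "accurate" on average and still make this kind of error far more often for one group than another, which is exactly what the investigation reported.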

Another widely used program, PredPol, predicts when and where crime is likely to take place. It has the unfortunate side effect of steering police toward particular neighborhoods, especially low-income areas. Researchers found that the algorithm disproportionately flagged neighborhoods with large minority populations, regardless of the true crime rate in the area. The problem is self-reinforcing: predictions are trained on past police records, so sending more officers to a flagged area produces more recorded incidents there, which in turn strengthens the next round of predictions.
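That feedback loop is easy to demonstrate with a toy simulation. The model below is not PredPol's actual algorithm; it simply assumes two areas with identical true crime rates, where patrols are allocated according to recorded incidents and new incidents are only recorded where patrols go.

```python
import random

random.seed(0)

# Toy predictive-policing feedback loop (not PredPol's actual model).
# Two areas share the SAME true crime rate, but area 0 starts with
# more recorded incidents because of past over-policing.
recorded = [60, 40]   # historical incident counts per area
true_rate = 0.5       # identical underlying crime rate in both areas
total_patrols = 10

for week in range(20):
    total = sum(recorded)
    # Patrols are allocated in proportion to recorded incidents...
    patrols = [total_patrols * r / total for r in recorded]
    # ...and new crimes are only recorded where officers are present.
    for area in range(2):
        observed = sum(random.random() < true_rate
                       for _ in range(round(patrols[area])))
        recorded[area] += observed

# The initial disparity never corrects itself: area 0 keeps
# accumulating more recorded crime despite equal true rates.
print(recorded)
```

Because the data can only grow where patrols are sent, the initial disparity is locked in: the algorithm keeps "confirming" its own earlier predictions.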

Bottom line: Although AI is improving every day, these algorithms still have a long way to go before they can be safely applied to the criminal justice system.