At first glance it may be hard to imagine how these two topics are related, but there is a growing connection between Artificial Intelligence and child abuse. As the internet has rapidly expanded, so have the channels through which strangers can communicate and explicit content can be published. There has been a growing desire for a technological answer to a problem technology has perpetuated, and with it an ongoing search for a better way to protect children against abusive content online. Enter AI.
Google had been coming under fire in the U.K. and United States, among a handful of other countries, for allowing child abuse content to continue to spread on the internet. One specific example of this growing criticism came from Jeremy Hunt, the U.K. Foreign Secretary, who tweeted the following in August 2018:
Seems extraordinary that Google is considering censoring its content to get into China but won’t cooperate with UK, US and other 5 countries in removing child abuse content. They used to be so proud of being values-driven…
Jeremy Hunt
In the same vein, the U.K.'s National Crime Agency stated that reports of child sexual abuse material were 700% higher in 2017 than in 2012.
In the time immediately following that criticism, Google responded with what appear to be serious steps in the right direction: it is using AI to dramatically improve the recognition of different types of child abuse content online. The new toolkit, which uses neural networks to assist human reviewers trying to identify child abuse on the internet, may help reviewers find and act on up to 700% more content than they could on their own, according to Google’s blog post describing its trials with the AI before it was publicly released. An additional benefit is that human reviewers will have to spend dramatically less time deeply studying the extremely disturbing and repetitive graphic content, and can rely on the technology to do much of the heavy lifting in terms of identification.
Google has made this technology available for free to a variety of groups focused on this issue, including a partner non-profit organization, the Internet Watch Foundation (IWF), whose mission is to “minimize the availability of ‘potentially criminal’ internet content, specifically images of child sexual abuse.” Additionally, Google has made the technology available to other technology companies and non-governmental organizations (NGOs) that it believes are on the front lines of protecting children against abuse online.
Susie Hargraves, CEO of the IWF, described the difference this innovation would make to her organization:
We, and in particular our expert analysts, are excited about the development of an artificial intelligence tool which could help our human experts review material to an even greater scale and keep up with offenders by targeting imagery that hasn’t previously been marked as illegal material. By sharing this new technology, the identification of images could be speeded up, which in turn could make the internet a safer place for both survivors and users.
Susie Hargraves
The specific innovation these neural networks should enable is identifying new photos that have not already been reported as abusive. To date, most image-recognition work in this space has focused on taking an image that has been reported as abusive somewhere and finding every other place that image exists on the internet. Google’s work on the Content Safety API should be a step towards more proactive monitoring and policing of abusive content.
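The distinction between the two approaches can be sketched in a few lines. This is an illustrative outline only, not Google's actual implementation: the hash-matching step mirrors how previously reported material is traditionally re-detected (production systems use perceptual hashes that survive resizing and re-encoding, not the cryptographic hash used here), and `classifier_score` is a hypothetical stand-in for a trained neural network.

```python
import hashlib

# Hashes of images already confirmed as abusive (the traditional approach).
# The entry below is a placeholder computed from placeholder bytes; a real
# system would hold perceptual hashes of reported material.
KNOWN_HASHES = {hashlib.sha256(b"previously reported image bytes").hexdigest()}

def classifier_score(image_bytes: bytes) -> float:
    """Stand-in for a neural-network classifier returning the estimated
    probability that an image is abusive. Hypothetical: a real model would
    be a trained CNN, not this placeholder heuristic."""
    return min(len(image_bytes) % 100 / 100.0, 1.0)

def triage(images: list[bytes], threshold: float = 0.5):
    """Split incoming images into three buckets for human review."""
    known, flagged, cleared = [], [], []
    for img in images:
        digest = hashlib.sha256(img).hexdigest()
        if digest in KNOWN_HASHES:
            known.append(img)    # already-reported material: act immediately
        elif classifier_score(img) >= threshold:
            flagged.append(img)  # new, previously unseen: prioritize for reviewers
        else:
            cleared.append(img)  # low priority for human review
    return known, flagged, cleared
```

The point of the classifier stage is the middle bucket: material that has never been reported before can still reach human reviewers in priority order, which is what lets a fixed-size team cover far more content.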
Now that the tool has been released, Google is sharing it more broadly through a public API, dubbed the Content Safety API. Google hopes that by making this API available, the internet will continue to become a safer place as the deep neural networks improve and the number of users harnessing their power grows.
If you are a technologist who wants to personally explore this space and the toolkit Google has created, but don’t belong to any of the groups Google has partnered with on child abuse monitoring, there may still be a way to get access. To request access to the Content Safety API, you can start here and let the Google team know your plans. Hopefully this type of public-private collaboration to prevent the abuse of children is just the beginning, and there can be continued innovation towards reducing abuse and making sure technology solutions keep up with the problems technology perpetuates.
Victoria Liset is a strategic business and technology consultant to SMEs. She helps businesses improve their performance by using data more efficiently and by helping them understand the implications of new technologies such as AI, machine learning, big data, blockchain and IoT.