According to the British Science Association, AI poses a greater threat to humanity than terrorism or climate change. The change is happening so quickly that the full impact of adapting AI to jobs and process automation, and of ensuring its security, has yet to be fully acknowledged. In recent times, both the benefits and the dangers of weaponizing AI have become more pronounced. With the UN pushing for an international ban on "killer robots," more countries are growing concerned about the safety of their citizens, the legal and moral implications of autonomous weapons, and how secure these technologies are from rogue states and terrorist organizations.
Machine-learning technology is gaining traction in various industries, and it is becoming more accessible. Terrorists have long adapted technology to their purposes, as seen with the Islamic State and other militant groups that have weaponized small drones. In 2017, ISIS fitted drones with grenades during the battle for Mosul as opposing forces fought to retake the city. The affordability and ease of procurement of these drones make them attractive tools for terrorism, and the threat of AI-enabled drone swarms is becoming more feasible. Car bombings could be orchestrated remotely using self-driving vehicles, and extortion, kidnapping, and other criminal attacks are possible through social engineering.
This pattern of terrorist technological adaptation is becoming the norm. These organizations have already bent social media and encryption technology to their ends, and AI is next in line. In the hands of terrorist groups, AI could be used for acts of mass destruction. Counterterrorism efforts are on the rise, and while the number of attempted terrorist attacks has remained fairly constant, successful attacks have declined over the last four years. Data from the Centre for the Analysis of Terrorism shows that 47 terrorist plots were foiled in 2017. Law enforcement agencies around the world are continually adapting emerging AI technology to fight terrorism.
The seemingly prescient awareness of Facebook and Google, coupled with tools like Siri and Alexa, shows how deeply AI has already woven itself into the fabric of society. We will likely witness greater changes from AI in the future than we experienced with the advent of the internet. Facebook currently employs image-matching technology to filter terrorist content from its platform, along with machine-learning algorithms that identify patterns in terrorist propaganda. In the Netherlands, AI is being used to analyze large volumes of data and extract crucial clues about possible terrorist attacks; that analysis would otherwise have demanded immense manpower and taken far longer. In Dubai, AI-powered robots assist in law enforcement. Big data can also be mined with AI to surface trends and crime patterns indicative of terrorist activity. AI technology has helped law enforcement agencies become more efficient. Many lone-wolf attacks are incited by high-production-value, web-based propaganda; in the United Kingdom, automated detection technology has been deployed to find and remove such content from online platforms.
The fight against terrorism is intensifying. In 2018, Tech Against Terrorism, in conjunction with the Montreal Institute for Genocide and Human Rights Studies at Concordia University, launched a Data Science Network. The network uses data to identify and counter terrorists' exploitation of internet technologies. That same year, presentations and conferences were held for policymakers, business leaders, heads of data science, and C-level executives on how AI can be used to counter terrorism and on its implications for national security.
The ways in which AI technology could be hijacked and harnessed by terrorists are limited only by imagination, so counterterrorism measures must be constantly enforced to protect lives and property. It is the responsibility of government to shield society from such threats, and industry, government, and academia need to take decisive action to avoid a public backlash against the use of AI. The technology needs to be controlled and regulated; otherwise, there is a high chance that AI development will become monopolized by a handful of companies and that the technology will turn sinister.
Victoria Liset is a strategic business and technology consultant to SMEs. She helps businesses improve their performance by using data more efficiently and by understanding the implications of new technologies such as AI, machine learning, big data, blockchain, and IoT.