In 2018, an AI ran as a mayoral candidate in an area of Tokyo. Though running officially as human Michihito Matsuda, the candidate, who ended up coming in third with 4,000 votes, was an artificially intelligent creation dreamed up by Matsuda and financed by SoftBank VP Tetsuzo Matsumoto and former Google VP Norio Murakami. This isn't the first time, nor will it be the last, that an AI has run for office (anyone remember the AI "Alice" in Russia?), but what made Michihito Matsuda interesting was its vision of a politician who would bring greater fairness through statistics and algorithms.
It's an idea that can seem attractive. Instead of relying on one human's understanding of a particular phenomenon, an AI mayor could make changes based on empirical evidence of, say, the funding needed to prevent poverty in a particular community. Some regard an AI as a welcome change from the heated two-party conflicts that dominate politics in the Western world.
However, there are problems with this analysis. First, and most obviously, a growing body of research undermines the idea of AIs as neutral creations. Far from being impartial machines that simply take in information and reach a statistically sound conclusion, AIs are built and operated by humans. In the example above, those funding Michihito Matsuda would be the ones running it: humans with financial and political interests of their own. Second, AI's involvement in politics can be undesirable because of one key factor: distrust.
Politicians build on a sense of trust with their bases in order to secure votes. In an age of constant political ads and draining, exhaustive election coverage, the choice of politician can come down to whom a voter feels more personally connected to: think of the infamous rhetoric surrounding George W. Bush as someone you could sit down and have a beer with. And in an era of increasing use of artificial intelligence, particularly as it gains both prominence and criticism, the trust we place in our elected officials, and even in ourselves, could be put in further jeopardy.
Indeed, while Michihito Matsuda did not win the election, AI is increasingly used as a tool to influence voters and win their trust. One process, known as "micro-targeting," aims to shape a voter's choice in an election based on their individual psychological make-up. Data mining, a controversial and well-established application of AI, can track an individual's online activity and serve a series of targeted advertisements matched to their personality and emotions. In the 2016 U.S. elections, Cambridge Analytica, a data-science firm, tailored political ads using an algorithm that estimated voters' personality types. From there, the targeted ad you received for a specific candidate would focus on the areas of the platform that appealed most to you.
This information can come from a variety of sources: what kinds of articles do you click on? What campaigns or politicians do you "Like" on Facebook? Do you tend to share daily posts about what you had for breakfast, or do you only post the occasional GoFundMe link? Is your online presence mostly pictures of your kids, or do you routinely participate in campaigns? Your temperament can be inferred from what you curate, and your political ads are then tailored to that temperament.
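To make the mechanism concrete, here is a deliberately toy sketch of the micro-targeting idea described above. Every signal name, weight, trait label, and piece of ad copy below is invented for illustration; real systems rely on far richer behavioral data and trained statistical models rather than hand-set rules.

```python
# Toy illustration of micro-targeting: infer a crude "temperament"
# from online activity signals, then pick the ad variant that
# emphasizes the matching facet of a candidate's platform.
# All signals, weights, and ad copy here are hypothetical.

def infer_temperament(activity: dict) -> str:
    """Score invented activity signals against two crude trait axes."""
    change_oriented = (
        activity.get("long_reads_clicked", 0) * 2
        + activity.get("campaign_shares", 0)
    )
    reassurance_seeking = (
        activity.get("crime_articles_clicked", 0) * 2
        + activity.get("gofundme_shares", 0)
    )
    return ("reassurance_seeking"
            if reassurance_seeking > change_oriented
            else "change_oriented")

def pick_ad(temperament: str) -> str:
    """Choose which facet of the (fictional) platform to emphasize."""
    ads = {
        "reassurance_seeking": "Candidate X: keeping your neighborhood safe.",
        "change_oriented": "Candidate X: bold reform for a fairer future.",
    }
    return ads[temperament]

# A user whose clicks skew toward crime coverage gets the safety ad.
profile = {"crime_articles_clicked": 5, "long_reads_clicked": 1}
print(pick_ad(infer_temperament(profile)))
```

The point of the sketch is not the (trivial) code but the pipeline it mirrors: observed behavior feeds a personality estimate, and the estimate silently selects which version of a candidate each voter ever sees.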
This, of course, raises a number of ethical questions about privacy and about the transparency of politicians who use these tactics. Is our vote really ours if it has been carefully and neatly influenced by algorithms aimed at our very psychological make-up? Just as an AI politician cannot claim neutrality any more than a human one can, it is easy to see how our vote could feel like it is no longer ours once AI can statistically predict our voting predispositions.
With the growing use of AI by politicians, strategists, and campaigns as a whole, it falls to voters to be aware of the methods that can be used to influence how they vote. AI can have both positive and negative effects on the election process: some organizations use it, for example, to identify unregistered voters and remind them of registration deadlines, which can strengthen the democratic process and boost turnout. But awareness of the negative impact AI can have on politics, particularly on our privacy, is crucial to ensuring that our political futures are determined by us, and not by a machine.
Victoria Liset is a strategic business and technology consultant to SMEs. She helps businesses improve their performance by using data more efficiently and by understanding the implications of new technologies such as AI, machine learning, big data, blockchain, and IoT.