The United States is behind in a field it is advancing at a startling rate. The irony is that the field is artificial intelligence. The technology keeps accelerating, yet fundamental questions about it remain unanswered. As Americans, we are cheering the arrival of a house with unprecedented style without pausing to inspect its foundation. If this concern seems premature, consider the privacy battles now playing out between the government and tech giants such as Apple and Google. Invasions of privacy and predictive software have been with us for years, yet the questions they raise are only now reaching courtrooms. The goal of this piece is to raise some of the concerns we must confront before artificial intelligence becomes so pervasive in our lives that it is too late.
Before diving into these concerns, a few observations about artificial intelligence are in order. First, the technology has the ability to revolutionize many fields. Medical technology, for example, will advance at stunning rates: doctors could be connected by a network that updates them instantly, more like a software update than millions of doctors returning to school. Or consider driverless cars. Traffic accidents are a major killer globally, and driverless cars stand to remove much of the human error that causes them. Finally, consider warfare. Artificial intelligence may limit the number of civilian and military lives we put at risk, thanks to intelligent weapons that operate remotely with technology we could not have imagined a generation ago.
Having addressed some of the potential upsides of artificial intelligence, I would like to turn to concerns about its foundation. The technology behind artificial intelligence is advancing quickly, but at what cost to ourselves will it arrive? Throughout history, humans have advanced by searching for knowledge and educating themselves.
As Henry Kissinger wrote, the spread of printed knowledge after the invention of the printing press in the 15th century
“allowed the search for empirical knowledge to supplant liturgical doctrine, and the Age of Reason to gradually supersede the Age of Religion.”
Kissinger's point is that the search for reason and thought shaped how our current world is structured. The problem with artificial intelligence is that it removes this element. Solutions would no longer come to us through our own thought, debate, and research; they would be handed to us by an algorithm that lacks any context of the world around it. And what person will argue that their reasoning is better than a software program that aggregates millions of pieces of data and synthesizes a solution? This loss of reasoning may not only make humans complacent; it could make us backseat drivers in our own world.
The second area I would like to address is trust. Throughout history, we have arrived at solutions by arguing, thinking, communicating, and often fighting. Our world has advanced alongside these activities, never outpacing them. Because progress in every field was human-driven, trust was never a concern. Artificial intelligence could change all of that. Look at our world: we are becoming less thinkers and achievers and more pieces of data. We now trust our phones to tell us how to get where we are going, what food to eat, what clothes to buy, and much more. Collecting more data and synthesizing it is a good thing, and a means to reach educated decisions, but it may come at a cost.

Do we trust artificial intelligence to give us the right answers to the questions we ask it? If the question is how to get to a Taco Bell, the answer is probably yes; after a long day and night, who wants to think about directions? But what if the question is what kind of life a sick old man will lead in the future? That is not a decision we would want made for us. Software relying on artificial intelligence might spit out logic based on cost, quality of life, life expectancy, and the doctors' time that could be spent on younger, healthier patients. Now suppose this man is your grandfather. You know he is sick, may not live long, and will be limited in what he can do, but out of love you want him around as long as possible, and you don't care about the cost. What if your insurance company bases its decisions on that same software? Are you ready to trust a life to it?

Remember, artificial intelligence works by accumulating vast amounts of data and learning from it to produce responses per its mandate. The human element is not present. We have not addressed this concern, and we need to address it immediately.
Are we ready to trust artificial intelligence? For what topics? Must we all comply? This technology is on the immediate horizon, and some of it has already arrived. Yet, these fundamental concerns have yet to be addressed in any meaningful way in the United States.
The final area in which the United States is behind is the management of artificial intelligence. This overlaps with my earlier idea of setting up parameters for artificial intelligence, but it goes a step further. Artificial intelligence grows rapidly as it collects more data and its software improves. That is good for the technology and its capabilities, but who will regulate it? Will regulation happen at the level of the company, the industry, the country, or the globe? The software is already being deployed in our everyday lives, yet this issue has not been addressed, let alone decided.
The time to address these concerns is now. Until the United States addresses management, it will remain behind its peers. In a recent survey, 1,400 global executives were asked whether the internet or artificial intelligence would have the bigger long-term impact. Eighty-four percent of Chinese executives chose artificial intelligence, while only 38% of US executives said the same.
The same poll offers a further example of how the current management of artificial intelligence in the United States is hurting growth: 25% of executives at Chinese companies said their firms used artificial intelligence often, compared with only 5% of US executives. Moreover, President Xi Jinping is leading China's push into artificial intelligence with such force that, as of 2017, Chinese venture capital firms accounted for 48% of investment in artificial intelligence, surpassing the United States for the first time.
A recent report on the status of the artificial intelligence race identified the main problem as the lack of a national strategy in the US.
This is why Kissinger is calling for a presidential commission of top thinkers to begin developing such a strategy, and the time for that action is now. While other governments, including those of China, Japan, and the UAE, are directly funding artificial intelligence, America has yet to form even a planning committee. Because governments such as China's are more autocratic than the US and can shift focus to artificial intelligence quickly, the US government will need to find ways to mobilize private companies on the scale of these other nations if it is to compete.
The next issue under management is understanding the solutions artificial intelligence presents to us. There is a danger that artificial intelligence could outpace human cognitive ability and, for the tasks we assign it, begin producing results we cannot truly understand. We have not addressed how we will prevent this. If we cannot explain the logic behind a solution, we cannot put it into context or manage it appropriately.
We are also behind on the ethical context of artificial intelligence. As the scenario of the sick old man illustrates, we have not decided whether we are ready to allow artificial intelligence into our lives from an ethical standpoint, and if so, to what extent. Consider the popular hypothetical of a driverless car that can save either its driver or a child in the street, but not both: which should it save? This is quite literally a decision of who lives and who dies. Should it rest with the driver, or with a government? This too has not been addressed, yet driverless cars are already on the roads.
As artificial intelligence advances, we need to understand how we will manage it. If it becomes dangerous in some way, can we stop it? Who will be the governing authority? Are we acquiescing to an age in which human reason is no longer the core driver of change in the world, opting instead to become human data? Our world increasingly runs on data points, and we let software tell us things we used to want to work out for ourselves. While freeing humans to take on other issues represents tremendous upside, we are far behind in preparing for this technology's arrival.
There are many concerns with artificial intelligence from a developmental perspective, but before we move on to those, the United States must confront the fundamental ones: what it means to remove the human element that puts change in context, when we are willing to trust artificial intelligence, and how we can manage artificial intelligence while still encouraging the growth needed to keep pace with other countries. These three areas must become central to any political debate in the United States if the US is to catch up to the progress being made elsewhere.
Victoria Liset is a strategic business & technology consultant to SMEs. She helps businesses improve their performance by using data more efficiently and by helping them understand the implications of new technologies such as AI, machine learning, big data, blockchain, and IoT.