In the wake of Elon Musk’s comment that “A.I. is far more dangerous than nukes,” much controversy has brewed, perhaps most notably with Mark Zuckerberg’s response that “AI is going to unlock a huge amount of positive things, whether that’s helping to identify and cure diseases, to help cars drive more safely, to help keep our communities safe.”
It is this dichotomy between visions of our future with artificial intelligence that the Milken Institute’s panel discussion “Artificial Intelligence Advances, and the Ethical Choices Ahead” sought to address. Moderator Henry Blodget, editorial director and CEO of Insider Inc., set the tone by cautioning the audience that “fearless statements” would be heard over the course of the next hour. Blodget then brought up a recent quote from panelist Vivienne Ming, who remarked that the only reason Musk makes such comments is “because it prevents financial analysts from asking tough questions on quarterly calls.” True to Blodget’s warning, fearless statements abounded, partly because of the provocative subject matter and in no small part because of the bold personas that lined the stage.
One topic on which all the panelists could agree was worker displacement. Ming, a theoretical neuroscientist and AI expert, argued that the interaction between humans and machines is the key to understanding the future of worker displacement. Pedro Domingos, professor of computer science at the University of Washington and head of machine learning at D. E. Shaw, agreed with Ming that white-collar work is, contrary to common belief, at much greater risk of automation than blue-collar work, and that machines will lessen the workload of working professionals. Their views diverge, however, on the social ramifications of such automation. While Ming sees the process as potentially harmful, citing “de-professionalization” and a potential for gaps in the traditional career path as mid-level work disappears, Domingos takes a rosier view: one in which workers are freed to do more productive, more satisfying work as less enjoyable tasks, and those better done by computers, are handled by algorithms.
Blodget questions whether it is ethical to fire workers who will no doubt be considered ineffective in the face of advanced machine intelligence, and Domingos is quick to reply. “Absolutely,” he fires back. “In the short term there’s pain for everybody who loses their jobs, but if you look a generation ahead, everybody’s better off.” The professor argues that as automation inevitably increases efficiency and drives down prices, economic productivity will rise, the public will have more money to spend in their free time, and there will be a net increase in the number of available jobs. Ming counters by doubting that these new jobs will actually be jobs people want to do, and, for the jobs people are interested in, that sufficient support systems will be in place to prepare, for example, truck drivers to become artists and programmers.
Another theme that cropped up repeatedly was the idea that humans and AI are better together than apart, and that the synergy between the two is more powerful than either human brainpower or machine intelligence alone. John Kelly, Executive Vice President of IBM, calls attention to the 1997 defeat of Russian chess grandmaster Garry Kasparov by IBM’s Deep Blue. The computer, he says, was trained with input from a number of different players. While grandmasters like Kasparov can often learn a given opponent’s style and adapt to it, the computer proved a challenging adversary partly because it amalgamated different players’ styles and strategies. James Field, CEO of LabGenius, shares this vision of synergy with AI, having stated earlier that humans alone “are actually really bad at the process of scientific discovery,” but that “embrac[ing] alternative technologies enable[s] us to grapple with different solution-problem spaces.” Domingos points out that because humans and AI have different strengths and weaknesses, the best results will always come from combining the two. “If you have a horse, you don’t worry that the horse will outrun you. You think about how much farther you can go by riding the horse, so let’s ride the AI horse into the future.”
A common fear expressed by AI skeptics is that, given a high enough level of intelligence, an AI could one day become conscious, question its own goals, and go off-script. Ming points out that neuroscientists themselves have not reached a consensus on what consciousness actually is, making it even more difficult to define consciousness in machines. While she is not opposed to “super intelligent” machines, she is “highly skeptical that anything we’ve invented today or the infrastructure that’s in place [will give] rise to such a thing.” Kelly, in contrast, defines consciousness as self-awareness and claims that “these machines know exactly the state they’re in. They know what they learned, where they learned it, and they remember that.” He adds the caveat that “ethically though, they cannot develop their own ethics, they can only learn from the data they’re trained on.” Domingos addresses the concern of a theoretical rogue AI: “If you built a Go-playing machine, it can become infinitely intelligent and powerful, and it will never have that moment of saying ‘Oh, why am I doing this,’ for a very simple reason… An AI system is an optimizer, we give it a goal, it uses all of its intelligence, all of its resources to satisfy that goal.”
Instead, he says, it is more likely that an AI system will pursue its human-given goals so effectively that it causes harm to humans. That, he argues, would be a case of poorly written rules for the machine, and of goals set without foresight.
Delving further into what separates man and machine, Field questions the idea that emotions are what essentially distinguishes humans from computers. “Is emotion actually the neural network inside your brain rewarding you for fulfilling your objective function?” he asks. Domingos echoes the point: “Our emotions are the objective functions that our genes gave us so that we apply our intelligence to their survival and reproduction. So, in some sense, we already have objective functions given by our emotions, and machines already have emotions.” He claims that “in some ways, machines are as emotional as we are.” He again draws a parallel between the way our genes direct us through emotions and the way humans will direct the actions of AI through programmed objectives:
“your intelligence is way beyond the understanding of your DNA, your DNA has no clue what’s going on. It’s just a bunch of molecules, and yet, that DNA is still controlling you to do the things that will foster its survival. In the same way we can build intelligences that are way beyond our understanding and are still doing our bidding.”
The panelists concluded the discussion by stressing that the ethical debate surrounding applications of artificial intelligence should be a public one. Field calls AI “both a tool and a huge responsibility,” going on to state that “there is no universal answer of ‘Yes, it’s right,’ or ‘No, it’s wrong,’ but it’s on a case-by-case basis that we fight each of these battles.” Kelly stresses transparency: “I can build a system that’s very sensitive to bias, or I can build a system that doesn’t care about bias and will pursue whatever goal I want it to pursue. I think it’s incumbent upon me to build that in. I can build a system that’s totally transparent that you can look inside the box and know how it made that decision, or I can build a system that’s a black box that you’ll never know how it made that decision. I think it’s incumbent upon me to build that such that that box is transparent to those that are using the system.” Ming cites her work as a neuroscientist studying the ways in which technology can be used to augment human intelligence, arguing that once cyborg-like augmentation of the human mind becomes a reality for some, it will force others to do the same in order to compete. This, she says, is why it is important to involve the public in decisions over the ethics of artificial intelligence. She asks:
“what happens when my kid gets a Porsche in their head and the other doesn’t, and now we end up with two fundamentally different kinds of people? I’m not comfortable with that not being a pluralistic decision.”
Victoria Liset is a strategic business and technology consultant to SMEs. She helps businesses improve their performance through more efficient use of data and by helping them understand the implications of new technologies such as AI, machine learning, big data, blockchain, and IoT.