The days of automating the online consent process for data use are upon us. Even though the GDPR has given users more control over how their data is used, the problems of providing effective notification, allowing adequate time to make a choice, and preserving the ability to withdraw consent from a company and have one's data removed from its databases will set up a monumental clash between AI-powered automated approval and an individual's control over their data. The bigger issue is that we haven't yet seen the full power of AI. Let's take a look at the Gartner Hype Cycle for emerging technologies.
Maturity of AI and the Ethical Dilemma
The development of artificial intelligence algorithms for use in machine learning applications is not a regulated industry. It is an emerging technology with a vast future in collecting, processing, and analyzing enormous amounts of data, especially alongside its emerging counterpart, quantum computing, a potential engine for AI. The state of AI is best captured by Gartner in its annual report on the maturity of the technology and when we can expect it to come into everyday use. As the chart shows, both AI and quantum computing are still 5 to 10+ years away from being fully mature and commercially viable. That isn't a long time.
Let's take a quick look at the way AI is being used and perceived today, even in its limited state without quantum computing. The potential moral and political dilemmas are inescapable, and the use of the data collected is unpredictable. Remember Cambridge Analytica in 2016 and Facebook's permission to use your data? Who knew until after the fact, and who imagined what would be done with it? It was treated as an acceptable application of AI to analyze people's conversations, moods, and behavior in order to predict and influence how they would vote, with no concern for the consequences or the potential harm that might result. What was done turned out to be ethically and politically unacceptable. And who knows what personally identifiable information (PII) and other private data were collected and used?
Analysis of Ethical Situations of Automating Consent Using AI
A study of automating consent for data use needs to be conducted to investigate the potential impacts on the current, morally acceptable means of obtaining consent before we transform the process into something the individual may not understand, or may be unable to react to quickly enough to prevent unauthorized use. AI can act so quickly that ethical dilemmas will surface long before any governance institution can respond. We already see this with the EU's GDPR process: its enforcement bodies are overwhelmed with violators and legal challenges.
A few ideas floating around now could serve as short-term fixes while we develop technology to counter immoral uses of data, technology that will no doubt rely on AI algorithms to protect the data people do not want to release. Supplementing the current consent process with conventional binary computers and weak AI capabilities would match what corporations, political institutions, and data warehouses are doing today. Digitizing the ongoing consent process with business intelligence techniques initiated from the user's side would at least create a stalemate and give the user, or consentee, time to make an informed decision. This would include an automated, immediate communication-response function to prevent anyone from jumping the gun. On the requester's side, responsive apps must be developed that collaborate with the data owner, and with the company holding the user's data in its CRMs or databases, when initiating the consent process. Yes, that will slow down the effectiveness of the AI algorithm, and that's OK.
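The user-side stalemate described above can be sketched as a simple broker: every incoming request for data gets an immediate automated acknowledgment but no data, release only happens after the owner explicitly approves, and consent can be withdrawn at any time. This is a minimal illustrative sketch in Python; the `ConsentBroker` class, its method names, and the review-window parameter are hypothetical, not part of any existing consent platform or the GDPR itself.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum


class Status(Enum):
    PENDING = "pending"      # acknowledged, awaiting owner review
    APPROVED = "approved"    # owner explicitly consented
    WITHDRAWN = "withdrawn"  # consent revoked by the owner


@dataclass
class ConsentRequest:
    requester: str
    purpose: str
    submitted_at: datetime
    status: Status = Status.PENDING


class ConsentBroker:
    """Hypothetical user-side mediator for consent requests.

    Every request receives an immediate automated response (the
    'can't jump the gun' function), but data is releasable only
    after explicit approval, and consent is revocable at any time.
    """

    def __init__(self, review_window: timedelta = timedelta(days=30)):
        self.review_window = review_window  # time the owner has to decide
        self.requests: dict[int, ConsentRequest] = {}
        self._next_id = 0

    def submit(self, requester: str, purpose: str) -> tuple[int, str]:
        req_id = self._next_id
        self._next_id += 1
        self.requests[req_id] = ConsentRequest(
            requester, purpose, datetime.now()
        )
        # Immediate automated acknowledgment: received, NOT approved.
        return req_id, f"Request {req_id} received; pending owner review."

    def approve(self, req_id: int) -> None:
        self.requests[req_id].status = Status.APPROVED

    def withdraw(self, req_id: int) -> None:
        # The owner may revoke consent at any time.
        self.requests[req_id].status = Status.WITHDRAWN

    def may_release(self, req_id: int) -> bool:
        return self.requests[req_id].status is Status.APPROVED
```

The key design point is that the default answer is always "not yet": an automated requester gains nothing by moving fast, because the broker's immediate response carries no data and the release gate only opens on an explicit human decision.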
AI, Ethics, and Digital Consent Aren't the End of This Story
AI presents many unique challenges to processes and businesses that we haven't discovered yet. But people are thinking about the future ramifications, and forums and conferences are batting them around for discussion. We will list them here for exposure and dedicate future blog posts to some of them. To get you into thinking mode, here are a few that came out of a past World Economic Forum as the top ethical and societal issues of AI:
- Equal Distribution of Wealth Created by Machines
- Humanitarian and Wellness Issues – AI will change moods and behaviors, and not all for the better
- AI Mistakes – AI will not be infallible, nor can it be fully protected from manipulation
- Racist Robots – people create AI, and people carry their own biases about other people
- Security – we all have adversaries; how do we defend ourselves?
- Self-Learning Robots – Rise of the Machines, anyone?
Conspiracy theories will abound, and religious forces will align against the technology. We are at the beginning of this process, so, in true Hollywood fashion, we'll borrow a line from a very old movie: hang on, it's going to be a bumpy ride.
Victoria Liset is a strategic business and technology consultant to SMEs. She helps businesses improve their performance by using data more efficiently and by understanding the implications of new technologies such as AI, machine learning, big data, blockchain, and IoT.