Will robot lawyers replace their flesh-and-blood counterparts?

The legal field is slow to adopt change, and for good reason. Lawyers are held to a high standard of competency and honesty, and risk losing their licenses if they fail to represent their clients to the best of their ability. While students may get away with claiming that their homework was accidentally deleted, the modern equivalent of “the dog ate my homework,” attorney disciplinary authorities, unlike teachers, aren’t likely to extend attorneys that kind of mercy.

Most state attorney ethics codes require attorneys using technology, including artificial intelligence technology, to understand the “risks associated with relevant technology.” But most attorneys aren’t technology professionals, and many may have only a passing understanding of the software they’re using.

Even for the tech-savvy attorney, artificial intelligence-based technologies carry heavy risks. AI often produces solutions without direct intervention by the user. It’s a black box: a question goes in and an answer pops out, without revealing the method used to arrive at the solution. Google doesn’t show its work. That can be a problem for attorneys.

Attorneys have a duty to competently represent their clients. The duty is so central to legal ethics that it’s the first rule, Rule 1.1, in most legal ethics codes: “Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.”

There’s no exception to the competency requirement for attorneys who rely on employees or software that make mistakes. The attorney is always responsible for mistakes made on a case. That’s why robot lawyers are unlikely to replace flesh-and-blood attorneys anytime soon. There will always be a human attorney putting his or her license on the line.

Like Google, legal research platforms such as Westlaw and LexisNexis have offered artificial intelligence technologies to assist lawyers with research for at least a decade. In the past, attorneys used physical libraries with extensive paper-based indices to locate the laws relevant to their cases. These indices were updated regularly to reference new statutes and cases. Using that old system, a competent attorney could be highly confident that he or she had found everything relevant to the case.

This system of indices was carried over to computer-based research systems. But over the past decade, natural language, AI-based searches akin to Google search have replaced the traditional system. While searching is easier, the challenge is ensuring that nothing is missing from the search. AI may be good at finding the most relevant information, but it isn’t necessarily good at finding all of the relevant information.

Understanding that challenge, legal research platforms continue to offer both AI-based search capabilities and traditional, index-based searches. So while an attorney may make his or her first search using a natural language, AI-based search, the attorney probably has a duty to follow that up with a second, traditional search.

Attorneys also need to be cognizant of the risk AI technologies present to client confidentiality.

Rule 1.6 of the American Bar Association’s Model Rules of Professional Conduct requires that a lawyer “not reveal information relating to the representation of a client.”

Artificial intelligence applications are often cloud-based: they require data to be uploaded to the internet, where the attorney effectively loses control of the process and may be giving up client data to data-mining corporations and hackers alike.

As an example of that risk, consider cloud-based dictation programs that use AI technology server-side. An attorney dictating client meeting notes has likely released confidential client information not only to the dictation service provider, but also to hackers who may gain access to the provider’s databases or intercept the communication on its way to the provider.

The criminal justice system likewise faces ethical challenges presented by artificial intelligence. But this isn’t just a fear of a future world with robot judges. Software that uses AI technology to predict whether offenders are likely to reoffend was found to include a racial bias: black defendants were more likely than white defendants to be flagged as likely recidivists. And judges have used this software extensively when making sentencing decisions.

Technology is advancing daily, and for the average user, beta-testing the latest cloud-based AI technology can be a fun, if somewhat frustrating, experience. But attorneys, and the legal system in general, should be slower to adopt new technology than the general public. Attorneys and judges may be entranced by the efficiency an AI-based technology offers, but the public’s interests should always come first. Attorneys should take a pass on new technology that threatens the confidentiality of client data or an attorney’s ability to competently represent his or her clients.