AI and Law is becoming an increasingly hot topic (we always knew it was special), with companies like ROSS Intelligence, TrademarkNow and many others making significant headway. Unfortunately, so much of the media coverage subsumed under the heading of “AI and Law” has to do either with helping lawyers do their jobs better, or with pointless arguing about mass unemployment for lawyers (or a lawyer-free utopia, depending on your perspective).
There are so many good things happening in AI and Law - why does the conversation so often devolve into blathering about Doomsday scenarios, and what is the real substance behind lawyers’ anxiety about AI?
Many applications of artificial intelligence center on legal informatics and research. These tend to be fairly safe fields, where many people accept that a higher degree of automation is necessary thanks to ever-increasing data volumes. Other applications tend to be more troubling. The gradual automation of legal techniques, models of argumentation, case-based reasoning, and legal decision-making edges closer to the idea of the automated lawyer. These areas have always been troubling, but until recently they seemed fairly distant.
Artificial lawyers have been under increasing development, less by lawyers than by technologists who are augmenting or gradually replacing aspects of legal practice. An article in DATACONOMY describes AI as the future of law.
The author, German business writer and self-described “nerd” Hannah Augur, describes how AI developers in Germany have already built an application to pass decisions on claims made by citizens. The developers say its deployment will “likely happen” under the careful authority of the human eye, but they seem certain that a computer’s ability to reason through statements will make it well suited to legal applications.
They admit that “it’s unlikely” defendants will be judged by a robot anytime soon. AI, the developers speculate, won’t be taking over big law firms just yet. However, they describe the top of the slippery slope: external companies offering AI services are already growing, and they will be able to automate small tasks for firms.
“If properly programmed, AI could even surpass the abilities of an ordinary lawyer….Once complex laws can be broken down in machine-readable text, AI…will be passing judgement…That sounds a bit scarier than expected.”
Still, lawyers themselves feel threatened by the ever-growing role of AI. A panel of lawyers described AI much as those of prior generations might have described the work of an underservant.
In response to a question, the panel admitted that they are both afraid of and encouraged by AI. Certain elements of AI could clearly replace lawyers in some of their traditional work, a fear stoked particularly when lawyers hear about computational models of argumentation, decision-making, legal reasoning and the like. The new technology, said one respondent, has turned lawyers into data entry clerks. Even so, they feel the development of AI in law is a chance to re-evaluate their profession.
Law societies have long viewed artificial intelligence with a mixture of awe and suspicion, not least because of the legal and ethical questions that arise whenever the idea of autonomous AI robotics comes up.
Jonathan Smithers of the British Law Society recently delivered a comprehensive speech to the Union Internationale des Avocats (UIA) Conference. He points out some of the real concerns about large-scale use of AI.
AI relies heavily on personal and corporate data for all its practical applications, and this raises privacy and data protection concerns. How will “big data” systems handle delicate information such as search engine history, online banking data, and medical records that may have to be collected and stored? Who will access this data, and for what purposes? Who will be responsible for keeping it secure? How will the system handle data breaches that span borders and jurisdictions?
What about tort accountability? As an immediate example, if a driverless car encounters a child who runs into the street and has to decide whether to hit the child or crash itself into an oncoming bus, who is liable for the resulting harm?
AI is posing problems by introducing concepts that current law does not cover. When the 3-D printing file for an invention is e-mailed, and the invention is then printed by any recipient or pirate, who owns the printed object? Who is accountable if the invention does not function or is unsafe? What if the invention is a deadly weapon? What laws cover these situations?
Artificial intelligence, Smithers goes on to say, is gradually dictating the way we practice law, and the profession is playing catch-up. The future of the law is being planned by technologists and software engineers.
Many clients are using the internet to diagnose their legal cases before they call a lawyer. Self-diagnosis is still no substitute for a lawyer, but not all potential clients recognize that. There are distinct limits to the capability of AI systems to go beyond “dispensing black letter law”: AI cannot (yet) develop creative legal arguments or go beyond very limited interpretation of legal data. So far, expert legal intervention by creative and compassionate human lawyers is still needed.