In what may be a first for the legal AI industry, [Legal Robot] has announced that it has committed to making its use of algorithms transparent and accountable.
As founder Dan Rubins explains, the move follows the recommendations of the US-based Association for Computing Machinery (ACM) to ensure that AI companies’ algorithms and their impact are well understood and transparent to users. The aim is to remove any potential bias, prevent errors from being built into decision-making systems, and allay fears among users and the wider public.
The following is a Q&A that Artificial Lawyer conducted today about Legal Robot’s statement on Algorithmic Transparency.
How do we risk creating bias in contract review?
Well, contract review isn’t the only thing on our roadmap. Even in that limited scope, Legal Robot uses machine learning algorithms to translate legal language into simpler text. While most native English speakers don’t think about this daily, many other languages have gendered nouns. For example, if we’re simplifying or translating a legal text into a language like Spanish, we could accidentally introduce or reinforce gender bias. Even the default translation of “lawyer” is male-gendered.
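The gendered-default problem described above can be made concrete with a minimal sketch. This is purely illustrative, not Legal Robot’s code: the glossaries and function names are invented here, and a real system would use a statistical translation model rather than a lookup table, but the failure mode is the same — a single-form English→Spanish mapping silently defaults to the masculine noun.

```python
# Naive glossary: one translation per term. Spanish nouns are gendered,
# so storing a single form silently bakes in the masculine default.
NAIVE_GLOSSARY = {
    "lawyer": "abogado",  # masculine default; the feminine "abogada" is lost
    "judge": "juez",      # masculine default; "jueza" exists
}

# Bias-aware glossary: both forms are kept, so the gender choice is
# explicit and auditable instead of hidden inside the data.
GENDERED_GLOSSARY = {
    "lawyer": {"m": "abogado", "f": "abogada"},
    "judge": {"m": "juez", "f": "jueza"},
}

def translate_naive(term: str) -> str:
    """Single-form lookup: the gender default is invisible to the user."""
    return NAIVE_GLOSSARY[term]

def translate_aware(term: str, gender: str) -> str:
    """Explicit-gender lookup: the caller must surface the choice."""
    return GENDERED_GLOSSARY[term][gender]

print(translate_naive("lawyer"))       # always the masculine form
print(translate_aware("lawyer", "f"))  # feminine form, chosen explicitly
```

The point is not the lookup itself but where the default lives: in the naive version the bias is invisible in the output, which is exactly the kind of opacity the transparency principles are meant to expose.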
Perhaps a more relevant example: consider an investor evaluating potential real estate deals. If an AI company sells that investor contract review/analysis software with an opaque “Investability” rating, and its model includes socio-economic factors, those biases could easily be magnified. There are myriad other situations with less obvious rules and more sinister impacts. The simultaneously wonderful and terrible thing about AI: it’s an amplifier.
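The amplification effect in the investor example can be sketched in a few lines. Everything here is hypothetical — the feature names, weights, and scoring function are invented for illustration — but it shows how an opaque score can let a socio-economic proxy dominate the result while the buyer only ever sees a single number.

```python
# Hypothetical "investability" scorer: a linear model whose weights the
# buyer never sees. A socio-economic proxy feature carries most of the
# weight, so existing disparities are amplified, not merely measured.
WEIGHTS = {
    "contract_term_quality": 0.2,
    "tenant_income_percentile": 0.8,  # socio-economic proxy, hidden from user
}

def investability(features: dict) -> float:
    """Opaque score: the user sees one number, not the weights behind it."""
    return sum(WEIGHTS[k] * features[k] for k in WEIGHTS)

# Two deals with identical contract quality diverge purely on the proxy:
deal_a = {"contract_term_quality": 0.9, "tenant_income_percentile": 0.9}
deal_b = {"contract_term_quality": 0.9, "tenant_income_percentile": 0.2}

print(investability(deal_a))  # high score
print(investability(deal_b))  # low score, despite identical contract terms
```

Publishing the weights (or at least the feature list), as the transparency principles suggest, is what would let a customer or regulator spot that the “contract analysis” score is mostly a socio-economic ranking.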
Latent biases that exist in our daily lives may seem trivial, but can become extremely damaging when magnified and ingrained into society’s institutions. What we’re doing here is drawing a line in the sand for our company – we fully expect things will go wrong, but putting these principles in place will help us make things right when they do.
What inspired this move?
As the founder of an AI company I’ve been conscious of the increasing effect of algorithms for some time, but there were two recent events that really demanded action. First, a state agency in Michigan wrongfully accused 93% of unemployment claimants of fraud.
More recently, established tech companies have been under fire for promoting social media stories that are optimized for an emotional response (emotional response -> higher engagement -> heavier weight in feeds -> more views -> ad revenue). News organizations have known this for ages.
So, without passing judgment on the large tech companies, it is clearly easier to course-correct a small company, and we decided it was important to set the tone of our company before growing too large. The ACM put a lot of effort into the principles it published yesterday, and they closely mirror what we’ve been thinking about for a while.
What about IP risk?
As a startup, you’re going to have a rough time if your main IP is just secret algorithms: newer and better algorithms are constantly being developed and published by researchers. For us, it’s a non-issue to tell people how we reach a particular decision, or even specifics about an algorithm (hey everybody, we use LSTM neural nets!). I think a larger part of these principles is in things like having procedures to trace where training data comes from, testing it for whatever biases we (or other smart people) can come up with, then publishing the results.
Will other AI companies do the same?
I certainly hope so… and we’re here to help. The truth is, it’s going to be quite a lot of work, and for many companies it’s just not going to be a priority unless their competitors are doing it or their customers or regulators demand it.