Can Legal Tech Help Fight Bias in the Legal Field?

January 4, 2017 - Casey C. Sullivan, Esq.

The legal industry isn’t winning many awards for diversity. The profession as a whole severely lacks racial diversity and gender parity, for example, and there are long-running, well-documented disparities in criminal outcomes across racial lines. What’s worse, those disparities are growing: the racial gap in sentencing widened between 2005 and 2013, according to federal reports.

But some think that technology might be able to solve, or at least mitigate, some of the legal profession’s most stubborn biases. In a recent article in the Observer, diversity consultant Monique Tallon looked at how the legal tech industry is confronting bias in the law. Here are some of the highlights.

Gender and Race Blind Legal Tech?

Tallon spoke to Stephen Kane, founder of ArbiClaims and legal informatics fellow at Stanford’s CodeX Center. ArbiClaims allows parties to settle small disputes through online arbitration, rather than going through the court systems. When testing the platform, Kane quickly noticed that users demonstrated a pattern of bias in choosing arbitrators. “What became obvious was that some people, when given the choice, were vetoing arbitrators based on race,” Kane said.

ArbiClaims ended up removing the option to choose arbitrators as a result.

Of course, this isn’t foolproof. When discussing attempts to counteract bias through technology, Kane told Tallon that ArbiClaims had “brainstormed ideas like creating a virtual courtroom to solve for racial bias… what if the jury served remotely and couldn’t see the defendant?”

Such hypotheticals also show the difficulty of balancing race neutrality with the needs of the justice system, such as a jury’s ability to evaluate parties and testimony face-to-face. Being able to observe a party’s behavior and demeanor is an essential part of assessing their trustworthiness, despite the implicit biases it might also activate.

What to Do When Even Computers Discriminate

Algorithms themselves can reflect the biases (conscious or unconscious) of their programmers or of the data they rely upon. Software used to predict future criminality, for example, shows distinct racial biases, according to an analysis by ProPublica. Even Google’s automated advertising bots have reflected gender discrimination, researchers have found, displaying ads for high-income jobs far more often to men than to women.
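The kind of disparity auditors look for in these systems can be measured quite simply. As a rough sketch (the numbers here are invented for illustration, not drawn from the ProPublica analysis), one common approach compares the rate of favorable outcomes across demographic groups:

```python
# Minimal sketch of a disparate-impact check on a system's outcomes.
# The data below is hypothetical; real audits, like ProPublica's,
# work from actual case records.

def favorable_rate(outcomes):
    """Fraction of decisions in a group that were favorable (True)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of favorable-outcome rates between two groups.
    Values well below 1.0 suggest group_a fares worse than group_b."""
    return favorable_rate(group_a) / favorable_rate(group_b)

# Hypothetical "low risk" classifications for two groups of defendants
group_a = [True, False, False, True, False]   # 40% rated low risk
group_b = [True, True, False, True, True]     # 80% rated low risk

ratio = disparate_impact_ratio(group_a, group_b)
print(round(ratio, 2))  # 0.5 -- well below the 0.8 "four-fifths rule" benchmark
```

A ratio this far below the four-fifths benchmark used in employment-discrimination analysis would flag the system for closer scrutiny, though it says nothing by itself about why the gap exists.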

Some legal tech leaders are well aware of this problem. Dan Rubins is the founder of Legal Robot, a company that uses AI to translate legal contracts into plain English. “We see this as a way to level the playing field, especially for people who can’t afford to hire pricey lawyers, like parolees or other disenfranchised groups,” Rubins said.

But he was aware that AI alone might not remove bias. “Unfortunately, programmers also have cognitive biases and work off of existing data.” Legal Robot works to avoid unintentionally building bias into its algorithms, according to Rubins, and tries to weed out bias in its data. When dealing with caselaw, Rubins told Tallon, some companies have “uncovered some racial and gender effects in judicial opinions. Finding them is only part of the battle… When we rely more on data, some of that bias is taken out.”

As the legal tech industry grows, such efforts gain increasing urgency. And even if legal tech can’t eliminate bias in legal practice, it may at least be able to keep that bias from being replicated in technology.