January 1, 2018

Legal Robot will publish this report quarterly; the next report is due on or around April 1, 2018.

Algorithmic Transparency

On January 12, 2017, Legal Robot publicly committed to implementing principles for Algorithmic Transparency.

We will make the owners, designers, builders, users, and other stakeholders of our analytic systems aware of the possible biases involved in their design, implementation, and use, and of the potential harm that those biases can cause to individuals and society.

In an effort to improve the general awareness around Algorithmic Transparency, our CEO, Dan Rubins, traveled to Washington D.C. to speak at the Association for Computing Machinery’s (ACM) Panel on Algorithmic Transparency. The panel discussed the challenges, opportunities, business value, and societal impacts of algorithms with a diverse and lively crowd of political staffers, lobbyists, academics, and other stakeholders.

Access and Redress
We will adopt mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.

Most predictions in our app have a button to visualize and examine the details of the result; however, we don’t provide this for basic operations like sentence segmentation, part-of-speech tagging, and other NLP operations that are fairly well understood by the NLP community. Where appropriate, we also include statistical measures like precision, recall, and F1 score, as well as the size, source, and scope of the underlying dataset, and details about the design of the algorithm used for the prediction. Of course, we don’t expect everyone to be able to interpret this technical data, so we also allow anyone to share their results with our team for more explanation.
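
For readers unfamiliar with these measures, here is a minimal sketch of how precision, recall, and F1 are computed; the function and the numbers are illustrative, not taken from our codebase.

    def precision_recall_f1(true_positives, false_positives, false_negatives):
        """Standard classification metrics reported alongside a prediction."""
        precision = true_positives / (true_positives + false_positives)
        recall = true_positives / (true_positives + false_negatives)
        f1 = 2 * precision * recall / (precision + recall)
        return precision, recall, f1

    # Hypothetical example: a clause classifier that found 78 of 90 real
    # clauses, with 12 false alarms.
    p, r, f1 = precision_recall_f1(78, 12, 12)
    print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")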

Also, anyone can ask questions over email to [email protected], even if they are not using Legal Robot. These questions are tracked separately from our normal support requests.

We will demonstrate to our users how decisions are made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.

Many of our processes at Legal Robot use deep neural networks to process language. Neural networks can be very complex, which can make them seem incomprehensible. However, just because an algorithm seems like a black box (and is treated that way by many people using it) does not mean it cannot be explained.

To begin with, we do not use any third-party machine learning APIs at Legal Robot, mainly so we can control where data processing occurs. Rather than passing sensitive data to a third party, as many “AI” companies do, we build our own algorithms so we can open up the internals for further analysis and explanation.

We now tag each prediction created by our software with a unique random identifier that can be used to trace back to both the algorithm and the training dataset behind that prediction, enabling questioning and redress.
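
A minimal sketch of what such tagging might look like; the record structure and field names here are hypothetical, for illustration only:

    import uuid
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class PredictionRecord:
        """Links one prediction back to the model and dataset that produced it."""
        prediction_id: str    # unique random identifier attached to the prediction
        model_version: str    # version of the algorithm that ran
        dataset_version: str  # version of the training data behind that model
        output: object        # the prediction itself

    def tag_prediction(output, model_version, dataset_version):
        # The random UUID lets anyone questioning a result trace its origins.
        return PredictionRecord(str(uuid.uuid4()), model_version,
                                dataset_version, output)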

We will produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made.

Some of the techniques we use yield dense vectors (basically a long string of seemingly incomprehensible numbers, like [0.78524, 0.42504, 0.60494, …]) that we use to teach an algorithm what a particular type of clause looks like (statistically speaking). However, we are working on methods to make these dense vectors more interpretable, in much the same way that deep learning techniques can yield semi-interpretable layer visualizations in computer vision. We think these can provide some utility for users to understand what is happening inside the “black box.” We are focusing on these areas over the next few releases and intend to publish our results to the research community.
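
As one concrete illustration of this kind of interpretability (a common technique, not necessarily the exact method we will ship), an opaque clause vector can be explained by showing which labeled training clauses it sits closest to:

    import numpy as np

    def cosine_similarity(a, b):
        """Angle-based similarity between two dense clause vectors."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def nearest_labeled_clauses(query_vec, labeled_vecs, k=3):
        """Explain a vector via its k most similar labeled training clauses.

        labeled_vecs maps a human-readable clause label to its vector.
        """
        scored = [(cosine_similarity(query_vec, vec), label)
                  for label, vec in labeled_vecs.items()]
        return sorted(scored, reverse=True)[:k]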

Data Provenance
We will provide a description of the way in which the training data was collected, along with an exploration of the potential biases induced by the human or algorithmic data-gathering process.

Every model created by Legal Robot is traceable to the specific dataset used to train it. Every data point also includes detail on how and why each sample was collected, along with the details of any enrichment or manual tagging.
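
A sketch of the kind of per-sample provenance record this implies; the field names are illustrative, not our actual schema:

    from dataclasses import dataclass, field

    @dataclass
    class SampleProvenance:
        """Collection and enrichment details recorded with a training sample."""
        sample_id: str
        source: str              # where the document came from
        collected_on: str        # ISO-8601 date the sample entered the dataset
        collection_reason: str   # why this sample was included
        enrichment: list = field(default_factory=list)  # manual tags or annotations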

All models, algorithms, training and test data, as well as decisions, will be recorded and kept for a reasonable amount of time so they can be audited in cases where harm is suspected. However, we will not provide sensitive user information, like decisions or other algorithmic output from private legal documents, to anyone but their owner (doing so would violate our privacy policy, terms of service, and ethics).

All of our models, algorithms, and datasets are now versioned and recorded, providing a full audit trail. We have not yet set a policy or provided a mechanism to view or download the audit trail, but are planning to release this feature soon.

Validation and Testing
We will use rigorous methods to validate our models and document those methods and results. In particular, we will explore ways to conduct routine tests to assess and determine whether the model generates discriminatory harm. We will publish a description of the methods and the results of such tests in each quarter's transparency report.

We are working on a structured approach to bias analysis that captures both known and unknown biases. In addition to this high-level approach, we are investigating lower-level techniques like attribution to detect and evaluate bias. Last quarter, we started using automated bias analysis on some of our models, but much work remains for the research community.
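
One simple automated test of the kind described above is a disparate-impact check, which compares a model’s favorable-outcome rates across groups; this sketch is illustrative and not a description of our production tests:

    def disparate_impact_ratio(outcomes_by_group):
        """Ratio of the lowest to highest favorable-outcome rate across groups.

        outcomes_by_group maps a group name to a list of 0/1 outcomes.
        A ratio well below 1.0 (commonly, below 0.8) flags possible
        discriminatory harm worth investigating.
        """
        rates = {g: sum(o) / len(o) for g, o in outcomes_by_group.items()}
        return min(rates.values()) / max(rates.values()), rates

    ratio, rates = disparate_impact_ratio({
        "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% favorable
        "group_b": [1, 0, 0, 1, 0, 1, 0, 0],  # 37.5% favorable
    })
    print(f"disparate impact ratio: {ratio:.2f}")  # 0.50, worth a closer look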

Security Incidents
  • None
Bug Reports

Starting with the last transparency report, we began publishing statistics on our bug bounty program, links to disclosed bug reports, and detailed incident reports for serious security issues. This quarter was relatively quiet.

[Chart: bug report counts by status: New, Triaged, Needs More Info, Resolved, Informative, Duplicate, Not Applicable, Spam]
Public Disclosures

We intend to disclose all reports once they are closed. However, we also respect the wishes of security researchers who are working with other organizations to resolve related issues. Our public disclosures can be viewed on HackerOne as they are published.

Code of Conduct

We require all members of the Legal Robot community to abide by our Code of Conduct. As of the date of this report, we have not received any reports alleging violations of our code of conduct.

Requests for User Information

As of the date of this report, Legal Robot has not received any governmental or civil requests for user information. When we receive a request, we will ensure it is legitimate and not overbroad, and we will provide advance notice to affected users unless prohibited by a court order or where we decide delayed notice is appropriate based on our privacy policy. Further information about our legal policies, including helpful information for law enforcement, is available on our legal policies page.

National Security Requests

For more information about what inspired this statement, see https://www.canarywatch.org.

As of January 1st, 2018:

  • Legal Robot has not received any National Security Letters or any orders under the Foreign Intelligence Surveillance Act.
  • Legal Robot does not have any knowledge of any search orders that have been issued or carried out.
  • We have never placed any backdoors in our software and have not received any requests to do so.

Special note should be taken if this transparency report is not updated by the expected date at the top of the page, or if this section is modified or removed from the page.

The canary scheme is not infallible. Although signing the declaration makes it difficult for a third party to forge, signing does not prevent anyone from using force or other means, such as blackmail or compromising the signers’ laptops, to coerce us into producing false declarations.

Requests for Removal

Legal Robot has not received any “take down” notices or other removal requests under the Digital Millennium Copyright Act (“DMCA”) or any other regulation like Article 12 of Directive 95/46/EC, or the newer Article 17 of the General Data Protection Regulation (“GDPR”), commonly known as the “right to be forgotten”.

Proof of Freshness

The news headlines below show this report could not have been created prior to January 1st, 2018.

  • BBC: Time’s Up: Women launch campaign to fight sexual harassment
  • NY Times: Emboldened Israeli Right Presses Moves to Doom 2-State Solution
  • Reuters: Trump says U.S. has gotten ‘nothing’ from Pakistan aid
  • Washington Post: Trump administration fires all members of HIV/AIDS advisory council


Signed by Dan Rubins / Fingerprint 98D0 F6F0 305E F378 / Text format for verification