As a young AI company, we feel very strongly about algorithmic transparency. So, we’re making a public commitment to these principles that will set our company and our industry on the right path. Read our blog post on Algorithmic Transparency.
1. Awareness: we will make the owners, designers, builders, users, and other stakeholders of our analytic systems aware of the possible biases involved in their design, implementation, and use, and of the potential harm that those biases can cause to individuals and society.
2. Access and Redress: we will adopt mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.
3. Accountability: we will demonstrate to our users how decisions are made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.
4. Explanation: we will produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made.
5. Data Provenance: we will provide a description of the way in which the training data was collected, along with an exploration of the potential biases induced by the human or algorithmic data-gathering process.
6. Validation and Testing: we will use rigorous methods to validate our models and document those methods and results. In particular, we will explore ways to conduct routine tests to assess whether a model generates discriminatory harm. We will publish a description of the methods and the results of such tests in each quarter's transparency report.
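To give a concrete sense of what one such routine test might look like, here is a minimal sketch of a "four-fifths rule" check on selection rates across a protected attribute. The function names, group labels, and the 0.8 threshold are all illustrative assumptions for this example, not a description of our production test suite.

```python
# Illustrative sketch of a routine disparate-impact check.
# All names, data, and the 0.8 threshold are assumptions for this example.

def selection_rates(outcomes):
    """Compute the positive-outcome rate for each group.

    `outcomes` maps a group label to a list of 0/1 model decisions.
    """
    return {g: sum(d) / len(d) for g, d in outcomes.items() if d}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

def passes_four_fifths(outcomes, threshold=0.8):
    """Flag potential adverse impact when the ratio falls below 0.8."""
    return disparate_impact_ratio(outcomes) >= threshold

# Hypothetical decisions recorded per group during an audit run.
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 62.5% selected
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 37.5% selected
}
print(round(disparate_impact_ratio(decisions), 2))  # 0.6
print(passes_four_fifths(decisions))                # False
```

A check like this is only a coarse first signal; a quarterly report would pair it with other fairness metrics and an analysis of why any disparity arises.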