As the healthcare and financial impacts of COVID-19 continue to evolve with the global pandemic, the use of AI technology and its associated risks have received greater attention. On April 8, 2020, the FTC posted an extensive summary of its recent enforcement actions, studies, and guidance regarding the use of AI tools and algorithms. The summary weaves together a handful of FTC enforcement actions, the FTC’s 2016 report on big data, and its 2018 hearings on AI, algorithms, and predictive analytics. The FTC’s compilation is intended to help companies manage the risks associated with the use of AI, design algorithms, evaluate training data, and develop an audit/accountability program to ensure their use of AI tools does not result in biased outcomes.
In sum, the FTC expects the use of AI tools to be transparent, explainable, fair, empirically sound, and managed in a compliant and ethically accountable way. In compiling and summarizing its regulatory approach to AI, the FTC has offered five key principles to guide companies’ use of AI tools.
- Transparent: Companies that use automated tools should be transparent with consumers as to how they use such tools and collect sensitive data that may be fed into an algorithm. They should also provide appropriate notice to consumers, especially if any AI-driven decision-making is covered by the Fair Credit Reporting Act. Because of the broad scope of the FCRA, the FTC Alert specifically noted that the FCRA may apply to companies that “compile and sell consumer information that is used or expected to be used for credit, employment, insurance, housing, or other similar decisions about consumers’ eligibility for certain benefits and transactions.”
- Explainable: Companies should explain to consumers the decisions and outcomes that are based on algorithmic decision-making or risk rankings.
- Fair: AI use should not result in discrimination against protected classes (including, for example, race, religion, age, and sex). Companies should therefore focus on both model inputs and outcomes, and should provide consumers with access to, and the opportunity to correct, information that may be used in such a model.
- Empirically sound: Companies that provide data to entities subject to the FCRA should implement reasonable procedures to ensure the accuracy of such data. AI models should be validated, and then revalidated over time to ensure they are functioning properly and do not illegally discriminate.
- Accountable: The FTC offered four key questions to help avoid biased outcomes: (1) How representative is your data set? (2) Does your data model account for biases? (3) How accurate are your predictions based on big data? (4) Does your reliance on big data raise ethical or fairness concerns? The FTC further expects that companies will protect AI tools from unauthorized use and consider accountability mechanisms, including third-party standards and independent testing, to validate the absence of discriminatory results.
As regulators focus on the potential for disparate impacts stemming from COVID-19, the compilation of FTC enforcement actions and guidance may serve as a useful guide as companies review their current reliance on AI tools in decision-making processes, as well as their documentation of the steps taken to avoid bias. If you have any questions regarding your AI tools, training data, output, or accountability/audit documentation, please contact the authors or members of the Alston Privacy and Data Security or Cybersecurity Preparedness and Response teams.