On August 21, 2024, the National Institute of Standards and Technology (“NIST”) released the second draft of its Digital Identity Guidelines (SP 800-63-4), which provides federal agencies with a framework for identity proofing and authentication of external employees, government contractors, and individuals accessing government information systems and services. Building on the first draft of the guidance, the second draft expands the requirements regarding risk management, identity proofing models, authentication protocols, and safeguards for detecting and preventing fraud.
The most significant change in the second draft, however, is the addition of an entire section (Section 3.8) on the use of Artificial Intelligence (“AI”) and Machine Learning (“ML”) in identity systems. For example, AI may be used in identity systems to verify government-issued identification documents by scanning and extracting the relevant information (such as name, date of birth, or expiration date) and validating the documents’ authenticity. Recognizing that the application of AI and ML can result in “disparate outcomes, biased outputs, and the exacerbation of existing inequities and access issues,” Section 3.8 sets forth three requirements that apply to all uses of AI and ML in identity systems:
- Documentation and Communication of AI and ML Use. Organizations that rely on AI and ML must document and communicate their use of AI and ML in identity systems. In particular, credential service providers, identity providers, and verifiers that integrate technologies leveraging AI and ML must disclose that use to all relying parties that make access decisions based on information from these systems.
- Disclosure of Techniques and Methods Used for AI and ML Model Training and Testing. Organizations that use or rely on AI and ML in identity systems must provide entities that use their technologies with information on the techniques and methods used to train their AI and ML models, a description of the data sets used in training, information on the frequency of model updates, and the results of testing completed on the algorithms.
- Implementation of NIST AI Risk Management Framework. Organizations that use AI and ML in identity systems must implement the NIST AI Risk Management Framework to evaluate AI and ML risks and must also consult NIST Special Publication 1270, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, to manage bias in AI.
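To make these documentation obligations more concrete, the following is a minimal, purely illustrative Python sketch of the kind of disclosure record an organization might maintain for an AI-based identity system component, such as the document verifier described above. The class and field names are our own assumptions for illustration; they are not prescribed by NIST or the draft guidelines.

```python
from dataclasses import dataclass, field


@dataclass
class AIUsageDisclosure:
    """Hypothetical record a credential service provider might share
    with relying parties, organized around Section 3.8's requirements."""

    # Requirement 1: document and communicate the use of AI and ML
    system_component: str              # e.g., "ID document validation"
    uses_ai_ml: bool
    relying_parties_notified: bool

    # Requirement 2: disclose training and testing techniques/methods
    training_techniques: list[str] = field(default_factory=list)
    training_data_description: str = ""
    model_update_frequency: str = ""   # e.g., "quarterly"
    test_results_summary: str = ""

    # Requirement 3: risk management references
    ai_rmf_applied: bool = False       # NIST AI Risk Management Framework
    sp1270_bias_review: bool = False   # NIST SP 1270 bias management


# Example disclosure for an AI tool that scans government-issued IDs,
# extracts fields (name, date of birth, expiration date), and checks
# authenticity. All values below are hypothetical.
disclosure = AIUsageDisclosure(
    system_component="government ID scanning and validation",
    uses_ai_ml=True,
    relying_parties_notified=True,
    training_techniques=["supervised learning on labeled ID images"],
    training_data_description="synthetic and consented ID document images",
    model_update_frequency="quarterly",
    test_results_summary="accuracy and demographic performance testing",
    ai_rmf_applied=True,
    sp1270_bias_review=True,
)
print(disclosure)
```

A structured record along these lines could help an organization demonstrate, in one place, that each of the three Section 3.8 requirements has been addressed for a given system component.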
NIST is accepting public comments on the draft guidelines until October 7, 2024. Government contractors and other stakeholders that implement NIST standards should consider engaging with NIST through the public comment process and preparing for implementation of the draft guidelines should they take effect.