On November 14, 2024, the Department of Homeland Security (“DHS”) announced a set of voluntary recommendations called the “Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure” (“Framework”). Recognizing the severe consequences associated with disruption to the nation’s critical infrastructure, DHS released the framework to address certain risks associated with the use of AI across critical infrastructure sectors. The Framework seeks to complement and advance the AI safety and security best practices already established by the U.S. government, including, among others, the White House Voluntary Commitments, the Blueprint for an AI Bill of Rights, Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, and the DHS Safety and Security Guidelines for Critical Infrastructure Owners and Operators.
In coordination with the Cybersecurity and Infrastructure Security Agency (“CISA”), DHS identified the following three categories of AI safety and security attack vectors and vulnerabilities across critical infrastructure: (1) attacks using AI, (2) attacks targeting AI systems, and (3) AI design and implementation failures. The Framework recommends actions for five identified groups that are key to the “development and deployment of AI in U.S. critical infrastructure” to mitigate the risk that such attack vectors and vulnerabilities will be exploited.
In addition to recommended practices often recognized as part of a reasonable security program, such as maintaining an incident response plan, vetting third-party providers, and network hardening, DHS recommends additional specific actions for each of the five groups involved in the AI lifecycle. Some of the recommendations for each group include:
- Cloud and Compute Infrastructure Providers
- Establish vulnerability management methods to scan, or enable customers to scan, infrastructure for threats, including threats stemming from AI.
- Follow standard, coordinated vulnerability processes when reporting vulnerabilities that could affect model and system design processes.
- Utilize high-availability networking and backup plans in close cooperation with customers to ensure resiliency in the context of critical services.
- Work with customers to establish clear pathways for reporting suspicious or harmful activity and make use of existing incident reporting channels (such as ISACs) where appropriate.
- AI Developers
- Establish and adhere to a strategy to identify capabilities associated with autonomous activity, physical and life sciences, cybersecurity, and other capabilities that could impact critical infrastructure when deployed in relevant high-risk contexts.
- Ensure that AI models reflect human values and goals, with the ultimate objective that they be helpful, accurate, unbiased, and transparent.
- Ensure that effective data management for AI Systems considers an individual’s legal rights, stated choices, and reasonable expectations of privacy.
- Test AI for general reliability to ensure the system will act as planned under normal conditions as well as a wide range of other possible conditions.
- Enable critical infrastructure customers to conduct their own risk assessments and make informed decisions about AI usage.
- Critical Infrastructure Owners and Operators
- Incorporate appropriate human involvement for making or informing consequential decisions that could negatively impact critical infrastructure.
- Provide meaningful transparency regarding use of AI to provide goods, services, or benefits to the public.
- Engage executive leadership in key decisions.
- Civil Society
- Develop educational resources about AI to help inform policymakers and the public about its uses, benefits, and risks.
- Formulate guiding values and safeguards for both the public and the government to develop and deploy AI systems that are transparent and protect privacy, civil rights, human rights, and social well-being.
- Engage with AI developers to develop practical standards that can be readily adopted by critical infrastructure across the board.
- Public Sector
- Ensure the use of AI never conflicts with core governmental functions.
- Improve efficiencies and increase affordability and availability of critical services.
- Prioritize the development and funding of programs that advance responsible AI practices in government services.
- Avoid using AI in a manner that produces discriminatory outcomes, infringes upon personal privacy, or violates other legal rights.