On March 26, 2026, a bipartisan group of U.S. lawmakers introduced H.R. 8094, titled the “AI Foundation Model Transparency Act of 2026” (the “AI FMTA”). At its core, the AI FMTA would require developers of certain large AI models, such as ChatGPT or Claude, to publicly disclose key information about how their models are trained, what the models are designed to do, where their limitations and risks lie, and how they are evaluated and monitored. Notably, the bill’s purpose is to provide the public with transparency into these models, not to regulate AI itself.
The bill reflects growing concern in Congress that widely used AI models can cause real-world harm if their limits, risks, or biases are not well understood. As Rep. Beyer explained, the AI FMTA “would help users determine if they should trust the [AI] model they are using for certain applications, and help identify limitations on data, potential biases, or misleading results…[that] could lead to harmful results like rejections for housing or loan applications, or faulty medical decisions….”
Who would be covered?
The AI FMTA would apply to “covered entities” that provide use of, or services from, a “foundation model” and meet certain capability, scale, or computational thresholds. Both terms are carefully defined and target large, general-purpose models and AI systems whose scale or impact raises heightened public concern.
A “foundation model” is an AI model that (i) is trained on broad data, (ii) generally uses self-supervised learning, (iii) contains at least one billion parameters, (iv) is designed to generate outputs rather than perform a single task, and (v) can be adapted across many different uses or contexts.
A “covered entity” is any person, partnership, or corporation (including common carriers and non-profits) subject to FTC regulations that offers a qualifying foundation model meeting at least one of these thresholds: (i) the model poses significant risks to security, civil rights, or public health; (ii) the model has more than 10 million monthly users or downloads; or (iii) the model was trained using more than 10²⁶ computational operations.
Fully open-source models are exempt from the AI FMTA.
What would companies have to disclose?
The AI FMTA would direct the FTC to promulgate regulations requiring covered entities to publicly disclose information about their foundation models. At a minimum, disclosures would include:
- Training Data and Governance. An overview of where training data comes from, how it was collected, its size and composition, and the data governance practices in place.
- Intended Uses and Risks. A description of the model’s intended uses, known limitations or risks, version history, and processes for monitoring and responding to incidents.
- Language Support and AI Safety Standards. Information on supported languages and how the model aligns with recognized AI safety frameworks, such as NIST’s AI Risk Management Framework.
- Evaluation and High-Risk Use Cases. A summary of model performance based on internal and/or third-party evaluations, and an outline of safeguards applied in high-risk areas such as healthcare, national security, financial decisions, and other sensitive domains.
- Computational Resources. Information on the computing power used to train and run the model.
Models built on top of covered foundation models would be required to link to the base model’s transparency disclosures and to comply with the FTC’s regulations for any significant changes or retraining they introduce.
Enforcement
The AI FMTA relies on the FTC’s existing enforcement powers and does not create new penalties; a violation of the AI FMTA would be a violation of the FTC Act.
Why This Matters
The AI FMTA echoes themes found in California’s recently enacted SB-53, the “Transparency in Frontier Artificial Intelligence Act” (TFAIA), which was signed into law on September 29, 2025. Like the AI FMTA, TFAIA requires large AI developers to publicly disclose key information about their models, including a “frontier AI framework” describing how they assess and manage catastrophic risks, and transparency reports detailing intended uses, supported languages, and third-party evaluations. Both laws share a common objective: ensuring that policymakers and the public have access to meaningful information about AI systems’ capabilities, limitations, and risks. These parallel efforts signal the increasing importance of AI transparency at both the state and federal levels.
The AI FMTA is significant at the federal level for several reasons:
- It is bipartisan, signaling cross-party agreement that federal AI regulation is entering a new phase.
- It closely follows other recent federal AI initiatives, including Senator Marsha Blackburn’s AI discussion draft and the White House’s National Policy Framework for Artificial Intelligence.
- It already has support from a diverse group of industry, labor, and civil society organizations, underscoring widespread momentum for federal AI standards.
The AI FMTA has been referred to the House Committee on Energy and Commerce (the “Committee”), which is responsible for reviewing the bill and determining its next steps. During this stage, the Committee may hold hearings, request testimony from experts and stakeholders, and debate the bill’s contents. Based on these discussions, the Committee can amend the AI FMTA, approve it, or reject it altogether. Given its scope and ambition, the AI FMTA is unlikely to pass unchanged, but it offers a clear signal of where federal AI policy may be heading: toward greater transparency and clearer accountability.
For more information on AI legislation, regulations, and enforcement, please contact Alston & Bird’s Privacy, Cybersecurity and Data Strategy Team and sign up for alerts at AlstonPrivacy.com.
