On June 22, 2025, Texas Governor Greg Abbott signed House Bill 149, the Texas Responsible Artificial Intelligence Governance Act (“TRAIGA”), into law. TRAIGA imposes on businesses and governmental entities obligations and prohibitions for certain uses of artificial intelligence (“AI”), amends the Texas Capture or Use of Biometric Identifier Act (“CUBI”) to include certain exemptions, and amends the Texas Data Privacy and Security Act (“TDPSA”) to require processors to help controllers protect personal information processed by an AI system. TRAIGA takes effect January 1, 2026.
Duties and Prohibitions on the Use of AI
TRAIGA applies to persons who (i) promote, advertise, or conduct business in Texas; (ii) produce a product or service used by Texas residents; or (iii) develop or deploy an AI system in Texas. It applies to all systems that meet its definition of “AI systems,” not just those used in high-risk contexts. “AI system” is defined consistently with the Colorado Artificial Intelligence Act (“Colorado AI Act”) as any machine-based system that, for any explicit or implicit objective, infers from the inputs it receives how to generate outputs—such as content, decisions, predictions, or recommendations—that can influence physical or virtual environments.
TRAIGA provides that it should be broadly construed and applied to further its purposes of (i) advancing the responsible development and use of AI systems; (ii) protecting individuals from AI systems’ known and reasonably foreseeable risks; (iii) providing transparency regarding risks in the development, deployment, and use of AI systems; and (iv) providing reasonable notice regarding the use or contemplated use of AI systems by state agencies.
General Prohibitions
TRAIGA prohibits companies from developing or deploying AI systems that (i) unlawfully discriminate, (ii) impair an individual’s rights under the federal constitution, (iii) encourage harm or criminal activity, or (iv) produce or distribute certain sexually explicit content and child pornography. Each of these provisions is addressed below.
Discrimination. No person may develop or deploy an AI system with the intent to unlawfully discriminate against a protected class in violation of state or federal law. A “protected class” refers to individuals with a status protected under state or federal civil rights laws.
This prohibition does not apply to insurance entities providing insurance services, provided they are subject to laws governing anti-discrimination, unfair competition, or deceptive practices in the insurance industry. Federally insured financial institutions are deemed compliant with TRAIGA’s non-discrimination rules if they adhere to applicable federal and state banking laws and regulations.
TRAIGA thus places significant focus on ensuring non-discriminatory AI systems. Non-discrimination was already the key focus of the Colorado AI Act, although Texas, unlike Colorado, does not require businesses to report instances of “algorithmic discrimination.” It was also a focus of Virginia’s AI Act, which was passed by the legislature but vetoed by the governor. TRAIGA therefore signals that non-discrimination remains a key focus of states’ AI legislation.
Constitutional Rights. No person may develop or deploy AI systems with the sole intent to infringe, restrict, or otherwise impair an individual’s rights guaranteed under the U.S. Constitution, such as freedom of speech. TRAIGA clarifies that this provision may not be construed to create or expand any right under the U.S. Constitution.
Harm and Criminal Activity. TRAIGA prohibits developing or deploying an AI system in a way that intends to incite or encourage a person to commit physical self-harm, harm another person, or engage in criminal activity.
Sexually Explicit Content and Child Pornography. TRAIGA also prohibits the development or deployment of an AI system with the sole intent of producing, assisting or aiding in producing, or distributing (i) sexually explicit videos without consent of the persons depicted; (ii) sexually explicit “deep fake” videos without consent of the persons appearing to be depicted; or (iii) an AI system that engages in text-based sexual conversations impersonating or imitating persons less than eighteen (18) years old.
Government-Specific Obligations and Prohibitions
TRAIGA imposes a disclosure requirement on governmental agencies and prohibits certain uses of AI for social scoring and biometric identification.
“You’re Talking to AI” Disclosures. Governmental agencies that make AI systems available for consumer interaction must disclose that the consumer is interacting with an AI system prior to or at the time of interaction. The term “consumers” excludes employees and individuals acting in a commercial context.
Social Scoring. Governmental entities are prohibited from using or deploying AI systems to evaluate or classify individuals based on social behaviors or personal characteristics with the intent to assign a social score, if that score could result in:
• Unfavorable treatment unrelated to the context in which the behaviors or characteristics were observed;
• Disproportionate consequences relative to the nature of the behaviors or characteristics; or
• Infringement of any rights under Texas or federal law.
Biometric Identification. Governmental entities may not develop or deploy AI systems to uniquely identify individuals using biometric data or by collecting images or other media from the internet or publicly available sources without the individual’s consent, if such collection would infringe rights under Texas or federal law. A violation of CUBI also constitutes a violation of this provision.
Amendments to Texas Capture or Use of Biometric Identifier Act
CUBI requires persons collecting biometric identifiers for commercial purposes to provide notice to and obtain consent from the individual prior to collection. TRAIGA amends CUBI to exempt from this requirement the collection of biometric identifiers that are made publicly available by the individual to whom they pertain. This amendment implies that businesses must still provide notice and obtain consent when collecting biometric identifiers that are publicly available but were disclosed by someone other than the individual to whom the identifiers pertain.
TRAIGA further amends CUBI to exempt from the notice and consent requirements:
• The training, processing, or storage of biometric identifiers used in developing, evaluating, or offering AI models or systems, unless those systems are used or deployed to uniquely identify a specific individual; and
• The development or deployment of AI models or systems for security purposes or to prevent illegal activity.
Amendments to the Texas Data Privacy and Security Act
The TDPSA requires processors to assist controllers in complying with their obligations to maintain security measures to protect personal information. TRAIGA amends the TDPSA to explicitly require processors to assist controllers in complying with their obligations to protect personal information processed by an AI system.
Enforcement
The attorney general has exclusive enforcement authority and may seek civil penalties if a person develops or deploys AI systems for a prohibited use and fails to cure the alleged violation within sixty (60) days. Penalties include:
• $10,000 to $12,000 per curable violation;
• $80,000 to $200,000 per non-curable violation; and
• an additional $2,000 to $40,000 for each day that the violation continues.
TRAIGA does not provide a private right of action, nor does it impose additional penalties for violations of CUBI or the TDPSA, except that a governmental agency that violates CUBI also violates TRAIGA.
State agencies may also impose sanctions on individuals or entities they license, register, or certify, if the attorney general recommends additional enforcement. Sanctions may include:
• a maximum penalty of $100,000; and
• suspension, probation, or revocation of a license, registration, certificate, or other authorization to operate.
For more information on AI legislation, regulations and enforcement, please contact Alston & Bird’s Privacy, Cybersecurity and Data Strategy Team and sign up for alerts at AlstonPrivacy.com.