On December 24, 2024 and January 13, 2025, respectively, the Oregon Attorney General’s Office and the California Attorney General’s Office published advisories (collectively, the “Advisories”) explaining how existing statutes may be used to regulate, investigate, and enforce against artificial intelligence (“AI”). These Advisories remind AI developers, suppliers, and users of heightened regulatory scrutiny of AI and of potential regulatory enforcement tools. This blog post briefly summarizes the authorities the California and Oregon AGs have identified as potential vehicles for AI regulation and enforcement, and provides key takeaways for each.
Unfair and Deceptive Acts and Trade Practices
The Advisories interpret the states’ respective unfair and deceptive acts and practices (“UDAP”) laws as broadly applicable to AI. They warn businesses that making false or misleading claims about AI – such as by misrepresenting the capabilities of AI products and services, or by using AI to generate or publish fake product reviews – may run afoul of UDAP laws.
In Oregon, businesses may be liable for misrepresentations even when false or misleading claims are not made directly to consumers, so long as the claims have downstream harmful effects on consumers.
In California’s Advisory, the AG emphasizes that California’s UDAP statute (the “Unfair Competition Law”) “was intentionally written with broad, sweeping language to protect Californians” not just from “obvious and familiar forms of fraud and deception,” but also from “new, creative, and cutting-edge forms of unlawful, unfair, and misleading behavior.” In a nod to the law’s unique breadth, California’s AG also noted that California courts recognize violations of non-California law – such as federal law or the laws of other states – as potentially triggering liability under the Unfair Competition Law.
Additionally, the Federal Trade Commission and the State of Texas have already used their respective UDAP enforcement powers against companies that allegedly misrepresented the capabilities of their AI tools.
Privacy Laws
The Advisories remind developers, suppliers, and users of AI to comply with state privacy laws when using AI models and systems to process personal information. California and Oregon have both enacted comprehensive privacy laws that impose requirements on businesses that process personal information in connection with AI. For example, the Oregon Consumer Privacy Act requires businesses to permit consumers to opt out of the use of AI to profile them in furtherance of certain decisions, such as those related to employment and financial services.
In California, rules regulating how AI may make automated decisions on the basis of consumer profiles are not yet in force; they are instead being considered as part of a package of proposed amendments to the California Consumer Privacy Act (“CCPA”) Regulations. If adopted in their current form, the amendments could substantially change how businesses may deploy AI that makes – or whose output is used as a significant factor in making – consequential decisions about individuals, in both the consumer and employment contexts. The amendments currently include new AI-specific requirements such as “pre-use notices,” opt-out rights, and “access” rights (which could enable individuals to request more information about AI algorithms). (The AG’s Advisory does not mention these amendments, possibly because they have not been finalized and are currently undergoing public comment.)
Beyond California’s draft CCPA Regulations, California maintains additional privacy statutes not expressly mentioned in its Advisory, but which could be relevant to AI investigations and enforcement. For instance, California’s Confidentiality of Medical Information Act, Invasion of Privacy Act, and Student Online Personal Information Protection Act could all potentially be used by the state AG – and possibly also by private plaintiffs – to bring claims relating to AI.
Civil Rights Laws
Both the California and Oregon AGs emphasize their willingness to use civil rights legislation to investigate AI usage by companies. California’s AG has already conducted a sweep of health systems to investigate possible “racial and ethnic bias” in AI systems used by providers, and he reminds companies that “[d]evelopers and users of AI should be wary of these potential biases that may be unlawfully impacting Californians.” Similarly, Oregon’s AG warns that if AI makes housing decisions against qualified applicants “from certain neighborhoods or ethnic backgrounds because [it] was trained on historically biased data,” it may violate Oregon civil rights laws.
We have also seen federal and state civil rights statutes support investigations into companies’ use of AI algorithms within their products and services. Civil rights legislation grants regulators broad authority, and that discretion could reach areas such as AI development, training, testing, use, and monitoring.
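To make the bias concern concrete, one common screening technique for the disparate outcomes the AGs describe is to compare an AI system’s selection rates across demographic groups, often against the “four-fifths” rule of thumb drawn from U.S. employment-discrimination guidance. The sketch below is illustrative only – the data, field names, and 0.8 threshold are assumptions – and falling below the threshold is a flag for further review, not a legal determination of bias.

```python
from collections import defaultdict

def disparate_impact_ratios(decisions, group_key="group", outcome_key="approved"):
    """Compute each group's selection rate and its ratio to the
    highest-rate group. Ratios below ~0.8 (the "four-fifths" rule of
    thumb) are often treated as a flag for further review."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in decisions:
        totals[row[group_key]] += 1
        positives[row[group_key]] += 1 if row[outcome_key] else 0

    rates = {g: positives[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: (rate, rate / best) for g, rate in rates.items()}

# Hypothetical housing-application decisions (illustrative data only).
sample = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

for group, (rate, ratio) in disparate_impact_ratios(sample).items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"group {group}: rate={rate:.2f}, ratio={ratio:.2f} ({flag})")
```

A check like this would be one input into the broader testing and auditing program described in the best-practices section below.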
AI-Specific Laws in California
The California AG also reminds developers and users that recently enacted AI-specific laws must be considered in addition to laws that do not explicitly regulate AI.
California recently passed several laws that specifically apply to AI, including requirements that developers and users of AI make certain disclosures when using AI, regulation of the unauthorized use of likenesses in digital replicas, and requirements that health insurers ensure licensed physicians supervise the use of any AI that makes automated decisions about healthcare services or insurance claims.
Other Laws May Apply
In addition to the laws identified above, the Advisories point to laws related to data security and antitrust, and they remind businesses that such laws apply equally to AI. To further highlight this point, the California AG stated in a January 13, 2025 Healthcare AI Advisory, “conduct that is illegal if engaged in without the involvement of AI is equally unlawful if AI is involved, and the fact that AI is involved is not a defense to liability under any law.”
Best Practices and Next Steps for Companies Implementing AI Tools
The Advisories encourage businesses that develop or use AI to consider implementing steps to mitigate risks arising from the myriad state laws potentially applicable to AI.
- Conduct Risk Assessments. Assess the risks that the use of AI systems may pose to the business and to consumers, including risks arising from state laws. Consider whether there are reasonable measures the business can take to mitigate those risks.
- Test, Validate and Audit. Test AI systems, validate their results, and conduct audits to confirm that such systems function properly and that their use is safe, ethical, and lawful.
- Be Transparent. Disclose the impact that AI may have on consumers. Be transparent about what consumer personal information (if any) is used and how, including whether consumer data may be used for AI training purposes.
- Implement Data Privacy Principles. Comply with well-established and legally required privacy principles, such as limiting the collection of personal information to what is necessary to achieve the purposes for which the business collects it. Where possible, apply privacy-protective measures such as the anonymization of personal information; a minimal illustration of data minimization and pseudonymization appears after this list.
- Comply with Applicable Law. Be mindful of laws that already apply to the business generally; they will likely continue to apply to any AI deployment in the same areas.
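As one illustration of the privacy principles above, the following sketch drops fields that are unnecessary for the stated training purpose and replaces a direct identifier with a salted one-way hash before records are used for AI training. The field names and salting scheme are hypothetical, and pseudonymized data of this kind can still be personal information under the CCPA and similar laws – this reduces risk but is not, by itself, anonymization or a compliance guarantee.

```python
import hashlib
import os

# Fields assumed necessary for the training purpose; everything else is
# dropped (data minimization). Field names are hypothetical.
ALLOWED_FIELDS = {"user_id", "age_range", "region", "interaction_text"}

# Random salt kept separate from the dataset; without it, the pseudonymous
# IDs below cannot easily be tied back to the original identifiers.
SALT = os.urandom(16)

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:16]

def minimize_record(record: dict) -> dict:
    """Keep only the necessary fields and pseudonymize the identifier."""
    kept = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    kept["user_id"] = pseudonymize(kept["user_id"])
    return kept

raw = {
    "user_id": "jane.doe@example.com",
    "full_name": "Jane Doe",         # dropped: not needed for training
    "ssn": "000-00-0000",            # dropped: never needed for training
    "age_range": "25-34",
    "region": "OR",
    "interaction_text": "asked about loan refinancing options",
}

print(minimize_record(raw))
```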