New York State Assembly Bill A3411B (“the Bill”) passed its third reading in the Senate on March 9, 2026, clearing the legislature and preparing it for delivery to Governor Kathy Hochul. If enacted, the Bill will require owners, licensees, and operators of generative AI systems to display a clear and conspicuous notice on […]
Cybercrime Trends to Watch: Takeaways from the FBI’s 2025 IC3 Annual Report
On April 6, 2026, the Federal Bureau of Investigation (FBI) released its 2025 IC3 Annual Report, which provides key trends, case data, and other statistics related to the FBI’s ongoing efforts to combat emerging cybersecurity threats. According to the report, 2025 marked the first time the total reported cybercrime losses surpassed $20 billion, with cryptocurrency […]
Key AI, Cybersecurity, and Privacy Takeaways from the NAIC 2026 Spring Meeting
From March 22–25, the National Association of Insurance Commissioners (“NAIC”) held its 2026 Spring National Meeting in San Diego, California. During the meeting, the Innovation, Cybersecurity, and Technology Committee, along with its working groups on Third-Party Data and Models, Big Data and Artificial Intelligence, and Cybersecurity, addressed key developments regarding oversight of third-party data and […]
California Jumps into AI Procurement with State Governing Principles in an Executive Order
On March 30, 2026, California Governor Gavin Newsom signed Executive Order N-5-26 (the “Order”), aimed at governing the responsible procurement and deployment of Generative Artificial Intelligence (“GenAI”) across California’s state government. The Order builds on the foundation laid by Executive Order N-12-23, issued in September 2023, by directing a series of actions across multiple state […]
Threat Actors Exploit Google’s Gemini to Accelerate Cyberattacks
Google Threat Intelligence Group (GTIG) recently reported that cybercriminals—in particular, state-sponsored threat actors from North Korea, Iran, China, and Russia—are misusing Gemini, Google’s large language model (LLM), to support all stages of their attack lifecycle. Specifically, GTIG observed threat actors using Gemini for coding and scripting tasks, accelerating reconnaissance, researching publicly known vulnerabilities, and enabling […]