States Lead AI Regulation Push as US Policy Interests Shift
Nearly 700 AI bills surfaced in state legislatures in 2024, addressing issues from safety standards to deepfake controls. Colorado passed comprehensive legislation, while California vetoed a key bill, reflecting varied strategies as states fill the regulatory gap left by stalled federal action.

States Move to Shape AI Regulation Landscape in 2024, Report Finds

The CCIA State Policy Center reports that state legislatures are taking an active role in artificial intelligence oversight. In 2024, AI-related bills were introduced in virtually every state, and several measures became law.

The state-level momentum comes as Congress and federal agencies weigh national AI standards. California and Colorado exemplify different regulatory approaches: Colorado enacted comprehensive AI legislation through SB 205, though stakeholders expressed concerns about limited input opportunities. Meanwhile, California Governor Gavin Newsom vetoed SB 1047, citing the need for more refined proposals, while signing other AI-related bills addressing digital replicas and deepfakes.

State legislation largely addresses five areas: safety requirements for AI development, digital content watermarking, deepfake regulations, right of publicity protections and study commissions. The CCIA State Policy Center warns that overly broad state regulations could hamper technological advancement.

“In the fast-evolving field of AI, it is important to find a balance in regulation in order to ensure that rules are not so rigid as to hinder innovation,” the report states, noting particular concerns about appropriately assigning liability among AI developers, deployers and users.

Looking ahead to 2025, Connecticut Senator Maroney plans to reintroduce comprehensive AI regulation that could become a model for other states. New York’s legislature is expected to consider bills on AI liability standards and synthetic media watermarking.

The varied state approaches highlight the challenges of establishing AI oversight frameworks without unified federal standards.

AI Policy Faces Uncertain Shift Ahead of 2025

The future of artificial intelligence regulation in the United States faces uncertainty ahead of potential leadership changes in Washington, according to a new analysis from Wharton School experts.

While the Biden administration has emphasized safety protocols, Trump campaign advisers and donors favored reduced AI restrictions, Wharton legal studies professor Kevin Werbach told a recent panel. The campaign’s position remains complex, however, having both criticized big tech companies and opposed regulation. The insights emerged from Wharton’s recent “Policies That Work” panel examining AI governance.

States aren’t waiting for federal clarity. Approximately 700 AI-related bills are under consideration nationwide, even as companies adopt voluntary safety measures to prevent discrimination and protect users.

The technology’s soaring energy demands present immediate challenges. AI-related data centers currently consume triple the power of New York City, with usage expected to triple again by 2028. In response, Microsoft has partnered with Constellation Energy to revive Pennsylvania’s Three Mile Island nuclear facility through a 20-year power agreement.

Deepfake technology poses a particular threat to democratic stability, the experts warned. Their proposed solution includes mandatory education, with students learning to create deepfakes to understand the technology’s capabilities better.

While the European Union moves forward with comprehensive regulations, U.S. policy remains at a crossroads, creating an uncertain environment for industry leaders and innovators.

Healthcare AI Needs Smart Regulation, New Report Warns

A new report from Paragon Health Institute warns that overregulation of artificial intelligence in healthcare could stifle innovations that save lives while calling for targeted oversight that prioritizes patient safety.

The report comes as state legislatures have dramatically ramped up AI-related activity, with nearly 700 proposals in 2024 compared to 191 in 2023.

“An awareness of AI among policymakers has, at times, substituted for a meaningful understanding of its operations,” said Kev Coleman, visiting research fellow at Paragon. “When coupled with the dystopian AI predictions occasionally in the press, this situation risks mis-regulation that can not only increase technology costs but reduce the very medical advances policymakers desire from AI.”

The report recommends that regulators distinguish between different AI systems rather than treat them uniformly. For example, AI used for back-office medical supply purchasing carries much lower risk than patient-facing diagnostic applications.

The study also emphasizes that the FDA’s existing framework for evaluating medical devices provides a strong foundation for AI oversight. Rather than creating new regulatory bodies, the report suggests leveraging existing healthcare agencies’ expertise.

Key recommendations include providing economic pathways for AI systems to get updated approvals as they improve over time and ensuring regulations don’t duplicate existing protections under HIPAA and other laws.
