California Gov. Gavin Newsom Signs SB 53, AI Safety and Transparency Law

SACRAMENTO, California (TAV) — California Governor Gavin Newsom on Monday signed Senate Bill 53, enacting the nation’s first state law to regulate advanced artificial intelligence systems and positioning the state as a global leader in AI oversight.

The Transparency in Frontier Artificial Intelligence Act requires major AI developers to disclose their risk management practices, publish safety frameworks, and report critical incidents. It also includes whistleblower protections and establishes CalCompute, a state-led consortium to expand access to computing resources and research.

“California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive,” Newsom said in his signing message. “AI is the new frontier in innovation, and California is not only here for it but stands strong as a national leader by enacting the first-in-the-nation frontier AI safety legislation that builds public trust as this emerging technology rapidly evolves.”

Anthropic’s powerful endorsement

Newsom’s decision was bolstered by a rare public endorsement from Anthropic, one of the leading AI labs. The company broke ranks with competitors Meta and OpenAI by backing SB 53, calling it a thoughtful balance between safety and innovation.

“While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won’t wait for consensus in Washington,” Anthropic said in its endorsement. The company emphasized that SB 53 implements a “‘trust but verify’ approach” through disclosure requirements rather than prescriptive technical mandates, which doomed last year’s failed bill, SB 1047.

Anthropic described SB 53’s transparency requirements as a safeguard against competitive backsliding: “Without it, labs with increasingly powerful models could face growing incentives to dial back their own safety and disclosure programs in order to compete. But with SB 53, developers can compete while ensuring they remain transparent about AI capabilities that pose risks to public safety, creating a level playing field where disclosure is mandatory, not optional.”

The endorsement proved decisive. As TechCrunch noted, Meta and OpenAI lobbied heavily against SB 53, with OpenAI even publishing an open letter urging Newsom not to sign the bill. Leaders at both companies have also launched pro-AI super PACs, part of a broader effort by Silicon Valley elites to raise hundreds of millions of dollars to support candidates favoring light-touch AI regulation. Against this backdrop, Anthropic’s support offered a voice of reason, giving Newsom the political cover to act.



Lawmakers and experts weigh in

Sen. Scott Wiener, the San Francisco Democrat who authored the bill, said the legislation proves innovation and safety can coexist.

“With a technology as transformative as AI, we have a responsibility to support that innovation while putting in place commonsense guardrails to understand and reduce risk,” Wiener said.

Mariano-Florentino Cuéllar, a former California Supreme Court Justice who co-chaired Newsom’s AI working group, said the law reflects a rigorous process.

“As artificial intelligence continues its long journey of development, more frontier breakthroughs will occur,” Cuéllar said. “AI policy should continue emphasizing thoughtful scientific review and keeping America at the forefront of technology.”

Fei-Fei Li, co-director of Stanford University’s Institute for Human-Centered AI, said SB 53 helps California lead “with transparency and trust.”

Jennifer Tour Chayes, dean of computing and data sciences at UC Berkeley, added that the law creates “a framework where innovation and accountability can coexist.”

California’s global influence

California is home to 32 of the world’s top 50 AI companies, according to a Forbes listing. With more than half of global venture capital in AI startups flowing into the Bay Area, observers say California’s regulations will have a significant impact far beyond state borders.

According to the Associated Press, more than 20 advocacy groups also supported the measure, citing protections for youth and communities most affected by AI harms. Still, some venture capital firms, including Andreessen Horowitz, warned that the law could set a precedent for restrictive state regulation that undermines competitiveness.

Broader implications

The passage of SB 53 underscores the absence of federal law on AI. California’s step could pressure Congress to act or accelerate a patchwork of state-led initiatives.

For African nations and the diaspora, California’s approach offers lessons: transparency mandates and whistleblower protections may strengthen global demands for accountable AI, while regulatory gaps at the federal level highlight the opportunity for emerging economies to shape their own frameworks.

By signing SB 53, Newsom established California as the first U.S. state to regulate frontier AI, despite opposition from some of the most powerful players in the industry.

With Anthropic’s endorsement providing crucial momentum, California has made clear that the era of voluntary self-policing in AI is giving way to enforceable rules.

Other states are already watching closely: in New York, Gov. Kathy Hochul is weighing whether to sign or veto a similar AI transparency bill recently passed by lawmakers, underscoring that California’s move could be the beginning of a wider state-by-state wave.
