
California’s new AI law seems to have struck the right balance

AI companies must now disclose, and actually follow, the safety protocols meant to keep their models from being used for cyberattacks or the design of biological weapons.

Image: Smartphone showing AI application icons on screen (Unsplash)


California just pulled off something most people thought impossible: it passed an AI safety law that tech bros aren’t completely melting down about.

SB 53, the state’s new AI safety and transparency law, was freshly signed by Governor Gavin Newsom and hailed by some as proof that smart regulation doesn’t have to crush innovation. 

One of those believers is Adam Billen, vice president of public policy at Encode AI, a youth-led advocacy group that’s been pushing for more responsible AI development. 

“You can make laws that protect innovation and make products safe,” Billen told TechCrunch.

At its core, SB 53 tells big AI labs to stop hiding their safety playbooks. Companies must now disclose, and actually follow, the security protocols that prevent their models from being used for things like cyberattacks or bio-weapon design. 

The state’s Office of Emergency Services will keep watch, ensuring no one gets lazy with their guardrails. “Most companies already do this stuff,” Billen said, “but some have started skimping, and that’s exactly why the bill matters.”

He’s referring to a growing industry trend: AI firms cutting safety corners to stay competitive. 

OpenAI, for instance, has admitted it might “adjust” safety standards if rivals move faster. Billen argues SB 53 locks companies into the promises they’ve already made, a legislative safety net for the innovation arms race.

Unlike its ill-fated predecessor, SB 1047, this bill didn’t spark Silicon Valley hysteria. Still, the tech elite, from Meta to Andreessen Horowitz, are pouring money into pro-AI super PACs to keep state regulators out of their servers. 

Meanwhile, Senator Ted Cruz is trying a new route to sideline state laws through his SANDBOX Act, which would let AI companies dodge some federal rules for up to ten years.

Billen warns that these moves could “delete federalism for the most important technology of our time.” 

He says if America really wants to beat China in the AI race, it should focus on chip export controls, not blocking state transparency laws. 

“SB 53 isn’t the thing holding us back,” he said. “It’s democracy actually working: messy, imperfect, but still alive.”



Ronil is a Computer Engineer by education and a consumer technology writer by choice. Over the course of his professional career, his work has appeared in reputable publications like MakeUseOf, TechJunkie, GreenBot, and many more. When not working, you’ll find him at the gym setting a new PR.
