SAN FRANCISCO — California moved to set the national agenda on AI policy when Gov. Gavin Newsom signed a first-of-its-kind law this week aimed at making the technology safer.
But the path to get there stretches back more than a year and was marked by shifting political and business alliances, behind-the-scenes blow-ups and last-minute interventions, according to insider accounts shared with POLITICO.
The legislation was the result of months of intense discussions with the AI industry that pulled in tech luminaries like the angel investor and Democratic megadonor Ron Conway as well as Silicon Valley giants Meta, Amazon and Google, plus input from AI heavyweights Anthropic and OpenAI.
The process also highlighted the stakes for the top Democrats involved: Newsom, a likely 2028 presidential contender, and the bill’s author, state Sen. Scott Wiener, who carried more ambitious AI legislation last year that Newsom vetoed, but now has his sights set more squarely on replacing former House Speaker Nancy Pelosi.
The motivations of the players, as well as the tight timeframe to turn around the legislation, underscore the weight of the law’s potential consequences for the lucrative AI industry within the state, estimated to contribute billions to California’s coffers each year. The law will require major AI companies to disclose their safety protocols while offering whistleblower protections, and lawmakers in other states are watching it as a potential national standard.
“We’ve publicly said many times, obviously the preference is a federal framework,” Jack Clark, co-founder and policy head of OpenAI rival Anthropic, told POLITICO. “This is a chance to do something at the state level, which would then give us a template for something federal that everyone wants.”
POLITICO spoke with nearly a dozen people in and around the negotiations to get inside the rooms where California’s landmark AI law, SB 53, was crafted, many of whom were granted anonymity to disclose private discussions. The conversations paint the most vivid picture yet of how the tech companies formed alliances and pushed negotiations with Wiener to the last ticks of the legislative clock.
Each player faced different pressures this time around that made the law feel more urgent, but also made getting it across the finish line a real possibility.
For Newsom, with an eye increasingly shifting to a likely White House run, remaining close with deep-pocketed tech donors — some of whom are close friends — was key. Signing the bill helped head off accusations he had done nothing amid mounting public scrutiny of AI and headline-grabbing lawsuits alleging chatbots like ChatGPT contributed to teen suicides.
After vetoing Wiener’s first bill, Newsom set up a panel of experts to report on AI safety recommendations, offering the governor a way to guide more modest legislation and still credibly claim a win on landmark AI rules without alienating tech.
Newsom’s move in turn left Wiener little choice but to adapt his bill from Newsom’s expert report, which the state senator did: He knew he couldn’t go as far as last year if he wanted to claim victory on a major AI safety measure. He also could ill afford to wait another year, with the midterms looming, when he could make a run for the House should Pelosi step aside, and with Republicans in Washington still promising to freeze state AI laws.
“The thing that was very meaningfully different is, we know [Wiener’s] office started from a greater point of attempting to sort of reach out to industry,” Clark said.
The dynamics were also different for tech companies, facing public pressure over how their chatbots interact with kids, in particular, and the prospect of other state legislatures where they have less influence moving faster to crack down.
But tech was not a bloc. Each company had its own preferences, and they jockeyed up to the last minute over who would and wouldn’t be included under Wiener’s bill, haggling over definitions to the end.
They knew the bill would not just cover AI’s home market of California but could also establish a blueprint for regulation stretching across the country.
Newsom’s office declined to comment directly for this story, instead referring POLITICO to the governor’s signing statement.
“[California] stands strong as a national leader by enacting the first-in-the-nation frontier AI safety legislation that builds public trust as this emerging technology rapidly evolves,” Newsom said on Monday.
In the dawn …
Last year, Wiener took his big swing with the broader bill that required safety testing for some AI models before they could be released and which included liability provisions for certain harms caused by AI. That infuriated tech companies, which accused the San Francisco Democrat of kneecapping the state’s burgeoning golden goose industry. Chief among the critics in Washington was none other than Pelosi herself.
After Newsom vetoed the bill, the governor’s team stood up the panel of experts to make recommendations on the state’s role in AI regulation.
Wiener’s staff got to work almost immediately, crafting new legislation after the veto, along with co-sponsors like the youth-led AI nonprofit Encode, which had helped to push Wiener’s failed bill, SB 1047, through the statehouse.
The state lawmaker couldn’t afford to wait, with the clock already ticking as the 2025 legislative session got underway.
Wiener and his allies — which included AI safety-focused advocacy groups — locked in a placeholder bill in January. His expanded measure ultimately focused on AI whistleblower protections and laid the groundwork for a state-run AI computing cluster, dubbed CalCompute, in late February.
Those were the least controversial pieces of the previous ill-fated bill that the senator decided to resurrect, but it was just beginning.
Wiener had to wait and see what Newsom’s expert panel would produce. If he had jumped the gun and mismatched the bill with the report, there was less of a chance Newsom would back the measure in the end, and Wiener risked another ambitious lawmaker stepping in on AI and taking his place.
Newsom’s three main report authors were Stanford’s Fei-Fei Li, UC Berkeley’s Jennifer Chayes and the Carnegie Endowment for International Peace’s Mariano Florentino Cuéllar. All three continued to advise the parties through the legislative process at various points, according to one person with knowledge of the discussions.
The initial report came in March, recommending AI companies be transparent about the safety protocols they were following and recommending protections for whistleblowers working at AI companies to allow them to report dangerous program behavior, among other findings.
That partly aligned with Wiener’s rough sketch of a bill, as the still skeletal measure passed through the Senate in May. But the timing of the report meant that key definitions, and crucially, the scope of which companies and models would be captured in the bill, had yet to be definitively hammered out in the first house, as they might normally be.
Anthropic, the safety-focused AI lab behind the Claude chatbot, also engaged early with Wiener’s office starting in the spring, although its involvement truly stretched back to SB 1047, which the company worked on with Wiener but never fully supported. Clark said Anthropic also sent some of its experts to weigh in on technical briefings held by Newsom’s AI working group.
Once the final report came in June, Wiener’s office began to circulate an outline of major changes to come, first reported by POLITICO. That contained specifics on critical risks companies would have to identify, transparency requirements and the kinds of adverse events companies would have to report to the state.
Wiener released a bulked-up proposal in July reflecting those requirements that were also in the expert report.
He also sent a letter to tech companies, first reported by POLITICO, that would be central to the discussions, including Google, Meta and ChatGPT-maker OpenAI, asking for their input and partnership. Their first face-to-face meeting would show that the kumbaya moment would not last long.
A frosty reception
The battle lines emerged quickly. Google, Meta and Amazon largely operated as a bloc, multiple people close to the negotiations told POLITICO.
The venture capital firm Andreessen Horowitz stood in for the interests of the “little tech” startups that make up much of its investment portfolio. The firm represented a powerful voice because of its outsize role in building the state’s, and the nation’s, tech economy, as well as its close ties to the Trump administration, with Marc Andreessen backing the now-president during the 2024 campaign.
OpenAI and Anthropic, two of the leading San Francisco AI labs, ran their own, very different playbooks.
Wiener’s office held its first convening out of Sacramento, bringing together the companies, as well as co-sponsors like Encode, on a call. It was an attempt after the letter to build a big tent.
But it received an icy reception at first, according to one person with knowledge of the call. During a call to discuss the amendments, there were a few clarifying questions, and lots of silence.
“A lot of these companies don’t really trust each other,” the person said. “Even within industry, they have a lot of animosity towards each other.”
Wiener’s office continued to meet with the companies to go over the bill. The compressed timeline meant that key definitions — like what constituted a catastrophic risk from an AI model — were still being hashed out.
The lawmaker was not interested in waiting until next year, though, when other priorities and political squabbles could potentially put AI safety on the backburner.
For its part, Newsom’s office waited to engage, avoiding unnecessarily drawing the ire of the tech industry, which had so vehemently lambasted Wiener’s bill last year. The governor’s office was publicly silent on Wiener’s previous effort until the veto, and took a wait-and-see approach in the early stages this year.
OpenAI was keen to highlight the AI industry’s importance to California around that time, sending an economic impact report directly to Newsom and the Legislature in July. The message seemed clear: California needed the AI industry for its booming economy. Did Newsom really want to be the governor who pushed it out of state with heavy-handed regulation?
The first showdown
National politics also seeped into the process as the summer wore on. Sen. Ted Cruz (R-Texas) had tried and failed to freeze state AI regulations by June, and many tech companies involved in Wiener’s negotiations, like Andreessen Horowitz, had jumped at the opportunity to support the Texas Republican’s crusade.
Cruz has since promised to try again, underscoring for the bill’s boosters the urgency of getting it passed. The preemption fight also didn’t sit well with Newsom, who blasted the moratorium attempt as a move to “decimate state AI laws.”
It annoyed the governor’s circle that some of the companies pushing for federal preemption then tried to negotiate on California’s AI safety rules once they failed in Washington, according to another person with knowledge of the AI bill discussions.
That was the state of play as Wiener’s measure headed to the Assembly Privacy and Consumer Protection Committee in July, chaired by vocal tech critic Assemblymember Rebecca Bauer-Kahan.
Bauer-Kahan, a fellow San Francisco Bay Area Democrat, tacked on an amendment to the bill that required outside auditing to ensure companies were complying with their own safety policies, one of the people familiar with the process said.
Bauer-Kahan spokesperson Lauren Howe confirmed the auditing provision was added at the suggestion of the committee.
“They were forced on [Wiener],” the person familiar said. But the amendments also gave the senator something to trade in negotiations down the line, which he’d need in a later committee showdown.
With the definitions of key terms still not fully fleshed out, tensions were starting to run high.
The definitions that came out of the privacy committee “all somewhat overlapped and contradicted each other,” said another person with knowledge of the discussions. Companies would be required to report major incidents to the state under the bill, but some companies were still not clear what even constituted a reportable incident. The full details of which models would be included were also not settled.
But with time of the essence, Wiener struck the first of many bargains. The measure already included companies that had trained very large AI models. Now, after the privacy committee amendments, companies with $100 million in revenue would also be covered by the measure.
Some big tech companies would continue to try to push the revenue figure down, to apply to more companies, while advocates for little tech — namely Andreessen Horowitz — worked to push it upward to lighten the burden on startups, according to two people familiar with the discussions.
This was around the time that Bob Hertzberg, the former Assembly speaker, got involved, according to two people.
“He was sort of a convener that was brought in to close the deal,” one person said, adding he was engaged on behalf of industry. Hertzberg did not respond to a request for comment.
Conway — Silicon Valley’s most famous angel investor with close ties to Newsom — was also involved in the process, dating back to talks over Wiener’s previous bill, according to those same two people.
“He advocates for the health and reputation of the tech industry,” said one of those people, describing him as a sort of “King Solomon” figure who garnered immense respect and laid out what was and was not feasible throughout the process. A representative for Conway did not respond to a request for comment.
Wiener’s team continued to wrestle over the particulars after the privacy committee throughout the summer legislative recess.
But August would see the most intense negotiations so far as the bill hurtled toward its next major milestone.
Appropriations committee
Coming out of the summer recess, Anthropic submitted a letter saying it would support the bill if amended, Clark said. It seemed only a matter of time and details before the company got behind this year’s bill.
OpenAI took a different approach by setting off some fireworks that were hard to ignore, if a little hard to understand, for some who had been close to the negotiations.
In mid-August, the company’s head of global policy Chris Lehane — a long-time tech insider with ties to Newsom and Democratic politics going back to the Clinton administration — sent a letter directly to the governor. He urged Newsom to adopt an AI framework that was very different from what Wiener’s bill then included. Frontier AI companies that signed onto a federal testing regime, or the EU AI Act’s provisions, could be considered in compliance with California’s AI rules, Lehane suggested.
The letter went out as negotiations once again heated up, going into the Assembly Appropriations Committee, where many bills are quietly shelved every year. Wiener would reject Lehane’s suggestion, telling POLITICO at the time it was “a non-starter.” But language about taking into account federal and international AI safety frameworks would show up in the final version of the bill.
The public letter was not well received by some in the Legislature, according to one person familiar with the negotiations.
Newsom’s office did not respond to a question about the governor’s reaction to the letter.
But if the goal was to make waves, the letter succeeded at least in part. It also underscored the national and even international implications of the bill.
Newsom’s team had kept abreast of the bill’s progress to this point, but “August was when they started to get a lot more engaged,” Clark said.
Another person familiar with the negotiations put it bluntly: “We were heading to the … deadline and we were still not getting to a deal.”
The governor’s legislative staff began to emphasize that Newsom had already vetoed last year’s bill and that he had accepted the results of the panel report on which the new bill was based, the same person said.
It was time to get serious.
Newsom’s team was involved in the question of what companies and models should be covered by the bill, supporting keeping the company revenue threshold in the measure, according to another person familiar with the discussions.
“I think the conversation here became very heavily centered on, from Wiener and the governor’s side and all of industry who were engaged, including us, on what is the appropriate threshold?” said Clark.
During the final appropriations committee, chaired by Assemblymember Buffy Wicks, “little tech” advocates continued to push for the revenue threshold to be higher, according to two people.
She announced at the hearing in late August that the third-party auditing rules — added in Bauer-Kahan’s committee — had been stripped out, a priority for industry. The revenue threshold would be bumped up to $500 million. Things were moving forward.
Last steps
With the measure now headed toward a floor vote in the Assembly, Wiener’s office was still locked in negotiations with the companies on the specifics.
Companies continued to wrangle over definitions and disclosure requirements. One person familiar with the negotiations referred to the days leading up to those final talks as “a goat rodeo,” as different versions of the bill went back and forth.
Wiener eventually cut off the negotiations as midnight neared on Sept. 4, instead turning to the governor’s office to hammer out the final changes.
Late on the night of Sept. 5, Wiener released the final amendments. The changes tweaked which AI models the bill would cover and added references to national and international standards, among other revisions.
Wiener acknowledged at the time that the language was the result of discussions with the Newsom administration and other stakeholders.
The hard-fought amendments were enough for Anthropic, which officially came out in support of the bill on Sept. 8, the Monday after the amendments went public.
Anthropic CEO Dario Amodei told POLITICO at the time that the measure was not perfect, but that time was running out to require companies to publish their safety guidelines (which many including Anthropic already do) before their systems became even more powerful, and profitable.
“The bill creates basic transparency about frontier AI companies and the systems they build,” Clark said. “We expect extremely powerful systems to get built at the end of 2026 or early 2027. If we wait another year, you do not have a chance of hitting that AI development timeline.”
None of the other companies involved — Meta, Google or OpenAI — would take the same step of supporting the bill, while the Chamber of Commerce and the tech lobbying group TechNet opposed it to the end unless it was amended. Both lobbying groups declined to comment for this story.
Meta and OpenAI both spoke in somewhat positive terms of the bill without supporting it, with Meta calling it a step in the right direction once it passed the Legislature. Once signed, OpenAI said publicly that the bill, if implemented correctly, would allow federal and state governments to cooperate on safely deploying AI.
Meta, Google, Amazon and OpenAI declined to comment further for this story.
In the wee hours of the final day of the legislative session on Sept. 13, Wiener presented his measure to the Senate, which passed it easily with a vote of 29-8.
Newsom dropped a heavy hint weeks later in New York that he would sign the measure while speaking at an event with former President Bill Clinton on the sidelines of the U.N. General Assembly, another sign of the sweeping power the measure would have if it became law.
“We have a bill that’s on my desk that we think strikes the right balance,” Newsom said. “We worked with industry, but we didn’t submit to industry. We’re not doing things to them, but we’re not necessarily doing things for them.”
The governor signed it into law only days later, a year to the day after he had vetoed Wiener’s first attempt.
Wiener, at long last, could declare victory: “With this law,” he said, “California is stepping up, once again, as a global leader on both technology innovation and safety.”