Gov. Newsom signs AI-related bills regulating Hollywood actor replicas and deepfakes
SAN FRANCISCO — Gov. Gavin Newsom on Tuesday signed a handful of artificial intelligence-related bills that give actors more protection over their digital likenesses and combat the spread of deepfakes in political ads, among other regulations aimed at the fast-rising technology.
“They were important election integrity bills that are long overdue,” Newsom said in an interview at Dreamforce, a San Francisco conference hosted by business software giant Salesforce. “The election’s happening, early voting is happening, these bills were urgent for me to get done.”
At least one of the new laws could play into this year’s presidential election, which has already seen an online proliferation of deepfake political endorsements and false videos of candidates.
One of the new laws, Assembly Bill 2839, aims to curb manipulated content that could harm a candidate’s reputation or public confidence in an election’s outcome, with exceptions for parody and satire. Under the legislation, a candidate, election committee or elections official could seek a court order to get deepfakes pulled down. They could also sue the person who distributed or republished the deceptive material for damages.
The other bills signed include AB 2655, which requires technology platforms to have procedures for identifying, removing and labeling fake content. That law also exempts parody, satire and news outlets that meet certain requirements. AB 2355 requires a committee that creates a political ad to disclose if it was generated or substantially altered using AI.
Deepfakes of this year’s presidential candidates, Vice President Kamala Harris and former President Trump, have spread widely online, increasing fears of misinformation and disinformation. Some social media companies have taken down such content when it violates their standards, but it can be difficult for content moderators to keep up with the rapid sharing and uploading.
One recent deepfake victim was Taylor Swift. Trump shared a post on his Truth Social platform that implied Swift had endorsed him when she did not.
“Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site,” Swift wrote in an Instagram post, where she endorsed Harris. “It really conjured up my fears around AI, and the dangers of spreading misinformation. It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth.”
The bills Newsom signed into law also address concerns raised during last year’s Hollywood strikes, in which the Screen Actors Guild-American Federation of Television and Radio Artists and the Writers Guild of America fought for protections for actors and writers worried that their jobs could be taken away by advances in AI technology.
Two of the laws that Newsom signed give performers more protections over their digital likenesses.
One prohibits and penalizes the making and distribution of a deceased person’s digital replica without permission from their estate.
The other makes a contract unenforceable if a digital replica of an actor was used when the individual could have performed the work in person or if the contract did not include a reasonably specific description of how the digital replica would be used. The rules governing contracts take effect in January.
“No one should live in fear of becoming someone else’s unpaid digital puppet,” said Duncan Crabtree-Ireland, SAG-AFTRA’s national executive director and chief negotiator, in a statement. “Gov. Newsom has led the way in protecting people — and families — from A.I. replication without real consent.”
The new laws were part of a slew of roughly 50 AI-related bills in the state Legislature, as the state’s political leaders try to address public concerns about the technology. One bill Newsom has not yet decided on is SB 1047, an AI safety bill introduced by Sen. Scott Wiener (D-San Francisco), which has been hotly debated in Silicon Valley.
The bill would require developers of future advanced AI models to create guardrails to prevent the technology from being misused to conduct cyberattacks on critical infrastructure.
“The governor has made public statements more generally about supporting both innovation and regulation, but not wanting regulation to harm innovation,” Wiener said at a Tuesday news conference. “Those align with my views as well, and those statements align with SB 1047.”
On Tuesday, Newsom told The Times that he hasn’t made up his mind on the bill yet.
“It’s one of those bills that come across your desk infrequently, where it depends on who the last person on the call was in terms of how persuasive they are,” Newsom said. “It’s divided so many folks.”
Newsom said that in the green room before he went onstage at Dreamforce there were two leaders in the space (whom he didn’t want to name) who were debating the bill with polar opposite views. “It even splits people here,” he said.
“The most important thing [is] regardless of what happens on 1047, that’s not the last word, that’s not the holy grail of regulation in this space,” Newsom said. “... It’s all evolving and I want to make sure we have a dynamic regulatory environment and we’re constantly iterating.”
At a fireside chat during Dreamforce, Newsom said he wants to bring a regulatory framework that can support investment. At the same time, “we have to have enough flexibility to deal with unintended consequences, but that we’re not overcompensating for anxieties that may never materialize,” he said.
Dreamforce is San Francisco’s largest conference and is expected to draw 45,000 people, according to San Francisco’s Office of Economic & Workforce Development. The three-day event, which kicked off Tuesday, is being billed by Salesforce as the “largest AI event in the world.”
Staff writer Queenie Wong contributed to this report.