OpenAI Says California's Controversial AI Bill Will Hurt Innovation


OpenAI argued that such regulation should come from federal agencies, not state ones.

OpenAI is opposing a bill in California that would place new safety requirements on artificial intelligence companies, joining a chorus of tech leaders and politicians who have recently come out against the controversial legislation.

The San Francisco-based startup said the bill would hurt innovation in the AI industry and argued that regulation on this issue should come from the federal government instead of the states, according to a letter sent to California State Senator Scott Wiener's office on Wednesday and obtained by Bloomberg News. The letter also raised concerns that the bill, if passed, could have "broad and significant" implications for US competitiveness on AI and national security.

SB 1047, introduced by Wiener, aims to enact what his office has called "common sense safety standards" for companies that make large AI models above a specific size and cost threshold. The bill, which passed the state Senate in May, would require AI companies to take steps to prevent their models from causing "critical harm," such as enabling the development of bioweapons capable of causing mass casualties or contributing to more than $500 million in financial damage.

Under the bill, companies would need to ensure AI systems can be shut down, take "reasonable care" that artificial intelligence models don't cause catastrophe and disclose a statement of compliance to California's attorney general. If businesses don't follow these requirements, they could be sued and face civil penalties.

The bill has received fierce opposition from many major tech companies, startups and venture capitalists who say that it's an overreach for a technology still in its infancy and could stifle tech innovation in the state. Some critics of the bill have raised concerns that it could drive AI companies out of California. OpenAI echoed those concerns in the letter to Wiener's office.

"The AI revolution is only just beginning, and California's unique status as the global leader in AI is fueling the state's economic dynamism," Jason Kwon, chief strategy officer at OpenAI, wrote in the letter. "SB 1047 would threaten that growth, slow the pace of innovation, and lead California's world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere."

OpenAI has put conversations about expanding its San Francisco offices on hold amid concerns about uncertainty with California's regulatory landscape, according to a person familiar with the company's real estate plans who requested anonymity to discuss internal conversations.


In a statement, Wiener defended the proposed legislation and said OpenAI's letter "doesn't criticize a single provision in the bill." He also said the argument about AI talent leaving the state "makes no sense" because the law would apply to any companies that conduct business in California, regardless of where their offices are located. A representative for Wiener's office pointed to two prominent national security experts who have publicly supported the bill.

"Bottom line: SB 1047 is a highly reasonable bill that asks large AI labs to do what they've already committed to doing, namely, test their large models for catastrophic safety risk," Wiener said. "SB 1047 is well calibrated to what we know about foreseeable AI risks, and it deserves to be enacted."

Critics argue the bill would hamper innovation by requiring companies to submit details about their models to the state government, and that it could deter smaller open-source developers from creating startups for fear of being sued.


Last week, in an effort to address some of the pushback, Wiener amended the proposed legislation to eliminate criminal liability for tech companies that are not in compliance, added a protection for smaller open-source model developers and got rid of the new proposed "Frontier Model Division." Previously, developers could incur criminal penalties for intentionally submitting false information about their safety plans to the government under penalty of perjury. OpenAI rival Anthropic, which has a reputation for being more safety-oriented than its competitors, has previously said it would support the bill with some of these amendments.

But even after the amendments were introduced, the bill has continued to rack up opponents, including former House Speaker Nancy Pelosi, who released a statement calling it "ill-informed." A group of Democratic members of Congress also publicly opposed the bill. OpenAI and other tech industry firms have retained lobbyists who have been working on the bill, according to state filings.


In the letter, OpenAI said it has been engaging with Wiener's office for several months on the bill but ultimately does not support it.

"We must protect America's AI edge with a set of federal policies - rather than state ones - that can provide clarity and certainty for AI labs and developers while also preserving public safety," according to the letter. OpenAI also said having a clear federal framework would "help the US maintain its competitive advantage against countries like China and to advance democratic governance and values around the world."

OpenAI argued that federal agencies such as the White House Office of Science and Technology Policy, the Department of Commerce and the National Security Council are better suited to govern critical AI risks than state-level California government agencies. The company said it supports several proposed pieces of federal legislation such as the Future of AI Innovation Act, which provides congressional backing for the new US AI Safety Institute.


"As I've stated repeatedly, I agree that ideally Congress would handle this," Wiener said in the statement. "However, Congress has not done so, and we are skeptical Congress will do so. Under OpenAI's argument about Congress, California never would have passed its data privacy law, and given Congress's lack of action, Californians would have no protection whatsoever for their data."

SB 1047 is set to be voted on in the California State Assembly sometime this month. If passed, it will ultimately move to the desk of California Governor Gavin Newsom. While Newsom has not indicated whether he will veto the bill, he has publicly spoken about the need to promote AI innovation in California while mitigating its risks.
