TheMeridiem


NY Governor Weakens AI Safety Rules as Tech Captures Regulation

Hochul's imminent rewrite of the RAISE Act represents the critical moment when executive power dilutes legislative intent on AI safety—a pattern that mirrors California SB 53's capture and signals systematic regulatory failure across states.


The Meridiem Team

At The Meridiem, we cover just about everything in the world of tech. Some of our favorite topics to follow include the ever-evolving streaming industry, the latest in artificial intelligence, and changes to the way our government interacts with Big Tech.

  • The New York legislature passed strict RAISE Act requirements for large AI model developers (Meta, OpenAI, Google, DeepSeek) in June. This week, Governor Hochul reportedly proposed a near-total rewrite to favor tech companies, mirroring what happened to California's SB 53.

  • 150+ parents who lost children to AI harms sent a letter Friday demanding Hochul sign the original bill. Tech companies—coordinated through the AI Alliance—called it "unworkable" and mobilized a super PAC to target the bill's sponsor with ads.

  • For investors: This determines whether AI regulatory risk is real or priced into an illusion of enforcement. For enterprises: Your future compliance architecture depends on whether the rules that actually pass have teeth.

  • Watch for Hochul's signing decision within 7-10 days. If she signs the original, NY becomes the template. If she accepts the rewrite, expect other states to follow the weaker pattern.

This week marks the moment when New York's AI safety regulation either holds or collapses. The legislature passed the RAISE Act with real teeth—forcing the largest AI companies to file safety plans, disclose critical incidents, and justify frontier model releases. Now Governor Kathy Hochul is about to rewrite it into something the tech industry can live with. This is regulatory capture in real time, and the decision window closes within days.

The New York legislature handed Governor Hochul what looked like a genuine constraint on AI companies this summer. The RAISE Act—Responsible AI Safety and Education—required developers of frontier models to actually document what could go wrong, report when things did go wrong, and prove their systems wouldn't cause the defined critical harms: death or serious injury to 100+ people, or $1 billion in damages. For companies spending hundreds of millions annually on model development, it was friction. Real friction. And friction is what safety regulations are supposed to create.

Then this week, Hochul reportedly circled back with a near-total rewrite. The specific language of her proposed changes hasn't been published, but Transformer News reported that the governor's office is moving toward something "more favorable to tech companies"—which in regulatory language means the safety requirements are getting thinner.

This matters because we've seen this exact script before. California's SB 53 passed with real substance, but by the time Governor Gavin Newsom signed it, significant provisions had been stripped. What emerged was substantively weaker—still technically a transparency law, but with enough loopholes that compliance became an exercise in box-checking rather than actual disclosure. The pattern: the legislature passes something strict, industry mobilizes, the governor's office "moderates" the bill before signing, and what becomes law is more theater than enforcement.

You can see the mechanics at work in New York right now. The AI Alliance—which counts Meta, OpenAI, Google, IBM, Intel, Oracle, and others among its members—sent a June letter to lawmakers calling the RAISE Act "unworkable." That's industry consensus-building disguised as technical feedback. Then Leading the Future, a super PAC backed by Perplexity AI, Andreessen Horowitz, OpenAI president Greg Brockman, and Palantir co-founder Joe Lonsdale, started running ads against Alex Bores, the RAISE Act's co-sponsor. That's pressure applied where it matters politically—against the legislator who championed the rules.

Meanwhile, 150+ parents sent a letter Friday to Hochul. Many of them had "lost children to the harms of AI chatbots and social media," according to the organizations coordinating the push. They call the RAISE Act, in its current form, "minimalist guardrails"—not comprehensive regulation, but at least an attempt. What they're really saying is: don't make it worse than what the legislature already passed.

The current language matters. The original RAISE Act covers only the very largest AI developers—companies spending hundreds of millions annually. That's intentional narrowing; regulating every AI tool would be regulatory overreach. But for those frontier developers, the rules are specific: file a safety plan with the state attorney general, disclose incidents involving critical harm within 30 days, and don't release a frontier model if it creates unreasonable risk of defined critical harm. These aren't obstacles; they're documentation requirements. They're what any responsible company would do anyway—or at least, what they'd claim they're doing.

What's changing with Hochul's rewrite is almost certainly the teeth. Narrower definitions of "critical harm." Longer reporting timelines that effectively gut incident transparency. Safe harbors for companies that claim they didn't know their models would cause harm. Broader exemptions. The specific mechanics matter less than the direction: toward leniency.

The timing is what makes this an inflection point. Hochul has a narrow window—usually governors have a few weeks to sign or veto legislation—before the bill becomes law with or without her signature. But if she's proposing a rewrite, she's signaling willingness to hold the bill in negotiations, using her signing power as leverage. That leverage only works if she can convince the legislature that her version is acceptable.

This is where the regulatory capture mechanism reveals itself: the governor's office becomes the filter through which industry preferences are laundered back into the legislative process. It feels procedural. It looks like "balancing concerns." But what it actually does is allow organized industry pressure—coordinated through industry groups, super PACs, and trade associations—to soften rules after the political costs of passing strict ones have already been paid by legislators.

For investors, this determines whether AI regulatory risk is something that will actually affect valuations or a phantom risk priced against enforcement that never materializes. If Hochul signs the weakened version, AI regulatory risk effectively doesn't exist at the state level (federal preemption is another question). For enterprises building compliance frameworks, you need to know whether the rules you're planning for will actually be enforced.

The next threshold to watch: Hochul's signing within 7-10 days, and whether the legislature's leadership accepts her rewrite or forces a decision between the original bill and a veto.

This is the moment when New York determines whether state-level AI regulation becomes real or becomes a symbolic gesture. The inflection point is narrow: Hochul can sign the legislature's bill, which has genuine but modest safety requirements, or she can force a rewrite that makes compliance easier for the companies it's meant to constrain. The parallel to California's SB 53 capture suggests what's coming—rules that sound tough but function as cover for industry as usual. For decision-makers, this determines your compliance timeline and whether regulations will actually drive behavior change. For investors, it signals whether regulatory risk to AI companies is mispriced. For builders, it shows which rules will actually be enforced versus which are theater. Watch for the signing within days—it's the moment the pattern either repeats or breaks.

