New York lawmakers have passed the Responsible AI Safety and Education (RAISE) Act, a bill establishing stringent safety and transparency rules for large developers of advanced AI models, such as OpenAI and Google. The legislation now awaits Governor Kathy Hochul’s decision, which could set a significant U.S. precedent for AI oversight.
Million-dollar rules: If signed, the RAISE Act mandates that companies investing $100 million or more in training a single AI system must publish safety plans, undergo independent annual reviews, and report critical incidents or model theft to the state attorney general within 72 hours.
Defining disaster: The bill targets "critical harm"—events causing at least 100 deaths or over $1 billion in damage—with penalties of up to $30 million for repeat offenses. It also includes whistleblower protections, a nod to recent staff departures from AI labs over safety concerns.
Industry's divided view: While sponsors like State Senator Andrew Gounardes advocate for such "commonsense safeguards," some tech figures, like Andreessen Horowitz's Anjney Midha, have criticized the bill as detrimental, posting on X that it could "hurt the US."
The Albany clock: Governor Hochul has yet to act on the RAISE Act. If it is enacted, leading AI developers will have 180 days to submit their initial safety assessments, ushering in a new phase of AI accountability. Meanwhile, the broader AI safety movement has reportedly faced headwinds, and some industry leaders, including Anthropic's CEO, are also calling for federal action on AI transparency. At the same time, federal lawmakers are reportedly weighing measures that could preempt such state-level rules, and some AI labs are independently strengthening model safeguards against severe risks such as bioweapon development.
Reading Recap: