The first legal step against AI disasters comes from New York: the RAISE Act has passed the state legislature

New York state lawmakers have approved the RAISE Act, which aims to prevent advanced models developed by major AI labs such as OpenAI, Google, and Anthropic from causing catastrophic outcomes. The act would create a regulatory framework targeting scenarios that could result in more than 100 deaths or injuries, or more than $1 billion in property damage.
If the RAISE Act goes into effect, it will establish the first legally mandated transparency standards for advanced AI in the US. The act, supported by prominent figures such as Nobel Prize winner Geoffrey Hinton and AI research pioneer Yoshua Bengio, runs counter to the stance of Silicon Valley and recent administrations, which have prioritized speed and innovation in recent years.

The act is currently awaiting the signature of New York Governor Kathy Hochul, who can sign it, request changes, or veto it. The RAISE Act is technically similar to California’s SB 1047, but it offers a narrower and less intrusive framework.
New York State Senator Andrew Gounardes, one of the bill’s co-sponsors, said in a statement that it was designed to avoid placing an innovation burden on startups or academic researchers.
The law would require large technology companies developing advanced AI systems to publicly release safety and security reports, disclose security breaches, and report incidents of concerning model behavior. Companies that fail to comply could face civil penalties of up to $30 million, imposed by the New York Attorney General’s Office.
The law applies only to advanced AI systems: covered models are those trained with more than $100 million in computing resources and made available to users in New York. This definition currently captures most of the most advanced systems on the market.
According to Nathan Calvin, vice president of legal affairs at Encode, which helped develop the law, the RAISE Act was drafted with criticisms of past regulations in mind. For example, the bill does not require models to include an “emergency stop” (kill switch), and companies are not held directly liable for damages that may occur after a model is trained.
However, even in its current form, the law has faced serious opposition in Silicon Valley. The bill’s other sponsor, New York State Assemblyman Alex Bores, said in a statement that he had expected this resistance from the industry. Anjney Midha, a general partner at the venture firm Andreessen Horowitz (a16z), called the law “stupid” and claimed that such regulations would weaken the U.S.’s global competitiveness.
Another cautious voice was Jack Clark, co-founder of the safety-focused AI company Anthropic. Clark said the law could impose too broad a burden on smaller companies. Senator Gounardes responded that this criticism missed the point, since the bill targets only large companies.
OpenAI, Google, and Meta declined to comment on the bill. Some have raised concerns that the law could push companies to withdraw from the New York market. Assemblyman Bores countered that withdrawing would make little economic sense, noting that New York has the third-largest economy in the US: “Withdrawing from this market is not something that companies will take lightly.”
The RAISE Act is one of the most comprehensive steps taken at the state level in the US, echoing earlier technology regulations in Europe. By requiring that advanced AI systems be developed in a more transparent and auditable manner, the law could herald a new era that prioritizes public safety.

