States would be barred from enforcing regulations on artificial intelligence technology for a decade under a plan being considered in the US House of Representatives. The legislation, in an amendment to the federal government's budget bill, says no state or political subdivision "may enforce any law or regulation" for 10 years regulating artificial intelligence models, artificial intelligence systems or automated decision systems. The proposal would still need the approval of both chambers of Congress and President Donald Trump before it could become law. The House is expected to vote on the full budget package this week.
AI developers and some lawmakers have said federal action is needed to keep states from creating a patchwork of different rules and regulations across the US that could slow the technology's growth. The rapid growth of generative AI since ChatGPT arrived on the scene in late 2022 has led companies to fit the technology into as many spaces as possible. The economic implications are significant, as the US and China race to see which country's technology will dominate, but generative AI also poses privacy, transparency and other risks for consumers that lawmakers have sought to address.
"As an industry and as a country, we need a clear federal standard, whatever it is," Alexandr Wang, founder and CEO of the data company Scale AI, told lawmakers during a hearing in April. "But we need one, we need clarity as to one federal standard and have preemption to prevent this outcome where you have 50 different standards."
Efforts to limit states' ability to regulate artificial intelligence could mean fewer consumer protections around a technology that is increasingly seeping into every aspect of American life. "There has been a lot of discussion at the state level, and I would think that it's important for us to approach this problem at multiple levels," said Anjana Susarla, a professor at Michigan State University who studies AI. "We could approach it at the national level. We can also approach it at the state level. I think we need both."
Several states have already started regulating AI
The proposed language would bar states from enforcing any regulation, including those already on the books. The exceptions are rules and laws that make things easier for AI development and those that apply the same standards to non-AI models and systems that do similar things. These kinds of regulations are already starting to pop up. The biggest focus is not in the US but in Europe, where the European Union has already implemented standards for AI. But states are starting to take action.
Colorado passed a set of consumer protections last year, set to take effect in 2026. California adopted more than a dozen AI-related laws last year. Other states have laws and regulations that often deal with specific issues such as deepfakes, or that require AI developers to publish information about their training data. At the local level, some regulations also address potential employment discrimination when AI systems are used in hiring.
"States are all over the map when it comes to what they want to regulate in AI," said Arsen Kourinian, a partner at the law firm Mayer Brown. So far in 2025, state lawmakers have introduced at least 550 proposals around AI, according to the National Conference of State Legislatures. During a House committee hearing last month, Rep. Jay Obernolte, a Republican from California, signaled a desire to get ahead of more state-level regulation. "We have a limited amount of legislative runway to resolve that problem before the states get too far ahead," he said.
While some states have laws on the books, not all of them have taken effect or seen any enforcement. That limits the potential short-term impact of a moratorium, said Cobun Zweifel-Keegan, managing director, Washington, for the International Association of Privacy Professionals. "There isn't really any enforcement yet."
A moratorium would likely deter state legislators and policymakers from developing and proposing new regulations, Zweifel-Keegan said. "The federal government would become the primary and potentially sole regulator around AI systems," he said.
What a moratorium on AI regulation means
AI developers have asked for any guardrails placed on their work to be consistent and streamlined. During a Senate committee hearing, OpenAI CEO Sam Altman told Sen. Ted Cruz, a Republican from Texas, that an EU-style regulatory system would be "disastrous" for the industry. Instead, Altman suggested that the industry develop its own standards.
Asked by Sen. Brian Schatz, a Democrat from Hawaii, whether industry self-regulation is enough at the moment, Altman said he thought some guardrails would be good but, "It's easy for it to go too far. As I have learned more about how the world works, I am more afraid that it could go too far and have really bad consequences." (Disclosure: Ziff Davis, parent company of CNET, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Concerns from companies, both the developers that create AI systems and the "deployers" who use them in interactions with consumers, often stem from fears that states will mandate significant work, such as impact assessments or transparency requirements, before a product is released, Kourinian said. Consumer advocates have said more regulations are needed, and hampering states' ability to act could hurt users' privacy and safety.
"AI is being used widely to make decisions about people's lives without transparency, accountability or recourse; it's also facilitating chilling fraud, impersonation and surveillance," Ben Winters, director of AI and privacy at the Consumer Federation of America, said in a statement. "A 10-year pause would lead to more discrimination, more deception and less control. Simply put, it's siding with tech companies over the people they impact."
A moratorium on specific state rules and laws could result in more consumer protection issues being handled in court or by state attorneys general, Kourinian said. Existing laws around unfair and deceptive practices that are not specific to AI would still apply. "Time will tell how judges will interpret those issues," he said.
Susarla said the pervasiveness of AI across industries means states might be able to regulate issues like privacy and transparency more broadly, without focusing on the technology itself. But a moratorium on AI regulation could lead to such policies being tied up in lawsuits. "It has to be some kind of balance between 'we don't want to stop innovation,' but on the other hand we also need to recognize that there can be real consequences," she said.
A lot of policy around the governance of AI systems does happen because of those so-called technology-agnostic rules and laws, Zweifel-Keegan said. "It's also worth remembering that there are many existing laws, and there is a potential to make new laws, that don't trigger the moratorium but do apply to AI systems as long as they apply to other systems," he said.
Moratorium draws opposition ahead of House vote
House Democrats have said the proposed pause on regulations would hinder states' ability to protect consumers. Rep. Jan Schakowsky called the move "reckless" during a committee hearing on AI regulation Wednesday. "Our job right now is to protect consumers," the Illinois Democrat said.
Republicans, meanwhile, argued that state regulations could be too much of a burden on innovation in artificial intelligence. Rep. John Joyce, a Republican from Pennsylvania, said in the same hearing that Congress should create a national regulatory framework rather than leaving it to the states. "We need a federal approach that ensures consumers are protected when AI tools are misused, and in a way that allows innovators to thrive."
At the state level, a letter signed by 40 state attorneys general, from both parties, called on Congress to reject the moratorium and instead create that broader regulatory system. "This bill does not propose any regulatory scheme to replace or supplement the laws enacted or currently under consideration by the states, leaving Americans entirely unprotected from the potential harms of AI," they wrote.