AI Regulation: Harder Than It Looks
No one thought, or should have thought, that devising and implementing a coherent and workable regulatory regime for artificial intelligence would be easy, or without controversy. But from Europe to the United States, the political battles erupting over efforts to create one are showing just how difficult that challenge will be.
Consider:
In the U.S., Congress has thus far failed to enact any sort of comprehensive legal or regulatory framework for AI, leaving a vacuum that many state legislatures have tried to fill where they can. But tucked into the “One Big Beautiful Bill” recently passed by the U.S. House of Representatives is a measure that would ban states from enforcing “any law or regulation limiting, restricting, or otherwise regulating artificial intelligence models, artificial intelligence systems, or automated decision systems entered into interstate commerce” for a period of 10 years.
According to various analysts, the provision would effectively nullify anywhere from 700 to 1,000 current or pending state laws, eliminating what little AI regulation exists.
Technology companies had pressed Congress to take steps to address the emerging patchwork of state laws around the country. It is difficult and costly for any business or industry to comply with 50 different — and sometimes contradictory — regulatory regimes. But the idea was to replace that patchwork with a single federal standard applying everywhere, not simply to wipe away any rules at all. Yet the Republicans who drafted the OBBB couldn’t be bothered with the hard part. Their eyes were fixed on taking from the poor and giving to the rich, cutting Medicaid and nutrition assistance for the bottom fifth while extending generous tax cuts to the top. Actually addressing a complex policy thicket would have only slowed them down.
The OBBB now faces uncertain prospects in the Senate, however. Some of its provisions, including the AI law moratorium, have attracted opposition, even from some Republicans. The bill is also framed as a budget reconciliation measure, allowing it to circumvent a likely Democratic filibuster and pass by a simple majority, with only Republican votes. But under the Senate’s arcane rules, only measures that directly affect spending or taxes can be included in a reconciliation bill. The AI law moratorium, as written, likely does not meet that criterion and could be stripped from the OBBB by the Senate parliamentarian.
That would leave the current messy patchwork of state laws in place.
A key effort to bring some clarity to the application of copyright law to AI systems is also now a shambles. As discussed in my previous post, the U.S. Copyright Office was nearing the finish line on a two-year, congressionally mandated study of the interaction between copyright and AI when President Trump abruptly — and perhaps illegally — fired Register of Copyrights Shira Perlmutter days after firing her boss, Librarian of Congress Carla Hayden. Perlmutter has now sued Trump over her dismissal. But a federal court on Monday denied her request for a temporary restraining order that would have immediately reinstated her, seemingly leaving the leadership of the office — and perhaps the fate of the fourth and final chunk of the AI report — up in the air. The Copyright Office staff has declined to recognize Trump’s hand-picked “acting Register,” Justice Department official Paul Perkins, as legitimately appointed, and has instead rallied behind senior career official Robert Newlen, who stepped in in the immediate aftermath of Perlmutter’s firing.
European Union
In the European Union, efforts to implement the landmark AI Act may also be hitting the skids. The law is scheduled to come into full effect in August. But reports are now circulating that the European Commission is considering a “stop the clock” move to pause enforcement of the law to give time to consider amendments to “simplify” the rules.
The AI Act is the most ambitious effort anywhere to establish a robust regulatory regime for AI. But the law — never popular with AI developers — has faced increasingly forceful pushback, as well as technical difficulties, as full implementation has drawn near.
The loudest controversy has erupted over the drafting of the AI Code of Practice mandated by the law and intended to provide a blueprint for technology companies to ensure their compliance with the AI Act’s complex rules. The drafting process, conducted by a committee of experts appointed by the Commission, has been subject to intense lobbying by U.S. technology companies as well as by home-grown AI developers, and the Code has undergone significant modification with each successive draft.
Those modifications have drawn the ire of some members of the European Parliament. In May, a group of lawmakers led by Italian MEP Brando Benifei wrote to the Commission to express “great concern” over efforts to water down the Code. “Risks to fundamental rights and democracy are systemic risks that the the [sic] most impactful AI providers must assess and mitigate,” the letter said. “It is dangerous, undemocratic and creates legal uncertainty to fully reinterpret and narrow down a legal text that co-legislators agreed on, through a Code of Practice.” Benifei has separately threatened legal action against the Commission if the Code is adopted in its current form.
The Commission has also faced intense pressure over the law from the Trump Administration, which claims it unfairly targets U.S. companies. In April, the U.S. Mission to the EU reached out to the Commission to oppose adoption of the law in its current form. The White House has also pressured the EU for changes to the Digital Services Act and the General Data Protection Regulation (GDPR).
The AI Act has also hit a speed bump in the development of technical specifications intended to enable equipment and device makers to design their systems to perform the various technical functions the law will require. Development of the specs is supposed to be completed before August, but the process is behind schedule and may not be finished before the end of the year.
Even if negotiators meet the deadline, however, most EU member countries lack the funds or expertise needed to enforce the AI Act, EU Parliament AI policy advisor Kai Zenner told a conference in Washington in May. “Most member states are almost broke,” Zenner said. “This combination of lack of capital finance and also lack of talent will be really one of the main challenges of enforcing the AI Act.”
United Kingdom
Across the Channel in the U.K., the government of Prime Minister Keir Starmer has had a devil of a time steering its ambitious Data (Use and Access) Bill through Parliament. The bill, a centerpiece of the PM’s bid to claim a piece of the AI action for Britain, has been hung up over a provision that mirrors the EU’s text-and-data-mining (TDM) exception, which allows researchers to make use of copyrighted works without permission or payment to rights owners. Among those permitted uses would be ingesting copyrighted works to train generative AI models.
The bill has been mired in Parliamentary “ping-pong” as the House of Lords has three times added an amendment requiring AI companies to disclose all data used in training their models so that copyright owners can know whether their works have been included, only to have the government get it stripped from the bill in the House of Commons. The Lords is expected to take up the latest version of the bill — again sans copyright amendment — on Monday (6/2). But Baroness Beeban Kidron, the author of the amendment, has vowed to reintroduce the disclosure provision, which has passed by increasing margins each time the bill has come to the floor in the upper chamber.
The government has pleaded with MPs and peers to let the Data Bill go through without addressing copyright issues, fearing that the measure, which it sees as critical to the U.K.’s economic future, could get bogged down in the complexities surrounding the interaction of AI and IP. But it has gotten bogged down anyway over efforts to avoid the question.
The TDM exception has drawn high-profile opposition from members of the British creative community, including such luminaries as Paul McCartney and Elton John, the latter of whom branded the government “absolute losers” and said he felt “incredibly betrayed” over the exception in an interview with the BBC.
In an effort to mollify opponents, the government has launched a public consultation specifically on AI and copyright, and has promised to address data transparency and other copyright issues in separate legislation based on the consultation later this year or next. But critics have dismissed the effort as window dressing meant to render the issue moot.
In an open letter to Secretary of State for Science, Innovation, and Technology Peter Kyle released Friday, a group of 42 artists, authors, and musicians organized by former Stability AI executive turned AI-model critic Ed Newton-Rex demanded the government “clarify” language in the consultation document regarding AI and copyright law.
“In the document introducing the consultation, the government repeatedly suggested that the question of how UK copyright law currently applies to AI training is uncertain,” the letter said. “You personally reinforced this in the accompanying press release when you referred to ‘uncertainty about how copyright law applies to AI’… However, there is no uncertainty: commercial generative AI training on copyrighted work without a licence is illegal in the UK.”
As in the EU, the U.S. government and technology companies have had a heavy hand in trying to keep the copyright question out of the Data Bill. “The Government have got it wrong. They have been turned by the sweet whisperings of Silicon Valley, who have stolen – and continue to steal every day we take no action – the UK’s extraordinary, beautiful and valuable creative output,” Baroness Kidron said after the most recent vote in the House of Lords. “Silicon Valley has persuaded the government that it’s easier for them to redefine theft than make them pay for what they have stolen.”
The Trump Administration has likewise pressed the U.K. government not to take steps to regulate AI that the White House views as unfair to U.S. technology companies.
The particular political circumstances obviously are different in each of those jurisdictions. What they share is that their respective governments, to one degree or another, have bought into the idea that preeminence in AI technology is the one true path to economic growth, and they’re willing to subordinate any other economic, technological, cultural, or diplomatic interest they view as potentially blocking that path. The increasingly bitter debate over the proper balance between AI and copyright is both a symptom and a consequence of that AI fixation.
The AI Act began as an effort to cement the EU’s role as the global standard-setter for regulating the technology, and in particular AI companies’ handling of the data they collect and process. The current European Commission, under the leadership of President Ursula von der Leyen, is now steering a very different course.
Last week the Commission unveiled a new EU Startup and Scaleup Strategy geared to fostering more home-grown “innovation,” a term increasingly read in political circles to mean “AI.” Von der Leyen and the Commission are also working to make the EU more open to AI investment from abroad (read: the U.S.). The tempering of the Code of Practice to be more accommodating to U.S. technology companies is a reflection of that, as are efforts to “simplify” the operation of the regulation itself.
In the U.S. we’re practically burning the boats. While Republicans in Congress try to sweep away any regulation that might hinder AI development, the Trump administration is racing to dismantle the very scientific, academic, and research capacity needed to develop anything else.
AI may someday prove to be the transformative technology its apostles believe it to be, on the order of electrification or the wheel. But we don’t know that yet. And we’re putting a lot of eggs in that basket.