If you’re wondering where a robust regulatory response to generative AI could come from, don’t sleep on the Federal Trade Commission. While courts and the U.S. Copyright Office are still feeling their way through the complex maze of technical, legal and policy questions raised by generative AI, the FTC, as a law enforcement agency, has a narrower brief and potent tools it already can bring to bear on many of those questions.
It also has a green light from the White House to use those tools. In President Joe Biden’s recent executive order on AI the FTC was encouraged to take a leading role in the government’s efforts to put guardrails around the development and use of AI systems.
The head of each agency developing policies and regulations related to AI shall use their authorities, as appropriate and consistent with applicable law, to promote competition in AI and related technologies, as well as in other markets… In particular, the Federal Trade Commission is encouraged to consider, as it deems appropriate, whether to exercise the Commission’s existing authorities, including its rulemaking authority under the Federal Trade Commission Act, 15 U.S.C. 41 et seq., to ensure fair competition in the AI marketplace and to ensure that consumers and workers are protected from harms that may be enabled by the use of AI.
As an independent agency, the FTC does not report to the president. The White House therefore cannot direct the agency to take any particular action. But the order’s encouragement to act will not have been missed by the commissioners.
We’ve noted here before the FTC’s interest in AI’s potential impact on the creative industries and the creative workers within those industries. How it might act on that interest was spelled out in more detail in comments the commission filed to the Copyright Office as part of the latter’s congressionally mandated study of generative AI and copyright.
The FTC has an interest in many of the difficult questions with which the Copyright Office has been grappling about where to draw the line between human creation and AI-generated content. For instance, not only may creators’ ability to compete be unfairly harmed, but consumers may be deceived when authorship does not align with consumer expectations, such as when a consumer thinks a work has been created by a particular musician or other artist but it has been generated by someone else using an AI tool.
[snip]
The use of AI technology raises significant competition and consumer protection issues beyond questions about the scope of rights and the extent of liability under the copyright laws… Conduct that may violate the copyright laws––such as training an AI tool on protected expression without the creator’s consent or selling output generated from such an AI tool, including by mimicking the creator’s writing style, vocal or instrumental performance, or likeness—may also constitute an unfair method of competition or an unfair or deceptive practice… In addition, conduct that may be consistent with the copyright laws nevertheless may violate Section 5 [of the FTC Act]. Many large technology firms possess vast financial resources that enable them to indemnify the users of their generative AI tools or obtain exclusive licenses to copyrighted (or otherwise proprietary) training data, potentially further entrenching the market power of these dominant firms.
The FTC’s interest in AI, as well as its starring role in the president’s executive order, was further elaborated on in comments this week by Commissioner Alvaro Bedoya at an event sponsored by the Open Markets Institute, a Washington-based, liberal-leaning think tank (Bedoya’s comments start at the 1:42:24 mark).
The OMI also released its own report this week on the monopoly danger posed by AI, including the threat to creators, creative workers and creative property.
Critically, many of the concerns the antitrust watchdogs raise about generative AI overlap directly with the issues raised by artists and copyright owners. But they frame those concerns not in terms of copyright law and the fact-specific, multi-factor analysis required to adjudicate fair use claims, but in the more concrete and generally applicable principles of competition and consumer protection laws.
Moreover, those laws are already on the books and do not require any new legislation to apply them in the context of AI, as might very well be required to bring generative AI within the scope of copyright law.
Many of the needed rules of engagement between AI and the creative communities, in other words, may already be written. We just need to read them.
ICYMI
AI across the aisle Apropos the above, Sens. John Thune (R-S.D.) and Amy Klobuchar (D-Minn.) this week introduced a bipartisan bill to regulate AI. The Artificial Intelligence Research, Innovation, and Accountability Act of 2023 would direct federal agencies to create standards for transparency and accountability for AI tools. It also proposes new definitions of “generative” and “high-impact” AI systems, and directs the National Institute of Standards and Technology (NIST) to develop standards for establishing the authenticity of online content. As noted, some of the bill’s goals may already be achievable through enforcement of laws already on the books. But it never hurts to get your name on a bill likely to garner support from colleagues, the public and, of course, donors. Story. Bill.
Unfair use Ed Newton-Rex is a composer, technologist and entrepreneur, and one of the most prominent figures in the development of generative AI in music. And until Monday he had been the head of the audio team at Stability AI, the company behind the Stable Diffusion generative AI engine. He resigned his high-profile position over a philosophical disagreement with the company on the fair use question with respect to the use of copyrighted works to train generative AI models without the rights owner’s consent. In an op-ed published Wednesday in Music Business Worldwide, Newton-Rex said Stability AI’s comments filed last month to the U.S. Copyright Office were the last straw. “I’ve resigned from my role leading the Audio team at Stability AI, because I don’t agree with the company’s opinion that training generative AI models on copyrighted works is ‘fair use’,” he wrote. “This is a position that is fairly standard across many of the large generative AI companies, and other big tech companies building these models — it’s far from a view that is unique to Stability. But it’s a position I disagree with.” Op-ed. Stability AI comments to Copyright Office.
Theatrical flourish Interesting move by former National Association of Theatre Owners chief John Fithian. His new LA-based consulting firm, the Fithian Group, will do the usual strategic, investment and PR advising but will also work with clients looking to bring a broad range of films directly to theaters without passing through a traditional studio or distributor. The strategy worked for Taylor Swift (what doesn’t?), whose The Eras Tour movie went straight to theaters via a deal with AMC to become the highest-grossing concert film on record and a much-needed ray of hope for beleaguered cinema owners. Another direct-to-theater musical is coming soon from Beyoncé. The studios probably aren’t looking over their shoulders just yet, but it will be interesting to see if the model works for dramas and other non-event fare. The Hollywood Reporter.