Oh, the Humanity: Copyright Gadfly Still Buzzing
Also This Week: Writers of the World Unite, AI in the U.K., and a French Kiss
Say this for Dr. Stephen Thaler: He doesn’t give up without a fight. For the past several years the computer scientist has been on a worldwide crusade to have an AI system he developed recognized as an author and inventor entitled to the same privileges and immunities as a human author or inventor. He’s filed patent and copyright applications in the U.S., the U.K. and Australia, all of which have been rejected by those countries’ registration authorities. He’s also filed lawsuits in all three countries contesting the rejections. So far, he’s 0-for-the-world.
But he’s still at it. This week, Thaler filed a reply brief to the U.S. Copyright Office’s opening brief in his ongoing lawsuit against the office over its rejection of his application to register a visual image he insists was “autonomously created” by his AI. Thaler lost the first round in U.S. district court back in August but has now appealed that decision to the D.C. Circuit.
His latest brief largely reiterates the arguments he’s been making since the beginning. He claims the Copyright Office and the district court misread the language of the Copyright Act and subverted the intent of the Constitution’s copyright clause in insisting on flesh-and-blood human authorship as a prerequisite for copyright protection (he makes a similar argument regarding inventorship in his cases against patent authorities).
I won’t rehash the entire saga (you can read my earlier coverage of the case here and here). Thaler’s quest seems legally quixotic at this stage. He’s essentially asking the court to overturn the judgment of the Copyright Office and announce a broad new reading of copyright law while the office is still studying the issue and the technology is advancing at breakneck speed.
But there’s the nub of an interesting policy debate lurking in his filing that goes beyond the immediate legal questions. As the brief puts it: “Dr. Thaler has consistently argued on the record that, while he does not consider himself the author of the Work in the traditional sense of a human being putting pen to paper, he is the undisputed human originator of the Work. He is the creator, owner, programmer, and user of the machine that outputted the Work.”
If an AI cannot be the author of a work for copyright purposes, and the AI user or developer cannot claim authorship, who or what can?
ICYMI
Write On
The Writers Guild of America’s AI-driven strike may be over, but the topic is still very much a live one among screenwriters around the world. The Federation of Screenwriters in Europe (FSE) and the International Affiliation of Writers Guilds (IAWG), which together represent about 67,000 professional script jockeys, are jointly developing a set of “ethical guidelines” for the use of AI in scriptwriting. The core principles in the guidelines, which will be binding on the two groups’ 46 member organizations in 38 countries, closely mirror those in the WGA’s settlement with the studios last year. They include a stipulation that only human writers can create “literary material” as defined in collective bargaining agreements, transparency around the use of AI to perform any modification of a writer’s work, and a requirement of explicit consent for the use of a writer’s work to train an AI model. They also touch on the question of AI and authorship. “While we applaud the work of the EU to enact the AI Act, there are unresolved issues with respect to the unauthorized use of our intellectual property for training large language models, and uncertainty regarding authorship and copyright of machine-generated script material,” said Carolin Otto, president of FSE. Watch this space.
AI in the U.K.
Though the EU AI Act does not apply in the non-EU U.K., the baronies of Brexit are not free of controversy over AI and copyright. This week, lawmakers there blasted the government of Prime Minister Rishi Sunak in a report issued by the Culture, Media and Sport Committee for failing to “come to an agreement between the creative industries and AI developers on creators’ consent and compensation regarding the use of their works to train AI.” The report also recommended changing the split of music-streaming revenue between labels and publishers in favor of publishers. “The revenue split between recording and publishing rights does not reflect the importance of songwriters, composers and publishers in the music streaming process,” the report said. “We recommend that the Government bring forward measures for consultation with fans, music makers and other stakeholders to incentivize an optimal rate for publishing rights in order to fairly remunerate creators for their work.” Music to the ears of songwriters, but a sour note to the labels.
French Connection
One of the notable, if less noted, last-minute twists in the passage of the AI Act was the exemption of open-source models from many of the law’s toughest measures, including disclosure requirements concerning training data. The move was viewed as an effort to bolster European AI developers, like those behind France’s open-source-friendly Mistral, in the face of competition from U.S. tech giants. This week, before the act is even in effect, the move bore fruit. Mistral introduced a new large language model given the catchy name Mixtral 8x22B and released under the Apache 2.0 open-source license. It “wasn’t safe to trust” the U.S. tech giants to set ground rules for a powerful new technology, Mistral CEO Arthur Mensch told the New York Times. “We can’t have a strategic dependency. That’s why we want to make a European champion.” Mixtral’s model features 176 billion parameters and a 65,000-token context window, putting it on par with OpenAI’s GPT-3.5 and Meta’s LLaMA 2. Take that, Yanks!