AI Licensing Steps on the Scale
Engineers and policymakers weigh in
The Nieman Lab at Harvard University recently published its Predictions for Journalism 2026 report, featuring contributions from an array of leading digital media executives. Among them was a prediction from David Skok, CEO and editor-in-chief of the Canadian business and technology outlet The Logic.
His prediction? Publishers will see no meaningful AI licensing revenue.
While Skok acknowledges the recent flurry of direct deals between publishers and AI developers, such as those announced by Meta, Google, OpenAI and Microsoft, he argues they are not enough to create a genuinely liquid market for rights, and that the model won’t scale.
“They will be more like public relations exercises than true economic signals,” per Skok.
The fundamental problem, in Skok’s telling, is that “[a]s long as Google’s search crawler and its AI-training crawler are functioning as a single system, the market for licensing journalism into AI models is effectively stalled.”
The tools available to publishers for regulating access to their content online, principally the robots.txt protocol, are no longer up to the task in the face of AI, he suggests.
With the workflows of AI crawling and search indexing intertwined, “blocking one means blocking both,” Skok notes. The effect is to force a choice on publishers between open access to AI crawlers and invisibility in search. “Most publishers can’t afford the latter. So the wall they assumed they could put up is gone,” Skok writes. “This removal of friction destroys the scarcity upon which a licensing market depends.”
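The bind Skok describes can be seen in a robots.txt sketch. The crawler tokens below are real published user-agent names, but the file itself is illustrative, not drawn from any particular publisher:

```text
# Illustrative robots.txt only. GPTBot and Googlebot are real
# crawler tokens; this file is a sketch, not a recommendation.

# A publisher can block a dedicated AI-training crawler outright:
User-agent: GPTBot
Disallow: /

# But Googlebot feeds both the search index and Google's AI
# features from a single crawl, so the only way to withhold
# content from the latter is to give up search visibility too:
User-agent: Googlebot
Disallow:          # an empty Disallow grants full access
```

Because the protocol only answers "may this agent crawl?", there is no way in the file itself to say "crawl for search, but not for AI."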
His complaint is not a new one. It is a view widely shared among publishers. But the market may not be quite as stalled as he claims.
Last week, the RSL Technical Steering Committee published the first iteration of the Really Simple Licensing (RSL) technical specification. First announced in September, RSL is an open standard, like the Really Simple Syndication (RSS) format it’s based on, that anyone can implement or support.
It is designed to work with robots.txt, augmenting that protocol’s simple yes/no permission gate with new permission categories, including “ai-all”, “ai-input”, and “ai-index.” The aim is to give publishers finer control over how their content can be used in AI-related applications: freely, as input for training, or for indexing purposes only. It also lets publishers disentangle search indexing from AI scraping.
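As a rough sketch of the idea (the category names “ai-all”, “ai-input”, and “ai-index” come from the coverage above, but the layout below is invented for illustration and is not quoted from the RSL 1.0 specification):

```text
# Hypothetical sketch of RSL-style permissions layered on
# robots.txt. The syntax here is illustrative only; the real
# grammar is defined in the RSL 1.0 spec.

User-agent: *
Allow: /            # stay fully visible to search crawlers

# RSL-style usage categories, per the coverage above:
#   ai-all    - any AI-related use
#   ai-input  - use as model or training input
#   ai-index  - indexing only
#
# A publisher might deny training input while permitting
# AI indexing, conceptually:
#   ai-input: deny
#   ai-index: allow
```

The point of the finer categories is exactly the disentangling Skok says is missing: a search crawler and an AI-training crawler no longer have to be granted or refused as a single unit.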
“Today’s release of RSL 1.0 marks an inflection point for the open internet,” Eckart Walther, chair of the RSL Technical Steering Committee, said in a press release. “RSL establishes clarity, transparency, and the foundation for new economic frameworks for publishers and AI systems, ensuring that internet innovation can continue to flourish, underpinned by clear, accountable content rights.”
Although voluntary, like robots.txt, RSL has at least nominal support from some 1,500 publishers, technology companies, platforms and open standards organizations, according to the RSL Collective. Among those is Cloudflare, the web’s largest CDN, which has also attracted broad support for its recently announced managed-robots.txt platform, making the voluntary protocol enforceable at the network level and enabling a pay-per-crawl access model.
The RSL standard also includes support for a new “contribution” licensing option developed in collaboration with Creative Commons and meant to help strengthen the non-commercial web publishing ecosystem.
A voluntary technical standard is not a panacea for beleaguered web publishers, of course. There is no legal or contractual enforcement mechanism and no penalty for evading or ignoring it. Tinkering with robots.txt also does nothing to control downstream copies of content, nor does it solve the problems facing every kind of rights owner. It does nothing for music rights holders, for instance, whose works are readily scraped from streaming services.
But the publication of the standard and the broad initial support signal real movement within the media and technology industries toward standardizing the technical foundations of a licensing system, which is the sine qua non for getting any such system to scale.
Another approach to getting an AI licensing system to scale, of course, would be to make it mandatory. While that would take the issue out of the orderly, empirical world of engineering into the cut-and-thrust realm of policymaking, a new report commissioned by the European Parliament ventures to go there.
The report, The Economics of Copyright and AI, was commissioned by Parliament’s Policy Department for Justice, Civil Liberties and Institutional Affairs at the request of the Committee on Legal Affairs. It views the controversy over the use of copyrighted works to train AI models through the lens of earlier technology-enabled disruptions such as the Napster firestorm, and concludes the most viable solution is a compulsory AI license for copyrighted works with a statutory royalty.
“Among realistic options, a statutory licence with a modest royalty outperforms licensing markets, or copyright exceptions with or without opt-out” in balancing the interests of rights owners, technology developers and consumers, the report says. “Our calibration exercise suggests that statutory licensing generates about $14 billion more annual welfare than the next best alternative.”
Specifically, the report recommends policymakers “Avoid opt-out and adopt statutory licensing as the default framework. It is the most robust option for aligning private incentives with societal welfare. The royalty rate should be set at the lowest level that restores maintenance of creative supply.”
The proposal, especially the low royalty rate, is not likely to be music to the ears of artists and rights holders. Music publishers, for instance, have chafed at statutory mechanical licensing since its inception, arguing it deprives them of the true market value of their works.
Yet, the fact that it’s being floated at the highest level of policymaking within the EU, which has led the world in digital regulation, is an indication that movement toward getting AI licensing to scale is happening at the policy level as well as the technical level.
It may not get there quickly (certainly not in 2026, in fairness to Skok). And it may never protect all participants’ interests all the time. But neither is it completely stalled.

