‘The legal system won’t protect here’: machines have already taken over the creative labour market

Experts estimate that payments for works created by artificial intelligence will reach approximately 15 billion rubles

Sergey Matveev, President of the Intellectual Property Federation. Photo: screenshot of the non/fictioN online broadcast

In the age of neural networks, the question of authorship is no longer rhetorical. Writers are increasingly turning to AI to generate plots, dialogue, and even entire chapters, while publishers and lawyers are left grappling with the issue: who owns the rights to the resulting novel? When one co-author is flesh and blood and the other is made of code and algorithms, the boundaries of authorship blur, and with them the traditional norms of book publishing. At the International Fair of Intellectual Literature non/fictioNvesna, experts gathered to discuss the legal aspects of human-AI collaboration in creative work.

There are no established practices in Russia

Artificial intelligence in publishing houses is no longer just a trendy tool; it is a full-fledged employee. It helps select manuscripts, predicts the commercial potential of books, corrects and edits texts, writes annotations, and even suggests cover designs. In a matter of seconds, the machine can analyse which tropes occur most often in bestsellers or point to a genre niche that is short of new books. Thanks to AI, publishers save time, reduce costs, and meet audience demand more accurately, because the algorithm reads faster and remembers more than any editor.

But the fact remains that we all live within a legal framework, and the regulatory acts drafted before neural networks came into wide use are not adapted to the current reality.
“Industry practices are already taking shape. First and foremost, this is happening abroad; here at home I have not yet seen any clearly formulated principles,” said Gennady Uvarkin, director general of the Omega Legal Bureau.

Gennady Uvarkin, director general of the Omega Legal Bureau. Photo: screenshot of the non/fictioN online broadcast

For example, the publisher Wiley has issued detailed recommendations stressing that AI should serve as an auxiliary tool in the writing process rather than replace the author, and that authors remain fully responsible for the accuracy of the content and for compliance with Wiley's ethical and editorial standards. Organisations such as The Authors Guild likewise offer authors best practices for using AI, including disclosing which AI tools were used and observing copyright. At the international level, standards such as ISO 42001 and the IEEE's Ethically Aligned Design guidelines are drawing publishers' attention as frameworks for the ethical, managed adoption of AI in publishing.

Uvarkin drew attention to the standards introduced for journalists at the Associated Press (AP). They allow information to be processed with artificial intelligence, headlines to be selected, and other text manipulations to be performed, but only under human supervision.
At the same time, AI must not replace the journalist. Fact-checking remains the author's responsibility, including when information is found via neural networks: searching is allowed, but verification is essential.
The AP standards also specify that manipulating visual content is unacceptable, as is uploading personal data or copyright-protected texts into AI services.

What’s written in the fine print

For personal use, when we simply decide to “play around” with a neural network, artificial intelligence can be used without legal risk. Questions of ownership arise when artificial intelligence is used to create professional content. The same AP standards state that a journalist or author may use only those neural networks whose developers have signed an agreement. “If the results of our [collaborative work with AI] are to be made public to an audience of millions and commercialised, we must avoid the potential risks associated with restrictions imposed by the developers of artificial intelligence,” said Uvarkin.

Viktoria Nagrodskaya, advisor for the IP/IT practice at the Omega Legal Bureau. Photo: screenshot of the non/fictioN online broadcast

The thing is, there is currently no unified standard for agreements between AI developers and users of neural networks. This means that in one case the rights to further use of a work created in co-authorship with artificial intelligence may belong to the user, while in another they may belong to the developers, and in the latter case there is a risk of facing a lawsuit. Uvarkin suggested reading the user agreement carefully instead of accepting it automatically.

“If we look at modern American, French, or German services, we’ll usually see that they clearly state it is a non-exclusive licence that applies worldwide, for the entire duration of the exclusive rights, and with the right to sublicense. But if we look at Russian projects, we’ll find that it is, in fact, an exclusive licence — and without the right to sublicense, unless the developers confirm otherwise in writing. Which means users could face serious problems,” said Viktoria Nagrodskaya, advisor for the IP/IT practice at the Omega Legal Bureau.

“No protection”

President of the Intellectual Property Federation Sergey Matveev was markedly pessimistic in his remarks. He believes that the ongoing debates over who holds the rights to works created jointly with AI represent a major catastrophe for humanity. “In my view, what has happened is a sudden tectonic shift that has affected two things. It has essentially overturned the legal system and overturned human relations — and the notion of the human being as an entity endowed with intellect,” Matveev stated.

He believes that the very traits that make each person unique have been erased with the advent of artificial intelligence. For example, humanity has masterpieces of architecture created by Le Corbusier or Zaha Hadid. But if one asks AI to produce something in their style — or even a blend of the two — the artificial intelligence will deliver. How, then, should copyright be viewed in such a case? The lawyer’s answer is brief: “There is no protection.”

Matveev fundamentally questions why the term “intelligence” was ever applied to artificial intelligence:

“This is a machine system, a technology that, when fed information in the form of images, sounds, or texts, processes it in a certain way, statistically breaking it down in order to later use it in an unusual fashion and produce a synthetic semblance of creative work. That's machine learning. Why on earth was it called intelligence?! ‘Deep machine learning’ sounds far more accurate. But even that misses the point. Learning is a humanitarian function, deeply human. We teach people to make them more intelligent, more cultured, to understand the accumulated body of human knowledge and to build upon it. The reason all these problems are now creeping in is that we gave this monster the wrong name in the first place.”

Matveev suggests that artificial intelligence should not be treated as a subject; rather, things should be called by their proper names. “People, in building the technology, used objects traditionally protected by copyright without the consent of the authors and copyright holders,” added the president of the Intellectual Property Federation. In other words, any claims should be directed not at the technology itself, but at the specific individuals behind it.

Photo: screenshot of the non/fictioN online broadcast

However, even here, things are not so straightforward. Matveev explained that the initial development of AI technology was seen as a scientific task. According to the Civil Code, scientists, IT specialists, and other developers are permitted to use any data, including copyrighted materials, but only for scientific purposes. “The problem is that this scientific objective immediately transformed into practical use,” the lawyer noted.

“After the humanitarian disaster, let's talk about money; that sounds simpler,” Matveev said, shifting the conversation from abstract issues around AI to financial ones. Imagine a scenario: artificial intelligence, trained for scientific purposes on the global treasure trove of the music industry, synthesises a high-quality musical composition, which a person then uploads to streaming services. As listeners pay attention to the piece, the same algorithms begin boosting the composition in the rankings. “A human will release an album once a year, while artificial intelligence will do so once a week,” Sergey Matveev added.

According to his data, AI-generated works in the Russian music industry will account for 8 billion rubles, while translation and the creation of illustrations will be taken over by neural networks entirely. “In total, about 15 billion rubles intended for human creators will be replaced by synthetic, cheap objects. But the biggest issue is that all of this is built on the use of stolen content,” Matveev said.

In the end, he added that the existing legal system for copyright protection has failed when it comes to artificial intelligence. “The legal system won't offer any protection here. The only option is to redistribute the unexpected economic flow,” concluded the president of the Intellectual Property Federation.

Ekaterina Petrova is a literary critic for the online newspaper Realnoe Vremya and the author of the Telegram channel Bulochki s Makom (Poppy Seed Buns).

Ekaterina Petrova
