Artificial intelligence, and deepfake technology in particular, is posing serious new challenges to the protection of personality rights and the interests of intellectual property right holders. In Finland, there is no dedicated legislation governing the unauthorised use of a person's name, image, voice or public persona. The entry into force of the EU Artificial Intelligence Act (Regulation (EU) 2024/1689) and its implementation into national legislation mark a turning point – yet the question remains whether the resulting framework adequately safeguards the rights of those whose identity and creative output are at stake.
Case Law
In Finland there is no specific provision prohibiting the use of a person's name or image in a commercial context. The inappropriate use of a person's name or image may instead be addressed under the general clauses of the Consumer Protection Act (38/1978) and the Unfair Business Practices Act (1061/1978). Established case law from domestic courts holds that using a person's photograph for commercial purposes requires their consent, and that the party injured by the unauthorised use must be compensated with a reasonable sum.
These principles were most recently reinforced by the Helsinki Court of Appeal judgment in the dispute between a Finnish actor and a clothing company in June 2025. It was undisputed that the company had run an advertising campaign using the actor's name, image and voice across television, radio and online. The Court found that it had not been demonstrated that the actor had given his consent to the campaign, either expressly or implicitly. Considering the scope and content of the campaign as well as the actor's market value, the Court held that the €300,000 compensation claimed could be considered reasonable.
ICC Advertising and Marketing Communications Code
The Finnish Chamber of Commerce’s Board of Business Practice applies the ICC Advertising and Marketing Communications Code, Article 19 of which sets out that marketing practices should not refer to any private or public persons without permission.
The Board has issued a recommendation that if a person is identifiable in an advertisement or other marketing material, whether from a photograph, video, painting, drawing, caricature or even a lookalike, the marketer must obtain that person's consent. Consent is also required where the person is identifiable by name, voice or any other element, or where their reputation is exploited for commercial purposes. Individuals in both public and private capacities, including children, enjoy this protection.
Where the Protection Falls Short
The harm caused by unauthorised use of a person's identity extends far beyond commercial marketing and strikes at the core of intellectual property right holders' interests. Deepfake technology allows the realistic fabrication of images, audio and video, enabling the creation of material that may be indistinguishable from reality. Through social media such content can spread globally within minutes. The consequences for a person's reputation, professional standing or personal life can be severe and almost immediate.
In addition to the case law on personality rights, the domestic legal framework is formed by a patchwork of overlapping mechanisms, including the criminal law provisions on identity theft and the unauthorised distribution of sexual images, as well as general civil law remedies.
Under the existing mechanisms, effective enforcement in the online environment remains a challenge for rights holders. The anonymity of digital platforms, the cross-border nature of online conduct and the speed at which content proliferates all undermine the practical efficacy of enforcement. For intellectual property rights holders, a theoretical legal basis for action offers little assurance of timely and effective relief in practice.
The problem is illustrated by recent developments in the music industry. In March 2026, a major record label reported that it had requested the removal of more than 135,000 songs from streaming services, all of which had been generated using AI to impersonate the label's recording artists. The label indicated that such counterfeits cause direct commercial harm to the legitimate artists.
The implementation of the AI Act introduces a further regulatory layer targeting social media platforms by creating transparency obligations for the deployers of AI systems.
The AI Act and Transparency Obligations
The AI Act introduces new transparency obligations that are directly relevant to deepfake content. Under Article 50(4) of the AI Act, deployers of AI systems that generate or manipulate synthetic audio, image, video or text content constituting a deep fake shall disclose that the content has been artificially generated or manipulated (see also Recitals 133 and 134).
While these transparency requirements represent a meaningful step forward, they address dissemination rather than the underlying right to control the use of one's identity. From the perspective of intellectual property right holders, labelling AI-generated content as such does not, in itself, prevent the unauthorised exploitation of a person's likeness, voice or persona, nor does it provide an adequate remedy where such exploitation has already occurred. The right holder's interest lies not merely in disclosure but in the ability to prevent and obtain redress for the misappropriation of their identity.
The Remaining Gap
Notwithstanding the additional regulatory layer introduced by the AI Act and its national implementation, a significant gap remains from the standpoint of intellectual property rights holders. A person whose AI-generated likeness has been used to impersonate their artistic persona, to damage their reputation, misrepresent their views or undermine their brand still has no ready and comprehensive legal remedy under existing law. Rights holders are left without a coherent statutory basis on which to assert control over the use of their identity or to seek effective redress.
The transparency obligations imposed by the AI Act do not, as such, establish enforceable personality rights. The gap is not merely theoretical but has tangible consequences for intellectual property rights holders, who are unable to effectively prevent or remedy the misappropriation of their identity. Comprehensive personality rights legislation, covering the unauthorised use of a person's identity beyond advertising and providing rights holders with both private and public enforcement mechanisms, deserves serious consideration regardless of the advances brought about by the AI Act.
