Denmark’s Deepfake Law: Closing the Gaps Left by the GDPR

Introduction

In June 2025, Denmark introduced legislation to grant individuals copyright protection over AI-generated representations of their faces, voices, and performances, commonly known as deepfakes. While the EU GDPR governs the processing of personal data, it does not adequately cover situations where an individual’s identity is imitated but the individual is not directly identifiable. Stronger protections are therefore needed to safeguard individual identities against the growing prevalence of deepfakes.

GDPR Framework and Special Category Data

Under both the UK and EU GDPR, data controllers must ensure that personal data is processed lawfully, fairly, and transparently. This includes biometric data such as facial images and voice recordings, which constitute special category data under Article 9 and are therefore subject to stricter protections. Lawful processing requires a valid legal basis under Article 6 and, for special category data, an appropriate condition under Article 9, such as explicit consent or the provision of health or social care. Crucially, however, biometric data qualifies as special category data only when it is processed “for the purpose of uniquely identifying a natural person” (Recital 51), a threshold that many AI-generated imitations may not meet.

Limitations of GDPR in Regulating Deepfakes

Not all deepfake content can be clearly classified as personal data under the GDPR, because the regulation turns on identifiability, not resemblance. If an image or voice is altered so that it no longer directly identifies a specific individual, even though it is recognisably based on one, it may fall outside the GDPR’s scope. The identifiability test in Recital 26 asks whether identification is “reasonably likely” using available means, a standard that is context-dependent and open to interpretation. As a result, those affected by deepfakes often have limited legal recourse, even where significant damage has been caused.

Furthermore, the GDPR regulates how personal data is collected and processed; it does not regulate imitation or representation as such. This means that even where a deepfake causes reputational harm, data protection law may not apply unless personal data is actually being processed.

Denmark’s Legislative Solution

To address this gap, Denmark has proposed legislation that would give individuals copyright protection over, and the right to control the use and distribution of, their AI-generated likeness. This includes the ability to issue takedown requests, prevent commercial and other unauthorised uses of their image, and pursue legal action for damages caused by misuse. The proposal still permits legitimate uses such as parody, journalism, and non-commercial artistic works, provided the content is not misleading or harmful. Unlike the GDPR, Denmark’s approach does not require the content to be classified as personal data; instead, it focuses on recognisability and imitation.

Potential UK Impact

Although the UK is no longer a member of the EU, this new legislation could still influence future UK policy, particularly given the rise in political misinformation, non-consensual explicit content, and reputational harm. The UK GDPR provides limited protection for content that does not involve personal data, and while the Online Safety Act 2023 imposes duties on platforms to address harmful content, it does not grant individuals rights over their likenesses. The Act focuses on platform responsibility for damaging user-generated content but does not address ownership of, or consent to, AI-generated imitations of identity.
