Deepfake regulation: A double-edged sword?

Deepfake technology is rapidly emerging as AI’s latest ‘Pandora’s box’. No longer limited to parodic content of politicians (who could forget the Pope sporting Moncler?), generative AI is now being actively weaponized: misleading political deepfakes, clickbait celebrity advertisements, and schoolchildren creating explicit deepfakes of classmates.

As the capabilities of AI tools race ahead of regulation, many are growing concerned about the very real threat they pose. New legislation is coming in – but much of it is too narrow or too vague to protect people comprehensively. And on the flip side, these new rules have implications that could easily catch out professionals trying to use generative AI in legitimate ways.

So, what legal protection in the UK currently exists around deepfake technologies, and what behaviors are prohibited?

The Varieties of Visage

First, it’s important to define what actually makes a deepfake a deepfake. After all, similarities exist in nature – there’s the old adage that seven people in the world look like you – but at what degree of similarity does regulation protect you, and where can you slip up as a business? A useful example is the 2019 ruling against vape company Diamond Mist. The business’s adverts included one with the strapline “Mo’s mad for menthol”, accompanied by imagery of a male model with a bald head and thick eyebrows.

Mo Farah took to Twitter to complain about the potential confusion, concerned people would think he had endorsed the product. Ultimately, the Advertising Standards Authority (ASA) ruled that the advert did indeed give a “misleading impression”: while ‘Mo’ is a common moniker, the model’s head and eyebrows were “reminiscent” enough of the athlete that viewers would associate the advert with Mo Farah, the most well-known figure in the UK by that name.

Herein lies the crux: while the image wasn’t a deepfake, it was similar enough to confuse viewers, and the same principle applies to deepfakes. If an image is misleading enough to confuse viewers, you have grounds to consider litigation.

Conversely, as a business, you need to consider all potential interpretations of imagery to ensure you can use generative AI without getting caught up in legal complications. Just because the stock gen-AI photo you’re using to head up a LinkedIn article seems generic doesn’t mean it is. Voice, gestures, and context are all factors taken into consideration, but ultimately the question is: did it confuse viewers?

Current Legislation around Deepfakes

To date, there is no single piece of legislation within the UK that provides blanket protection against deepfakes. Instead, individuals are protected under an assortment of regulations depending on the nature of the deepfake.

Online Safety Act 

The Online Safety Act has one main provision against deepfakes. While it has been illegal to share intimate or explicit images of someone without their consent since 2015, the Online Safety Act has extended this offence to cover sharing intimate AI-generated images of someone without their consent. Crucially, unlike with genuine intimate content, you do not need to prove that the sharer intended to cause distress in the case of deepfake imagery, although it is considered a further, more serious offence if a sexual intention can be demonstrated. It’s vital to note that this provision does not criminalize the creation of an explicit deepfake, only the sharing. The Online Safety Act is also primarily focused on removing offensive content; many are concerned that its provisions will prove ineffective while the creation of intimate deepfakes remains unregulated and perpetrators escape punishment.

Advertising Standards Authority 

The ASA steps in when advertisements contain misleading content. In terms of deepfakes, this mostly arises with scam adverts or clickbait; it’s unlikely to affect everyday people, and those running businesses should already know not to use the likeness of celebrities, who have usually trademarked their image, gestures, and voice.

More interesting, however, is the grey area of similarity that deepfakes are set to exacerbate. One thing the Mo Farah case particularly highlighted is that a likeness doesn’t need to be identical; it just needs to confuse the viewer. With generative AI drawing on copyrighted material, there is now a danger that businesses could accidentally infringe ASA rules by using gen-AI output that happens to be similar enough to a real-life celebrity to cause confusion. Intent in this case isn’t relevant: all that matters is whether viewers have been misled, and that could land businesses in hot water with the ASA.

Civil Law

The final recourse for UK citizens is under civil law. While there is no specific legislation addressing deepfakes, individuals could seek recourse in the following situations:

- Privacy: a deepfake could be considered a violation of one’s right to privacy, especially if the victim can prove the creator used personal data – protected under UK GDPR and the Data Protection Act 2018 – to create it.
- Harassment: multiple deepfakes made with intent to cause alarm or distress could form the basis of a harassment claim.
- Defamation: if a deepfake has an adverse effect on one’s reputation by portraying them in a false or damaging way, there is the potential for a defamation case.

In such cases, an individual would be best advised to seek legal guidance on how to proceed.

Future of deepfake legislation

So, where does legislation go from here? Hopefully, forward. The UK government took a considerable step back from the issue in the run-up to the election, but with the EU AI Act leading the way it’s likely we’ll see new regulation coming down the track soon.

The greater issue, however, is enforcement. The three avenues discussed above – the Online Safety Act, the Advertising Standards Authority, and UK civil law – all center on regulating output on a case-by-case basis. Currently, the UK has neither regulation in place nor proposals to introduce greater safety measures around the programs themselves. In fact, many are celebrating the lack of regulation in the UK following the EU AI Act, hoping it leads to a boom in AI industries.

Current strategies, however, remain ineffective. Victims require legal support to make any headway in cases, and creators continue to escape repercussions. Widespread control of the technology is similarly impractical – one need only look at GDPR to get a sense of that. Efforts to impose it, such as the EU AI Act, still fail to tackle the problem, with open-source generative technologies remaining completely unregulated.

It appears that an independent adjudicator will be required – an Ofcom for AI – but how independent, or effective, this will prove remains to be seen. Let’s just hope that the new Government manages to strike some kind of balance between industry, personal protection, and business innovation.


This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

