A ‘deepfake’ is a piece of synthetic media – a digital drawing, photo, or video – in which one person’s likeness is edited onto the body of another person, who may be real or computer-generated.
This synthetic media uses AI technology to manipulate someone’s likeness, so it appears they are doing something they did not do – or, in the case of audio deepfakes, saying something they did not say.
When this type of technology first became accessible, it was less of a concern, because it was much easier to tell when an image was fake or manipulated. For example, AI-generated images of humans often had the wrong number of fingers.
However, AI develops and improves by learning from large data sets, and the more people have been using AI engines to generate imagery based on real photos, videos, or audio, the better it has become at creating realistic fabrications.
So, what are the implications of this, and how is the UK criminal justice system addressing them?
Why are deepfakes such a big problem?
As the technology advances, it is becoming more difficult to identify deepfakes, which is a major concern for public figures who are often targeted, such as celebrities and politicians.
Deepfakes have the potential to spread misinformation and commit fraud or blackmail, so they could fall under several areas of UK law – e.g. for crimes relating to privacy, data protection, intellectual property, harassment, or defamation.
One of the most significant concerns is that AI technology can be used to generate ‘non-consensual exploitative images’ – in other words, creating deepfake pornography of real people without their consent and sharing it online.
This exploitative use of AI came to wider attention recently when explicit deepfake images of American popstar Taylor Swift began circulating on the social media platform X (formerly Twitter), which were viewed millions of times before they were removed.
As the US rushes to close the gaps in its legal system that fail to address this type of cybercrime, other countries are also under pressure to keep up with rapid technological development and control the use of AI-generated images of real people.
The UK is leading the way by implementing one of the first internet safety laws of its kind.
Is there a deepfake pornography law?
Previous laws concerning ‘intimate image abuse’ attempted to criminalise ‘revenge porn’ – the act of a person publishing sexual images of someone else without their consent. However, sharing such images was only a crime if the intent was to humiliate the other person and cause distress.
Now, thanks to the introduction of the Online Safety Act 2023, updated legislation is being put in place to protect victims of not just revenge porn, but also deepfake porn – the sharing of which also falls under the umbrella of intimate image abuse.
Under the new legislation, intimate images now include photographs or films that have been altered, whether by computer graphics or another method, to superimpose someone’s likeness onto another body to create realistic fake images of them engaging in sexual acts.
It is now a criminal offence in the UK to share an intimate image, whether real or deepfake, without the consent of the person or people depicted.
The Act will be enforced by regulatory body Ofcom, which will work with media and tech companies to make sure illegal content is removed from their platforms and that those who allow the commission of these offences are held to account.
Can you go to prison for deepfakes?
While there was previously a loophole where the person accused of sharing intimate images could deny they had the intent to cause humiliation or distress, this has now been closed – it is a criminal offence to share intimate images without consent, regardless of the perpetrator’s intent.
This makes it easier for UK courts to charge and convict perpetrators of intimate image abuse. There are three categories that prosecutors can apply, with varying sentences if convicted:
• Sharing an intimate image without consent can carry a sentence of up to 6 months in prison.
• Sharing an intimate image non-consensually with the intention of causing distress or humiliation can carry a sentence of up to 2 years in prison.
• Sharing an intimate image non-consensually in order to obtain sexual gratification can also carry a sentence of up to 2 years in prison.
Threatening to publish intimate images to cause fear is also an offence that can affect sentencing. Additionally, if found guilty of sharing intimate images for sexual gratification, the perpetrator could be required to register as a sex offender.
As these offences have been simplified to make them easier to prosecute, there is likely to be an increase in cases where individuals circulate intimate images online thinking it’s ‘just a joke’ – especially in the case of deepfakes – then find themselves charged with a criminal offence.
Without representation from sexual offence solicitors with up-to-date knowledge of the legal system, convicted offenders could find themselves facing several months in prison.
Those playing with AI generators should therefore consider: is it worth harming others, losing your reputation, and potentially losing job opportunities and access to online platforms?
As for non-sexual deepfakes, the amendments to the law do not currently address non-consensual AI-generated likenesses created for other purposes – though victims may be able to seek legal redress against perpetrators through more complicated privacy and defamation laws.