- Channel 4, a British television channel, has sparked controversy with a deepfake video that will air as its alternative festive broadcast on Friday.
- The video depicts the Queen discussing controversial Royal Family stories, including Prince Andrew’s connections to Jeffrey Epstein and the departure of Prince Harry and Meghan Markle from the family.
- Channel 4 said it intends the video to provide a “stark warning” about deepfake technology and fake news.
- Critics, however, say that the video makes it seem as though deepfakes are more widespread than they actually are.
British broadcaster Channel 4 has sparked controversy with a deepfake video that will air as its alternative festive broadcast on Friday.
Queen Elizabeth II releases a yearly video address to the nation at 3pm on Christmas Day, reflecting on the highs and lows of the previous year. The message usually centres on a single theme, and in 2020 it is expected to address the coronavirus pandemic and its impact on the UK.
Channel 4’s alternative, however, will be a little different.
The five-minute video shows a digitally altered version of the Queen, voiced by actor Debra Stephenson, discussing several of the Royal Family’s most controversial moments this year, including Prince Harry and Meghan Markle’s departure from royal duties and the Duke of York’s relationship with disgraced financier and convicted sex offender Jeffrey Epstein, the Guardian reported.
A short clip of the video published by the BBC shows the fake Queen joking: “There are few things more hurtful than someone telling you they prefer the company of Canadians,” in reference to Harry and Meghan’s move to Canada.
The video was originally intended to give a “stark warning” about deepfake technology and fake news.
Ian Katz, Channel 4’s director of programmes, told the Guardian that it was a “powerful reminder that we can no longer trust our own eyes.”
However, the project has somewhat backfired, with experts warning that the video makes deepfake technology appear more common than it actually is.
“We haven’t seen deepfakes used widely yet, except to attack women,” Sam Gregory, the programme director of Witness, an organization using video and technology to protect human rights, told the Guardian.
“We should be really careful about making people think that they can’t believe what they see. If you’ve not seen them before, this could make you believe that deep fakes are a more widespread problem than they are,” he added.
Deepfake technology has become a growing problem, particularly in the targeting of women with non-consensual deepfake pornography.
A chilling investigation into a bot service that generates fake nudes has highlighted that the most urgent danger internet “deepfakes” pose isn’t disinformation – it’s revenge porn.
Deepfake-monitoring firm Sensity, formerly known as Deeptrace, revealed earlier this year that it had discovered a huge operation disseminating AI-generated nude images of women and, in some cases, underage girls.
The service was operating primarily on the encrypted messaging app Telegram using an AI-powered bot.
Deepfakes expert Henry Ajder told the Guardian: “I think in this case the video is not sufficiently realistic to be a concern, but adding disclaimers before a deepfake video is shown, or adding a watermark so it can’t be cropped and edited, can help to deliver them responsibly.
“As a society, we need to figure out what uses for deepfakes we deem acceptable, and how we can navigate a future where synthetic media is an increasingly big part of our lives.
“Channel 4 should be encouraging best practice.”