At some point a rich person will escape conviction because the CCTV evidence can't be distinguished from a deepfake. Someone will potentially be able to pull off some serious scams using them, or, more worryingly, trick vulnerable people into crime, kidnapping or worse. There is some talk that laws on deepfake impersonation need to move quickly, as tech laws are usually too slow and reactive.
https://www.thetimes.co.uk/article/deep ... 1614620214
Tom Cruise looks at the camera. “I’m going to show you some magic,” he says holding up a coin. “It’s the real thing”, the Hollywood actor insists, giving his trademark laugh and making the coin disappear. “It’s all the real thing.”
Millions have watched the video on TikTok, with many initially wondering if the 58-year-old Mission: Impossible star had joined the video-sharing platform beloved of teenagers.
Except this is not Cruise but a “deepfake” video, one of several of the actor that have garnered millions of views and shares across social media. They have led experts to warn that deepfake technology is advancing much faster than most people realise.
The three videos first appeared on TikTok and were created by an account called “deeptomcruise”. They show the Top Gun actor apparently performing a magic trick, playing golf before bending to talk to the camera, and falling over in a fashionable clothes store before telling an anecdote about Mikhail Gorbachev.
In each of the videos, “Cruise” delivers his trademark laugh, looks almost exactly like the real actor and displays similar mannerisms.
However, there are a few signs, other than the account name, that this is a fake, including the fact the face appears to show a younger Cruise, while the figure seems taller than the real actor’s 5ft 6in height.
Experts say there are only likely to be a few contenders who can produce videos of this standard. The video creators are also believed to have worked with Miles Fisher, a Cruise impersonator who helped create a spoof 2019 video of the actor.
“This is very much in the top 5 per cent of deepfakes out there in terms of quality,” Henry Ajder, a leading expert on deepfakes, said.
Deepfake technology first emerged in 2017. It is able to place politicians, celebrities or any normal person into a video they never participated in, making them say or do things that never happened.
The key tool used in deepfakes is machine learning. A person will feed a computer program hours of real video and images of a person to give the machine an understanding of what that person looks like from different angles and under various lighting. This is then combined with computer graphics techniques to superimpose a copy of the person onto a stand-in actor, who can also supply reference footage of how that person should move.
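The pipeline described above is often built on a "shared encoder, two decoders" design. The sketch below is a hypothetical illustration of that data flow only, using plain NumPy linear maps as stand-ins for the deep convolutional networks real tools train; the dimensions and weights here are made up, not taken from any actual deepfake software.

```python
import numpy as np

rng = np.random.default_rng(0)

FACE_DIM, LATENT_DIM = 64, 16  # hypothetical sizes for illustration

# One shared encoder is trained on faces of BOTH people, so its latent
# code captures person-agnostic features such as pose and lighting.
W_enc = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.1

# Each person gets their own decoder, trained only on that person's faces.
W_dec_a = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.1  # stand-in actor
W_dec_b = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.1  # target (e.g. "Cruise")

def encode(face):
    """Map a face vector to the shared latent code."""
    return np.tanh(W_enc @ face)

def decode(latent, W_dec):
    """Reconstruct a face from the latent code with one person's decoder."""
    return W_dec @ latent

# The swap itself: encode a frame of the stand-in actor, then decode it
# with the TARGET's decoder -- producing the target's face wearing the
# actor's pose and expression.
frame_of_actor = rng.standard_normal(FACE_DIM)
swapped = decode(encode(frame_of_actor), W_dec_b)

print(swapped.shape)  # a fake "frame" in the same face-space
```

In real systems each frame of video goes through this swap and is then composited back onto the original footage, which is where the hours of manual post-production described below come in.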
Although the technology is advancing rapidly, it normally takes some time to create a believable video, with a lot of manual tweaking after the algorithm has created the initial raw video footage.
“Post-production work plays a big role in the vast majority of deepfakes of this quality,” Ajder, who is a researcher and adviser on deepfakes, said. “We are talking about hours of work to smooth the edges of the face, to change the colour where the skin tone isn’t exactly right and get the right shade. You even need to change the movements in the neck to make it convincing when someone is speaking.”
However, while deepfake videos of this quality are normally done by professionals, many apps and computer programs now offer basic “face swapping” technology that can fool people for a few seconds if they are casually scrolling past a video, said Ajder. This has allowed it to be used in cases such as “revenge porn”, bullying or manipulation, in which someone can now place the face of a person they know on to the body of someone else in a video.
“That for me is a worry. The future will be synthesised, this technology is not going away, it’s becoming increasingly present in the minds of big players in entertainment, and there are useful cases for arts and even society. But there is also a huge amount of really negative and malicious use cases,” Ajder said.
Experts are torn on how to regulate such a space. Sandra Wachter, a professor in the ethics of AI at Oxford University’s Internet Institute, said: “If you have something out there that is potentially very harmful for individuals or groups, or could be a national security threat, then you obviously need to interfere. But I would also be worried about losing sight of the nuanced approach, because the Tom Cruise video is arguably satire and you don’t want to take down that sort of content when it’s free speech. If it’s uncanny but not necessarily harmful, then it’s probably more important to educate people so that they are not fooled by it rather than taking it down.”
TikTok said it had not removed the videos as they did not violate its policy and the account’s username provided context for the content being a deepfake.
It said the platform would remove digital forgeries “that mislead users by distorting the truth of events and cause harm to the subject of the video, other persons or society.”