AI Can Turn Hollywood Stars Into Pornographic Actors
The sex industry is an early adopter of new technology as it seeks out new distribution channels and effects. Now, advanced machine learning technology is being used to create fake pornography featuring real actors and pop stars, pasting their faces over existing performers in explicit movies.
The resulting clips, made without the consent of the actors whose faces are used, can be hard to distinguish from a real film, with only subtly uncanny details suggesting something is amiss.
A community on the social news site Reddit has spent months creating and sharing the images, which were initially made by a solo hobbyist who went by the name “deepfakes”.
When the technology site Motherboard first reported on the user in December last year, they had already made images featuring women including Wonder Woman star Gal Gadot, Taylor Swift, Scarlett Johansson, and Game of Thrones actor Maisie Williams.
In the months since, videos featuring other celebrities including Star Wars lead Daisy Ridley, Game of Thrones’s Sophie Turner, and Harry Potter star Emma Watson have been posted on the site, which has become the main location for sharing the clips.
While simple face swaps can be done in real time using apps such as Snapchat, the quality of the clips posted by deepfakes required far more processing time, as well as a wealth of original footage for the AI system to learn from.
But the computer science behind it is widely known, and a number of researchers have already demonstrated similar face swaps carried out using public figures from news footage.
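Technical write-ups of the approach describe a simple architecture: a single shared encoder is trained alongside two decoders, one per identity, so the encoder learns features of pose and expression common to both faces; swapping then means encoding one person’s face and decoding it with the other person’s decoder. The sketch below illustrates that shared-encoder, dual-decoder idea in PyTorch. The layer sizes, 64x64 input resolution and training loop are simplified assumptions for illustration, not the code the Reddit users actually ran.

```python
import torch
import torch.nn as nn

# Minimal sketch of the shared-encoder / two-decoder autoencoder
# commonly used for face swapping. Sizes are illustrative, not tuned.

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.LeakyReLU(0.1),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.LeakyReLU(0.1),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # assumes 64x64 input face crops
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.1),  # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),        # 32x32 -> 64x64
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a = Decoder()  # learns to reconstruct person A's face
decoder_b = Decoder()  # learns to reconstruct person B's face

opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=5e-5,
)
loss_fn = nn.L1Loss()

def train_step(faces_a, faces_b):
    """One step: each decoder reconstructs its own person's faces
    through the shared encoder."""
    opt.zero_grad()
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()
    return loss.item()

def swap_a_to_b(faces_a):
    """The swap: encode person A's face with the shared encoder,
    then reconstruct it with person B's decoder."""
    with torch.no_grad():
        return decoder_b(encoder(faces_a))
```

Because the encoder never sees identity labels, it is pushed towards a representation of pose and expression that both decoders can use, which is why the swapped output keeps the original performance while wearing the target’s face. It also explains why so much source footage is needed: each decoder must see the target face from many angles and expressions before the reconstruction becomes convincing.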
The creation of face-swapped pornography scaled up rapidly in late December, when another Reddit user, going by the name “deepfakeapp”, released a desktop app designed to let consumers create their own clips. Although the app is not easy to use and takes eight to 12 hours of processing time to produce one short clip, its release galvanised the creation of many more videos.
“I think the current version of the app is a good start, but I hope to streamline it even more in the coming days and weeks,” deepfakeapp told Motherboard.
“Eventually, I want to improve it to the point where prospective users can simply select a video on their computer, download a neural network correlated to a certain face from a publicly available library, and swap the video with a different face with the press of one button.”
The ease of making extremely plausible fake videos using neural network-based technology has concerned many observers, who fear that it heralds a coming era when even the basic reality of recorded film, image or sound can’t be trusted.
“We already see it doesn’t even take doctored audio or video to make people believe something that isn’t true,” said Mandy Jenkins, of social news company Storyful. “This has the potential to make it worse.”