As we dive deeper into the digital age, fake news, online deception and the widespread use of social media are having a profound impact on every element of society, from swaying elections to distorting scientific facts.
Deepfaking is the act of using artificial intelligence and machine learning to produce or alter video, image or audio content. The technology learns from original footage of a person to create a convincing version of something that never occurred.
So, what’s the deal with deepfakes?
Once a topic only discussed in computer research labs, deepfakes were catapulted into mainstream media in 2017, after various online communities began swapping the faces of high-profile personalities onto actors in pornographic films.
“You need a piece of machine learning to digest all of these video sequences. The machine eventually learns who the person is, how they are represented, how they move and evolve in the video,” says Dr Richard Nock, machine learning expert with our Data61 team.
“So if you ask the machine to make a new sequence of this person, the machine is going to be able to automatically generate a new one.”
“The piece of technology is almost always the same, which is where the name ‘deepfake’ comes from,” says Dr Nock. “It’s usually deep learning, a subset of machine learning, used to ask the machine to forge a new reality.”
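The "digest many frames, then generate new ones" process Dr Nock describes can be illustrated with a toy sketch. Real deepfake pipelines use deep autoencoders or generative adversarial networks; here a truncated SVD stands in for the learned encoder/decoder, and all the data and dimensions are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "video": 200 frames, each a 64-dimensional vector standing in for
# face images of one person (all data here is synthetic).
identity = rng.normal(size=64)
frames = identity + 0.1 * rng.normal(size=(200, 64))

# "Training": learn a compact representation of how this person looks.
mean = frames.mean(axis=0)
_, _, vt = np.linalg.svd(frames - mean, full_matrices=False)
components = vt[:8]                      # 8-dim latent space (the "encoder")

def encode(frame):
    return (frame - mean) @ components.T

def decode(latent):
    return mean + latent @ components

# The learned representation reconstructs real frames...
reconstruction = decode(encode(frames[0]))

# ...and can also decode latent codes that correspond to no real frame:
# the essence of generating a "new sequence" of the person.
fake_frame = decode(rng.normal(scale=0.5, size=8))

print("distance of forged frame from the learned identity:",
      round(float(np.linalg.norm(fake_frame - identity)), 2))
```

The forged frame was never in the footage, yet it lands close to the learned identity, which is exactly why the output looks like the person.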
Let’s go… deeper
As a result, deepfakes have been described as a contributing factor to the 'Infocalypse', a term coined for the age of cybercriminals, digital misinformation, clickbait and data misuse. As the technology behind AI-generated videos improves, it is becoming increasingly difficult for audiences to distinguish fact from fiction.
Creating a convincing deepfake is beyond the average computer user. But an individual with advanced knowledge of machine learning, the specific software needed to digitally alter content, and access to the victim's publicly available social media profile for photographic, video and audio material could do so.
Face-morphing apps with built-in AI and machine learning are, however, becoming more advanced, so deepfake creation may well become attainable to the general population in the future.
One example of this is Snapchat's gender swap filter. A free download is all it takes for a Snapchat user to appear as someone else: the filter completely alters the user's appearance.
There have been numerous instances of catfishing (fabricating an online identity to trick others into exploitative emotional or romantic relationships) via online dating apps using the technology. Some people treat it as a social experiment; others use it as a ploy to extract sensitive information.
Earlier this year an American college student used Snapchat's gender-swap filter to catch a police officer allegedly trying to lure a teen into a sexual relationship.
To deepfake or not to deepfake
Politicians, celebrities and those in the public spotlight are the most obvious victims of deepfakes. But the habit of posting videos and selfies to public internet platforms puts everyone at risk.
The creation of explicit images is one example of how deepfakes are being used to harass individuals online. One AI-powered app generates images of what women might look like unclothed, according to its algorithm.
According to Dr Nock, another possible effect of election deepfakery is an online exodus: a segment of the population placing its trust only in the opinions of a closed circle of friends, whether a physical circle or an online forum such as Reddit.
“Once you’ve passed that breaking point and no longer trust an information source, most people would start retracting, refraining from accessing public media content because it cannot be trusted anymore, and eventually relying on their friends, which can be limiting if people are more exposed to opinions rather than the facts.”
The Obama deepfake was a viral hit, with more than six million views of a video seemingly featuring the former US president. It brought the existence of deepfake technology to light, along with a warning about the trust users place in online content.
Mitigating the threat of digital deceit
According to Dr Nock, there are two main ways to counter deepfakes:
- Invent a mechanism of authenticity, whether a physical stamp, branding or a technology such as blockchain, to confirm that the information comes from a trusted source and that the video depicts something that actually happened.
- Train machine learning models to detect deepfakes created by other machines.
Either mechanism would need to be widely adopted by different information sources in order to be successful.
“Blockchain could work – if carefully crafted – but a watermark component would probably not,” explains Dr Nock. “Changing the format of an original document would eventually alter the watermark, while the document would obviously stay original. This would not happen with blockchain.”
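A minimal sketch of the kind of authenticity ledger Dr Nock alludes to might look like the following. This is a toy hash-chained log, not a real blockchain (no distribution, no consensus), and the publisher name and content are invented:

```python
import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ProvenanceLedger:
    """Append-only, hash-chained log of publication events (toy sketch)."""

    def __init__(self):
        self.blocks = []

    def register(self, publisher: str, content: bytes) -> dict:
        prev_hash = self.blocks[-1]["block_hash"] if self.blocks else "0" * 64
        record = {
            "publisher": publisher,
            "content_hash": sha256(content),  # fingerprint of the original file
            "prev_hash": prev_hash,           # link to the previous block
        }
        record["block_hash"] = sha256(json.dumps(record, sort_keys=True).encode())
        self.blocks.append(record)
        return record

    def verify_chain(self) -> bool:
        prev = "0" * 64
        for block in self.blocks:
            body = {k: v for k, v in block.items() if k != "block_hash"}
            recomputed = sha256(json.dumps(body, sort_keys=True).encode())
            if block["prev_hash"] != prev or recomputed != block["block_hash"]:
                return False
            prev = block["block_hash"]
        return True

ledger = ProvenanceLedger()
ledger.register("trusted-newsroom", b"original video bytes")
print(ledger.verify_chain())   # → True: the untouched chain checks out

# Tampering with a registered record breaks the chain.
ledger.blocks[0]["content_hash"] = sha256(b"forged video bytes")
print(ledger.verify_chain())   # → False
```

Because each block fingerprints the exact published bytes, a real system would also need to register authorised re-encodings as new events chained to the original, rather than relying on a mark embedded in the file itself.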
Machine learning is already being used to detect deepfakes. Researchers from UC Berkeley and the University of Southern California are using the method to distinguish unique head and face movements. These subtle personal quirks are currently not modelled by deepfake algorithms, and the technique returns a 92 per cent level of accuracy.
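The Berkeley/USC work builds a behavioural fingerprint from how a specific person moves their head and face while speaking. A loose sketch of the idea follows, with synthetic pose traces and a plain correlation threshold standing in for the researchers' actual features and classifier (every signal and number here is invented):

```python
import numpy as np

rng = np.random.default_rng(1)

def pose_signals(coupled: bool, n: int = 300):
    """Synthetic per-frame head-tilt and eyebrow traces for one clip.

    In genuine footage the two traces share the speaker's personal
    rhythm (coupled); a face swap tends to break that coupling.
    """
    head = rng.normal(size=n)
    if coupled:
        brow = 0.8 * head + 0.3 * rng.normal(size=n)
    else:
        brow = rng.normal(size=n)  # independent of the head movement
    return head, brow

def mannerism_score(head, brow) -> float:
    # One crude behavioural feature: correlation between the two traces.
    return float(abs(np.corrcoef(head, brow)[0, 1]))

THRESHOLD = 0.5  # invented decision boundary, not the paper's classifier

def looks_genuine(head, brow) -> bool:
    return mannerism_score(head, brow) >= THRESHOLD

real_clip = pose_signals(coupled=True)
fake_clip = pose_signals(coupled=False)
print("real clip flagged genuine:", looks_genuine(*real_clip))
print("fake clip flagged genuine:", looks_genuine(*fake_clip))
```

The real system measures many such movement features at once and trains a classifier over them, but the principle is the same: the forgery fails to reproduce the correlations that make up a person's mannerisms.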
While this research is comforting, bad actors will inevitably continue to reinvent and adapt AI-generated fakes.
Machine learning is a powerful technology, and one that's becoming more sophisticated over time. Deepfakes aside, it is also bringing enormous benefits to areas like privacy, healthcare and transport, including self-driving cars.
Our Data61 team acts as a network, partnering with government, industry and universities to advance AI technologies across society and industry, in areas such as adversarial machine learning, cybersecurity and data protection, and rich data-driven insights.
A version of this article was originally published on the Data61 Algorithm blog.