Published: 12 December 2019

The rise of ‘Deep Fakes’

The run-up to the general election on 12 December has once again raised questions about fake news. Professor Alan Woodward, Visiting Professor in the Surrey Centre for Cyber Security, examines how AI techniques are enabling ‘Deep Fakes’: manipulated video footage in which people appear to do and say whatever its creator wants.

The once impregnable idea of democracy is coming under unprecedented attack, especially in those bastions of democratic ideals: the United States of America and the United Kingdom.

By ‘under attack’, we mean that our system for recording, counting and verifying votes is vulnerable to infiltration by those seeking to alter an election. Influencing the fraction of a per cent of voters needed to swing an election is becoming easier by the day. 

In some cases, hostile powers may not care about the election result but simply wish to ‘unmask’ the democratic process as flawed. Such disruptors want their interference to be seen to influence the vote, even if they don’t want to be identified as the source of the attack.

One of the most insidious techniques to emerge in the past couple of years is what has been dubbed the ‘Deep Fake’.

Deep Fakes are different from traditional image manipulations (photoshopped images, for example) because they are no longer simply about adding, subtracting, highlighting or stretching the elements of an existing image or video. With the use of Artificial Intelligence techniques, we can now actually mimic someone: we can make a subject appear to do or say anything we wish. This isn’t clever editing together of snippets of video and audio; rather, the computer learns looks, mannerisms and speech patterns and recreates them electronically.
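
To make that "learns and recreates" point concrete, one common approach behind face-swap Deep Fakes trains a single shared encoder with a separate decoder for each person; the ‘fake’ is then produced by routing person A's encoded expression through person B's decoder. The sketch below is purely illustrative: the crop size, layer widths, training loop and the use of PyTorch are my assumptions, not the method behind any of the examples discussed here.

```python
# Minimal sketch of the shared-encoder / twin-decoder idea behind classic
# face-swap Deep Fakes. All sizes and settings here are illustrative placeholders.
import torch
import torch.nn as nn

IMG = 64 * 64 * 3  # flattened 64x64 RGB face crops (assumed size)

encoder = nn.Sequential(nn.Linear(IMG, 1024), nn.ReLU(), nn.Linear(1024, 256))
decode_a = nn.Sequential(nn.Linear(256, 1024), nn.ReLU(), nn.Linear(1024, IMG), nn.Sigmoid())
decode_b = nn.Sequential(nn.Linear(256, 1024), nn.ReLU(), nn.Linear(1024, IMG), nn.Sigmoid())

params = list(encoder.parameters()) + list(decode_a.parameters()) + list(decode_b.parameters())
optimiser = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

def train_step(faces_a, faces_b):
    """One step: each decoder learns to rebuild its own person from the shared code."""
    optimiser.zero_grad()
    loss = loss_fn(decode_a(encoder(faces_a)), faces_a) \
         + loss_fn(decode_b(encoder(faces_b)), faces_b)
    loss.backward()
    optimiser.step()
    return loss.item()

# Once trained, the 'fake' is simply person A's expression routed through B's decoder:
# fake_b = decode_b(encoder(faces_a))
```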

We’ve seen some extraordinary examples of this, such as when US actor and director Jordan Peele created a video of President Obama lip syncing to Peele’s words. 

Of course, technology moves on.  We no longer need an impressionist, for example.  One company, MyCroft AI, can model your voice – or anyone else’s voice for that matter – using examples you give it.  Originally intended as a way of ‘giving’ a voice back to those who had lost theirs through disease, this type of technology is already being used to model authors’ voices so that they can ‘read’ you their books.

We can create talking heads of anyone we choose, to say whatever we want, today. This is no dystopian future, although until recently it came at a cost and had to be done offline. Then Samsung’s research arm developed a technique that requires only a few frames of video to train the computer to impersonate a human. Incredibly, the latest research from a group in Israel demonstrates a technique that doesn’t require hours of processing to manipulate a video; instead, their method can be used to swap faces on live video streams.

We’ve already seen such techniques used for swapping faces in ‘fake celebrity porn’, so the potential for misuse in political campaigning should be obvious. How many of us would think that a live video could be manipulated like this?

Rather more worrying is that these latest developments require far less skill and equipment than before. The technique from the Israeli group doesn’t even require multiple images of a face in order for it to be swapped onto a video of someone else’s body. The equipment used by the researchers was relatively cheap, and with the level of skill required falling too, this technology could cause a massive proliferation of Deep Fakes.
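
For a sense of how little machinery a live pipeline needs, the sketch below shows the structure of a per-frame face-swap loop built from off-the-shelf OpenCV components (Haar-cascade detection plus seamless cloning). It is emphatically not the researchers’ learned method, the file name and parameters are placeholders, and the result would be crude, but it runs on an ordinary laptop and webcam.

```python
# Crude illustration of the *structure* of a live face-swap loop using classical
# OpenCV tools. Not the neural method described in the research; placeholders throughout.
import cv2
import numpy as np

source_face = cv2.imread("source_face.png")   # hypothetical still of the face to paste in
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)                      # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(grey, 1.3, 5):
        patch = cv2.resize(source_face, (w, h))
        mask = 255 * np.ones(patch.shape, patch.dtype)
        centre = (x + w // 2, y + h // 2)
        # Blend the source face onto the detected region of the live frame.
        frame = cv2.seamlessClone(patch, frame, mask, centre, cv2.NORMAL_CLONE)
    cv2.imshow("swapped", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```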

This technology is a fact of life. You can’t uninvent it. If anything, you can only expect it to improve. The question is: how do we protect ourselves from being swayed by disinformation and misinformation disseminated using these technologies?

Perhaps individuals simply need to be aware enough of the technology to calibrate what they are seeing on social media. But remember: an attacker needs to influence only a tiny fraction of a per cent of voters to change an election result. It’s almost inevitable that some of these Deep Fakes will find floating voters who are unaware of what is possible, or who choose to believe what they see because of underlying confirmation bias.

The social media networks have proven reluctant in the past to take responsibility for material posted that amounted to election tampering. However, they do now seem to accept that they have a role to play in preventing improper influencing. 

Indeed, are the social media networks technically capable of detecting Deep Fakes? At the level of bits and bytes it is difficult to disguise fully that something has been altered or is artificial; doing so takes effort and, above all, money. Research on detection needs to follow closely the work on producing the fakes, not trail years behind. That means action needs to be taken now for the research to be implemented in time.
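
To give a flavour of what analysis ‘at the level of bits and bytes’ can mean, one simple forensic signal is the statistical footprint that generated or re-rendered imagery tends to leave in a frame’s high-frequency spectrum. The sketch below is a toy, single-feature check: the frequency band, the file names and the idea of comparing against known-genuine footage are assumptions for illustration, and real platform-scale detection would combine many such signals with trained classifiers.

```python
# Toy forensic check: what fraction of an image's spectral energy sits in the
# highest frequencies? The 0.75 band is an arbitrary placeholder, not a real detector.
import numpy as np
from PIL import Image

def high_freq_energy_ratio(path):
    """Fraction of spectral energy in the outermost frequency band of a greyscale image."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    outer = radius > 0.75 * radius.max()
    return spectrum[outer].sum() / spectrum.sum()

# Usage (hypothetical files): compare suspect frames against known-genuine footage
# from the same camera, flagging frames whose ratio drifts far from that baseline.
# print(high_freq_energy_ratio("suspect_frame.png"))
```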

One of the reasons the Israeli researchers chose to go public was so legislators and policy makers could be forewarned. The time to listen is now and government needs to use its powers to persuade the social media companies to act.

 

