Most of us have played with Snapchat’s Face Swap feature at least once. You might even have downloaded apps like Restart to up your meme-creation game by doctoring some gifs.
But what, you might ask, have these novelty tools got to do with cyber security? The answer lies in what they demonstrate about technological advancement. A decade or so ago, pretty much only film studios and psyops units had the ability to produce a convincing synthetic (deepfake) video. These days, with just a so-so knowledge of coding and some open-source software, anyone can create a pretty convincing fake.
So can you tell a fake from a genuine video? This test from Microsoft is a good starting point.
Here’s a deep dive into deepfakes and why they’re an issue for cyber security professionals.
How are deepfake videos created?
- A deepfake is a media stream (e.g. an audio recording, or a video with accompanying audio) created by modifying existing material to produce a new stream that purports to show something that never actually occurred.
- The most common technology used for creating deepfakes is a subcategory of AI known as the generative adversarial network (GAN).
- A GAN is built from deep neural networks: layered sets of algorithms loosely modelled on the human brain. This type of AI is well suited to non-mathematical problems such as recognising voices (e.g. Siri) or telling the difference between a cat and a dog.
- Basically, the approach involves two neural networks competing against each other. Both networks are fed real media samples (images, audio or video). One network has the job of generating its own sample (the generative network), while the other one tries to recognise whether those created assets are real or fake (the discriminative network).
- Bit by bit, the generative network gets better at creating content. Likewise, the discriminative network improves on its ability to identify fakes. Each network learns from the other.
- After the GAN system has processed millions of these learning cycles, the generative network should be able to produce fake assets so convincing that an equally capable discriminative network cannot tell them apart from the real ones.
- Based on this, you have the potential to create an entire fake media stream.
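The adversarial loop described above can be sketched in a deliberately tiny example: a linear "generator" learns to mimic samples from a one-dimensional Gaussian, while a logistic-regression "discriminator" tries to tell real samples from fakes. This is an illustration of the training dynamic only (hypothetical learning rate, batch size and distribution, no deep networks), not a deepfake tool:

```python
import numpy as np

rng = np.random.default_rng(0)

REAL_MEAN, REAL_STD = 4.0, 1.25   # the "real media" distribution
LR, STEPS, BATCH = 0.02, 5000, 64  # hypothetical training settings

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-np.clip(u, -30, 30)))

# Generative network: x = a*z + c (starts far from the real distribution)
a, c = 1.0, 0.0
# Discriminative network: D(x) = sigmoid(w*x + b)
w, b = 0.0, 0.0

for _ in range(STEPS):
    # --- train the discriminative network on real vs fake batches ---
    real = rng.normal(REAL_MEAN, REAL_STD, BATCH)
    z = rng.normal(size=BATCH)
    fake = a * z + c
    dr, df = sigmoid(w * real + b), sigmoid(w * fake + b)
    # gradients of the loss -log D(real) - log(1 - D(fake))
    gw = np.mean((dr - 1.0) * real) + np.mean(df * fake)
    gb = np.mean(dr - 1.0) + np.mean(df)
    w -= LR * gw
    b -= LR * gb

    # --- train the generative network to fool the discriminator ---
    z = rng.normal(size=BATCH)
    fake = a * z + c
    df = sigmoid(w * fake + b)
    # gradient of -log D(fake) w.r.t. each fake sample, then chain rule
    gx = (df - 1.0) * w
    a -= LR * np.mean(gx * z)
    c -= LR * np.mean(gx)

fakes = a * rng.normal(size=10_000) + c
print(f"generator output mean: {fakes.mean():.2f} (real mean {REAL_MEAN})")
```

After enough rounds, the generator's samples drift towards the real distribution, which is exactly the "each network learns from the other" dynamic, just in miniature.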
You’ll find no shortage of ready-to-go downloadable deepfake apps. (Here’s a good listicle featuring the most popular ones currently available).
These apps tend to be easy to use, and many are completely free, but they are almost always geared towards fun rather than realistic results.
If you are interested in having a go at creating your own convincing deepfake (purely for research purposes, of course), you’ll need some dedicated software. For this, be prepared to use some Python. You won’t necessarily have to be a Python master, but basic knowledge, or at least the ability to pick it up, will be a great help.
You will also need a dedicated graphics card or a virtual GPU (Google Cloud is worth looking at).
Popular software platforms include DeepFaceLab and Faceswap.
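Before installing any of these tools, it's worth confirming your environment meets the basics described above. A minimal sketch (the version threshold is a hypothetical example; check each tool's own documentation for its real requirements):

```python
import shutil
import subprocess
import sys

def python_ok(min_version=(3, 8)):
    """Check the running interpreter meets a (hypothetical) minimum version."""
    return sys.version_info[:2] >= min_version

def gpu_available():
    """Best-effort check for an NVIDIA GPU by probing the nvidia-smi CLI."""
    if shutil.which("nvidia-smi") is None:
        return False
    try:
        return subprocess.run(["nvidia-smi"], capture_output=True).returncode == 0
    except OSError:
        return False

if __name__ == "__main__":
    print(f"Python OK: {python_ok()}, GPU detected: {gpu_available()}")
```

If no local GPU turns up, that's the point at which a cloud-hosted virtual GPU becomes the practical option.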
Issues for cyber security professionals
You’ve probably seen some of the discourse concerning the political implications of deepfakes. Potential scenarios include fake videos being pushed out to discredit politicians, bolster conspiracy theories and sow general distrust in political processes.
Away from politics, ordinary people and businesses are starting to realize the risks that deepfake technology brings. One recent survey suggests that 80% of businesses are worried about the risks, but less than a third have taken any steps to mitigate them. Cyber security professionals definitely need to have this threat on their radar.
So what are these risks?
- Fraud: e.g. a hacker gains access to a publicly accessible video snippet of your CEO. Using a GAN stack from GitHub, they create a fake video message, which is sent to your head of accounts asking them to release funds to a certain account.
- Reputation: e.g. a hacker creates a stream purporting to be secret footage of your boss dissing your company’s own products, or expressing socially unacceptable opinions.
Of the two, fraud is perhaps the more straightforward risk to protect against. You can address it by making sure the organization has standard systems of communication in place. For instance, a video request for release of information or funds should always be backed up with a written request, so it can be authenticated.
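One simple way to make that written backup authenticatable is a shared-secret message authentication code: finance only acts on the video if the accompanying written request carries a valid tag. A minimal sketch, with hypothetical key handling (a real deployment would use proper key management, or digital signatures):

```python
import hashlib
import hmac

def sign_request(secret: bytes, message: str) -> str:
    """Produce an HMAC-SHA256 tag for a written payment request."""
    return hmac.new(secret, message.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_request(secret: bytes, message: str, tag: str) -> bool:
    """Constant-time check that the tag matches the message."""
    expected = sign_request(secret, message)
    return hmac.compare_digest(expected, tag)

secret = b"example-shared-secret"  # placeholder; never hard-code real keys
msg = "Release 50,000 USD to account 12-3456-789 | ref 2024-001"
tag = sign_request(secret, msg)

print(verify_request(secret, msg, tag))                # True: genuine request
print(verify_request(secret, msg + " tampered", tag))  # False: altered request
```

The point is that a deepfaked video on its own cannot produce the valid tag; only someone holding the shared secret can.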
Proving that a video is fake for reputational purposes is going to be more of a challenge. Your best bet is probably going to involve contacting a reputation crisis management specialist within your country. They should be able to put you in contact with a forensic video expert specializing in authentication.
So what’s the future for dealing with deepfakes? As Mary Branscombe in TechRepublic recently highlighted, the emphasis needs to be on provenance. In other words, with video content, we need to collectively get into the habit of making it absolutely clear, ideally through verifiable metadata, where and when it was produced, by whom, and how it was edited. A deepfake might look real. But if it doesn’t have this level of auditability, people will (hopefully) see through it.
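The core of that provenance idea can be sketched as a record that binds the metadata (producer, creation time, edit history) to the exact media bytes via a cryptographic hash. This is a simplified, hypothetical schema; real standards efforts such as C2PA define much richer, digitally signed records:

```python
import hashlib
import json

def make_provenance(media: bytes, producer: str, created: str, edits: list) -> str:
    """Build a (simplified, hypothetical) provenance record: who produced the
    media, when, how it was edited, and a hash binding it to the bytes."""
    record = {
        "sha256": hashlib.sha256(media).hexdigest(),
        "producer": producer,
        "created": created,
        "edits": edits,
    }
    return json.dumps(record, sort_keys=True)

def verify_provenance(media: bytes, record_json: str) -> bool:
    """Check that the media bytes still match the hash in the record."""
    record = json.loads(record_json)
    return hashlib.sha256(media).hexdigest() == record["sha256"]

video = b"...raw video bytes..."  # placeholder content
rec = make_provenance(video, "Newsroom A", "2021-05-01T10:00Z",
                      ["trim", "colour-grade"])

print(verify_provenance(video, rec))                 # True: untouched
print(verify_provenance(video + b"deepfaked", rec))  # False: modified
```

In practice the record itself would also need to be signed by the producer, so a forger could not simply generate a fresh record for their fake; the sketch only shows the tamper-evidence half of the scheme.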