Tell-tale signs of live deepfake technology
In cybersecurity, attacks come and go, but one certainty is the so-called cybersecurity arms race. Since the earliest malware and the first waves of cyberattacks, keeping users safe has been a cat-and-mouse battle. While the good actors identify and fix security holes, the bad actors exploit new vulnerabilities and pull ahead. Cybersecurity strategies such as defense in depth and zero trust have certainly given adversaries pause for thought. But, always on the lookout for a way in, bad actors are quick to spot a trend and turn it to their advantage. And that brings us to deepfake technology.
What is a deepfake?
In the movies, deepfake technology helps with lip-syncing – for example, when studios want to dub a film into a different language. Flawless – a firm pioneering ‘neural network enabled filmmaking’ based in London and Los Angeles – has developed a product called TrueSync to help with this. Using artificial intelligence (AI), the tool digitally morphs the appearance of actors’ lips to better fit target languages. And to demonstrate how the deepfake technology works, the software’s creators provide a great example of Forrest Gump speaking Japanese.
In the audio space, there has been great work in the development of synthetic voices by big names such as Google, IBM, Microsoft, Amazon, and others. Rich synthetic voice libraries are opening the door to computer-guided newscasts, podcasts, and sonic branding. Building on this, providers are working with clients to offer voice cloning services that enable firms to personalize their products and form stronger links with audiences. But, while many have security measures in place to protect voice owners from misuse, the temptation remains for speech, video, and even chatbots to be misappropriated.
Mobile app stores list page after page of face swap apps designed for fun. The quality may be variable, but the rising number of easy-to-access software tools featuring deepfake technology is noteworthy from a security perspective. There are plenty of legitimate applications for deepfake technology – tools that realistically imitate people’s attributes, either visually or sonically. But the question on security experts’ minds is how bad actors will deploy these algorithms. “Once they have a model, adversaries could apply that to new manifestations,” Matt Lewis, Research Director at NCC Group, told TechHQ.
Doubling down on operations.
Business targets could fall victim to deepfake technology weaponized to cause reputational damage and move share prices. Crafted attacks that repurpose CEO or CFO voice data gathered from internet sources such as earnings calls or keynotes could trick staff into making unauthorized payments. For those reasons, firms should double down on good operating practices. “Business processes should follow the two-person rule,” Lewis advises.
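To make the principle concrete, here is a minimal sketch of the two-person rule applied in software. The class and field names are illustrative assumptions for this article, not taken from any real payments system:

    class PaymentRequest:
        def __init__(self, amount, payee):
            self.amount = amount
            self.payee = payee
            self.approvals = set()

        def approve(self, staff_id):
            self.approvals.add(staff_id)

        def can_release(self):
            # Two or more *distinct* approvers required - one compromised
            # (or deepfaked) instruction is never enough on its own
            return len(self.approvals) >= 2

    request = PaymentRequest(250_000, "ACME Supplies Ltd")
    request.approve("alice")
    request.approve("alice")        # duplicate approvals don't count
    print(request.can_release())    # False
    request.approve("bob")
    print(request.can_release())    # True

The design point is that the check counts distinct identities rather than approval events, so a single deepfaked voice call cannot satisfy it twice.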
Requiring payments to be authorized by two or more staff is just one of a list of proactive steps that companies can take to guard against the abuse of deepfake technology. A more recent concern is the arrival of tools for generating deepfakes live, which – given the huge rise in online meetings over the past couple of years – could be problematic. Deepfakes can often be distinguished from legitimate audio, image, or video sources by their loss in quality. High-resolution, original images become blurred as AI algorithms move pixels around. But – as anyone who’s used Zoom, Teams, or any other online meeting provider over a patchy internet connection knows – an adversary would have plenty of glitches to hide behind. Fortunately, for those defending against potential attacks, deepfake technology has other weak points.
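As a rough illustration of that quality-based tell, the sketch below uses OpenCV’s variance-of-the-Laplacian measure – a standard sharpness proxy – to flag a face region that is markedly softer than the rest of the frame. The file name and the 0.5 ratio are illustrative assumptions, not a calibrated detector:

    import cv2

    def sharpness(gray_region):
        # Higher variance of the Laplacian means more fine detail (sharper)
        return cv2.Laplacian(gray_region, cv2.CV_64F).var()

    frame = cv2.imread("meeting_frame.png")  # hypothetical captured frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Locate faces with OpenCV's bundled Haar cascade
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        face_score = sharpness(gray[y:y + h, x:x + w])
        frame_score = sharpness(gray)
        # A face far blurrier than its surroundings is worth a second look;
        # the 0.5 ratio is an arbitrary starting point, not a tuned threshold
        if face_score < 0.5 * frame_score:
            print(f"Face unusually soft ({face_score:.1f} vs {frame_score:.1f})")

In practice, compression from a patchy connection blurs everything equally, which is exactly why a face that is softer than its own background is the more telling signal.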
“There will always be some visual tells,” said Lewis. “If you are suspicious, ask the person to wave their hand in front of their face.” The motion disrupts the digital landmarks used by the deepfake technology to mask its target image over the host. Back in 2020, Lewis worked with researchers at University College London, UK, to raise awareness of what deepfake technology was capable of and identify mitigation strategies.
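The landmark disruption Lewis describes can be sketched with Google’s MediaPipe Face Mesh library (pip install mediapipe opencv-python). The snippet below simply reports when landmark tracking drops out mid-stream – the moment a live overlay tends to glitch. The camera index and the drop-out logic are illustrative assumptions, not a detection product:

    import cv2
    import mediapipe as mp

    face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
    capture = cv2.VideoCapture(0)  # default webcam; stop with Ctrl+C

    tracked_last_frame = False
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        tracked = results.multi_face_landmarks is not None
        if tracked_last_frame and not tracked:
            # Landmarks vanished between frames - e.g. a hand passing over
            # the face - which is where a deepfake mask visibly breaks down
            print("Landmark tracking lost - watch the overlay for artifacts")
        tracked_last_frame = tracked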
Shaken but not stirred.
To test how straightforward it would be to produce a deepfake, the UCL team – based at the Centre for Doctoral Training in Data Intensive Sciences – swapped the face of Daniel Craig, from the James Bond movie Casino Royale, with that of Matt Lewis from NCC Group. The results showed the limitations of using 2D digital masks, which lack depth. Another observation was that face swaps between differently lit images performed particularly poorly.
Returning to the earlier online meeting example, Lewis points to commercial ‘anti-deepfake’ tools that monitor how webcam images respond to changes in lighting, since live deepfakes lack the feedback of genuine objects or attendees. One area to watch is imposters who already resemble their targets. In these cases, distinguishing between actual images and those generated using deepfake technologies becomes harder. And it should be remembered that while deepfakes are a digital trend, people pretending to be others is nothing new.
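The principle behind such lighting checks can be sketched as a simple challenge–response loop: flash the display as a light source and test whether the brightness of the face region actually follows the challenge. Everything here – the timings, the Haar cascade face finder, and the use of correlation as a score – is an illustrative assumption, not any vendor’s implementation:

    import cv2
    import numpy as np

    def face_brightness(frame, cascade):
        # Mean grey level of the first detected face region, or None
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, 1.1, 5)
        if len(faces) == 0:
            return None
        x, y, w, h = faces[0]
        return gray[y:y + h, x:x + w].mean()

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    capture = cv2.VideoCapture(0)

    challenge, response = [], []
    for i in range(120):                   # roughly four seconds at 30 fps
        bright_phase = (i // 15) % 2 == 1  # alternate dark and bright phases
        # A real tool would drive the display here, e.g. by flashing a
        # white window during bright phases (omitted for brevity)
        ok, frame = capture.read()
        if not ok:
            break
        level = face_brightness(frame, cascade)
        if level is not None:
            challenge.append(1.0 if bright_phase else 0.0)
            response.append(level)

    # A genuine face reflects the flashes, so its brightness should track
    # the challenge signal; a weak correlation is a red flag
    r = np.corrcoef(challenge, response)[0, 1]
    print(f"Challenge/response correlation: {r:.2f}")

Any practical check would combine this with other signals, since webcam auto-exposure and video compression also shift brightness in ways a simple correlation cannot untangle on its own.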