28th September 2021

Deepfakes: 2021 Report

Cyber criminals are constantly finding new ways to dupe unsuspecting individuals into handing over their money. As a result, recent years have seen a huge increase in the use of ‘deepfakes’, a type of identity fraud that leverages artificial intelligence to create frighteningly convincing fake images, videos and voice recordings.

Deepfakes are not a new threat, but the technology behind them has been advancing steadily, making this type of fraud increasingly difficult to identify. Most deepfakes are created for entertainment purposes, for example to ‘resurrect’ deceased cast members and bring them back to life on screen in films. Even a quick Google search will give you a taste of what deepfake technology can achieve, such as the video that appears to show Mark Zuckerberg giving a speech about data “stolen” by Facebook. It was created by artists Bill Posters and Daniel Howe to demonstrate the potential power of fake news.

How is a deepfake created?

To create a convincing deepfake video, software uses artificial intelligence (AI) to analyse the facial expressions of the chosen subject. This information is then used to generate a video, superimposing the subject’s face onto real footage of someone else. Fairly convincing results can even be produced live during a video call – the poor connection and grainy image we often experience during a video conference mean the fake doesn’t have to be perfect to work.
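
To make that concrete, the classic ‘face swap’ approach trains a single shared encoder alongside one decoder per person, then crosses them over. The sketch below is conceptual pseudocode only – every class and function name in it is a hypothetical placeholder, not a real deepfake library:

    # Conceptual pseudocode only - every name here is a hypothetical
    # placeholder, not a real library API. It will not run as written.
    encoder   = SharedEncoder()   # learns a compact code for any face
    decoder_a = FaceDecoder()     # learns to reconstruct person A's face
    decoder_b = FaceDecoder()     # learns to reconstruct person B's face

    # Training: each decoder learns to rebuild its own person's face
    # from the shared encoding.
    for frame in footage_of_person_a:
        train_step(decoder_a, encoder(frame), target=frame)
    for frame in footage_of_person_b:
        train_step(decoder_b, encoder(frame), target=frame)

    # The swap: encode person B's pose and expression, then decode it
    # with person A's decoder. The output moves like B but looks like A.
    fake_frame = decoder_a(encoder(frame_of_person_b))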

Next comes creating the audio. Freely available software such as Lyrebird AI allows you to impersonate anyone’s voice. “Record 1 minute from someone’s voice and Lyrebird can compress her/his voice’s DNA into a unique key. Use this key to generate anything with its corresponding voice,” says the Lyrebird website. None of this software is illegal or requires a special licence to use, meaning anyone can access it. And if you don’t have the technical knowledge to create a deepfake yourself, you can pay someone else a couple of hundred pounds to do it for you.
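
In outline, tools like this work in two stages: an encoder distils a short recording into a compact voiceprint – the ‘unique key’ Lyrebird describes – and a synthesiser conditions on that voiceprint to speak arbitrary text. Again as conceptual pseudocode, with hypothetical function names rather than any real product’s API:

    # Conceptual pseudocode - the function names are hypothetical
    # placeholders, not Lyrebird's actual API.
    voiceprint = extract_speaker_embedding("one_minute_sample.wav")
    audio = synthesise_speech("Please transfer the funds today.", voice=voiceprint)
    save_wav("cloned_instruction.wav", audio)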

Deepfakes in business

But what are the threats to businesses from this type of technology? Admittedly, the chances of someone using a deepfake to impersonate your CEO to extract funds are slim. But, despite the odds, this has happened – and using a far less sophisticated approach than some of the high-profile examples you will see posted online for entertainment purposes. In October 2019, it was reported that a top executive in a UK-based energy company had been duped into transferring £200,000 to cyber fraudsters. The perpetrators used AI voice technology to mimic the executive’s boss, who was based at the German headquarters. The executive was instructed to move the funds immediately to a Hungarian bank account and was told they would be returned later. They never were.

This example demonstrates several factors fraudsters rely upon to ensure a deepfake fraud is successful:

  1. Authority: In the above example, the senior executive was the head of the UK arm of the company. Despite this, he still felt unable to question the instructions given by his boss. Fraudsters rely on professional hierarchies and social norms to predict people’s behaviour patterns. Authority is a useful tool for criminals because people are often reluctant to question it.
  2. Urgency: If we are told something is urgent (especially by a superior) it immediately changes the way we react and inhibits our ability to think clearly. Instead of following the normal steps required, we rush – focusing our attention on getting the task done, rather than how or why we are doing it in the first place.
  3. Doubt: The vast majority of phone calls are genuine, and fraudsters rely heavily on being given the ‘benefit of the doubt’.
  4. Distance: Contact by email or phone creates a barrier that allows inconsistencies or discrepancies to be explained away. The slight difference in the voice of the ‘German boss’ in the energy company case might have been put down to the phone line, background noise or the caller having been unwell; a difference in tone in an email could be because the person was in a rush.

Deepfakes and Covid-19

Even if you had never used a video conferencing app before the Covid-19 pandemic, you will no doubt be more than familiar with the software today. This influx of inexperienced, regular users of apps such as Zoom, Skype and Microsoft Teams has provided an unending supply of data for cyber criminals to exploit. Zoom came under fire in 2020 when it was revealed that thousands of private recordings of Zoom calls could easily be found online through a simple search of cloud data storage. The recordings weren’t held by the platform itself; they were files that individual users had saved themselves – an option Zoom offers once a recording has been created – and then uploaded to unsecured online storage. Nonetheless, this means hours of footage are available for fraudsters to access online and potentially use to create deepfake videos.

In times of crisis, financial institutions will always be prime targets for cyber criminals looking to cash in. For this reason, cyber security is something all firms should be investing in right now. But do they fully understand the risks posed by AI and deepfakes?

In my experience, the answer has to be no, absolutely not. There is a complete lack of training, education and awareness about cyber risk in general, even in some of the UK’s largest financial institutions. AI now allows cyber criminals to take their phishing to the next level, meaning that we can’t rely on a phone call or even a video call to verify authenticity. The solution is to put a proper training regime in place in every institution. Every company in every sector should have one for every worker, from the cleaner to the CEO.

Social engineering

Cyber criminals use a combination of factors to break down security barriers in firms and improve the success rates of deepfakes, including social engineering. This involves manipulating individuals into divulging sensitive information which can then be used to access internal systems, or convincing people to transfer funds or data. Keeping employees happy is a huge part of mitigating this risk. But ensuring sensitive information is always treated as such should be the first step.

I was delivering a talk at an event recently and two women who worked at one of the UK’s largest insurance firms approached me afterwards. They said they were a bit concerned about cyber security in their office because there was a notebook in which everyone’s usernames and passwords were kept for convenience. This kind of mistake is born of frightening ignorance, but it is also terrifyingly common. Once criminals have access to a senior executive’s email account, they can easily impersonate that individual and gather enough data to extract funds or cause untold damage. And that’s before we’ve even entered deepfake territory. All they need is a disgruntled member of staff with an axe to grind and they could easily get hold of that notebook.

Combatting deepfake fraud

AI technology is advancing all the time, so deepfakes are only going to become more convincing. For businesses, this means strengthening verification methods, even if doing so seems ‘silly’ or unnecessary.

Encouraging employees to feel comfortable seeking verification from a senior member of staff is essential. Measures such as calling someone back on a known number if they have phoned to ask for a funds transfer, or implementing a series of security questions which only that particular person would be able to answer, are vital. Of course, many of the examples we have seen of deepfake voice calls have involved something that doesn’t usually happen, e.g. a call out of the blue from a senior executive asking for an urgent payment. Firms need to create strict protocols that are always adhered to, so that a call like this would immediately trigger alarm bells.

If going against protocol is a common occurrence, it increases the likelihood of a deepfake’s success. Being organised and always following the same process and verification measures – without fail – will mean any deviation from normal practice is easily picked up, as the sketch below illustrates. If you are disorganised and regularly make exceptions to the rules, how can you expect your staff to know the difference?
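
What might such a protocol look like in practice? Below is a minimal sketch in Python of an unconditional call-back rule for payment requests. The names and the threshold are invented purely for illustration – the point is that verification is hard-coded, with no exception for seniority or urgency:

    # Minimal sketch of an unconditional call-back rule for payment requests.
    # All names and thresholds are illustrative, not a real system.
    from dataclasses import dataclass

    @dataclass
    class PaymentRequest:
        requester: str                          # who appears to be asking
        amount: float                           # requested transfer amount
        channel: str                            # "phone", "video", "email", ...
        verified_callback: bool = False         # did WE call THEM back on a known number?
        verified_second_approver: bool = False  # independent sign-off obtained?

    def may_execute(req: PaymentRequest) -> bool:
        """A request is actionable only after out-of-band verification.
        Crucially, there is no bypass for senior staff or 'urgent' requests."""
        if not req.verified_callback:
            # Always call back on a number from the company directory,
            # never one supplied during the request itself.
            return False
        if req.amount > 10_000 and not req.verified_second_approver:
            # Large transfers also need a second, independent approver.
            return False
        return True

    # A convincing voice on the phone is not enough on its own:
    urgent_call = PaymentRequest(requester="CEO", amount=200_000, channel="phone")
    print(may_execute(urgent_call))  # False until both checks are completed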

Always one step ahead

Active learning technology allows cyber criminals to boost the success rates of phishing emails and other such scams by gathering data on what works and what doesn’t, then using this information to adapt their approach. Cyber criminals are always one step ahead. This is why it is so important to keep educating staff about the technological capabilities cyber criminals now have. Share examples of incidents that have occurred, remind staff of the protocols they must follow and don’t just limit cyber risk training to new recruits; it should be ingrained in everyday, business-as-usual activities so that being aware of these types of risk is second nature.
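
To see why this feedback loop is so powerful, consider a toy model of the mechanism in Python – a simple ‘try variants, keep what works’ loop. Everything here is simulated and invented for illustration; it models the adaptive behaviour the attackers exploit, not any real tool:

    # Toy illustration of the attacker's feedback loop: try variants,
    # measure what works, then favour the most successful approach.
    # All figures are simulated; this models the mechanism, not a real tool.
    import random

    templates = {"invoice": 0.0, "urgent-boss": 0.0, "it-support": 0.0}
    true_rates = {"invoice": 0.03, "urgent-boss": 0.07, "it-support": 0.05}
    counts = {name: 0 for name in templates}

    for _ in range(5000):
        if random.random() < 0.1:                       # explore occasionally
            choice = random.choice(list(templates))
        else:                                           # otherwise exploit the best so far
            choice = max(templates, key=templates.get)
        success = random.random() < true_rates[choice]  # simulated outcome
        counts[choice] += 1
        # Update the running success rate for the chosen template.
        templates[choice] += (success - templates[choice]) / counts[choice]

    # Typically converges on "urgent-boss", the most effective lure.
    print(max(templates, key=templates.get))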

The sheer speed with which technology is progressing is what makes deepfakes so concerning. In The Weekly, a documentary series for The New York Times, investigative journalist David Barstow followed a group of AI engineers and machine-learning specialists in their quest to create the perfect deepfake. Their abilities and the capabilities of the technology they were using – which could easily fall into the hands of criminals – were as impressive as they were alarming. “It’s astonishing the progress a handful of smart engineers were able to make in a matter of months,” Barstow said. “Teams of computer scientists around the world are racing to invent new techniques to quickly identify manipulated audio and video. The bad news [is] some deepfake creators are incorporating the machine-learning algorithms behind those countermeasures to make future deepfakes even harder to detect.”

Barstow believes that even global web platforms like WhatsApp and Facebook are “woefully unprepared” to help users spot deepfakes. If businesses are to avoid falling foul of this growing threat, they need to start taking it seriously now, before it’s too late.

Common security mistakes that could result in deepfake fraud:

  • Sharing too much information on social media platforms.
  • Not questioning authority – assuming that because the boss calls, normal security protocols can be bypassed.
  • Leaving cyber security to the IT people – it should be part of every employee’s induction and ongoing training.
  • Not looking after staff welfare – disgruntled employees with access to sensitive information, internal systems or login credentials pose a serious risk.
  • Not securing personal devices properly – especially in light of increased home working during the Covid-19 pandemic.
  • Trusting someone you have only met remotely.

High-profile examples of deepfakes:

American actor, writer and producer Jordan Peele created a deepfake video of Barack Obama making outrageous statements and openly criticising then US President Donald Trump, to demonstrate the potential power of fake news in politics.

Artists Bill Posters and Daniel Howe made a convincing deepfake video of Facebook CEO Mark Zuckerberg, where he appears to tell CBSN news that he owns “billions of people’s stolen data…all their secrets, their lives, their futures.”

Speaker of the US House of Representatives Nancy Pelosi has been targeted several times by individuals looking to damage her reputation through fake videos. These examples are not technically deepfakes, but the speed and pitch of her voice have been altered to make it sound as though she is drunk. It is still not known who is responsible for creating these videos.

Catalan artist Salvador Dalí was brought back to life in 2019 as an exhibition “host” by the Dalí Museum in Florida. The interactive installation included 45 minutes of footage across 125 videos, allowing more than 190,000 different combinations depending on visitor responses.

In 2019, a suspicious video of the President of Gabon, released after a long public absence, sparked rumours of a deepfake and contributed to an attempted military coup. This example demonstrates how even the knowledge that deepfake technology exists can make us question whether what we are seeing is real.

In July 2020, it was discovered that published British ‘journalist’ Oliver Taylor, who claimed to have studied at the University of Birmingham in the UK, was in fact a fabricated persona fronted by a deepfake image. Alarm was raised when an article by Taylor was published in the US Jewish newspaper The Algemeiner, criticising activist couple Mazen Masri and Ryvka Barnard and accusing them of being “known terrorist sympathisers.” A fabricated photograph and an account on the question-and-answer site Quora are the only records of his existence. Despite this, “Taylor” had several articles published in newspapers, including the Jerusalem Post.
