Deepfake Attacks: How to Stop this New Cyber Storm

Deepfakes have become synonymous with fraud and cyberattacks. Cybercriminals have weaponized AI-generated voice, image, and video to deliver attacks that are challenging to detect and can cost businesses vast sums of money. According to a study from Osterman Research, offensive AI, like deepfakes, is expected to become "more sophisticated, voluminous, unique, and evasive."
What Is a Deepfake Attack?
The era of the digital deepfake was ushered in by a Reddit community in 2017. Back then, the subreddit revolved around face-swapping technology used to generate deepfake pornography featuring celebrity faces. The malicious use of deepfake technology has continued unabated and now spans a wide range of use cases.
Deepfake technology is based on artificial intelligence, chiefly Generative Adversarial Networks (GANs) – specialized neural networks in which a generator and a discriminator are trained against each other – sometimes combined with Large Language Models (LLMs) for scripted dialogue, to generate convincing video and speech, increasingly in real time.
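To make the adversarial training behind a GAN concrete, here is a minimal sketch in Python (PyTorch) using a toy one-dimensional dataset rather than real faces or voices: the generator learns to mimic the data distribution while the discriminator learns to separate real samples from generated ones.

```python
# Minimal GAN sketch: a toy example on 1-D data, not a production deepfake model.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator G maps 8-D noise to a 1-D sample; discriminator D scores samples.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(2000):
    real = torch.randn(64, 1) + 4.0      # toy "real" data: N(4, 1)
    fake = G(torch.randn(64, 8))         # generated samples

    # Discriminator step: learn to label real as 1 and fake as 0.
    d_loss = bce(D(real), ones) + bce(D(fake.detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: learn to make D label fakes as real.
    g_loss = bce(D(G(torch.randn(64, 8))), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After training, generated samples cluster near the real mean (~4.0).
print(G(torch.randn(1000, 8)).mean().item())
```

A real deepfake model applies the same generator-versus-discriminator contest to images, video frames, or audio instead of numbers, which is why the output becomes progressively harder to tell from the genuine article.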
Deepfakes are often combined with vectors like phishing emails or social media posts to create deepfake attacks. Attackers modify legitimate media to form the basis of an attack; for example, they may construct a deepfake of a CEO. Deepfakes are designed to manipulate people into acting in a way that benefits the cybercriminal, exploiting emotions and behaviors such as trust and urgency. By exploiting trust and relationships, deepfakes are the perfect device for social engineering-based cyberattacks.
The dark web feeds the rise of deepfake attacks by supplying stolen company data used to generate the deepfake videos, voices, and images behind these attacks.
How Do Attackers Use Deepfakes?
Cybercriminals have developed numerous ways to utilize deepfakes in an attack. The following are some of those techniques:

Face Swap
Face-swapping is synonymous with deepfakes, and it continues to be a successful tactic. In one recent case, attackers used face-swap technology to create a deepfake of a businessperson's friend; during a video call, the victim was tricked into handing over $622,000 (4.3 million yuan) to the supposed friend. A report from security vendor iProov found a 704% increase in face-swap deepfake attacks.
Voice Cloning (Voice Impersonation)
Humans connect with others by recognizing faces and other features, including voices. Voice cloning, also called voice impersonation, is another successful tactic used in deepfake attacks. Research by McAfee found that only three seconds of audio are needed to produce an 85% voice match. The report also found that 77% of AI voice-scam victims lost money.
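Figures like an "85% voice match" are typically similarity scores between speaker embeddings. The sketch below is illustrative only: extract_embedding is a hypothetical stand-in for a real speaker-encoder model, and the cosine-similarity scoring shows how two voice samples are turned into a percentage match.

```python
# Sketch: how a "voice match" percentage can be computed from embeddings.
import zlib
import numpy as np

def extract_embedding(audio: np.ndarray) -> np.ndarray:
    """Placeholder for a real speaker encoder that maps audio to a vector.
    Here we just derive a deterministic pseudo-random vector from the samples,
    so the resulting score is meaningless except as a demonstration."""
    seed = zlib.crc32(audio.tobytes())
    return np.random.default_rng(seed).standard_normal(256)

def voice_match(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings, mapped to 0-100%."""
    cos = emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b))
    return float((cos + 1.0) / 2.0 * 100.0)

# Three seconds of (placeholder) audio at a 16 kHz sample rate.
genuine = extract_embedding(np.zeros(16000 * 3))
cloned = extract_embedding(np.ones(16000 * 3))
print(f"voice match: {voice_match(genuine, cloned):.1f}%")
```

The attacker's goal is simply to push a cloned voice's embedding close enough to the victim's that both humans and automated speaker-verification systems accept it.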
AI voice impersonation is used across many channels, including WhatsApp. A recent WhatsApp voice-cloning scam involves tricking people into sending money by using fake voice messages from family members or other known individuals.
AI-Generated Identity Documents / KYC Scams
Deepfakes are not just a matter of spoofed video and voice; documents can also be faked, with repercussions for digital identity. Websites that generate deepfake identity documents are popping up, offering fake passports and driver's licenses. ID documents, like passports, are often used during identity verification (Know Your Customer, KYC).
According to a Sensity report, there are over 10,000 AI image generation tools, 47 of which are designed to bypass identity verification checks and measures. KYC controls, such as biometric liveness checks, can also be circumvented using deepfake technology.
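Liveness checks usually work as a randomized challenge-response. The sketch below is a simplified illustration, with a hypothetical respond callback standing in for live video analysis: it shows why a pre-rendered deepfake fails an unpredictable challenge, and why a real-time deepfake may still pass, which is how these checks get circumvented.

```python
# Sketch: randomized challenge-response liveness check.
import secrets

CHALLENGES = ["turn head left", "blink twice", "read out the number 4719"]

def run_liveness_check(respond) -> bool:
    """respond(challenge) returns the gesture observed on camera.
    In a real system, a computer-vision model scores the live video;
    here a string comparison stands in for that judgment."""
    challenge = secrets.choice(CHALLENGES)  # unpredictable at recording time
    observed = respond(challenge)
    return observed == challenge

# A replayed recording (or pre-rendered deepfake) performs one fixed action:
replayed = lambda challenge: "blink twice"
print(run_liveness_check(replayed))   # usually False: the challenge rarely matches

# A real-time deepfake can react to the challenge, which is why liveness
# checks alone are not a complete defense:
realtime = lambda challenge: challenge
print(run_liveness_check(realtime))   # True
```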
Digital Assistants
Deepfake attackers will use any channel they can to deliver cyberattacks. AI-enabled digital assistants, like Microsoft Copilot, are now being exploited as a delivery vector for deepfake-enabled attacks. A recent example involved phishing email campaigns targeting Microsoft Copilot users: the targeted users received a Copilot-branded communication containing a link to a deepfake video conference.
A report from BlackCloak predicts that attackers will build campaigns that exploit the personal lives of corporate executives, socially engineering them into sending money or sharing confidential information.
The Costs of a Deepfake Attack
A report on deepfake-enabled fraud found that 50% of companies have experienced a video deepfake and 49% an audio deepfake. Among those impacted by a deepfake scam, average losses were $450,000. Some of the most common types of deepfake cyberattacks are as follows:

Business Email Compromise
Deepfakes are an ideal way to socially engineer a target into handing over money. A recent case involved an employee at the engineering firm Arup, who was tricked into attending a video conference with what appeared to be the company's CFO. The conference was fake: the CFO and the other attendees were deepfakes. The company lost $25 million in the attack. A survey by Osterman found that one in five organizations has lost money to a business email compromise attack.
CEO Fraud
Impersonating a CEO is a form of identity fraud that can lead to massive financial losses and reputational damage. A recent example involved attackers impersonating the CEO of advertising agency WPP using voice cloning; in this case, the attackers were unsuccessful. However, CEO fraud targets at least 400 companies per day, with an average loss of $12,000.
Deepfake Identity Scams
Identity scams, or KYC scams, are another type of impersonation fraud, using digital identities built from fake identity documents. These scams can be used to commit fraud in the name of an individual or a business. KYC fraud can also cause reputational damage and non-compliance fines. Organizations with 500 or fewer employees that fall victim to identity fraud lose an average of over $500,000.
Data and Credential Theft
Deepfake-enhanced phishing is the next major threat to companies of all sizes. Because deepfakes trick victims into believing they are communicating with a known and trusted person, they are ideal for socially engineering employees. Combined with generative AI that crafts compelling and plausible phishing lures, deepfake phishing will become a force for credential and sensitive-data theft. The Microsoft Copilot deepfake scam mentioned earlier is a case in point: phishing communications presented through Copilot encouraged users to click a link to a deepfake video conference where personal and sensitive data could be extracted.
Best Practices in Responding to the Threat of Deepfakes
Companies of all sizes and across all sectors must develop comprehensive deepfake attack prevention policies that include the following measures:

Security Awareness Training
Deepfake attacks are designed to manipulate individuals and often impersonate executives or managerial staff. Security awareness training must therefore cover deepfake attacks and social engineering in general.
Robust Identity Management and Verification Systems
Robust identity management, led by the principle of least privilege (PoLP), is an essential layer in the fight against deepfakes. PoLP ensures access is authorized only to the employees and roles that need it. This layer of identity security helps prevent stolen credentials from becoming a security incident.
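As a minimal sketch (the role names and permissions are illustrative assumptions, not any product's API), least privilege can be enforced as a deny-by-default permission check, so a stolen low-privilege credential cannot move money on its own.

```python
# Sketch: deny-by-default, role-based permission check illustrating PoLP.
# Role names and permissions are hypothetical examples.
ROLE_PERMISSIONS = {
    "ap_clerk":   {"invoice.view", "invoice.create"},
    "controller": {"invoice.view", "invoice.approve"},
    "treasurer":  {"invoice.view", "payment.initiate"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Access is granted only if explicitly assigned to the role."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# A phished clerk credential cannot initiate a payment:
print(is_allowed("ap_clerk", "payment.initiate"))   # False
print(is_allowed("treasurer", "payment.initiate"))  # True
```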
Cross-Checks and Verification Processes
Companies are at risk of CEO fraud and BEC scams. With robust verification measures in place, a company can prevent the movement of money unless it is explicitly checked and verified through a chain of command.
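One way to implement this control is to refuse payment release until independent, out-of-band approvals are recorded. The sketch below is illustrative, with an assumed threshold and approver identifiers; it shows why a single convincing deepfake call should never be enough to release funds.

```python
# Sketch: dual-control payment release. A transfer above a threshold needs
# approvals from two distinct people, gathered out-of-band (e.g., a call-back
# to a number on file), so one deepfake video call cannot move money alone.
from dataclasses import dataclass, field

DUAL_CONTROL_THRESHOLD = 10_000   # illustrative limit, in dollars

@dataclass
class PaymentRequest:
    amount: int
    beneficiary: str
    approvals: set = field(default_factory=set)   # IDs of verified approvers

    def approve(self, approver_id: str) -> None:
        # A real system would verify the approver via an independent channel
        # here, never via the inbound call or conference itself.
        self.approvals.add(approver_id)

    def can_release(self) -> bool:
        if self.amount < DUAL_CONTROL_THRESHOLD:
            return True
        return len(self.approvals) >= 2   # dual control above the threshold

req = PaymentRequest(amount=25_000_000, beneficiary="new-vendor-account")
req.approve("cfo-video-call")      # the "CFO" on the deepfake conference
print(req.can_release())           # False: an independent approval is missing
req.approve("treasury-callback")   # verified via a known-good phone number
print(req.can_release())           # True
```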
Deepfake and AI Detection Solutions
Vendors are bringing anti-deepfake technology to market. Examples include AI-enabled anti-phishing capabilities and deepfake-resistant KYC checks.
Dark Web Monitoring
Deepfake attacks are generated using stolen data bought and sold on the dark web, and companies are often chosen as targets based on dark web intelligence. Dark web monitoring tools, like Sentinex, deliver deep insight into the dark web, allowing you to find out if attackers are targeting your company.
FAQ

Why are Deepfakes So Dangerous?
Deepfakes are designed to socially engineer individuals, tricking them into believing a deepfake is someone they know and trust. Deepfake technologies can be used to create realistic and plausible voices, images, and videos that are challenging to differentiate from the real thing. This plausibility makes them an extremely effective weapon in a cyberattack.
How do Deepfakes Differ from Generative AI, like ChatGPT?
Deepfakes use AI in the form of specialized neural networks, most notably the Generative Adversarial Network (GAN), sometimes alongside Large Language Models (LLMs), to generate fake images, video, or voices. Generative AI tools like ChatGPT, by contrast, use LLMs and Natural Language Processing (NLP) to generate realistic text.
Are Deepfakes Illegal?
The context and use of a deepfake determine its legality: if a deepfake is used to carry out illegal activities, like fraud, it is illegal.
Can You Deploy Deepfakes Ethically for Business?
While the illegitimate use of deepfakes is illegal, they can be used ethically in certain cases. For example, deepfakes can be used in an educational context, such as teaching employees about security. Other use cases include marketing and customer service.
How To Spot a Deepfake?
Deepfakes are challenging to identify, both for humans interacting with them and for software tools. Identity verification vendors are leading the way in creating deepfake-resistant tools and processes. However, as deepfake technology improves, detecting deepfakes becomes even more challenging.
Test your ability to identify a deepfake using the DetectFakes website.