Recent headlines reveal a sharp rise in deepfake scams, synthetic identities and highly targeted impersonation attacks, exposing new vulnerabilities in traditional fraud controls.
Artificial intelligence (AI) promises efficiency, automation and transformative insights across industries. But alongside these benefits, fraudsters are increasingly leveraging AI's power to craft highly convincing scams that can deceive individuals and businesses alike.
From deepfake impersonations of executives to AI-generated phishing campaigns, recent headlines show that AI-enabled fraud is no longer a distant threat: it is an immediate and evolving challenge that organisations must understand and address.
Generative AI technologies, including large language models (LLMs), voice cloning and video synthesis, have dramatically lowered the barriers to creating realistic fraudulent content. While these tools are designed to enhance productivity, they also enable attackers to generate convincing illusions of legitimacy at scale.
Several UK-specific developments highlight how AI is shaping modern fraud threats:
Sharp increase in AI-related fraud targeting UK businesses: A 2025 report from Experian found that over a third (35%) of UK businesses reported being targeted by AI-related fraud, up significantly from the previous year. Techniques such as deepfakes, voice cloning and synthetic identities are becoming more common in fraud attempts reported across sectors, from retail banks to digital-first retailers.
Identity fraud driven by AI tools: Data from Cifas shows that identity fraud cases have reached record levels, with criminals using AI-enhanced methods to create fake identities and bypass verification systems. The National Fraud Database recorded more than 118,000 identity fraud cases in early 2025, with synthetic profiles and fabricated credentials playing a major role.
UK firms hit by large-scale deepfake scams: British engineering giant Arup revealed that a deepfake video call led to a £20 million fraudulent transfer, in which an employee was tricked into believing they were speaking to senior colleagues. The incident illustrates how convincingly AI can mimic voices and faces to bypass human checks.
Deepfakes used to exploit public trust: Trusted UK figures and brands are also being misused in scams. Deepfake videos featuring familiar TV doctors and public figures have been circulated on social media to promote bogus health products and investment schemes, eroding trust and confusing consumers.
Widespread consumer targeting: Research from TransUnion shows that around 70% of UK consumers have received scam messages appearing to come from trusted sources, with many believing these attempts involved AI-generated voices or images; impersonated brands such as Royal Mail and Evri topped the list.
Taken together, these developments send a clear message: AI-enabled fraud is rapidly evolving, and UK businesses are increasingly in the crosshairs. This has several implications:
AI doesn't just automate fraud; it enhances believability. Fraud attempts now leverage hyper-realistic visuals, voices and messages that can pass manual scrutiny and bypass traditional rule-based fraud detection systems.
Deepfakes exploit human instincts: employees and customers often assume that talking to what appears to be a senior executive on video, or receiving a trusted brand’s message on social media, means the communication is legitimate. These trust shortcuts can now be weaponised at scale.
Legacy approaches that rely on static rules, such as simple two-factor authentication or manual identity checks, are less effective against synthetic identities and AI-generated spoofing techniques. Modern fraud requires modern defences.
To respond to the rise of AI-enabled fraud, organisations should consider a multi-layered approach: combining advanced detection tools and analytics, enhanced identity-verification controls, and regular training so that teams can recognise deepfakes and social-engineering attempts.
AI is transforming both the opportunities and the risks faced by businesses. As recent headlines make clear, fraudsters are rapidly embracing generative AI to make scams more believable, scalable and profitable. However, the same technological forces that enable this evolution also empower defenders: advanced detection tools, analytics and enhanced verification frameworks can help organisations stay ahead of these threats.
By monitoring emerging trends, investing in adaptive controls and educating teams, companies can turn awareness into action and safeguard themselves in an age of increasingly sophisticated deception.
The illusion of legitimacy has never been easier to fake or more dangerous to ignore. Deepfake directors and AI-generated documents are not science fiction; they are happening now.
If you’re unsure whether a document or company is real, or if you need help investigating a suspicious entity, our team is here to help.
For further details of these services or to instruct us on a matter, contact our Client Services team at advice@esarisk.com, on +44 (0)343 515 8686, or via our contact form.
Safeguard your business
Our expert consultants are on hand to give you the support you need.