
19th January 2026

AI and fraud: What recent headlines mean for businesses

Recent headlines reveal a sharp rise in deepfake scams, synthetic identities and highly targeted impersonation attacks, exposing new vulnerabilities in traditional fraud controls.

Artificial intelligence (AI) promises efficiency, automation and transformative insights across industries. But alongside these benefits, fraudsters are increasingly leveraging AI’s power to craft highly convincing scams that can deceive individuals and businesses alike.

From deepfake impersonations of executives to AI-generated phishing campaigns, recent headlines show that AI-enabled fraud is no longer a distant threat: it is an immediate and evolving challenge that organisations must understand and address.

AI as a catalyst for sophisticated fraud

Generative AI technologies, including large language models (LLMs), voice cloning and video synthesis, have dramatically lowered the barriers to creating realistic fraudulent content. While these tools are designed to enhance productivity, they also enable attackers to generate convincing illusions of legitimacy at scale. For example:

  • Deepfakes (AI-generated video and voice impersonations) are now being used to pose as real people in business contexts, tricking victims into authorising sensitive actions or financial transfers.
  • AI-assisted phishing and social engineering campaigns craft personalised messages that improve scam success rates.
  • Synthetic identities and fabricated documents are increasingly used in loan fraud and account takeover fraud.

Several UK-specific developments highlight how AI is shaping modern fraud threats:

Sharp increase in AI-related fraud targeting UK businesses: A 2025 report from Experian found that over a third (35%) of UK businesses reported being targeted by AI-related fraud, up significantly from the previous year. Techniques such as deepfakes, voice cloning and synthetic identities are becoming more common in fraud attempts reported across sectors, from retail banks to digital-first retailers.

Identity fraud driven by AI tools: Data from Cifas shows that identity fraud cases have reached record levels, with criminals using AI-enhanced methods to create fake identities and bypass verification systems. The National Fraud Database recorded more than 118,000 identity fraud cases in early 2025, with synthetic profiles and fabricated credentials playing a major role.

UK firms hit by large-scale deepfake scams: British engineering giant Arup revealed that a deepfake video call led to a £20 million fraudulent transfer, after an employee was tricked into believing they were speaking to senior colleagues. The incident illustrates how convincingly AI can mimic voices and faces to bypass human checks.

Deepfakes used to exploit public trust: Trusted UK figures and brands are also being misused in scams. Deepfake videos featuring familiar TV doctors and public figures have been circulated on social media to promote bogus health products and investment schemes, eroding trust and confusing consumers.

Widespread consumer targeting: Research from TransUnion shows that a large majority of UK consumers (around 70%) have received scam messages appearing to come from trusted sources, with many believing these attempts involved AI-generated voices or images, and with impersonated brands such as Royal Mail and Evri topping the list.

What these trends mean for UK businesses

Taken together, these developments send a clear message: AI-enabled fraud is rapidly evolving, and UK businesses are increasingly in the crosshairs. This has several implications:

The scale and sophistication are growing

AI doesn’t just automate fraud; it enhances believability. Fraud attempts now leverage hyper-realistic visuals, voices and messages that can pass manual scrutiny and bypass traditional rule-based fraud detection systems.

Human trust is a vulnerability

Deepfakes exploit human instincts: employees and customers often assume that talking to what appears to be a senior executive on video, or receiving a trusted brand’s message on social media, means the communication is legitimate. These trust shortcuts can now be weaponised at scale.

Traditional controls are less effective alone

Legacy approaches that rely on static rules, such as simple two-factor authentication or manual identity checks, are less effective against synthetic identities and AI-generated spoofing techniques. Modern fraud requires modern defences.

Practical steps UK businesses can take

To respond to the rise of AI-enabled fraud, organisations should consider a multi-layered approach:

Detection

  • Adopt AI-powered monitoring tools that analyse behavioural and contextual anomalies rather than relying purely on signature-based detection.
  • Use real-time analytics to flag unusual access patterns, transaction anomalies or communications inconsistencies.
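To make the idea of flagging transaction anomalies concrete, here is a deliberately minimal sketch: a z-score check of a new payment amount against an account's history. The function name and threshold are our own illustrative choices; real fraud platforms combine many behavioural signals (device, location, timing, counterparties), not amount alone.

```python
from statistics import mean, stdev

def flag_anomaly(history, new_amount, threshold=3.0):
    """Flag a transaction whose amount deviates sharply from a user's history.

    A simple z-score baseline, for illustration only: production systems
    score many behavioural and contextual signals, not just amount.
    """
    if len(history) < 2:
        return False  # not enough history to establish a baseline
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return new_amount != mu  # flat history: any change is unusual
    return abs(new_amount - mu) / sigma > threshold

# Typical payments for an account, then a sudden large transfer
usual = [120.0, 95.0, 110.0, 130.0, 105.0]
print(flag_anomaly(usual, 115.0))    # consistent with history
print(flag_anomaly(usual, 20000.0))  # outlier, flagged for review
```

A flagged transaction would then feed the escalation paths described under response planning below, rather than being blocked outright.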

Prevention

  • Educate staff on the nature of AI-enabled scams, including examples of deepfake calls and advanced phishing.
  • Implement multi-factor authentication, transaction thresholds and redundant verification for high-risk actions.
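The second bullet — transaction thresholds with redundant verification — can be expressed as a simple policy table. The sketch below is hypothetical (the threshold values, channel names and check labels are ours, not a standard): the key design point is that instructions received over channels a deepfake can spoof, such as video calls or email, trigger an independent out-of-band confirmation regardless of amount.

```python
from dataclasses import dataclass

# Illustrative policy values only; real thresholds depend on an
# organisation's risk appetite and regulatory context.
SECOND_APPROVER_THRESHOLD = 10_000   # require a second human approver
CALLBACK_THRESHOLD = 50_000          # require out-of-band callback verification

@dataclass
class Transfer:
    amount: float
    initiated_via: str  # e.g. "video_call", "email", "banking_portal"

def required_checks(t: Transfer):
    """Return the verification steps a transfer must pass before release."""
    checks = ["multi_factor_auth"]  # baseline for every transfer
    if t.amount >= SECOND_APPROVER_THRESHOLD:
        checks.append("second_approver")
    if t.amount >= CALLBACK_THRESHOLD or t.initiated_via in {"video_call", "email"}:
        # Channels that deepfakes can spoof get an independent confirmation
        # via a known-good contact route (e.g. a phone number on file).
        checks.append("out_of_band_callback")
    return checks

print(required_checks(Transfer(2_500, "banking_portal")))
print(required_checks(Transfer(75_000, "video_call")))
```

Under a policy like this, the Arup-style scenario — a large transfer requested over a video call — would always require a callback to a known number before funds move, however convincing the caller appears.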

Response planning

  • Build incident response playbooks that anticipate AI-enabled deception scenarios.
  • Establish clear escalation paths for suspected fraud, including legal and compliance involvement.

Strategic resilience

  • Periodically reassess fraud risk models to incorporate new threat vectors.
  • Collaborate with external specialists where needed to test controls and simulate advanced attack scenarios.

Staying ahead of AI-enabled fraud

AI is transforming both the opportunities and risks faced by businesses. As recent headlines make clear, fraudsters are rapidly embracing generative AI to make scams more believable, scalable and profitable. However, the same technological forces that enable this evolution also empower defenders: advanced detection tools, analytics and enhanced verification frameworks can help organisations stay ahead of these threats.

By monitoring emerging trends, investing in adaptive controls and educating teams, companies can turn awareness into action and safeguard themselves in an age of increasingly sophisticated deception.

ESA Risk investigations and enhanced due diligence

The illusion of legitimacy has never been easier to fake or more dangerous to ignore. Deepfake directors and AI-generated documents are not science fiction; they’re happening now.

If you’re unsure whether a document or company is real, or if you need help investigating a suspicious entity, our team is here to help.

For further details of these services or to instruct us on a matter, contact our Client Services team at advice@esarisk.com, on +44 (0)343 515 8686, or via our contact form.
