AI and fraud: What recent headlines mean for businesses

Artificial intelligence (AI) promises efficiency, automation and transformative insights across industries. But alongside these benefits, fraudsters are increasingly leveraging AI’s power to craft highly convincing scams that can deceive individuals and businesses alike.

From deepfake impersonations of executives to AI-generated phishing campaigns, recent headlines show that AI-enabled fraud is no longer a distant threat; it is an immediate and evolving challenge that organisations must understand and address.

AI as a catalyst for sophisticated fraud

Generative AI technologies, including large language models (LLMs), voice cloning and video synthesis, have dramatically lowered the barriers to creating realistic fraudulent content. While these tools are designed to enhance productivity, they also enable attackers to generate convincing illusions of legitimacy at scale. For example:

  • Deepfakes (AI-generated video and voice impersonations) are now being used to pose as real people in business contexts, tricking victims into authorising sensitive actions or financial transfers.
  • AI-assisted phishing and social engineering campaigns craft personalised messages that improve scam success rates.
  • Synthetic identities and fabricated documents are increasingly used in loan and account takeover fraud.

Several UK-specific developments highlight how AI is shaping modern fraud threats:

Sharp increase in AI-related fraud targeting UK businesses: A 2025 report from Experian found that over a third (35%) of UK businesses reported being targeted by AI-related fraud, up significantly from the previous year. Techniques such as deepfakes, voice cloning and synthetic identities are becoming more common in fraud attempts reported across sectors, from retail banks to digital-first retailers.

Identity fraud driven by AI tools: Data from Cifas shows that identity fraud cases have reached record levels, with criminals using AI-enhanced methods to create fake identities and bypass verification systems. The National Fraud Database recorded more than 118,000 identity fraud cases in early 2025, with synthetic profiles and fabricated credentials playing a major role.

UK firms hit by large‑scale deepfake scams: British engineering giant Arup revealed that a deepfake video call led to a £20 million fraudulent transfer, where an employee was tricked into believing they were speaking to senior colleagues. This incident illustrates how convincingly AI can mimic voices and faces to bypass human checks.

Deepfakes used to exploit public trust: Trusted UK figures and brands are also being misused in scams. Deepfake videos featuring familiar TV doctors and public figures have been circulated on social media to promote bogus health products and investment schemes, eroding trust and confusing consumers.

Widespread consumer targeting: Research from TransUnion shows that a large majority of UK consumers (around 70%) have received scam messages appearing to come from trusted sources – many believe these attempts involved AI-generated voices or images – with impersonated brands such as Royal Mail and Evri topping the list.

What these trends mean for UK businesses

Taken together, these developments send a clear message: AI-enabled fraud is rapidly evolving, and UK businesses are increasingly in the crosshairs. This has several implications:

The scale and sophistication are growing

AI doesn’t just automate fraud; it enhances believability. Fraud attempts now leverage hyper-realistic visuals, voices and messages that can pass manual scrutiny and bypass traditional rule-based fraud detection systems.

Human trust is a vulnerability

Deepfakes exploit human instincts: employees and customers often assume that talking to what appears to be a senior executive on video, or receiving a trusted brand’s message on social media, means the communication is legitimate. These trust shortcuts can now be weaponised at scale.

Traditional controls are less effective alone

Legacy approaches that rely on static rules, such as simple two-factor authentication or manual identity checks, are less effective against synthetic identities and AI-generated spoofing techniques. Modern fraud requires modern defences.

Practical steps UK businesses can take

To respond to the rise of AI-enabled fraud, organisations should consider a multi-layered approach:

Detection

  • Adopt AI-powered monitoring tools that analyse behavioural and contextual anomalies rather than relying purely on signature-based detection.
  • Use real-time analytics to flag unusual access patterns, transaction anomalies or communication inconsistencies.
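As a simple illustration of what real-time analytics can mean in practice, the sketch below flags any transaction that deviates sharply from an account's recent history using a rolling z-score. This is a minimal, hypothetical example; the window size, warm-up length and threshold are illustrative choices, not recommendations, and production systems would combine many more behavioural signals.

```python
from collections import deque
from statistics import mean, stdev

def make_anomaly_flagger(window_size=20, threshold=3.0):
    """Flag transactions whose amount deviates sharply from recent history.

    A transaction is flagged when it lies more than `threshold` standard
    deviations from the rolling mean of the last `window_size` amounts.
    """
    history = deque(maxlen=window_size)

    def check(amount):
        flagged = False
        if len(history) >= 5:  # need a minimal baseline before scoring
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(amount - mu) / sigma > threshold:
                flagged = True
        history.append(amount)
        return flagged

    return check

check = make_anomaly_flagger()
for amount in [120, 95, 110, 130, 105, 98, 115]:
    check(amount)       # builds the baseline of "normal" activity
print(check(25000))     # prints True: a sudden £25k transfer stands out
```

The same rolling-baseline idea applies equally to login frequencies or message volumes, not just payment amounts.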

Prevention

  • Educate staff on the nature of AI-enabled scams, including examples of deepfake calls and advanced phishing.
  • Implement multi-factor authentication, transaction thresholds and redundant verification for high-risk actions.
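The transaction thresholds and redundant verification control above can be made concrete with a small sketch: payments over an illustrative limit require two independent approvers, and a requester can never approve their own payment. The threshold and approval counts here are hypothetical examples to show the pattern, not recommended values.

```python
from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 10_000  # illustrative limit; set to your own risk appetite

@dataclass
class PaymentRequest:
    amount: float
    requested_by: str
    approvals: set = field(default_factory=set)

def can_release(payment: PaymentRequest) -> bool:
    """Require a second, independent approver for high-value payments."""
    approvers = payment.approvals - {payment.requested_by}  # no self-approval
    if payment.amount >= APPROVAL_THRESHOLD:
        return len(approvers) >= 2   # two independent sign-offs
    return len(approvers) >= 1

small = PaymentRequest(500, "alice", {"bob"})
large = PaymentRequest(50_000, "alice", {"bob"})
print(can_release(small))   # prints True
print(can_release(large))   # prints False until a second approver signs off
large.approvals.add("carol")
print(can_release(large))   # prints True
```

Because the rule is enforced in code rather than by convention, a convincing deepfake call to one employee is no longer enough on its own to release a large transfer.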

Response planning

  • Build incident response playbooks that anticipate AI-enabled deception scenarios.
  • Establish clear escalation paths for suspected fraud, including legal and compliance involvement.

Strategic resilience

  • Periodically reassess fraud risk models to incorporate new threat vectors.
  • Collaborate with external specialists where needed to test controls and simulate advanced attack scenarios.

Staying ahead of AI-enabled fraud

AI is transforming both the opportunities and risks faced by businesses. As recent headlines make clear, fraudsters are rapidly embracing generative AI to make scams more believable, scalable and profitable. However, the same technological forces that enable this evolution also empower defenders: advanced detection tools, analytics and enhanced verification frameworks can help organisations stay ahead of these threats.

By monitoring emerging trends, investing in adaptive controls and educating teams, companies can turn awareness into action and safeguard themselves in an age of increasingly sophisticated deception.

ESA Risk investigations and enhanced due diligence

The illusion of legitimacy has never been easier to fake, or more dangerous to ignore. Deepfake directors and AI-generated documents are not science fiction; they’re happening now.

If you’re unsure whether a document or company is real, or if you need help investigating a suspicious entity, our team is here to help.

For further details of these services or to instruct us on a matter, contact our Client Services team at advice@esarisk.com, on +44 (0)343 515 8686, or via our contact form.

Inside the Insolvency Service’s 2026–2031 Investigations and Enforcement Strategy

The Insolvency Service has unveiled a sweeping new five-year strategy that signals a fundamental evolution in its role from an insolvency-focused regulator to a central pillar in the UK’s fight against economic crime. 

With fraud now the most commonly reported crime in the UK, the Service’s 2026–2031 Investigations and Enforcement Strategy lays out a clear and ambitious roadmap. The strategy outlines an expanded role for the agency, including increased enforcement powers, use of data analytics and technology, and closer collaboration with other government bodies. 

From liquidations to law enforcement 

Traditionally viewed as a body focused on liquidations and disqualifications, the Insolvency Service is expanding its scope. Its new strategic priorities emphasise criminal enforcement, asset recovery, and fraud prevention, particularly in areas of systemic abuse, such as Covid-19 Bounce Back Loan Scheme fraud and the misuse of corporate entities as vehicles for laundering money. 

What’s changing? 

  • A broader investigative remit that extends beyond insolvency cases 
  • Greater use of AI and data analytics to uncover complex fraud 
  • Stronger ties with enforcement partners like NATIS, CPS, HMRC, and Companies House 
  • A clear mandate to recover proceeds of crime, including from crypto-assets 

The numbers behind the strategy 

The strategy follows a period of heightened enforcement activity. In the 2024–25 financial year, the Insolvency Service: 

  • Secured 77 criminal convictions 
  • Disqualified over 1,000 company directors 
  • Achieved more than £4 million in compensation orders 
  • Delivered over £50 million in estimated economic benefit by removing bad actors from the market 

The strategy sets targets to expand enforcement activities over the next five years. 

Protecting market confidence 

One of the central themes of the strategy is restoring confidence in the UK’s corporate ecosystem. The Service will play a more prominent role in deterring misconduct, ensuring directors understand the consequences of non-compliance and that victims of economic crime see accountability in action. 

This aligns with the government’s broader ambitions to make the UK one of the safest places in the world to do business, especially in the wake of recent reforms at Companies House and increased scrutiny on shell companies and nominee directors. 

Tackling emerging threats 

The strategy doesn’t just address known threats; it also anticipates emerging ones. The Service is investing in expertise to handle: 

  • Cryptocurrency-linked fraud 
  • Cross-border financial crime 
  • Sophisticated abuse of government funding schemes 

With specialist teams and better intelligence-sharing frameworks, it aims to disrupt criminal networks before damage is done, moving from reactive to preventative enforcement. 

Strategic collaboration 

Perhaps most significantly, the strategy emphasises inter-agency collaboration. It recognises that no single authority can tackle complex fraud alone. The Insolvency Service will work closely with other government bodies, using shared data, joint investigations, and aligned enforcement tactics to deliver faster, more effective outcomes. 

Looking ahead 

As someone who has worked in risk management and investigations for over three decades, I see the 2026–2031 Investigations and Enforcement Strategy as a major turning point in the UK’s approach to corporate oversight.  

For risk professionals, compliance leaders, and directors, the implications are clear: regulators will expect more transparency, better governance, and faster responses to warning signs of misconduct.  

This introduces both challenges and opportunities: increased scrutiny, a more aggressive enforcement posture and expanded data surveillance mean businesses must take internal controls more seriously than ever. At the same time, the strategy promises a fairer marketplace, where those who follow the rules are no longer undercut by fraudsters operating with impunity. 

At ESA Risk, we’ll be tracking how this strategy plays out in practice, how cases are investigated, which industries come under the spotlight, and what risk professionals can do to stay ahead. 

Fraud investigations by ESA Risk 

If you suspect that a fraud has occurred within your business and need advice or support on the next steps, we’re here to help. 

Contact us at advice@esarisk.com, on +44 (0)343 515 8686 or via our contact form to find out more. 

 

The new face of corporate fraud

The digital transformation of business has unlocked unprecedented efficiencies, but it has also opened the door to sophisticated new forms of fraud.

Among the most concerning developments are the emergence of synthetic identities, deepfake corporate officers and AI-generated document forgeries: deceptions that are increasingly difficult to detect with traditional due diligence methods.

These developments are not theoretical. Investigations in jurisdictions around the world are now revealing how generative AI is being actively used to fabricate corporate actors, forge documents, and move illicit funds through legitimate-looking entities.

The rise of deepfake directors and synthetic ID fraud

In the past, fraudulent incorporations often relied on stolen or recycled identity documents. Today, malicious actors can use generative AI to fabricate entire identities, complete with hyper-realistic facial images, counterfeit passports, social media profiles, and digital footprints.

With minimal oversight in many corporate registries, these synthetic individuals are slipping through the cracks.

This creates a critical challenge for compliance teams and investigators: how do you verify an individual who doesn’t exist?

AI-powered document forgery

Just as deepfake technology enables false identities, AI tools are now being used to forge documents, from invoices and contracts to bank statements and audit letters, with alarming realism.

Where traditional fraud requires basic Photoshop skills or rudimentary manipulation, generative AI tools can now:

  • Recreate logos, watermarks, and signatures with high fidelity.
  • Mimic writing styles, layout consistency, and document metadata.
  • Generate false invoice histories that align with legitimate-looking supply chains.

These fake documents are often used to:

  • Support fraudulent loan or trade finance applications.
  • Validate fictitious revenue in accounting fraud schemes.
  • Obscure money laundering transactions via fake vendor invoices.

According to Experian’s UK Fraud and FinCrime Report 2025, 35% of UK businesses were targeted by AI-related fraud in Q1 2025 – up from 23% in the same period last year, a surge fuelled by increasingly sophisticated techniques, including deepfakes, identity theft, voice cloning, and synthetic identities.

Warning signs and red flags

While synthetic IDs and AI-generated documents are hard to detect, several forensic red flags can help:

For synthetic directors:

  • Inconsistent or missing public records of the individual.
  • Digital photos that lack EXIF metadata or show visual signs of AI rendering (e.g., asymmetrical eyes, blurred backgrounds).
  • No verifiable employment or education history.
  • Repetition of similar director names or details across unrelated entities.
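The last red flag above, the same director details recurring across unrelated entities, lends itself to a simple automated screen. The sketch below uses entirely hypothetical register data to show the idea: group appointments by director name and flag any name attached to more companies than a chosen tolerance.

```python
from collections import defaultdict

# Illustrative register extract: (director_name, company_number) pairs.
appointments = [
    ("J. Smith", "01234567"),
    ("A. Patel", "07654321"),
    ("J. Smith", "09999991"),
    ("J. Smith", "09999992"),
    ("M. Jones", "05555555"),
]

def flag_repeated_directors(appointments, max_entities=2):
    """Return director names appearing on more entities than `max_entities`."""
    companies_by_name = defaultdict(set)
    for name, company in appointments:
        companies_by_name[name].add(company)
    return {name for name, companies in companies_by_name.items()
            if len(companies) > max_entities}

print(flag_repeated_directors(appointments))  # prints {'J. Smith'}
```

In practice the grouping key would also include dates of birth, addresses and other fuzzy-matched details, since synthetic identities rarely reuse an exact name.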

For AI-generated documents:

  • Uniform pixel patterns under magnification (suggesting image-based generation).
  • Metadata inconsistencies or overwritten PDF/XMP fields.
  • Signatures that appear identical across multiple documents.
  • Too-perfect formatting or terminology mimicking templated contracts.
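One of these red flags, signatures that appear identical across multiple documents, can be screened for cheaply: a genuine wet-ink signature varies slightly every time it is made, whereas a copy-pasted or generated one is often byte-identical. The sketch below is purely illustrative; the byte strings stand in for signature image crops that would in reality be extracted from scanned documents.

```python
import hashlib

def signature_fingerprint(image_bytes: bytes) -> str:
    """Fingerprint a cropped signature image by hashing its raw bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical stand-ins for signature crops taken from three documents.
doc_a_sig = b"pixel data of signature on contract A"
doc_b_sig = b"pixel data of signature on contract A"   # byte-identical copy
doc_c_sig = b"pixel data of a genuinely re-signed page"

fingerprints = [signature_fingerprint(s)
                for s in (doc_a_sig, doc_b_sig, doc_c_sig)]
duplicates = len(fingerprints) != len(set(fingerprints))
print(duplicates)  # prints True: two documents share an identical signature
```

An exact-hash match only catches literal copies; perceptual hashing or trained forensic models would be needed to catch near-duplicates, but even this crude check surfaces the laziest forgeries.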

How to protect against AI-driven corporate fraud

As AI-driven corporate fraud evolves, businesses and investigators must adapt. Here are several actionable steps:

Enhance KYC/onboarding protocols: Introduce biometric verification, reverse-image search, and cross-referencing of director identities with reliable databases.

Deploy AI against AI: Use AI-based document forensic tools that detect synthetic generation patterns, inconsistencies in text generation, or cloned signatures.

Audit high-risk entities: Conduct periodic deep dives into entities showing abnormal transaction patterns, limited physical presence, or rapid incorporation behaviour.

Work with experts: Partner with investigative firms skilled in digital forensics and open-source intelligence (OSINT) to proactively identify emerging threats.

ESA Risk investigations and due diligence

The illusion of legitimacy has never been easier to fake, or more dangerous to ignore. Deepfake directors and AI-generated documents are not science fiction; they’re happening now.

If you’re unsure whether a document or company is real, or if you need help investigating a suspicious entity, our team is here to help.

For further details of these services or to instruct us on a matter, contact us at advice@esarisk.com, on +44 (0)343 515 8686, or via our contact form.

Deep dive for the answers you need
Or contact us on +44 (0)343 515 8686 or at advice@esarisk.com.

Lawyers, accountants, advisors, investors, senior management. You name them, we help them find the answers they need. Ready to discover how we can help you?