
17th July 2025

The new face of corporate fraud

AI is transforming how corporate fraud is carried out – and making it harder to detect.

The digital transformation of business has unlocked unprecedented efficiencies, but it has also opened the door to sophisticated new forms of fraud.

Among the most concerning developments is the emergence of synthetic identities, deepfake corporate officers, and AI-generated document forgeries: deceptions that are increasingly difficult to detect with traditional due diligence methods.

These developments are not theoretical. Investigations in jurisdictions around the world are now revealing how generative AI is being actively used to fabricate corporate actors, forge documents, and move illicit funds through legitimate-looking entities.

The rise of deepfake directors and synthetic ID fraud

In the past, fraudulent incorporations often relied on stolen or recycled identity documents. Today, malicious actors can use generative AI to fabricate entire identities, complete with hyper-realistic facial images, counterfeit passports, social media profiles, and digital footprints.

With minimal oversight in many corporate registries, these synthetic individuals are slipping through the cracks.

This creates a critical challenge for compliance teams and investigators: how do you verify an individual who doesn’t exist?

AI-powered document forgery

Just as deepfake technology enables false identities, AI tools are now being used to forge documents, from invoices and contracts to bank statements and audit letters, with alarming realism.

Where traditional forgery required basic Photoshop skills or rudimentary manual manipulation, generative AI tools can now:

  • Recreate logos, watermarks, and signatures with high fidelity.
  • Mimic writing styles, layout consistency, and document metadata.
  • Generate false invoice histories that align with legitimate-looking supply chains.

These fake documents are often used to:

  • Support fraudulent loan or trade finance applications.
  • Validate fictitious revenue in accounting fraud schemes.
  • Obscure money laundering transactions via fake vendor invoices.

According to Experian’s UK Fraud and FinCrime Report 2025, 35% of UK businesses were targeted by AI-related fraud in Q1 2025 – up from 23% in the same period last year, a surge fuelled by increasingly sophisticated techniques, including deepfakes, identity theft, voice cloning, and synthetic identities.

Warning signs and red flags

While synthetic IDs and AI-generated documents are hard to detect, several forensic red flags can help:

For synthetic directors:

  • Inconsistent or missing public records of the individual.
  • Digital photos that lack EXIF metadata or show visual signs of AI rendering (e.g., asymmetrical eyes, blurred backgrounds); a simple automated metadata check is sketched after this list.
  • No verifiable employment or education history.
  • Repetition of similar director names or details across unrelated entities.
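
For teams that want to automate a first pass on the metadata point above, the short Python sketch below (using the Pillow imaging library) simply flags supplied photos that carry no EXIF data at all. The file names are illustrative, and a missing EXIF block is only a prompt for closer review; many legitimate images have their metadata stripped too.

# Minimal sketch: flag supplied ID photos that carry no EXIF metadata.
# Requires Pillow (pip install Pillow). File names are illustrative.
from PIL import Image

def exif_red_flag(path: str) -> bool:
    """Return True if the image has no EXIF data (worth a closer look)."""
    with Image.open(path) as img:
        exif = img.getexif()  # empty mapping when no EXIF is present
    return len(exif) == 0

for photo in ["director_photo_1.jpg", "director_photo_2.png"]:
    if exif_red_flag(photo):
        print(f"{photo}: no EXIF metadata - review manually")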

For AI-generated documents:

  • Uniform pixel patterns under magnification (suggesting image-based generation).
  • Metadata inconsistencies or overwritten PDF/XMP fields (see the sketch after this list).
  • Signatures that appear identical across multiple documents.
  • Too-perfect formatting or terminology mimicking templated contracts.
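
To make the metadata red flag above concrete, here is a minimal Python sketch, using the open-source pypdf library, that surfaces basic oddities in a PDF's document-information fields. The file names are illustrative, and clean metadata proves nothing on its own; equally, unusual metadata is a prompt for scrutiny, not proof of forgery.

# Minimal sketch: surface basic PDF metadata oddities worth a second look.
# Requires pypdf (pip install pypdf). File names are illustrative.
from pypdf import PdfReader

def metadata_flags(path: str) -> list[str]:
    flags = []
    info = PdfReader(path).metadata  # may be None if no info dictionary exists
    if not info:
        return ["no document-information metadata at all"]
    if not info.producer and not info.creator:
        flags.append("no producer/creator recorded")
    created, modified = info.creation_date, info.modification_date
    if created and modified and modified < created:
        flags.append("modification date earlier than creation date")
    return flags

for doc in ["invoice_0417.pdf", "bank_statement_q1.pdf"]:
    for flag in metadata_flags(doc):
        print(f"{doc}: {flag}")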

How to protect against AI-driven corporate fraud

As AI-driven corporate fraud evolves, businesses and investigators must adapt. Here are several actionable steps:

Enhance KYC/onboarding protocols: Introduce biometric verification, reverse-image search, and cross-referencing of director identities with reliable databases.
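
As one small illustration of the cross-referencing step, the sketch below fuzzy-matches a declared director's name against names already seen across unrelated filings, using only the Python standard library. The watchlist, names, and similarity threshold are hypothetical; a near-match is a reason to look closer, not evidence of wrongdoing.

# Minimal sketch: fuzzy-match a declared director name against names
# already seen across unrelated entities. Standard library only.
# The watchlist and the 0.9 threshold are hypothetical.
from difflib import SequenceMatcher

seen_directors = ["Jonathan A. Mercer", "Ana-Maria Petrescu", "Li Wei Zhang"]

def near_matches(candidate: str, known: list[str], threshold: float = 0.9):
    c = candidate.casefold()
    for name in known:
        if SequenceMatcher(None, c, name.casefold()).ratio() >= threshold:
            yield name

for hit in near_matches("Jonathon A Mercer", seen_directors):
    print(f"Review: declared director resembles '{hit}' on another entity")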

Deploy AI against AI: Use AI-based document forensic tools that detect synthetic generation patterns, inconsistencies in text generation, or cloned signatures.
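
One concrete building block for the "cloned signatures" check is perceptual hashing: genuine wet-ink signatures vary slightly from document to document, so signature crops that hash to near-identical values across unrelated documents deserve scrutiny. The sketch below uses the open-source Pillow and ImageHash libraries; the file names and distance threshold are illustrative.

# Minimal sketch: flag signature crops that are (near-)identical across
# documents, using perceptual hashing. Requires Pillow and ImageHash
# (pip install Pillow ImageHash). File names and threshold are illustrative.
from itertools import combinations
from PIL import Image
import imagehash

signature_crops = {
    "contract_2024.png": None,
    "loan_application.png": None,
    "audit_letter.png": None,
}

# Hash each signature crop (64-bit perceptual hash).
for path in signature_crops:
    signature_crops[path] = imagehash.phash(Image.open(path))

# A Hamming distance of 0-2 between different documents is suspicious:
# genuine handwritten signatures rarely reproduce pixel-for-pixel.
for (a, ha), (b, hb) in combinations(signature_crops.items(), 2):
    if ha - hb <= 2:
        print(f"Possible cloned signature: {a} vs {b} (distance {ha - hb})")

A hash distance of zero across several documents is a strong signal that a single scanned signature image has been reused; moderate distances are common and not suspicious in themselves.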

Audit high-risk entities: Conduct periodic deep dives into entities showing abnormal transaction patterns, limited physical presence, or rapid incorporation behaviour.

Work with experts: Partner with investigative firms skilled in digital forensics and open-source intelligence (OSINT) to proactively identify emerging threats.

ESA Risk investigations and due diligence

The illusion of legitimacy has never been easier to fake, or more dangerous to ignore. Deepfake directors and AI-generated documents are not science fiction; they're happening now.

If you’re unsure whether a document or company is real, or if you need help investigating a suspicious entity, our team is here to help.

For further details of these services or to instruct us on a matter, contact us at advice@esarisk.com, on +44 (0)343 515 8686, or via our contact form.
