AI is transforming how corporate fraud is carried out – and making it harder to detect.
The digital transformation of business has unlocked unprecedented efficiencies, but it has also opened the door to sophisticated new forms of fraud.
Among the most concerning developments is the emergence of synthetic identities, deepfake corporate officers, and AI-generated document forgeries: deceptions that are increasingly difficult to detect with traditional due diligence methods.
These developments are not theoretical. Investigations in jurisdictions around the world are now revealing how generative AI is being actively used to fabricate corporate actors, forge documents, and move illicit funds through legitimate-looking entities.
In the past, fraudulent incorporations often relied on stolen or recycled identity documents. Today, malicious actors can use generative AI to fabricate entire identities, complete with hyper-realistic facial images, counterfeit passports, social media profiles, and digital footprints.
With minimal oversight in many corporate registries, these synthetic individuals are slipping through the cracks.
This creates a critical challenge for compliance teams and investigators: how do you verify an individual who doesn’t exist?
Just as deepfake technology enables false identities, AI tools are now being used to forge documents, from invoices and contracts to bank statements and audit letters, with alarming realism.
Where traditional fraud once required basic Photoshop skills or rudimentary manipulation, generative AI tools can now:
These fake documents are often used to:
According to Experian’s UK Fraud and FinCrime Report 2025, 35% of UK businesses were targeted by AI-related fraud in Q1 2025 – up from 23% in the same period last year. The surge is fuelled by increasingly sophisticated techniques, including deepfakes, identity theft, voice cloning, and synthetic identities.
While synthetic IDs and AI-generated documents are hard to detect, several forensic red flags can help:
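One such red flag, by way of example, is internally inconsistent or missing document metadata. The minimal Python sketch below illustrates the idea, assuming the open-source pypdf library and a hypothetical local file path; it is a first-pass triage check, not a substitute for full forensic analysis.

```python
# Minimal sketch: surface basic metadata red flags in a PDF before deeper review.
# Assumes the open-source pypdf library (pip install pypdf) and a hypothetical file path.
from pypdf import PdfReader

def metadata_red_flags(path: str) -> list[str]:
    """Return a list of simple metadata warnings for a PDF document."""
    reader = PdfReader(path)
    meta = reader.metadata
    flags = []

    if meta is None:
        return ["Document carries no metadata at all - unusual for genuine business paperwork."]

    # Missing producer/creator fields are common in files assembled by generation tools.
    if not meta.producer and not meta.creator:
        flags.append("No producer or creator application recorded.")

    created, modified = meta.creation_date, meta.modification_date

    # A modification date earlier than the creation date is internally inconsistent.
    if created and modified and modified < created:
        flags.append(f"Modified ({modified}) before it was created ({created}).")

    # A 'bank statement' generated moments before submission deserves a closer look.
    if created and modified and created == modified:
        flags.append("Created and modified at the exact same instant.")

    return flags

if __name__ == "__main__":
    for warning in metadata_red_flags("suspect_statement.pdf"):  # hypothetical file
        print("RED FLAG:", warning)
```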
As AI-driven corporate fraud evolves, businesses and investigators must adapt. Here are several actionable steps:
Enhance KYC/onboarding protocols: Introduce biometric verification, reverse-image search, and cross-referencing of director identities with reliable databases (see the first sketch after this list).
Deploy AI against AI: Use AI-based document forensic tools that detect synthetic generation patterns, inconsistencies in text generation, or cloned signatures (see the second sketch after this list).
Audit high-risk entities: Conduct periodic deep dives into entities showing abnormal transaction patterns, limited physical presence, or rapid incorporation behaviour.
Work with experts: Partner with investigative firms skilled in digital forensics and open-source intelligence (OSINT) to proactively identify emerging threats.
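To make the first of these steps concrete, the sketch below shows one way a compliance team might cross-reference a purported director against a public register, using the UK Companies House search API. It assumes you have registered for a free Companies House API key; the endpoint, authentication scheme, and response fields shown reflect the public documentation at the time of writing and should be verified before use.

```python
# Illustrative sketch only: cross-check a purported director's name against the
# UK Companies House public register of officers.
import requests

API_KEY = "YOUR_COMPANIES_HOUSE_API_KEY"  # placeholder - obtain via the Companies House developer hub
BASE_URL = "https://api.company-information.service.gov.uk"

def search_officers(name: str, max_results: int = 10) -> list[dict]:
    """Return basic officer records matching a name on the UK register."""
    response = requests.get(
        f"{BASE_URL}/search/officers",
        params={"q": name, "items_per_page": max_results},
        auth=(API_KEY, ""),  # API key as basic-auth username, blank password
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("items", [])

if __name__ == "__main__":
    candidate = "Jane Example"  # hypothetical director name supplied at onboarding
    matches = search_officers(candidate)
    if not matches:
        print(f"No register entries found for '{candidate}' - escalate for manual review.")
    for item in matches:
        # Compare appointment history and addresses against the onboarding file.
        print(item.get("title"), "|", item.get("address_snippet"), "|", item.get("appointment_count"))
```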
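"Deploying AI against AI" spans a wide range of tooling. As one narrow, deliberately non-AI illustration of the cloned-signature check, the sketch below uses perceptual hashing (assuming the Pillow and imagehash libraries, and hypothetical pre-cropped signature images) to flag a signature image that appears to have been copied verbatim between two documents – something genuinely hand-signed paperwork rarely produces.

```python
# Minimal sketch of one narrow check, assuming the Pillow and imagehash libraries:
# perceptual hashing can reveal when the same signature image has been pasted
# into multiple documents. It is not a substitute for full forensic tooling.
from PIL import Image
import imagehash

def signature_hash(path: str) -> imagehash.ImageHash:
    """Perceptual hash of a cropped signature image (hypothetical pre-cropped file)."""
    return imagehash.phash(Image.open(path).convert("L"))

def likely_cloned(path_a: str, path_b: str, threshold: int = 4) -> bool:
    """Flag two signatures whose hashes differ by only a few bits.

    Genuinely hand-signed documents vary from signature to signature; a near-zero
    Hamming distance suggests one image was copied between documents.
    """
    return signature_hash(path_a) - signature_hash(path_b) <= threshold

if __name__ == "__main__":
    # Hypothetical crops extracted from two separately submitted contracts.
    if likely_cloned("contract_1_signature.png", "contract_2_signature.png"):
        print("RED FLAG: signature images are near-identical - possible cloned signature.")
```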
The illusion of legitimacy has never been easier to fake, or more dangerous to ignore. Deepfake directors and AI-generated documents are not science fiction; they are happening now.
If you’re unsure whether a document or company is real, or if you need help investigating a suspicious entity, our team is here to help.
For further details of these services or to instruct us on a matter, contact us at advice@esarisk.com, on +44 (0)343 515 8686, or via our contact form.