Can you spot a fake image?
Here are three images: a suburban collision scene (pictured above), a damaged yellow car with a cracked windscreen, and another accident photograph (both below).

Two are entirely or partially fabricated using artificial intelligence. One is real. If you cannot tell the difference, neither can many insurers. That is precisely the problem.
Insurance fraud has added an average of £50 to consumers' annual premiums, and new research demonstrates how artificial intelligence can create convincing fabricated claim scenes, closely mirroring tactics used by fraudsters and organised crime groups.
Data and AI firm SAS conducted a study showing how generative AI can produce doctored insurance images in seconds.
The research comes as the Insurance Fraud Register reports the £50 average premium increase linked to fraudulent activity, while payment platform Adyen found the average cost of a fake claim has reached £84,000, with one in seven claims proven fraudulent.
Industry sources were already reporting last year that some drivers were using AI-generated images to present exaggerated vehicle damage in motor insurance claims.
Generative Adversarial Networks are making it increasingly difficult to distinguish between genuine and altered images, with Zurich reporting a rise in the use of digital technology in false or misleading claims, including digitally modified imagery.
To demonstrate the difficulty of distinguishing real from fabricated imagery, SAS created three images using generative AI tools.
The first image, which appears to show an ordinary collision scene, is entirely synthetic and was created using a prompt for a collision on a suburban English street. The second image features a yellow car from an authentic photograph, but AI was used to remove bystanders, alter number plates, and add windscreen damage digitally.
Only the third image is real.
In the second image, the removal of contextual elements such as people and surrounding vehicles eliminates evidence that insurers typically rely upon for verification.
Adam Hall, insurance fraud specialist at SAS, said fraudsters are using generative AI tools to create fabricated damage and doctored scenes that appear plausible. "With just a few prompts, they can create, enhance or erase visual evidence to support a false insurance claim," he said.
Hall noted that subtle inconsistencies can indicate AI-generated claims, including shadows that fall incorrectly, damage inconsistent with impact patterns, blurred number plates, or backgrounds that appear unusually clean or empty.
"These tiny visual mismatches are often the first red flags of an AI-generated claim," he said.
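The checklist Hall describes could, in principle, feed a rule-based triage step before a claim reaches a human investigator. The sketch below is purely illustrative: the flag names, weights, and threshold are invented for this example and do not describe SAS's actual system.

```python
# Hypothetical red-flag triage for an image-backed claim.
# Each flag mirrors one of the visual mismatches described above;
# the weights and threshold are illustrative assumptions.
RED_FLAGS = {
    "shadow_direction_inconsistent": 3,   # shadows that fall incorrectly
    "damage_inconsistent_with_impact": 3, # damage not matching impact patterns
    "number_plate_blurred": 2,            # blurred or altered plates
    "background_unusually_clean": 1,      # scene stripped of bystanders/context
}

def triage_score(observations: dict) -> int:
    """Sum the weights of every red flag observed in the claim images."""
    return sum(w for flag, w in RED_FLAGS.items() if observations.get(flag))

def needs_manual_review(observations: dict, threshold: int = 4) -> bool:
    """Route the claim to a human investigator once the score hits the threshold."""
    return triage_score(observations) >= threshold
```

A claim whose images show both inconsistent shadows and an unusually empty background would score 4 under these example weights and be routed for manual review; a single minor flag on its own would not.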
The technology is also being deployed by insurers to combat fraud. Hall said AI and machine learning can detect individual scams and organised networks by analysing large volumes of claims data to reveal anomalies and patterns that human reviewers cannot identify, reducing losses and improving accuracy.
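The kind of anomaly detection Hall describes can be sketched, in miniature, with a robust statistical outlier test over claim amounts. Real insurer systems use far richer models and many more variables; the figures and threshold below are invented for illustration.

```python
from statistics import median

def flag_anomalies(amounts: list[float], threshold: float = 3.5) -> list[int]:
    """Return indices of claim amounts that are outliers relative to the batch,
    using the modified z-score (median absolute deviation), which is robust
    to the very outliers it is trying to find."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:
        return []  # no spread in the batch; nothing to flag
    # 0.6745 scales MAD to be comparable with a standard deviation
    return [i for i, a in enumerate(amounts)
            if 0.6745 * abs(a - med) / mad > threshold]

# Six ordinary repair claims and one suspiciously large one (index 5)
batch = [1200, 1500, 1100, 1300, 1250, 84000, 1400]
print(flag_anomalies(batch))  # -> [5]
```

A median-based test is used here rather than a plain mean-and-standard-deviation z-score because a single extreme claim inflates the standard deviation enough to hide itself in small batches.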
As fraudsters adopt techniques including fake identities, forged documents, and digital-first scams, AI systems can continuously retrain their models, absorb new data sources, and deliver risk scoring to assist insurers in detection efforts, Hall added.