Deepfake fraud is increasingly widespread, study finds
Deepfake fraud is no longer a small-scale problem: it has gone industrial. That is the conclusion of a study by the AI Incident Database. The tools for creating fake video and audio are surprisingly cheap, accessible and effective.
Deepfake fraud has become a business model for scammers, and the same patterns recur again and again. Familiar faces, from politicians and news anchors to entrepreneurs and celebrities, are misused to tout so-called investment opportunities, crypto platforms or health products. The videos are then distributed through social media, where they reach their victims.
Deepfake doctors are also popular with scammers, promoting all kinds of wellness and health products.
In the UK alone, consumers are estimated to have lost £9.4 billion (€10.8 billion) to AI-enabled fraud in the first nine months of 2025.
Non-consensual sexual imagery is one of the most alarming patterns to emerge from the study. Between November 2025 and January 2026, several serious cases involving schools and minors were reported. Young people were targeted with fake images, which were often circulated within their own school environment.
“The models are getting better and cheaper, almost anyone can use them now,” Fred Heiding, a Harvard researcher who studies AI and cybersecurity, told The Guardian. He warns that the worst is yet to come. Voice-cloning technology is already mature enough for scammers to deceive people with ease, he says, while the technology for deepfake video is not quite there yet.
Business AM