When you watch a video of someone you know, there's sometimes a brief moment, perhaps half a second, when something seems a little off. The mouth moves a beat too late. The lighting shifts in a way the room can't explain. The eyes blink, but not quite the way they should. Most people scroll past that moment. They don't stop.
They believe what they see because, for most of human history, seeing something was enough to establish that it was real. That assumption is now being methodically undermined, one convincing fake at a time.
| Topic Overview | Deepfakes & AI-Generated Media |
|---|---|
| Technology Type | AI-generated synthetic audio-visual media |
| Core Technology | Generative Adversarial Networks (GANs), Deep Learning |
| First Mainstream Emergence | ~2017–2018 |
| Primary Risks | Fraud, impersonation, misinformation, reputational harm |
| Notable Case | Arup (UK engineering firm) — HK$200M (~£20M) lost via deepfake CFO video call, 2024 |
| Detection Research | DARPA Media Forensics Program, University at Albany, Purdue University, UC Riverside |
| Public Awareness Gap | Less than 4% of British citizens could identify all deepfakes in a test (Finder, UK) |
| Legal/Regulatory Status | Evolving; compliance obligations emerging across jurisdictions |
| Reference Website | The Guardian — Arup Deepfake Scam |
Deepfakes, AI-generated media capable of grafting a real person's voice, face, and mannerisms onto fabricated video, have advanced far beyond the novelty stage. For a while they were mostly a curiosity, surfacing occasionally on online discussion boards or as fodder for celebrity scandals. That time has passed.
It has been replaced by something far more unsettling: a thriving industry of deceit, sophisticated enough to trick financial officers, convincing enough to empty corporate bank accounts, and accessible enough that the entry barrier is now closer to a laptop and a free afternoon than any specialized technical skill.
Early in 2024, a case emerged from Hong Kong that shocked many people. An employee of the British engineering giant Arup joined a video call with what appeared to be several colleagues and the company's CFO. Instructions were given. A wire transfer was authorized. By the time anyone realized that the CFO on that call had never been on the call at all, roughly HK$200 million, about £20 million, had left the account.
Every voice and every face had been manufactured. The real people, entirely unaware, were somewhere else. It is hard to shake the image: digital ghosts in the conference room, and nobody in the actual room noticing until the money was gone.
That incident wasn't the result of some cutting-edge, secret technology. It used increasingly accessible tools, refined by the same kind of machine learning that drives voice assistants and photo filters. The architecture underlying most deepfakes, the generative adversarial network, has been improving for years, trained on the enormous collections of audio and video that people have willingly shared online.
Every social media video, interview, and public speech is potential training data. The more often someone has been filmed, the easier they are to imitate.
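For readers curious about the mechanics, the adversarial setup itself is simple enough to sketch. The toy example below is a minimal illustration only, not any real deepfake system; the network sizes, image dimensions, and training data are placeholder assumptions. What it shows is the core loop: a generator produces fakes from random noise, a discriminator tries to tell them from real samples, and each update makes the other's job harder.

```python
# Minimal, illustrative GAN training step (PyTorch). A toy sketch of the
# adversarial idea, not a deepfake pipeline: network sizes, image dimensions,
# and the source of `real_batch` are all placeholder assumptions.
import torch
import torch.nn as nn

IMG_DIM = 64 * 64 * 3   # flattened toy images (assumed size)
NOISE_DIM = 100

# Generator: random noise in, synthetic image out.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 512), nn.ReLU(),
    nn.Linear(512, IMG_DIM), nn.Tanh(),
)

# Discriminator: image in, estimated probability that it is real out.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def training_step(real_batch: torch.Tensor) -> None:
    """One adversarial round: the discriminator learns to separate real from
    fake, then the generator learns to fool the updated discriminator."""
    n = real_batch.size(0)
    real_labels = torch.ones(n, 1)
    fake_labels = torch.zeros(n, 1)

    # Discriminator update.
    fakes = generator(torch.randn(n, NOISE_DIM)).detach()  # no gradient into G here
    d_loss = bce(discriminator(real_batch), real_labels) + \
             bce(discriminator(fakes), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: it is rewarded when its output is scored as "real".
    g_loss = bce(discriminator(generator(torch.randn(n, NOISE_DIM))), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The more footage of a person a system like this can train on, the better its fakes become, which is exactly why a heavily filmed public figure is an easier target than a private individual.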
British financial journalist and consumer advocate Martin Lewis has spent years building a solid reputation for reliability. Scammers traded on that reputation, producing deepfake videos of him endorsing fraudulent investment platforms and circulating them on Facebook to audiences who had every reason to believe what they were seeing.
Lewis has described how seeing his own face used that way made him feel physically sick. This is not merely a technical problem; the emotional violation matters. It is more intimate, and more destructive. When your face can be used as a weapon without your knowledge, the self becomes a kind of liability.
Detection is feasible, but in this chase the fakers usually stay ahead. Research from the University at Albany has shown that face-swapping produces subtle resolution inconsistencies that trained algorithms can pick up. Teams at Purdue are using neural networks to flag anomalies in individual video frames.
For years, DARPA has funded the Media Forensics program, which aims to automate the assessment of digital media integrity. The science is real and progressing. But production methods are evolving faster, and even the most sophisticated detection tools are powerless when a video never passes through them at all: when it lands in an inbox, plays during a private call, or spreads through a WhatsApp group before anyone has a chance to question it.
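To make the shape of that detection work concrete, here is a minimal sketch of how frame-level screening is commonly wired together. The per-frame classifier is a placeholder assumption, not a model from the Albany, Purdue, or DARPA efforts; the point is the pipeline of sampling frames, scoring each for signs of manipulation, and aggregating the result.

```python
# Hedged sketch of frame-level deepfake screening. `score_frame` stands in
# for a trained per-frame classifier (assumed, not provided); the flow of
# sample -> score -> aggregate is the part being illustrated.
import cv2  # OpenCV, used here only to read frames from a video file

def score_frame(frame) -> float:
    """Placeholder for a trained detector returning the probability that a
    single frame has been manipulated. Assumed to exist; not implemented."""
    raise NotImplementedError

def screen_video(path: str, every_nth: int = 10, threshold: float = 0.7) -> bool:
    """Return True if the average manipulation score of sampled frames
    exceeds the threshold. Sampling every Nth frame keeps the check cheap."""
    capture = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_nth == 0:
            scores.append(score_frame(frame))
        index += 1
    capture.release()
    return bool(scores) and sum(scores) / len(scores) > threshold
```

Even a pipeline like this only helps when the video actually reaches it, which is precisely the limitation described above.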
The secondary effect, which is rarely discussed, is what makes this especially challenging to deal with: once people are aware that deepfakes exist, they begin to doubt everything. Real videos are discounted. Actual claims are questioned.
A politician can now assert that a genuine recording of something they actually said is fake. There used to be no such defense. Now there is. The deepfake problem is not only about people believing false information; it is also about people dismissing true information. That kind of damage is slower and harder to repair.
Perceived risks of deepfakes consistently outweigh perceived benefits across age groups and national contexts, according to research surveying citizens in seven European countries. Younger people, particularly in Sweden, France, and the Czech Republic, show somewhat less alarm — but the broad direction of public sentiment is worry, not comfort.
In one study, fewer than 4% of British citizens were able to correctly identify every deepfake shown to them. That is not the share who were fooled; it is the share who were not. Practically speaking, the technology has already outrun the average person's ability to fend it off by observation alone.
Organizations are starting to consider the structural implications of this. Verification procedures built on visual confirmation, seeing the CFO's face or hearing a familiar colleague's voice, are no longer trustworthy as standalone controls. Multi-step authorization, callback protocols, and dedicated authentication codes for sensitive requests are no longer merely best practices but essential architecture. The unsettling reality is that many businesses haven't updated their incident response plans to cover AI impersonation, because until recently those scenarios felt theoretical. They don't anymore.
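Sketched as policy logic, those controls are not complicated. The snippet below is an illustrative sketch only; the threshold, field names, and the idea of a pre-registered callback number are assumptions rather than any standard. The principle it encodes is that no single channel, and certainly not a face on a screen, can authorize a transfer by itself.

```python
# Illustrative sketch of an out-of-band verification rule for high-value
# requests. Threshold, field names, and the callback mechanism are
# assumptions for the example, not a standard or a real product's API.
from dataclasses import dataclass

CALLBACK_THRESHOLD = 10_000  # transfers at or above this need extra checks (assumed)

@dataclass
class TransferRequest:
    requester: str        # who appears to be asking
    amount: float
    request_channel: str  # e.g. "video_call", "email", "chat"

def requires_out_of_band_check(request: TransferRequest) -> bool:
    """Seeing or hearing the requester is never sufficient on its own for
    large transfers; they must be confirmed over a separate channel."""
    return request.amount >= CALLBACK_THRESHOLD

def approve(request: TransferRequest, callback_confirmed: bool, second_signoff: bool) -> bool:
    """Approve only when a callback to a pre-registered number succeeded and
    a second authorized person has signed off."""
    if requires_out_of_band_check(request):
        return callback_confirmed and second_signoff
    return True

# Example: a video call "from the CFO" alone is not enough to move money.
request = TransferRequest(requester="CFO", amount=250_000, request_channel="video_call")
print(approve(request, callback_confirmed=False, second_signoff=True))  # False
```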
Whether regulation can catch up remains an open question. Disclosure laws and platform liability frameworks are emerging in a number of jurisdictions, but the global, dispersed nature of content distribution makes enforcement extremely difficult. What is clear is that public literacy, the ability to pause, question, and verify, matters greatly, even though it cannot be the only line of defense.
Santander UK ran an awareness campaign using deliberately produced deepfakes to teach people what to look for: odd blinking patterns, irregular background reflections, unnatural mouth movements. It's a start. Whether it scales quickly enough is another matter entirely.
Watching this space, there is a sense that the moment of reckoning has already arrived, even if the full ramifications are still unfolding. The internet was built on the assumption that a voice recording, a video clip, or a photograph depicted something that actually happened. The cracks in that faith will be hard to mend. The technology will keep advancing.
The fakes will keep improving. And somewhere, on a real afternoon in a real office, someone will watch a video of someone they trust and decide to do something they'll later regret. The question is not whether that will happen. It's how many times it already has, without anyone noticing.





