For many people, the first sign that the ground had shifted was something minor: a family member forwarding a phone video of a politician saying something he never said, or a picture that looked a little too tidy, the kind of tidiness that falls apart on closer inspection. The shadows were wrong; the hands were wrong. And by the time anyone bothered to ask, half a million people had already shared it. That is roughly where we are: not at the cinematic edge of a deepfake apocalypse, but in a slower, less dramatic place where small lies travel faster than corrections, and corrections barely travel at all.
A few years ago, Tom Wheeler, the former FCC chair who now writes from Brookings, made an observation that still holds: technology meant to democratize truth has, in practice, often weakened it. The phones in our pockets are remarkable tools (Wheeler called them the greatest democratizing instrument civilization has produced) and also the most efficient delivery system for false information ever built. Both things are true. The real challenge of this moment is holding them in mind at the same time.
| Topic Snapshot | Details |
|---|---|
| Subject | How digital platforms, AI, and synthetic media are reshaping public belief and the verification of facts |
| Era | The transition from broadcast-era trust to algorithmic, hyper-personalized information feeds |
| Key Technologies Involved | Recommendation algorithms, generative AI, deepfakes, blockchain provenance, AR/VR |
| Notable Institutions Studying It | Brookings, Annenberg School for Communication, MIT Sloan, Futures Platform |
| Cultural Inflection Points | Trump deplatforming (2021), AI-generated political imagery (2024–2025), platforms scaling back fact-checking partnerships (2025) |
| Most Cited Concern | Hyper-personalized misinformation outrunning regulation and human attention |
| Reference Project | FactCheck.org — running since 2003, now contending with deepfakes “more real than real” |
| Open Question | Whether trust can be rebuilt by technology, or only by the slower work of institutions |
The supply side has changed over the past two or three years. Generative AI has collapsed the cost of producing content that looks authentic. A convincing fake video, fabricated quote, or forged document used to take a small team several hours; now it takes a prompt and about ninety seconds. The economics of disinformation have inverted: fabricating information at scale used to be expensive and verifying it cheap, and that is no longer true. Researchers at organizations like the Annenberg Public Policy Center, which has run FactCheck.org since 2003, describe the older toolkit of careful sourcing, slow investigation, and published corrections as being asked to fight a faster war on a smaller budget. The site's own staff say AI-generated imagery now looks "more real than real."

Meanwhile, the platforms that once served as flawed referees have pulled back. Earlier this year, Meta ended its formal fact-checking partnerships. X leans almost entirely on Community Notes, which can be surprisingly effective on some days and useless on others. YouTube quietly favors "authoritative sources" without ever defining what makes a source authoritative. The recurring themes are political fatigue, cost pressure, and corporate caution. This handoff to users may turn out to be a better arrangement in the long run. Or it may simply be abdication dressed up as empowerment.
The deeper change, though, is not the platforms or even the AI. It is what people now expect from information itself. A generation is coming of age that has never known a single, broadly shared media reality: no nightly news anchor, no national front page, no Sunday-morning consensus. Instead, they navigate a thousand small currents. Some of those currents are very good. Many are not. The researchers who track this, at Brookings, Penn, and MIT Sloan (where a recent article described generative AI's tendency to "persuasion bomb" users with growing confidence rather than admit uncertainty), keep arriving at the same uncomfortable conclusion: trust was never built on facts alone. It was built on repetition, institutions, and shared rituals of attention. All three are weaker than they were a decade ago.
What happens next is genuinely unclear. There are real efforts at watermarking standards, content-provenance protocols, and, in some corners, the slow return of editorial gatekeeping. Some of them may work. None of them will fully undo the shift. The information age was supposed to give everyone access to the truth. Instead, it produced an age in which everyone has access to a version of it. Those are not the same thing, and the difference, subtle as it sounds, is probably the story of this decade.




