In brief
Sociologists say the Dead Internet Theory now matches how users experience the web.
Research shows more than half of all web traffic now comes from bots as synthetic content spreads.
Some researchers say the web is not dying, but reacting to incentives that reward automated engagement.
Much of the internet still runs on human traffic, but increasingly it feels less human.
As AI-generated posts, bots, and automated agents spread across major platforms, researchers say the online world is starting to resemble the scenario described by the Dead Internet Theory: the idea that much of what people see online is no longer produced by humans, but by automated systems built to imitate them.
When the notion first circulated a few years ago on conspiracy forums like 4chan and Agora Road's Macintosh Café, it sounded implausible, but the rise of generative AI has changed how researchers view the claim.
In fact, bot activity overtook human traffic for the first time last year. According to Imperva’s 2025 Bad Bot Report, a global study of automated traffic on the internet, automated systems accounted for 51% of all web traffic in 2024. AI-generated articles also surpassed human-written work for the first time in late 2024, according to analytics firm Graphite.
“There’s no direct way to measure it, but a lot of signs point to the internet looking different than we think,” Alex Turvy, a sociologist who studies how people interact on social media, told Decrypt.
When researchers say bots are reshaping the internet, they mean two things: rising non-human traffic across the network as a whole, and the growing presence of automated or AI-generated content within individual platforms.
The broader concern, Turvy said, isn’t that fewer people are online, but that automated activity is eroding the basic cues people use to tell who’s real. When machines can mimic those signals, he said, users begin to doubt everyone. Some withdraw. Others move conversations into semi-private or gated spaces.
“A lot of people are retreating to places like Discord or private group chats where they can be more certain about who they are talking to,” he said. “When the usual cues stop working, people look for other ways to know who they are talking to.”
That drift into private channels makes the public internet feel quieter, even though overall human activity hasn’t changed.
A February 2025 paper in the Asian Journal of Research in Computer Science described social platforms as “machine-driven ecosystems,” arguing that bots generate 40% to 60% of web traffic.
“These automated systems engage in scraping, spam, and manipulation, creating artificial interactions that mimic genuine human activity,” the researchers wrote. “Bots are also frequently employed to inflate metrics, such as likes, shares, and comments, fostering the illusion of vibrant online engagement.”
According to Turvy, the shift in momentum has become hard to ignore.
“There’s an indication this is more realistic than we thought,” Turvy said. “That’s because we’re seeing the tech catch up, but we’re also seeing financial incentives align.”
A September 2025 report from venture capital firm Galaxy Interactive found that automated activity now dominates major social platforms. Analysts say the surge in AI-generated material supports the trend, noting that Reddit, YouTube, and X have seen rising levels of repetitive, low-quality, or spam-like content attributed to automation.
Even after Elon Musk pledged to crack down on the large number of bots on X, one estimate suggests as many as 64% of X accounts could be bots, responsible for 76% of peak traffic. The same study estimated that as many as 95 million Instagram accounts, roughly 9.5% of the total, could be fake or automated.
Meta did not immediately respond to a request for comment, and X does not respond to media inquiries.
As synthetic posts continue to increase, researchers who track the trend say the shift is already visible in the numbers.
“About half of the internet is AI-written,” Deedy Das, a partner at venture capital firm Menlo Ventures, who studies the trend, told Decrypt.
"Dead Internet Theory, the conspiracy that the internet is mostly bots, is happening. AI-generated content is killing the internet slowly. Reddit posts. Pinterest. Google results. Facebook videos. Spotify music. And most people don't even know it's happening."
— Deedy (@deedydas) December 25, 2024
“Chatbots and AI tools summarize that material and hand it back to you,” he added. “You end up reading machines summarizing other machines.”
While Turvy believes the growth of bots on social platforms will lead to an exodus into smaller, more intimate spaces, Das is not sure the ideal of the early web can return.
“There are very few people writing blogs anymore,” he said. “You can’t get discovered, and if you do, people assume it’s AI. Most of the conversation now happens inside platforms built for performance, not honesty.”
Beyond the flood of synthetic content, Das said, the bigger issue is software acting like humans.
“The internet’s plumbing assumed the person on the other end was human,” he said. “CAPTCHAs, logins, two-factor codes, all of it. Now software can imitate that perfectly, and there’s no shared rule for what counts as an agent.”
The rise of AI agents
If the internet feels “dead” today, the spread of AI agents will only accelerate the trend. AI agents are autonomous programs that respond to prompts and carry out tasks across the web on behalf of a user. They browse sites, run searches, make purchases, trade crypto, and interact with platforms in ways that look like human activity.
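To make that concrete, here is a minimal, hypothetical Python sketch of the mechanics: a scripted agent that presents a browser-like User-Agent header, which is often all it takes for its requests to register as human page views in standard analytics. The URL and header string are illustrative assumptions, not the behavior of any real agent product.

```python
# Hypothetical sketch: an automated "agent" whose traffic looks human.
# The User-Agent string and URL are illustrative; real agents layer
# full browser automation on top of the same basic idea.
import requests

# Presenting a mainstream browser's User-Agent is often enough for
# analytics tools to count the request as a human page view.
BROWSER_HEADERS = {
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
        "AppleWebKit/537.36 (KHTML, like Gecko) "
        "Chrome/120.0.0.0 Safari/537.36"
    )
}

def fetch_like_a_person(url: str) -> str:
    """Fetch a page with headers that make the request hard to
    distinguish from a human Chrome user in most server logs."""
    response = requests.get(url, headers=BROWSER_HEADERS, timeout=10)
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    html = fetch_like_a_person("https://example.com")
    print(f"Fetched {len(html)} bytes while presenting as a browser")
```

Multiply that loop across a fleet of such scripts and you get the "millions of requests that behave like users" that Das describes below.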
Nirav Murthy, co-founder of Camp Network, a blockchain developer focused on intellectual property, said the pattern is driven by economics as much as technology.
“Agentic AI can remix material at machine speed and almost no cost,” he said. “Then that output goes back into circulation. Accounts start looking different but acting the same. Engagement goes up, variety drops, and once you add human checks, the numbers fall apart.”
As users hand more control to agents, machines rather than people will carry out a larger share of everyday online activity, deepening the automated environment people encounter online.
“Online ecosystems follow incentives,” he said. “When fake engagement is cheap and rewarded, you don’t just get more bots. You get production lines of automated content chasing clicks.”
That tension is already visible in real-world use. Earlier this month, Amazon sent a cease-and-desist to Perplexity after finding that its Comet browser was making purchases on Amazon’s site by disguising automated agents as human shoppers. Anthropic recently said it had blocked what it described as the first AI-driven cyberattack, after Chinese state-backed hackers used its Claude Code agent in attempts to breach 30 companies.
The largest risk to corporations and platforms, Das said, comes when AI agents are deployed in large numbers.
“When companies run fleets of these systems, you get millions of requests that behave like users,” he said. “That’s harder to see and harder to stop.”
AI-generated video is the next wave. Tools like Sora 2 from OpenAI and Google’s Veo 3 can produce realistic clips and deepfakes from text prompts, adding to the volume of polished but synthetic content circulating on social platforms.
Both Murthy and Turvy agreed that financial incentives drive the flood of AI bots online, and that proof of personhood may simply become the next signal for AI to fake. "Humanness has become just another signal to fake in order to make money," Turvy said. "What's missing now is the mess that used to prove someone was real."
A growing number of blockchain projects, including World (formerly Worldcoin) and Human Passport (formerly Gitcoin Passport), are rolling out proof-of-personhood systems that tie online activity to a verified human.
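As a rough illustration of the pattern these systems share, a trusted issuer signs an attestation that an account belongs to a verified human, and services check that signature before trusting the account. Below is a hypothetical Python sketch of that signed-attestation pattern using Ed25519 via the `cryptography` package; it is a generic sketch, not the actual protocol or API of World, Human Passport, or any other project.

```python
# Hypothetical proof-of-personhood check: a generic signed-attestation
# pattern, NOT the actual protocol of World, Human Passport, or others.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def is_verified_human(issuer_key: Ed25519PublicKey,
                      account_id: bytes,
                      attestation: bytes) -> bool:
    """Accept an account only if the issuer's signature over its ID
    verifies, i.e., the issuer vouched that a human is behind it."""
    try:
        issuer_key.verify(attestation, account_id)
        return True
    except InvalidSignature:
        return False

# Demo: the "issuer" attests to one account; an unattested bot fails.
issuer_private = Ed25519PrivateKey.generate()
issuer_public = issuer_private.public_key()

human_id = b"account:alice"
attestation = issuer_private.sign(human_id)

print(is_verified_human(issuer_public, human_id, attestation))        # True
print(is_verified_human(issuer_public, b"account:bot", attestation))  # False
```

The hard part, as Turvy's comment suggests, isn't the cryptography: it's issuing attestations only to actual humans without that verification step itself becoming another signal to fake.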
“If you reward real creators and make fraud expensive, people will still have a place online,” Murthy said.