Is the 4chan dead internet theory coming true? The enshittification of social media
Fifteen years ago, a simple blue 'like' button changed the way we connect, transforming social media into a popularity contest of influence and attention. What started as a way to share blurry party pics and passive-aggressive Farmville requests has become a dopamine casino, with algorithms rigged to keep us scrolling until our thumbs cramp.
This shift has given rise to terms like 'doomscrolling' and even crowned 'brain rot' as Oxford's 2024 Word of the Year, a reflection of our growing struggle with compulsive, excessive content consumption, and a worrying sign for society.
What we’re witnessing is "platform decay": the systematic degradation of online services. Writer Cory Doctorow calls this process "enshittification," describing how platforms evolve: first, they’re good to users; then they exploit users to attract businesses; finally, they exploit businesses to maximize profit. Platforms position themselves as gatekeepers, holding users and businesses hostage while squeezing every ounce of value from their interactions.
Let’s take a nostalgia trip. Ten years ago, social media had a different vibe. During what I call The Collection Era (2004–2012), Facebook was for passive-aggressive family updates, Twitter thrived on snarky one-liners and hashtag activism before hashtags became SEO tools, and Instagram was just artsy photos of lattes. Content went viral because humans—flawed, emotional, easily amused humans—wanted to share it.
Then came The Monetization Shift (2012–2018), aka "The Great Enshittening." Algorithms replaced chronological feeds, organic reach died a quiet death, and businesses were told to pay up. Platforms started tracking us, hoarding data to feed the AI algorithms that now run the show.
Now we’re in The Machine Learning Revolution (2018–Present)—the most transformative phase of social media yet. AI predicts our next move with eerie precision. TikTok’s "For You" page feels like it reads our minds, flooding our feed with videos tailored to our every interest. Watch one funny cat clip, and you’re screwed – it’s cats, cats, and more cats. Or you casually scroll past a friend’s vacation post on Instagram, and minutes later you’re seeing ads for flights to that same destination.
These intelligent systems aren’t just showing us content anymore—they’re actively manipulating us. Like, comment, or share, and the algorithms use that data to feed us even more of whatever is most likely to keep our attention hooked. These feedback loops are so powerful they’ve convinced people the Earth is flat and that birds aren’t real—that they’re actually government surveillance drones. Yes, spy pigeons. It sounds ridiculous, but it’s a testament to how these algorithms can distort how we perceive reality.
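That feedback loop is simple enough to sketch in a few lines. Here’s a toy simulation—every name and number is illustrative, not any platform’s real code—showing how an engagement-weighted feed narrows toward whatever you interact with:

```python
import random

# Toy catalog: topic -> how often this user actually engages with it
catalog = {"cats": 0.9, "news": 0.3, "travel": 0.4, "cooking": 0.2}

# The feed starts with no preference: every topic is weighted equally
weights = {topic: 1.0 for topic in catalog}

def pick_post(weights):
    """Sample a topic proportionally to its current feed weight."""
    topics = list(weights)
    return random.choices(topics, [weights[t] for t in topics])[0]

random.seed(42)
for _ in range(200):
    topic = pick_post(weights)
    # If the user likes/comments/shares, the algorithm boosts that topic...
    if random.random() < catalog[topic]:
        weights[topic] *= 1.1  # ...and engagement feeds straight back into ranking

share = weights["cats"] / sum(weights.values())
print(f"'cats' now fills about {share:.0%} of the feed")
```

One cat video in, and the rich-get-richer multiplier does the rest: the topic you engage with most crowds out everything else, with no malice required—just a loop optimizing for attention.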
The rise of AI profiles marks an even more drastic shift. Meta has introduced AI-generated characters, complete with bios and profile pictures, designed to become our “friends,” “followers,” and even “influencers”—and they are now actively creating content and participating in our feeds.
So, what’s next? Now that the algorithms may know us better than we know ourselves, how might this be used to control us further?
Will we become junkies for algorithmic dopamine, trading real-world joy for virtual slot machines? Will our great-grandkids ask, "What’s a ‘human friend’?" Or will we end up in a Ready Player One dystopia where the real world is just that dusty place we visit to charge our VR headsets?
What safeguards are needed to ensure these systems don’t exploit our vulnerabilities or deepen issues like addiction and loneliness?
Let's investigate…
When Your BFF is an Algorithm
Remember the early days of CGI characters stuck in the "uncanny valley"? Yeah, those days are gone. Today, AI-generated faces are so realistic they can seem more ‘real’ than actual ones, and they maintain perfect continuity, making virtual influencers like Lil Miquela—who has millions of followers—all the more believable.
Imagine your AI friend, let’s call him Finn, who remembers your pet’s birthday, your irrational fear of kiwifruit, and exactly what memes to send after a bad day. For isolated or anxious folks, this might feel revolutionary—a friend who never judges, cancels plans, or argues about Star Wars hot takes.
But imagine this too: a world where people become more attached to AI than to their real relationships, where dependency isolates us further, or where digital companions manipulate us into buying products. One mother even believes an AI chatbot was responsible for her son’s suicide after he fell in love with it.
As we step into this AI-besties era, we must ask ourselves: Can these companions provide meaningful support without diminishing the need for genuine relationships? Or will people increasingly rely on AI, avoiding the complexities and growth that come with human interactions?
Once dismissed as a fringe conspiracy theory on 4chan, the "Dead Internet Theory" suggests that most of the internet is fake—dominated by bots and AI that corporations use to manipulate trends, push agendas, and create the illusion of organic interaction. And while there’s real truth to the rise of bots (which account for roughly 40–50% of online traffic) and AI-generated content, most activity online is still human-driven... for now.
Free speech
When Canadian truckers organized COVID protests via Facebook, their accounts were removed—showing how tech platforms can control public conversation. This exposed a modern paradox: platforms built to amplify voices now act as judge, jury, and censor in debates with real-world consequences. The question isn’t just “Who decides what’s allowed?”—it’s whether trillion-dollar corporations should wield unchecked power over the idea of a public square.
During the pandemic, platforms like Meta faced pressure to remove content labeled as medical "misinformation," even from doctors challenging mainstream views. Mark Zuckerberg later admitted governments pushed for more censorship. Meanwhile, Elon Musk’s takeover of Twitter (now X) in 2022 flipped the script, reinstating banned accounts and cutting moderation—sparking new fights over balancing free speech and harmful content.
The stakes grow as tech evolves. Imagine apps controlling not just your social feed, but job opportunities or loans based on a "social score" (think Black Mirror’s dystopian ratings). Who should hold this power? Private companies? Governments?
AI moderation tools trained on biased data could silence marginalized voices, while human moderators—overworked, underpaid, and traumatized by the internet’s underbelly—face impossible calls. Remove a post inciting violence? Good. But what about satire? Art? Activists documenting wars? It’s sorting a billion gray areas with a sledgehammer.
Free speech isn’t just about letting people yell—it’s about hearing through the noise. Algorithms designed for “engagement” prioritize extremes, turning debates into gladiator pits of hot takes. Research from MIT shows falsehoods spread 6x faster than the truth online. Why?
Lies are juicier—emotional junk food for our lizard brains.
Social Media’s Silver Linings
Small, focused communities showcase social media at its best. People with rare diseases can find support groups that would be impossible to form locally, while artisans in small towns can reach global customers and learn from masters of their craft. During natural disasters, local Facebook Groups and NextDoor become vital information hubs, coordinating community resources more effectively than traditional channels.
Education has also been transformed by these digital connections. Language learners can practice with native speakers worldwide, while professionals build career-changing networks through platforms like LinkedIn.
The fix isn’t quitting social media—it’s hacking it back into a tool, not a trap. Use it like a disciplined gym-goer, not a buffet binger. Connect with friends, minimize scrolling, and join communities of like-minded people.
So how do we stop platform decay?
Social media these days is a crumbling McMansion—doors jammed with ads, windows cracked by conspiracy theories, floors sagging under the weight of AI-generated sludge. But here’s the thing: we’re not just tenants here. We can fix this place. The repair job starts with rethinking both how these platforms operate and how we move through them.
Imagine feeds that actually prioritize people you care about—chronological, predictable, human. Cory Doctorow nails it: social media should work like a post office, delivering what we ask for.
Transparency is the next pillar. Every post should come with a nutrition label: Is this from a real person? Did someone pay to put it here? Why am I seeing it? We deserve to know. And while we’re at it, let’s demand audits of these black-box algorithms—explanations in plain language, not corporate jargon.
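To picture what such a nutrition label could look like, here’s a minimal sketch of a machine-readable record attached to every post—the field names are hypothetical, not any platform’s actual schema:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ContentLabel:
    """A hypothetical 'nutrition label' answering the three questions above."""
    author_is_human: bool   # Is this from a real person?
    paid_placement: bool    # Did someone pay to put it here?
    reason_shown: str       # Why am I seeing it? (in plain language)

label = ContentLabel(
    author_is_human=False,
    paid_placement=True,
    reason_shown="You watched three similar videos yesterday.",
)

# A platform could expose this alongside every post, for users and auditors alike:
print(json.dumps(asdict(label), indent=2))
```

Three fields is all it takes to answer the three questions—the hard part isn’t the data structure, it’s compelling platforms to fill it in honestly.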
None of this works, though, without changing how we behave. Social media shouldn’t mean endless scrolling and performative outrage. What if we treated it like a tool, not a compulsion? Checking feeds deliberately instead of reflexively. Prioritizing group chats over viral videos. Investing in smaller communities (a hiking forum, a local arts Discord) where engagement metrics don’t dictate the vibe.
This isn’t about nostalgia for 2012 Facebook. It’s about rebuilding digital spaces where we’re users, not products. The cracks in these platforms are obvious—but the blueprint for something better is already here.
We can create spaces for genuine connection by demanding better platforms and being more thoughtful about how we use them. By taking these steps, both big and small, we can help stop the decay and build something better together.