Perhaps Thoreau was prescient when he wrote, "while England endeavors to cure the potato rot, will not any endeavor to cure the brain-rot—which prevails so much more widely and fatally?" One hundred and seventy years later, society grapples with the same question, albeit from a markedly different perspective.
In our digital age, potatoes no longer represent a matter of life or death (capital suffices to secure them), but our attention has become the new scarce currency. Locked into an infinite content slot machine, we pull the lever not as discerning students of the internet but to satisfy a desire artificially carved out by the platforms themselves. The craving and its cure share a single source: a causa sui, per se.
Content algorithms have achieved perfect discrimination: they feed each of us exactly what they know we want to see, because we have become the algorithm. Our preferences, once grass-fed and free-range, now lie trapped in a stable, force-fed the cud we once actively journeyed to chew. Yet the act of scrolling mimics organic discovery just well enough to convince us we aren't trapped. It's the perfect crime.
Thoreau's observation provides crucial context, reminding us that this is not a new problem. Each generation faces its own version of brain rot. The advent of television, and radio before it, introduced less aggressive forms of content delivery that were similarly accused of dulling minds. Even reading magazines instead of literature was once deemed ill-advised. The slot-machine form factor we experience today, however, proves far more addictive: its infinite randomness makes it more mind-numbing to consume. As with alcohol, a drink or two brings pleasure, but ten drinks later, consciousness dims.
Large language models (LLMs) represent the next evolutionary step in this digital transformation. Already, they penetrate our content spheres through "agents" and rudimentary generated videos, but their true potential will emerge in new, even more engaging form factors. Augmented reality, virtual reality, and brain-computer interfaces represent the frontier where this revolution will unfold.
LLMs also present subtler dangers in their chat interfaces. In the pursuit of knowledge and task completion, we risk becoming overly dependent on them for answers. We must walk a fine line: not using chat as a crutch, but leveraging it as an encyclopedia of niche internet forums and codebases, a tool for our cleverly prompted curiosities.
Most critically, we must not let LLMs diminish our sense of meaning in life; rather, we must stare them dead in the eye. Their existence demands our unflinching attention. They may know more than any individual, but they possess neither the advanced neural architecture nor the consciousness that humans embody. The human brain uniquely enables discovery beyond existing knowledge, whereas LLMs remain confined to the vast ocean of their training data. It falls to us, as humans, to keep pushing toward discovery alongside our new LLM companions, knowing that only we can invent new forms of cryptography, unravel the intersection of quantum mechanics and general relativity, and, most importantly, maintain our essential humanity.
The choice is ours: engage mindfully with technology or succumb to brain rot.