How AI Could Hollow Out the Mind That Made It
During the Q&A at the 99th Police Leadership and Good Governance Lecture Series, where I had just delivered a talk titled "Leveraging Large Language Models Without Outsourcing Your Mind," I told the audience that AI could be an existential threat, but probably not in the way most people imagine. The danger is not a rogue superintelligence or a machine that decides humanity is the problem. The more plausible threat is quieter: that we will, through comfort and convenience, gradually render ourselves unnecessary to our own civilization.
That kind of existential threat does not arrive with warning signs. It feels, at every step, like progress.
Harvard Kennedy School economist Dani Rodrik has observed that when we allow AI to do our learning and thinking for us, we degrade our own human capabilities and risk eventually destroying the knowledge base on which AI itself relies. The claim is civilizational in scope. Cognitive ability, like physical fitness, is maintained through use. When we read difficult texts, argue with peers, sit with uncertainty, and work through problems without assistance, we are not just retrieving answers. We are building the mental architecture that makes complex thought possible. Delegation feels efficient, but it is also erosive. Every time we reach for an AI-generated summary instead of reading the source, every time we accept a generated argument instead of constructing one, we make a small withdrawal from a cognitive account that requires regular deposits to remain solvent.
The atrophy is gradual, which is precisely what makes it dangerous. No single act of delegation is catastrophic. The harm is cumulative and invisible until it is severe. A generation raised on AI-assisted thinking may not realize what they have lost, because they never had the reference point of having built it themselves.
Rodrik's observation points toward a further consequence that he did not spell out but that follows from his logic: if human intellectual output diminishes, so does the quality of what AI can learn from. The models are only as rich as the human thought they are trained on. Dumb down the humans, and eventually you dumb down the machine that was supposed to augment them.
That atrophy is the first loop. The second, which I also raised in that Q&A, requires no human choice at all. It is already running. A 2025 analysis by SEO firm Graphite, widely reported by outlets including Axios and Futurism, examined 65,000 web articles published between 2020 and 2025 and found that roughly half of all new English-language articles were primarily AI-written. This matters because the web is one of the central reservoirs from which AI models draw their training data. When a significant and growing share of that reservoir is itself AI output, the models of tomorrow are learning not from human thought but from synthetic reconstructions of it.
Researchers call this model collapse. The term was defined in a landmark 2024 study published in Nature by Shumailov and colleagues, which demonstrated that when AI models are recursively trained on their own outputs, their quality degrades generation by generation. Each generation inherits not just the knowledge of its predecessors but their distortions, their blind spots, and their statistical biases, amplified. The process resembles photocopying a photocopy: each iteration introduces noise, flattens nuance, and narrows the range of what the model considers plausible. The output becomes increasingly homogenized, subtly but systematically wrong, and, most troublingly, confident.
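The mechanism can be seen in miniature without any neural network at all. Below is a toy sketch of the simplest version of the recursion: fit a Gaussian to some data, sample from the fit, refit, and repeat. The sample sizes and generation counts are illustrative assumptions, not figures from the study, but the qualitative behavior, a fitted spread that wanders and tends toward zero, is a one-dimensional analogue of the degradation Shumailov and colleagues describe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data with genuine spread.
data = rng.normal(loc=0.0, scale=1.0, size=100)

for generation in range(1, 301):
    # "Train" a model on the current data: estimate its mean and spread.
    mu, sigma = data.mean(), data.std()
    # The next generation sees only this model's outputs,
    # never the original data.
    data = rng.normal(loc=mu, scale=sigma, size=100)
    if generation % 50 == 0:
        print(f"generation {generation:3d}: fitted std = {sigma:.3f}")

# Each refit slightly underestimates the spread and loses tail mass. Compounded
# over generations, the fitted std tends toward zero: nuance flattens, and the
# "model" grows ever more confident about an ever narrower range of outputs.
```

The photocopy-of-a-photocopy image is literal here: nothing in the loop adds information, and every pass quietly discards some.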
What makes this loop structurally different from the first is that no individual actor is responsible for it. Humans make choices to delegate cognition. Nobody, however, decided that the internet should fill up with AI content and then be fed back into AI training pipelines. It is an emergent consequence of incentives: content is cheap to generate, and no one is curating the web for synthetic origin before scraping it for training data.
Considered separately, each loop is concerning. Together, they are compounding. The first depletes the supply of original human thought entering the knowledge ecosystem. The second floods that ecosystem with synthetic content that progressively degrades in quality. The signal erodes from both ends simultaneously.
The scenario this produces, if left unaddressed, is a slow dimming: a civilization that retains the appearance of knowledge production while the substance hollows out. AI systems that sound authoritative but are increasingly untethered from the rigorous, contested, revised human thinking that made them useful in the first place. And humans who, having long since outsourced the hard work of cognition, lack the capacity to notice the difference.
None of this is inevitable. The loops can be interrupted, but only deliberately.
The first loop is interrupted by treating human cognition as something worth protecting rather than a cost to be optimized away. This means educators resisting the wholesale substitution of AI for the productive struggle of learning. It means professionals choosing, at least some of the time, to reason through a problem before consulting a model. It means a cultural revaluation of the effort of thought, not as inefficiency, but as the activity through which understanding is actually built.
The second loop requires structural intervention. Researchers are already working on it. A growing body of work, including verification-based approaches that filter synthetic data before it re-enters training pipelines, suggests that model collapse is not inevitable if the right provenance controls are in place. The challenge is whether the incentives of the industry will allow such rigor to be prioritized before the degradation becomes irreversible.
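What such a provenance control might look like, at its most schematic, is a gate that inspects a document before admitting it into a training corpus. The sketch below is hypothetical: the fields, sources, and threshold are illustrative assumptions, not any lab's actual pipeline, and real synthetic-text detection is far less reliable than a single score suggests.

```python
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    source: str              # e.g. "crawl", "licensed", "archive" (illustrative)
    synthetic_score: float   # 0.0 = confidently human, 1.0 = confidently AI
    crawled_year: int

def admit(doc: Document, max_synthetic: float = 0.2) -> bool:
    """Admit a document into the training set only if its provenance
    suggests it is human-authored. Thresholds are assumptions."""
    if doc.source == "licensed":   # vetted, contract-backed provenance
        return True
    if doc.crawled_year <= 2022:   # predates the flood of generated text
        return True
    return doc.synthetic_score <= max_synthetic

corpus = [
    Document("...", "crawl", 0.9, 2025),    # likely synthetic: rejected
    Document("...", "archive", 0.05, 2019), # pre-2023 crawl: admitted
]
training_set = [d for d in corpus if admit(d)]
```

Even a crude gate like this changes the recursion, because the second loop runs on whatever gets scraped; ingestion is where an intervention has leverage.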
But perhaps the most important intervention is the simplest: awareness. The loops are not visible to people who have not been told to look for them. Most people using AI tools today experience them as frictionless and helpful, which they often are. The question is what is being traded away at the aggregate level, across time, that no individual transaction makes legible.
AI, used well, is a genuine amplifier of human capability. It can surface information faster, assist with execution, and extend what a single mind can accomplish. "Used well" means used as a tool that enhances human reasoning rather than replacing it. The difference is not always obvious in the moment. It requires the kind of reflective awareness that, somewhat ironically, becomes harder to sustain the more we rely on AI to do our reflecting for us.
The goal, then, is not to resist AI but to remain the kind of thinkers that make AI worth having: to continue generating the original thought, the genuine inquiry, the hard-won understanding that replenishes the reservoir. Not for the machine's sake, but for our own.