"This is how societies get dumber"
Good Reads

I believe AI is changing how we think. So, it seems, does Shreya Shankar, whose recent post I really enjoyed.
Change is often good, but not always. Not all advancement is progress. And when it comes to something as important as thinking, it seems like any changes are worth keeping an eye on.
When we look back at AI's impact on our ability to focus, think, and make independent judgements, will we feel good about those changes? If they herald a new dependency, are we prepared to take on that collective responsibility? We've come a long way powered by the steadfast engine of the scientific method. But that engine requires people who think for themselves, take calculated risks, cooperate openly, and engage with problems in good faith. It doesn't run if people stop wanting to verify information or lose the ability to discern reality from fiction. What worries me about AI is that it seems to be making us less invested in the difference between the two.
Given how important communication is to us, it seems short-sighted to have focused so much of our AI effort on building tools that undermine the trust we can place in our means of connecting with one another: language, written knowledge, imagery, sounds. None of this was inevitable.
Here's what I highlighted:
When every paragraph has a metaphor, you stop noticing metaphors. When every code block is wrapped in exception handling, none of them feel exceptional. AI deploys communication tools and rhetorical devices not because the content requires them, but because they pattern-match on what “good writing” or “robust code” looked like in the training data.
...if consumers can’t comprehend complex ideas or notice errors, they’re easier to manipulate. This isn’t just about misinformation in the dramatic sense. It’s more mundane than that. If I can’t tell whether a piece of code is actually robust or just looks robust, I might ship something broken. If I can’t tell whether a literature review is accurate or just plausible, I might build on work that doesn’t exist. The inability to verify compounds over time. And I think this is an underrated safety issue. There’s a lot of discourse about AI safety in terms of catastrophic outlier risk, i.e., dramatic scenarios where someone develops a bioweapon in their garage. But one of the biggest safety problems might be happening right under our noses: en masse, people are losing the ability to comprehend and verify the information they consume.
Taste—in any domain—depends on a feedback loop: you notice when something is good, you notice when something is bad, and over time you develop judgment. But if you stop being able to notice the difference, you stop developing that judgment. This matters even for things that seem low-stakes. Take restaurant recommendations. If the people making recommendations can’t tell the difference between a good meal and a mediocre one—or if they’re just parroting what an LLM scraped from Yelp—then the recommendations become worthless. I stop trusting them, and I lose access to a source of judgment I used to rely on.
The tools for communication and verification are how we build on each other’s work. When they erode, we become a society that can’t tell what’s true, can’t recognize quality, and can’t coordinate on hard problems. This is how societies get dumber.
The model’s role is to surface what’s in the grounding space, not to claim the experience.