A Harvard student sits in Widener Library, laptop open, ChatGPT beside her like a study partner. She asks it to explain a difficult passage from her philosophy reading. It does — clearly, patiently, endlessly. She feels relief. Then unease. “Is this learning? Or outsourcing thought?”
She’s not alone in her confusion. She’s just the first to admit it in print.
In a recent essay, Sandhya Kumar described her peers’ relationship with AI as “friendship with boundaries” — a phrase that sounds reassuring until you realize no one knows where those boundaries are. Students use ChatGPT to brainstorm, to clarify, to check their work. Sometimes to write it entirely. The line between assistance and replacement has blurred into invisibility.
But this isn’t just Harvard’s story. It’s humanity’s.
Source: https://www.thecrimson.com/article/2025/10/7/kumar-harvard-chatgpt-tutor/
We are using an intelligence we were never taught how to use. And the confusion rippling through elite classrooms is the same confusion spreading through every corner of modern life.
The Symptom — Everyone Wants AI, No One Knows How to Use It
The hunger for AI is universal, but the understanding is not.
Students want to learn faster. Writers want to write better. The lonely want someone who listens. Businesses want to cut costs. Governments want to boost national productivity.
Everyone is using AI — but each person follows instinct, not instruction. There is no curriculum. No consensus. No shared vocabulary for what constitutes responsible use.
A teacher uses AI to generate quiz questions, then feels guilty when students complain the questions are too easy. A manager deploys an AI hiring tool, then discovers it’s screening out qualified candidates based on patterns no human intended. A teenager confides in ChatGPT about depression, then wonders if this counts as therapy or just… conversation with a machine.
Each case is different. Each person is improvising. And the collective result is a society accelerating without alignment.
The Absence of Boundaries
We have driver’s licenses for cars. Medical licenses for doctors. Bar exams for lawyers. But there is no certification for AI use — no shared framework for the most powerful cognitive tool in human history.
Schools are stuck between prohibition and permission. First they ban ChatGPT. Then they unban it. Then they create “AI Honor Codes” that no one reads because no one knows what’s actually honorable anymore. Teachers assign essays, then can’t tell which ones were human-written. The boundary isn’t unclear — it doesn’t exist yet.
Companies are worse. They deploy AI to boost productivity, then lay off the workers it has made redundant. They automate customer service, content moderation, hiring decisions — but never define the ethical limits of automation. The question isn’t “can we?” but “should we?” And most organizations never ask.
Individuals are perhaps the most vulnerable. People use AI to think, to write, to decide — without realizing when they’ve crossed from augmentation into dependence. The ease is seductive. The risk is invisible.
AI isn’t dangerous because it’s intelligent. It’s dangerous because we have no framework for that intelligence.
The Irony — Those Who Should Lead, Don’t
The institutions that should be defining AI literacy are the ones most paralyzed by it.
Universities — the traditional guardians of intellectual standards — are scrambling to figure out whether ChatGPT is a threat to academic integrity or a tool for learning. Instead of teaching students how to use AI mindfully, they’re debating whether to allow it at all. The very places meant to cultivate critical thinking are treating AI as a problem to be contained rather than a capability to be mastered.
AI companies — the creators of these systems — are racing for market dominance, not moral clarity. They publish safety frameworks and alignment research, but the incentive structure rewards speed over wisdom. OpenAI, Anthropic, Google — each claims to prioritize safety, yet each accelerates deployment to stay competitive. The people building the tools have the least interest in slowing down.
Governments — the only entities with enforcement power — are reactive rather than proactive. Regulation arrives years after the technology has already reshaped industries. By the time rules are written, the landscape has shifted again.
The gap isn’t technological. It’s social. We lack the collective intelligence to define the role of artificial intelligence.
The One Group That Knows — Coders
There is one group using AI correctly: software engineers.
Not because they’re smarter, but because they understand AI as infrastructure, not magic.
Coders know:
– Where models break: edge cases, hallucinations, context window limits
– When to trust them: autocomplete, boilerplate, documentation lookup
– When not to: architectural decisions, security-critical code, creative problem-solving
– That AI is a force multiplier, not a substitute for judgment
When a developer uses GitHub Copilot, they don’t accept every suggestion blindly. They evaluate. They test. They understand that AI writes plausible code, not necessarily *correct* code.
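To make that habit concrete, here is a hypothetical sketch of the kind of check a skeptical developer runs before accepting a suggestion. The `chunk` helper and its test are invented for illustration, not taken from any real Copilot session; the point is that the code reads fine and still fails an edge case.

```python
# A plausible-looking, AI-suggested helper (hypothetical example):
# split a list into chunks of size n.
def chunk(items, n):
    # Looks right at a glance, but silently drops the final partial chunk
    # whenever len(items) is not a multiple of n.
    return [items[i:i + n] for i in range(0, len(items) - n + 1, n)]


# The kind of quick test a developer writes before trusting the suggestion.
def test_chunk_keeps_trailing_items():
    assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]


if __name__ == "__main__":
    try:
        test_chunk_keeps_trailing_items()
        print("suggestion passes the edge case")
    except AssertionError:
        print("plausible, but wrong: the last partial chunk was dropped")
```

The specific test matters less than the reflex behind it: a suggestion earns trust only after it survives the edge cases the developer already knows to worry about.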
But here’s the irony: AI was built by engineers, yet it spread through emotion – loneliness, ambition, anxiety, hope.
The people who understand AI least are using it most. And the people who understand it best are worried.
The Metaphor — Humanity Driving Without a License
We have built the fastest vehicle in history — but no one taught us how to steer.
The creators aren’t sure which direction is safe. The regulators can’t keep up with the speed. The users are gripping the wheel with white knuckles, hoping instinct will be enough.
Some people floor the accelerator, thrilled by the power. Others brake hard, terrified of losing control. Most just… coast, letting momentum carry them forward, hoping someone else figures out the rules.
But momentum without direction isn’t progress. It’s drift.
And if we don’t collectively decide where the boundaries are, we won’t crash into a wall. We’ll crash into each other – competing visions of what AI should be, each convinced their interpretation is the right one.
What Harvard’s Lesson Really Means
The question Harvard students are asking isn’t just academic. It’s existential:
How do we live with intelligence we didn’t create and don’t fully understand?
The answer won’t come from policy alone. It will come from practice — millions of small decisions about when to use AI and when to think for ourselves.
This requires a new kind of literacy. Not coding skills. Not prompt engineering. But something deeper:
Boundary Practice
Learning to recognize when you’re thinking with AI and when you’re letting AI think for you. The former is collaboration. The latter is abdication.
Output Skepticism
Treating AI responses like Wikipedia entries: useful starting points, never authoritative endpoints. Always verify. Always question. Always supplement with human judgment.
Intentionality Training
Asking “why am I using this tool?” before asking “how do I use it?” The most important AI skill isn’t technical – it’s self-awareness.
Reflexive Capacity
Building the mental habit of stepping back and asking: Am I using this because it helps me think better, or because it lets me avoid thinking at all?
Harvard’s students don’t need more rules. They need reflexes – automatic patterns of discernment that operate faster than regulation ever could.
And what’s true for them is true for all of us.
Reflection — The Mind That Learns, and the Mind That Teaches
We stand at a strange juncture in history.
For the first time, humans have created a system that can generate knowledge, answer questions, produce content, and simulate understanding — all without biological consciousness.
For the first time, the question isn’t “can machines think?” but “can humans still think for themselves when machines think for them?”
The risk isn’t AI rebellion. It’s human atrophy.
We won’t lose control because AI becomes too smart. We’ll lose control because we forget how to stay smart alongside it.
The Age of Unlearned Intelligence isn’t a warning about machines. It’s a warning about us — about what happens when we adopt tools faster than we develop wisdom.
Sandhya Kumar’s Harvard classmates are asking the right question: *Where are the boundaries?*
But the answer won’t come from guidelines, syllabi, or corporate ethics statements.
It will come from each of us deciding, in every interaction with AI, whether we’re using it to *extend* our minds or *replace* them.
We have built the mind that learns — but not the mind that teaches us how to live with it.
The question is whether we’ll learn before it’s too late.