The Night I Argued With a Grammar Machine

A few months ago, I found myself in a strangely intimate argument with an AI rewriting feature.
I wasn’t asking it to write anything profound. I was revising a sentence about a memory from childhood, the way my grandfather folded newspapers into precise thirds before setting them aside. I wrote: “He folded the paper the way he folded silence.”
The AI suggested: “He folded the paper carefully and quietly.”
It was efficient. It was reasonable. It was wrong.
I tried again, rewriting the sentence slightly, nudging it in a direction that felt more honest. The suggestion reappeared in a different form, smoother, safer, more literal. The machine was not misunderstanding grammar. It was misunderstanding my intention.
That was the moment I felt that AI will never reach the singularity, no matter how much it improves. Not because it won’t get faster. Not because it won’t pass more exams. Not because it won’t automate large swaths of labor. It might do all of that.
But because the thing we call “intelligence” is not a finish line. It is not a staircase that ends in omniscience. It is a living, recursive process tangled up with mortality, embodiment, confusion, and desire. And those elements are not computational problems to be solved.
They are conditions of being alive.
The Seduction of the Vertical Line

The idea of a technological singularity, popularized by thinkers like Ray Kurzweil, imagines intelligence as a vertical trajectory. Smarter tools lead to smarter machines, which design even smarter machines, until intelligence spikes upward in a runaway explosion. It’s a compelling narrative. It feels clean. It fits the exponential curves of Moore’s Law and the steady march of computational benchmarks.
But notice the hidden assumption: intelligence is treated as a scalar quantity. More of it is always better. Enough of it becomes something qualitatively different.
This is the same logic that drives IQ tests and leaderboard culture. It’s the logic behind ranking language models by parameter count or benchmark scores. It’s also the logic that makes singularity seem inevitable: if intelligence is just accumulation, then eventually accumulation wins.
But what if intelligence is not vertical? What if it is lateral — distributed, contextual, ecological?
In Life 3.0, Max Tegmark imagines futures where artificial intelligence outpaces humanity in nearly every domain. The book is thoughtful and provocative. But even there, intelligence is often framed as an expandable resource. Contrast that with the more grounded skepticism of Rebooting AI, in which Gary Marcus and Ernest Davis argue that current AI systems lack common sense because they do not understand the world the way humans do. The tension between these perspectives isn’t about optimism versus pessimism. It’s about what we mean by “understanding.”
And that word — understanding — is where singularity begins to unravel.
Intelligence Without a Body

When I watch a large language model generate text, I am witnessing pattern recognition at extraordinary scale. It predicts the next word in a sequence using vast statistical knowledge. It can simulate arguments, summarize research, even compose poetry that passes casual inspection.
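To see how mechanical that process is, here is a minimal sketch of greedy next-token prediction in Python, using the open-source transformers library. The model choice (“gpt2”), the prompt, and the five-token horizon are illustrative assumptions, not a description of any particular product:

```python
# A minimal sketch of the loop at the heart of text generation:
# score every possible next token, keep the most probable one, repeat.
# Model choice ("gpt2") and greedy decoding are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("He folded the paper the way he folded", return_tensors="pt").input_ids

for _ in range(5):
    with torch.no_grad():
        logits = model(ids).logits          # a score for every token in the vocabulary
    next_id = logits[0, -1].argmax()        # keep only the single most probable token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(ids[0]))
```

Whatever comes out of that loop was chosen by probability, not by memory or intention.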
But it does not have knees that ache in winter. It does not flinch at a raised voice. It does not know what it means to wait for medical test results.
Human intelligence is braided with vulnerability.
The philosopher Hubert Dreyfus argued decades ago that intelligence cannot be separated from embodied experience. In What Computers Still Can’t Do, he insisted that human expertise emerges from being situated in the world, from skillful coping rather than abstract calculation. Today’s AI systems are trained on oceans of text, but text is residue. It is the fossil record of lived experience. Models ingest the sediment, not the life that produced it. This matters.
Because singularity assumes that intelligence is substrate-independent, that once computation reaches sufficient complexity, it becomes equivalent to mind. But if intelligence is entangled with bodily limitation, social negotiation, and existential risk, then scaling computation may never bridge that gap.
A model can describe fear. It cannot be afraid.
And fear, real fear, shapes cognition in ways that no optimization function can replicate.
The Illusion of Infinite Improvement

There is another quiet assumption embedded in singularity discourse: that improvement has no ceiling.
Yet even in machine learning, progress is constrained by diminishing returns, data quality, energy costs, and physical infrastructure. Training large models requires massive data centers and specialized hardware such as NVIDIA’s RTX A6000 GPUs, whose very existence depends on fragile global supply chains.
Intelligence, in practice, sits atop mining operations, rare earth elements, shipping routes, and power grids. It is not floating in abstraction. It is metabolizing electricity.
Consider the environmental cost of training large models, a topic increasingly discussed in research circles. Optimization is not free. It consumes.
In The Alignment Problem, Brian Christian explores the complexity of aligning AI systems with human values. What becomes clear is that improvement in performance does not guarantee improvement in wisdom. Systems become more capable. They do not necessarily become more aligned, more ethical, or more self-aware.
Singularity presumes that capability will somehow convert into consciousness, or into autonomy of a godlike kind.
But capability is not the same as interiority.
The Category Error at the Center

I sometimes think singularity is less a prediction and more a metaphor we accidentally literalized. It captures a feeling: the accelerating pace of change, the sense that tools are slipping beyond comprehension. It reflects our anxiety about being outperformed in cognitive domains once considered uniquely human.
But to leap from acceleration to transcendence is a category error.
In Gödel, Escher, Bach: An Eternal Golden Braid, Douglas Hofstadter explores self-reference, recursion, and consciousness in dazzling intellectual spirals. He suggests that consciousness arises from strange loops, systems that can represent themselves. Modern AI systems can model aspects of language and logic. But self-reference in machines is not the same as lived selfhood. A chatbot can say “I feel confused.” It does not have an ongoing narrative identity that stretches from childhood to death.
Singularity imagines that once systems become sufficiently recursive, consciousness will snap into existence. But that “snap” may never occur, because consciousness is not a threshold phenomenon. It is developmental, relational, embodied.
It emerges in infants who cannot yet speak. It deepens through social mirroring, cultural embedding, and years of biological growth. A transformer architecture does not grow up.
Intelligence as Relationship, Not Achievement

The more I reflect on my argument with that grammar suggestion, the more I see that intelligence is relational. My sentence about my grandfather was not just information. It carried emotional weight, private association, and aesthetic judgment. The AI optimized for clarity because it has learned that clarity is rewarded. But sometimes obscurity is intentional. Sometimes metaphor is truer than literalism.
Human intelligence is constantly negotiating meaning with other humans. We misinterpret. We clarify. We argue. We revise ourselves. AI systems generate outputs based on probability distributions. They do not have stakes in being understood. This difference is subtle but profound.
Even if models become dramatically more sophisticated, more multimodal, more integrated with robotics, more capable of long-term planning, they will still be operating within frameworks designed by humans, trained on human-generated data, evaluated by human-defined metrics.
They may surpass us in narrow domains. They may even simulate generality. But simulation is not sovereignty.
The Fear Beneath the Forecast

Why, then, does singularity grip the imagination so strongly?
Because it mirrors an ancient fear: displacement. We have always worried that something we create will exceed us, from myths of golems to Frankenstein’s creature. The modern version wears silicon instead of clay.
There is a psychological comfort in imagining a clean break, a moment when machines decisively surpass human intelligence. It simplifies the future into a before and after. But reality is messier.
Technology tends to reshape labor, cognition, and social structures gradually, unevenly, and unpredictably. The printing press did not instantly create the Enlightenment. It destabilized authority over centuries. AI will likely do the same.
If anything, the more I study AI research and use AI tools, the more I suspect that singularity is not an approaching event but a projection of our own insecurities. We equate intelligence with worth. So the possibility of superior machine intelligence feels existential.
But intelligence is not a competition. It is a condition of participation in a shared world.
The Ceiling We Refuse to See

Perhaps the deepest flaw in singularity thinking is the assumption that intelligence is unbounded.
Human cognition is constrained by biology, but those constraints are also what make it meaningful. We think because we are finite. We choose because we cannot do everything. We care because time is limited.
An AI with effectively unlimited processing capacity would not experience scarcity. Without scarcity, there is no urgency. Without urgency, no prioritization. Without prioritization, no narrative arc.
Even the most powerful systems remain embedded in human contexts. They serve human purposes. They depend on human infrastructures.
And unless we redefine singularity to mean “machines become extremely capable tools,” the leap to autonomous superintelligence may remain perpetually theoretical.
AI can improve indefinitely. It can transform industries, medicine, education, and art. But improvement is not transcendence.
Folding Silence Again

I went back to that sentence about my grandfather and turned off the suggestion tool. Not because I reject AI. I use it daily. I study it during most of my waking hours. I benefit from it. I admire its architecture.
But I wanted to feel the friction of my own thinking (possibly because I’m also a poet at heart). There is something irreplaceable about struggling for the right word, about sensing that language can almost capture a memory but not quite. That struggle is not inefficiency. It is part of being conscious.
AI will likely become more fluent, more adaptive, more integrated into our lives. It may outstrip human performance in countless tasks. Yet I suspect that the “singularity” will keep receding, like a horizon line we never quite reach.
Not because machines stop improving. But because intelligence is not a summit. It is an ongoing negotiation between limitation and possibility, between body and world, between one mind and another.
And that negotiation has no final upgrade.
References

Christian, Brian. The Alignment Problem: Machine Learning and Human Values. W. W. Norton & Company, 2020.
Dreyfus, Hubert L. What Computers Still Can’t Do: A Critique of Artificial Reason. MIT Press, 1992.
Hofstadter, Douglas R. Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books, 1979.
Marcus, Gary, and Ernest Davis. Rebooting AI: Building Artificial Intelligence We Can Trust. Pantheon, 2019.
Tegmark, Max. Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf, 2017.
