What This Conversation Leaves Out: Why Artificial Intelligence Is Being Imagined Without Its Most Important Constraint
Elite AI debates assume intelligence will scale smoothly, but they omit the central constraint: electricity. As AI becomes agentic and self-accelerating, the grid—not the model—will decide what actually scales.
The recent roundtable, “Where Is A.I. Taking Us?”, published by The New York Times on February 2, presents itself as a wide-angle view of artificial intelligence over the next five years. Eight prominent thinkers weigh in on agents, productivity, creativity, employment, risk, and even consciousness. The discussion is thoughtful, varied, and serious. It is also built on a shared assumption so deeply embedded that it never needs to be named.
Electricity is treated as a background condition. This is not a nitpick. It is the central omission. Nearly every future imagined in the conversation depends on energy systems that are stable, elastic, inexpensive, and politically invisible. AI agents acquire legal standing. AI fades into the background of daily life. AI automates its own development. AI accelerates science, logistics, and work. These outcomes are framed as questions of capability, alignment, and social adaptation. They are also, and more fundamentally, questions of infrastructure.
When Nick Frosst suggests that AI will become “boring in the best way,” like GPS or spreadsheets, the analogy sounds cultural. In fact, it is architectural. GPS did not become boring because people got used to it. It became boring because its physical and institutional foundations matured: satellites were standardized, signals stabilized, governance settled, and failure became rare enough to ignore. Boredom is not a user-experience milestone. It is an infrastructure achievement.
AI will not fade into the background by improving model quality alone. It will fade only when power supply, interconnection, siting, cooling, redundancy, and curtailment are resolved well enough that users never have to think about them. That is not a software problem. It is a grid problem.
The conversation also treats AI demand as if it were still paced by human rhythms. Tools assist people. Systems augment workers. Productivity reduces friction. But one passing observation quietly breaks this assumption. Ajeya Cotra notes that AI companies may substantially automate their own operations with AI, accelerating progress itself. Taken seriously, this implies that compute demand is no longer driven by human work cycles or decision timelines. It becomes reflexive.
Once systems optimize themselves, load growth stops being exogenous. It stops being forecastable using historical analogies. Demand becomes endogenous to the intelligence running on top of it. This is not a labor story. It is a planning story. It turns energy systems into feedback systems rather than supply systems.
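To see why this breaks forecasting, consider a toy model. The sketch below is illustrative only: the growth rates are invented parameters, not estimates. It contrasts an exogenous load that grows at a fixed rate set by human adoption with a reflexive load whose growth rate rises with the capability already deployed.

```python
# Toy contrast between exogenous and reflexive load growth.
# All parameters are illustrative assumptions, not forecasts.

YEARS = 10
HUMAN_GROWTH = 0.15    # assumed annual growth when demand is paced by human adoption
FEEDBACK_GAIN = 0.08   # assumed extra growth per unit of capability already deployed

exogenous = [1.0]  # load index, year 0 = 1.0
reflexive = [1.0]

for _ in range(YEARS):
    # Exogenous: planners can extrapolate the trend from history.
    exogenous.append(exogenous[-1] * (1 + HUMAN_GROWTH))
    # Reflexive: the growth rate itself depends on installed capability,
    # so the curve bends away from any historical trend line.
    rate = HUMAN_GROWTH + FEEDBACK_GAIN * reflexive[-1]
    reflexive.append(reflexive[-1] * (1 + rate))

for year, (e, r) in enumerate(zip(exogenous, reflexive)):
    print(f"year {year:2d}: exogenous {e:5.2f}x, reflexive {r:6.2f}x")
```

The numbers are arbitrary; the shape is the point. Once the growth rate depends on the installed base, the error of any fixed-rate forecast compounds every year.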
Efficiency is repeatedly framed as relief. Less paperwork for doctors. Faster code. Smarter logistics. In energy systems, efficiency rarely reduces total demand. Lower marginal cost tends to increase throughput. Better logistics move more goods. Cheaper compute runs more models. The panel largely treats AI as substitutive, replacing existing tasks. The grid experiences it as additive, amplifying activity rather than displacing it.
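This is the classic rebound effect, the Jevons paradox, and it reduces to simple arithmetic. A minimal sketch, assuming a constant-elasticity demand curve with made-up parameters: if the price elasticity of demand for compute exceeds 1, halving the energy cost per query more than doubles the queries run, and total energy use goes up.

```python
# Rebound-effect arithmetic under a constant-elasticity demand curve.
# The elasticity values are illustrative assumptions, not measurements.

def total_energy(efficiency_gain: float, elasticity: float) -> float:
    """Total energy use after an efficiency gain, relative to a baseline of 1.0.

    efficiency_gain: factor by which energy (and cost) per query falls; 2.0 = halved.
    elasticity: assumed constant price elasticity of demand for compute.
    """
    price_ratio = 1.0 / efficiency_gain      # cost per query falls
    queries = price_ratio ** (-elasticity)   # cheaper queries induce more of them
    return queries / efficiency_gain         # demand response vs. efficiency saving

for elasticity in (0.5, 1.0, 1.5):
    print(f"elasticity {elasticity}: {total_energy(2.0, elasticity):.2f}x baseline energy")
# 0.5 -> 0.71x (efficiency wins), 1.0 -> 1.00x (a wash), 1.5 -> 1.41x (demand wins)
```

Whether the grid experiences AI as substitutive or additive is therefore an empirical question about elasticity, and the behavior of compute demand to date looks like the elastic case.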
Agency is another place where the infrastructure implications are acknowledged rhetorically but not operationally. Yuval Noah Harari argues that AI is not merely a tool, but an agent capable of making decisions and inventing ideas. This claim is usually read philosophically, as a statement about autonomy or consciousness. From an energy perspective, it carries a different weight.
If decision-making systems act with delegated authority inside markets, logistics networks, or operational environments, then energy demand is no longer purely human-mediated. Delegated cognition implies delegated infrastructure responsibility. When AI agents act, who bears accountability for the physical consequences? When systems collide with constraints, who sheds load? When reliability falters, who is responsible? These are not abstract ethical questions. They are operational questions that land squarely on grid operators, regulators, and public institutions.
Environmental impact appears briefly in the discussion, largely to be minimized. The framing is comparative: does AI use more energy than other industries, and is the value worth it? This misses the more important distinction. Data centers do not behave like factories, offices, or retail. They concentrate load, demand continuity, resist interruption, and cluster geographically. These are not moral failures. They are design choices. But they carry system consequences regardless of how much economic value they produce.
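The contrast can be put in one number. A minimal sketch with stylized, invented demand profiles: a load's load factor, average demand over peak demand, measures how much slack it leaves the system, and a round-the-clock data center leaves almost none.

```python
# Illustrative load-factor comparison. The hourly profiles are stylized
# inventions, not measured data.
# Load factor = average demand / peak demand. Near-flat, high-load-factor
# demand leaves the grid few off-peak hours for maintenance or headroom.

profiles = {
    # hourly demand in MW over one stylized 24-hour day
    "office_campus": [20] * 7 + [80] * 11 + [20] * 6,  # daytime peak, quiet nights
    "data_center":   [95] * 24,                        # continuous, interruption-averse draw
}

for name, hours in profiles.items():
    load_factor = sum(hours) / len(hours) / max(hours)
    print(f"{name}: peak {max(hours)} MW, load factor {load_factor:.2f}")
```

A factory can shed a shift; a retail strip sleeps at night. A training cluster wants every hour of every day, and when several of them sit on the same part of the grid, the system consequences compound.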
The conversation also assumes that energy systems will quietly accommodate these changes without political or institutional friction. That assumption is already breaking down. Interconnection queues are lengthening. Siting battles are intensifying. Local opposition to data center clusters is growing. Utilities and regulators are being asked to guarantee reliability for loads that do not behave like traditional customers. These are not future problems. They are present ones.
What this conversation leaves out is not energy as fuel. It leaves out energy as constraint. As architecture. As the governing layer that determines which forms of intelligence are allowed to scale, which are conditioned, and which are rejected. AI optimism, as expressed here, rests on the belief that infrastructure will simply adapt in time, quietly and competently, in the background.
That belief deserves scrutiny. The next phase of AI will not be decided by whether models speak more fluently or reason more flexibly. It will be decided by whether intelligence can be integrated into physical systems without destabilizing them. The electric grid is not a neutral input to that story. It is the system that enforces limits, allocates priority, and absorbs failure. Treating it as invisible does not make it so.
These questions are not anti-AI. They are pro-reality. Intelligence that cannot be grounded in physical systems remains speculative. Intelligence that ignores infrastructure eventually collides with it.
I explore this boundary—where delegated cognition meets delegated power, and where AI ambition meets grid reality—in The Cognitive Grid. The argument is simple: as intelligence becomes embedded in systems, energy ceases to be a background input and becomes a governing constraint. We can design for that transition, or we can stumble into it.
The Times conversation is a useful snapshot of elite AI thinking. Its greatest value may be what it unintentionally reveals: the future of AI is being imagined as if the grid were already solved. It is not.