Let the States Speak: Why AI Governance Should Remain a Shared Responsibility

Governance in the age of intelligent systems must be accountable, distributed, and democratic


The Senate’s recent 99-1 vote to strike an AI preemption clause from President Trump’s proposed tax legislation was not just a procedural footnote—it was a critical inflection point in America’s evolving approach to artificial intelligence governance. Though initially overlooked amid the noise of federal tax reform, the clause would have barred states from enacting AI-related laws if they received funds from a new broadband deployment program. Its removal sends a clear message: in this unprecedented technological era, the future of AI regulation must remain a shared responsibility between state and federal actors.

The effort to preempt state AI laws was supported by a powerful coalition of technology companies, venture capitalists, and White House advisors. Their argument was grounded in economic efficiency and national competitiveness: a patchwork of state rules, they claimed, could slow down innovation, complicate compliance, and undermine America’s position in the global AI race. These concerns are not unfounded. As energy systems, defense infrastructure, and public services become increasingly dependent on intelligent software, consistency matters.

But consistency must not come at the cost of accountability, nor innovation at the expense of consent. The clause’s removal reflects a broader public consensus: Americans are not ready to surrender AI oversight to Washington or to Silicon Valley. The same democratic pluralism that underpins our energy landscape—where states serve as laboratories of reform—must also be preserved in the governance of emerging technologies.

Over a thousand AI-related bills have been introduced at the state level in the past two years alone. From protections against deepfakes and synthetic voice impersonation to measures addressing algorithmic discrimination and data transparency, states are responding to the real, localized consequences of AI. Whether it is California’s data privacy rules or Tennessee’s ELVIS Act protecting musical artists from synthetic mimicry, these are not fringe efforts—they are democratic signals that communities want a say in how artificial intelligence unfolds in their lives.

To erase those signals through financial coercion—even inadvertently—would have been a mistake.

Energy leaders and grid innovators should pay close attention. AI is rapidly permeating core energy functions: demand forecasting, distributed energy resource (DER) optimization, predictive maintenance, real-time balancing, and carbon accounting. These tools hold immense promise. But without public trust and transparent oversight, their deployment risks public resistance, political backlash, and reputational harm. In an era where utilities and developers already face mounting scrutiny, aligning AI deployment with local norms and state regulations is not a bureaucratic obstacle—it is a strategic imperative.

The real question is not whether the federal government should have a role in AI governance. Of course it should. National standards for safety, interoperability, and security are essential. But a strong federal framework should complement—not erase—the valuable work happening in state legislatures, attorney general offices, and local civic organizations.

To be clear, the attempt to preempt state authority was not necessarily malicious. Many of the provision’s supporters genuinely fear the chaos of conflicting rules and the drag on AI adoption that might follow. But innovation does not thrive in a vacuum. It flourishes in ecosystems where trust is earned, harms are addressed, and human agency is respected. This cannot be achieved solely through federal edict or industry code.

AI governance is no longer an abstract debate. It is now embedded in the energy systems we build, the infrastructure we digitize, the contracts we automate, and the choices we delegate to algorithms. We must govern accordingly.

Rather than fighting over jurisdiction, we should be designing a layered, resilient regulatory architecture. A cooperative model—where federal agencies set baseline protections while states retain authority to go further—would reflect the spirit of American federalism. It would also enable AI regulation to adapt to different regional needs: from densely populated urban grids experimenting with AI-driven demand response, to rural states seeking protections against deepfake electoral interference.

The removal of the AI preemption clause is not the end of this story. Industry will continue to lobby for streamlined rules. Some in Congress may still push for centralized authority. But for now, the Senate has affirmed a foundational principle: governance in the age of intelligent systems must be accountable, distributed, and democratic.

As AI and energy become increasingly intertwined, we must ensure that our regulatory structures evolve not only to support technological progress, but to safeguard human dignity, social cohesion, and democratic integrity. In this new era of convergence, we do not need fewer voices. We need more—and we must listen to all of them.