In June 2023, Southern Company received one of the first conditions-based FAA waivers authorizing remote-operated, beyond-visual-line-of-sight autonomous drone operations at critical infrastructure sites across its system, from Georgia to California. The hardware is the Skydio X2 paired with the Skydio Dock, a weatherproof drone garage that launches its aircraft, monitors the airspace, and recovers it without a pilot on site. The operational use case is the one Skydio describes plainly: after a weather event or equipment alarm, the drone flies a pre-planned route over a substation to inspect switches, transformers, and busbars, and streams the result back to a technician in an office somewhere else.
Two things happen in that single deployment. A task that a substation technician or a contractor pilot would have performed in person, often hours after the alarm, is now performed remotely in minutes. And a capability that did not exist at all in most substation operations is now routine: thermal scans that previously required a scheduled outage can be flown on demand, on a schedule, or automatically in response to a sensor trip. The data is continuous where it was episodic. The inspection is remote where it was physical. The technician has become a supervisor of a system, not a visitor to an asset.
Many of the AI and robotics deployments visible in US utility operations in 2026 share this two-part structure. Some part of a human task moves to a machine. Some new capability becomes possible that no human was performing before. The interesting question is not whether the pattern will scale across the industry. It is how long it takes to show up as measurable workforce change, where it shows up first, and what utilities, regulators, investors, and workers should do about it now. Three distinctions and a twelve-role sample are a workable place to start.
The Cognitive Grid work explored on AIxEnergy is the nearest analytical neighbor to the question this series takes up. Its author's argument is that operational judgment in critical infrastructure is quietly migrating into automated systems faster than the governance frameworks around that authority can adapt, a phenomenon he calls the latency gap between machine-speed execution and human-speed oversight. His EthosGrid Open Standard proposes a constitutional response, separating machine proposal from human authorization so that capability does not silently become permission. This series runs the same institutional diagnosis through the workforce rather than through governance: when operational judgment migrates into software, the humans whose roles were defined around that judgment are the other side of the erosion.
Three distinctions to start with
Most attempts to read the workforce implications of AI in utilities go wrong on one of three distinctions. All three deserve to be put on the table early, in plain language, before any of the analysis here is asked to do work.
The first distinction is between vendor capability and utility adoption. What an AI system or a robotic platform can do in a controlled demonstration is bounded by engineering: model performance, sensor precision, data availability, hardware reliability. What a regulated utility actually deploys in production is bounded by something else entirely: data architecture, operational trust, liability allocation, licensing, union contracts, regulatory staffing minimums, insurance coverage, and capital stock turnover. Vendor demonstrations move on the first clock. Utility production deployments move on the second. The two clocks tick at very different rates. The technical shorthand used here, and throughout the series, is that vendor capability sets the capability horizon and utility deployment sets the adoption horizon. Capability horizons for the systems considered here are short. Adoption horizons are longer, more variable, and more dependent on institutional context than on technological readiness.
The Censys Technologies BVLOS demonstration in February 2026 makes the gap concrete. A fixed-wing VTOL aircraft completed a 79-mile dual-leg mission between Daytona Beach and Mims, Florida, launching from an automated dock and transiting Class C controlled airspace. It was not flown under a special-purpose research authority. It was flown under existing FAA Part 107 waivers, of which Censys holds more than 85. The capability to inspect transmission and gas-pipeline corridors at scale is in hand, and has been demonstrated commercially over populated areas. What is not in hand is routine legal authority to operate at corridor scale without applying for a per-mission waiver each time. The FAA's Part 108 Notice of Proposed Rulemaking, published August 7, 2025, and now in final-rule drafting, is the proposed unlock. Until it lands, the binding constraint on corridor-scale autonomous line patrol is rulemaking throughput, not vendor capability. The capability and adoption horizons in this case are months apart in the lab and several years apart in production. That gap is not exotic. It is typical.
The second distinction is between two kinds of work inside a utility, which run on two very different institutional clocks. The first kind is reliability work: dispatch, load forecasting, market analysis, customer service, billing, predictive maintenance, meter reading, most regulatory compliance analysis. Failure in reliability work produces consequences that are bounded and recoverable: imbalance charges, customer-minutes-out, modest SAIDI or SAIFI degradation, complaints to a regulator, the occasional rate case finding. The harms are economic and serviceable; insurers price them, regulators monitor them, and utility boards are comfortable authorizing AI deployment because the downside is circumscribed.
The second kind is safety work: nuclear reactor operations, protection relay coordination, energized switching, live-line work, gas-pipeline pressure management, dam operations. Failure in safety work produces immediate physical harm, sometimes catastrophic. The human-in-the-loop requirement in safety work is not a managerial preference. It is attached to NRC, FERC, and state commission licensing regimes. It is reinforced by criminal and civil liability frameworks built around specific past failures, including Three Mile Island, the 2003 Northeast blackout, the San Bruno gas explosion, and the cascade of utility-caused wildfires across the western United States in the past decade. It is codified in union contracts that specify staffing patterns around high-hazard work. And it is shaped by the institutional memory of the regulators, operator communities, and union leaders who have lived through the events that the licensing regime was built to prevent.
These two kinds of work coexist inside a single utility, often inside a single operations center. They are not separated by job title in any clean way. But they run on very different clocks. Regulatory and institutional comfort with AI deployment in reliability work builds in years. Regulatory and institutional comfort with AI deployment in safety work builds in decades. That difference is not a matter of technological maturity; the same machine learning techniques that have already moved into reliability work could in principle inform safety work. It is a matter of institutional architecture, and most of the heterogeneity in how AI metabolizes through the utility workforce can be traced to it.
The third distinction is between AI's effect on a task and its effect on a role. A task is a specific activity performed within a job: climbing a transmission tower to torque a bolt, writing the first draft of a rate case filing, reviewing a thermal image for equipment defects, making a dispatch adjustment during a frequency excursion. A role is a bundle of tasks that a single worker performs, usually under a single job title and under a specific set of institutional arrangements around licensing, pay, and supervision. AI can compress tasks without eliminating roles, and it usually does. A wind technician still climbs the tower to torque the bolt; what AI changes is the inspection flight that preceded the climb, the work-order generation that followed it, and the condition-assessment judgment that triaged it in the first place. A lineman still makes the physical connection on an energized conductor; what AI changes is the patrol that identified the repair need. The physical work in both cases is robust. The cognitive scaffolding around it is what erodes first.
The task-versus-role distinction matters because AI effects on task content can run far ahead of AI effects on role headcount, especially in fields with significant physical-dexterity requirements or strong institutional defenses around the role itself. When the two are conflated, the analysis either overstates near-term displacement (by assuming that task compression means role elimination) or understates it (by assuming that a persistent role means the work inside it is unchanged). Neither is right. The matrix below, and the mechanism developed in Part 2 of this series, both depend on keeping task and role effects analytically separate.
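The distinction can be made concrete with a toy model. The sketch below is illustrative only: the task list and hours are hypothetical, not estimates, and the wind-technician example simply restates the pattern described above in code.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    hours_per_week: float
    ai_compressible: bool  # can AI substantially compress this task today?

# Hypothetical wind-technician role: the climb persists,
# the cognitive scaffolding around it erodes.
wind_tech = [
    Task("tower climb and bolt torque", 15.0, False),
    Task("pre-climb inspection flight review", 8.0, True),
    Task("work-order generation", 6.0, True),
    Task("condition-assessment triage", 6.0, True),
    Task("travel and logistics", 5.0, False),
]

compressible = sum(t.hours_per_week for t in wind_tech if t.ai_compressible)
total = sum(t.hours_per_week for t in wind_tech)
share = compressible / total

# Half the role's task content shifts while the role itself persists.
print(f"compressible task share: {share:.0%}")  # prints "compressible task share: 50%"
```

The point of the sketch is the asymmetry it exposes: a role can see half its weekly task content compressed by AI while its headcount arrow stays flat or points up, because the tasks that anchor the role institutionally are the ones AI cannot touch.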
A first look at the workforce
The three distinctions above are most useful when they are mapped onto specific jobs. Below is a sampling of twelve roles drawn from across the US electricity industry. Each row carries the role title, an approximate national headcount drawn from BLS Occupational Employment and Wage Statistics for NAICS 221100, Electric Power Generation, Transmission and Distribution, May 2024, a tag identifying whether the role's institutional clock is reliability or safety, the dominant AI system type expected to drive change, an arrow estimating the medium-term direction of role-level headcount, and the time horizon over which the principal change becomes visible.
Horizons used here and throughout the series are: near-term, three to five years; medium-term, five to fifteen years; long-term, fifteen to thirty years. Headcount arrows are: ↑ growth, ↔ steady, ↓ contraction, ↓↓ sharp contraction. The arrows describe role-level employment direction. Separately, task content inside every row in this matrix is already shifting, and in some rows shifts substantially even as the role-level arrow points up or stays flat. The full role-by-role matrix, covering roughly forty positions across generation, transmission, distribution, markets, and the regulatory infrastructure, will be developed in Part 3.
| Role | Approx US employment | Reliability or safety | Dominant AI system | Role-level headcount | Horizon |
| --- | --- | --- | --- | --- | --- |
| Power dispatcher / system operator | 5,800 | Reliability | Reinforcement learning, time-series ML | ↓ | Medium |
| Load forecaster / market analyst | (split across NAICS 523, 221100) | Reliability | Time-series ML, LLMs | ↓ | Medium |
| Customer service representative | 17,600 | Reliability | Large language models | ↓ | Near-medium |
| Meter reader | 3,600 | Reliability | AMI plus computer vision | ↓↓ | Near |
| Compliance / regulatory analyst | 2,400 | Reliability | LLMs with retrieval augmentation | ↓ | Medium |
| Substation technician | (subset of SOC 49-2095, ~16,000) | Reliability + Safety | Computer vision on aerial drones | ↓ | Medium |
| Distribution lineman | 57,800 utility-direct | Safety | Aerial drones for inspection; limited physical displacement | ↔ | Medium-long |
| Wind technician | 4,600 | Reliability | Computer vision, predictive ML; task-level only, role persists | ↑ (industry growth) | Near |
| Solar PV installer / technician | 3,400 | Reliability | Computer vision, optimization; task-level only, role persists | ↑ (industry growth) | Near |
| Protection engineer | (subset of SOC 17-2071, ~20,000) | Safety | Physics-informed ML | ↔ → ↓ | Long |
| Nuclear reactor operator | 4,300 | Safety | Physics-informed ML, RL | ↔ | Long |
| Project developer / interconnection | (outside NAICS 221100) | Reliability | LLMs, optimization | ↑ | Near-medium |
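The structure in the twelve-role matrix can be checked mechanically. The sketch below encodes each row as a (role, clock, horizon) triple taken from the matrix; tagging the substation technician by its safety component is an editorial simplification for this illustration.

```python
from collections import defaultdict

# (role, institutional clock, horizon) triples from the twelve-role matrix.
ROWS = [
    ("Power dispatcher / system operator", "Reliability", "Medium"),
    ("Load forecaster / market analyst", "Reliability", "Medium"),
    ("Customer service representative", "Reliability", "Near-medium"),
    ("Meter reader", "Reliability", "Near"),
    ("Compliance / regulatory analyst", "Reliability", "Medium"),
    ("Substation technician", "Safety", "Medium"),  # mixed row, tagged by safety component
    ("Distribution lineman", "Safety", "Medium-long"),
    ("Wind technician", "Reliability", "Near"),
    ("Solar PV installer / technician", "Reliability", "Near"),
    ("Protection engineer", "Safety", "Long"),
    ("Nuclear reactor operator", "Safety", "Long"),
    ("Project developer / interconnection", "Reliability", "Near-medium"),
]

# Group horizons by institutional clock.
by_clock = defaultdict(set)
for role, clock, horizon in ROWS:
    by_clock[clock].add(horizon)

for clock in sorted(by_clock):
    print(clock, sorted(by_clock[clock]))
```

Grouping the rows this way shows the pattern directly: no reliability-clock role in the sample carries a long horizon, and no safety-clock role carries a near one.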
Three patterns in the table are noteworthy.
The first is the heterogeneity of the role-level arrows. They do not point in one direction across the industry. Some roles grow, some hold steady, some contract, some contract sharply. The growth roles concentrate in renewables and project development, where physical buildout expands total employment even as AI augments the cognitive work around it. The contraction roles concentrate in coordination-layer work, where AI substitutes more than it augments. This is not a story of uniform displacement. It is a story of shifting composition, and the composition shift produces both winners and losers within the same workforce.
The second is that growth arrows are not the same as low AI exposure. Wind technicians and solar installers are flagged up at the role level because the industries are expanding, but the task content inside those roles is already changing significantly. AI-driven inspection, diagnostics, and work-order generation are compressing the cognitive tasks that once filled the technician's shift between climbs. The physical dexterity requirement is robust; the surrounding cognitive scaffolding is not. That asymmetry is the task-versus-role distinction at work, and it generalizes beyond renewables to much of the field-physical workforce.
The third is the systematic difference in horizons along the reliability-safety axis. Reliability roles change on near and medium horizons; safety roles change on medium and long horizons. That difference is not a coincidence. It is the institutional-clock difference made visible at the role level.
What produces the timing, and what to do about it
Three patterns out of a twelve-row sample are suggestive rather than conclusive. They raise a question they do not answer: what is the actual mechanism by which AI capability translates into workforce change in a regulated industry, and why does it run so much faster on the reliability clock than on the safety clock?
Part 2 of this series takes up that question directly. The short answer is that the translation runs through a four-stage process of capability maturation, task-content erosion, institutional-defense attenuation, and eventual role-level headcount change. Each stage has its own timeline, and the timelines differ systematically across the reliability-safety axis. The mechanism is visible in three other regulated industries with safety-critical workforces (aviation, freight rail, and maritime), each of which has moved through some version of the same arc on its own timeline. It is beginning to be visible in the US power sector now, at stage one for safety work and deeper into stage two for most reliability work. Parts of the US utility workforce have already completed the arc. Meter reading and utility customer service are the clearest examples, and they are not the last ones.
What comes in the rest of the series
Part 2 develops the four-stage mechanism by which AI capability translates into workforce change in regulated industries, traces the mechanism through aviation, freight rail, and maritime precedents, maps the mechanism onto the US power sector workforce, and draws out the implications for utilities, regulators, investors, and workers.
Part 3 builds the full role-by-role matrix across the major positions in NAICS 221100 and the institutional infrastructure that surrounds it: FERC, NERC, the seven ISOs and RTOs, and the fifty state public utility commissions. Roughly forty roles, each assessed on the same axes as the twelve-role sample above.
Part 4 takes up the half of the AI-and-robotics story that gets less attention than displacement: the new capabilities that AI and robotics make possible, and that humans were never able to perform at all. Continuous inspection of all 200,000 miles of US transmission line. Energized-line close-range work. Inside-transformer monitoring. Post-storm damage pre-assessment. The capabilities that expand what a utility can do, rather than compressing what its workforce must do.
Part 5 returns to strategy, with specific recommendations for utilities, ISOs and RTOs, regulators, capital allocators, and the workforce itself, including a direct engagement with the productivity claims in current consultancy framings of AI in utilities.
The translation from AI capability to utility workforce change runs through institutional defenses that hollow out from within rather than break from outside. Task content inside roles shifts first and most visibly. Role-level headcount follows on longer timelines that differ systematically between reliability and safety functions. The decade ahead looks slow until it does not.