What Regulators Are Signalling About AI in Operational Technology – and How Altior Enables Secure Digital Twin Deployment at Scale

Across energy, utilities, transport, and industrial infrastructure, organisations are proceeding carefully with artificial intelligence in operational technology (OT) environments. This caution does not signal a lack of ambition. It reflects a growing alignment between industry experience and regulatory concern.

AI systems interacting with physical infrastructure introduce risks that cannot be addressed late in the stack or purely in the cloud.

Recent regulatory signals help explain why adoption is deliberate rather than rapid. In November, Anthropic published an analysis showing how advanced language models had already been misused by state-aligned actors to support cyber operations against critical infrastructure. While the report focused on AI misuse rather than OT directly, its implications were clear: AI can amplify attacker capability just as easily as it amplifies operational insight.

It is against this backdrop that security agencies and regulators have begun to articulate what "safe" AI integration into OT actually means.

The NSA's Principles for AI in Operational Technology

In the United States, the National Security Agency (NSA) has taken a notably practical approach. Rather than issuing abstract AI policy, it published a set of principles for the secure integration of artificial intelligence into operational technology.

At a high level, the NSA highlights several core ideas that are particularly relevant to energy and industrial operators:

Security must begin at the OT device level
OT systems were not designed for probabilistic decision-making or opaque inference. The NSA stresses that AI integration cannot assume upstream data is trustworthy by default. Devices, sensors, and controllers must be authenticated, validated, and monitored before AI systems are allowed to consume or act on their data.
Data must be protected throughout its lifecycle, not just at rest
Traditional IT security focuses heavily on data at rest or in central platforms. The NSA makes clear that in OT, the most critical exposure often occurs while data is in transit, as telemetry moves from field devices, through gateways, across networks, and into higher-level systems. Every stage must be governed.
AI systems must be bounded and auditable
AI must not be allowed to operate as an unbounded decision-maker in operational environments. The NSA emphasises clear limits on what AI can recommend, influence, or control, with human oversight, traceability, and rollback mechanisms preserved.
Determinism and safe failure modes matter
OT environments prioritise predictability and safety. The NSA cautions against introducing AI behaviours that undermine deterministic operation or fail unsafely when models degrade, data becomes sparse, or connectivity is disrupted.
Architecture is as important as algorithms
Perhaps most importantly, the NSA frames AI risk as a system design issue, not a model quality issue. Secure outcomes depend on how AI is embedded into operational architecture, not simply on how well a model performs in isolation. The sketch after this list shows, in simplified form, what enforcing several of these constraints together in the data path might look like.
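
To make the principles concrete, here is a minimal Python sketch of one gated AI decision point. Everything in it is illustrative rather than drawn from the NSA guidance: the device IDs, shared-key scheme, setpoint bounds, and safe defaults are invented for the example. Telemetry is authenticated before a model sees it, the model's output is clamped to an operator-approved envelope, every decision is audited, and the system falls back to a deterministic default when trust or the model fails.

```python
import hashlib
import hmac
import json
import time

# Illustrative only: device IDs, keys, bounds, and defaults are invented.
DEVICE_KEYS = {"pump-07": b"key-provisioned-at-commissioning"}
SETPOINT_BOUNDS = {"pump-07": (20.0, 80.0)}  # operator-approved envelope
SAFE_DEFAULT = {"pump-07": 50.0}             # deterministic fallback setpoint
AUDIT_LOG = []                               # in practice: append-only, tamper-evident


def telemetry_is_authentic(device_id: str, payload: bytes, tag: str) -> bool:
    """Verify an HMAC tag so upstream data is never trusted by default."""
    key = DEVICE_KEYS.get(device_id)
    if key is None:
        return False
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)


def bounded(device_id: str, model_output: float) -> float:
    """Clamp the model's suggestion to the pre-approved envelope."""
    low, high = SETPOINT_BOUNDS[device_id]
    return min(max(model_output, low), high)


def decide(device_id: str, payload: bytes, tag: str, model) -> float:
    """One gated AI decision: authenticate, bound, audit, fail safe."""
    if not telemetry_is_authentic(device_id, payload, tag):
        decision, reason = SAFE_DEFAULT[device_id], "untrusted telemetry"
    else:
        try:
            decision, reason = bounded(device_id, model(json.loads(payload))), "ai"
        except Exception:
            decision, reason = SAFE_DEFAULT[device_id], "model failure"
    AUDIT_LOG.append({"ts": time.time(), "device": device_id,
                      "decision": decision, "reason": reason})
    return decision


# Usage with a stub model standing in for a real predictor.
payload = json.dumps({"flow_m3h": 41.2}).encode()
tag = hmac.new(DEVICE_KEYS["pump-07"], payload, hashlib.sha256).hexdigest()
print(decide("pump-07", payload, tag, lambda obs: 72.5))       # 72.5, within bounds
print(decide("pump-07", payload, "bad-tag", lambda obs: 72.5)) # 50.0, safe default
```

The design choice worth noting is that the bound, the audit record, and the fallback all live outside the model, so they hold regardless of how the model behaves.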

Parallel Signals from Europe and Beyond

Europe has issued similar signals through different mechanisms. The NIS2 directive, national cybersecurity authorities, and sector-specific guidance all reinforce the same themes: sovereignty, traceability, and resilience must be preserved as AI enters OT environments.

The Middle East is also moving in this direction, particularly in energy and water infrastructure, where regulators are emphasising sovereign control, air-gapped or hybrid architectures, and strict separation between analytics and control layers.

While terminology varies, the regulatory intent is aligned: AI must be integrated into OT deliberately, with architecture that reflects physical risk.

Why Cloud Controls Alone Are Not Enough

Cloud platforms play a critical role in analytics, optimisation, and portfolio-level insight. They are an essential part of the ecosystem. However, regulators are increasingly explicit that cloud-only security controls arrive too late to address many OT risks.

By the time data reaches a cloud service, device trust has already been assumed, protocol translation has already occurred, semantic meaning has already been inferred, and routing decisions have already been made.

If these steps are inconsistent or implicit, risk accumulates invisibly. This is why regulatory guidance repeatedly stresses securing OT at source and throughout data in transit, rather than relying solely on downstream governance.
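
One way to keep that risk from accumulating invisibly is to record each of those steps explicitly at the edge. The sketch below is a hypothetical illustration, not a prescribed design: the `Provenance` fields, protocol names, and point names are invented, but they show how device trust, protocol translation, semantic interpretation, and routing can travel with the data instead of being assumed by the time it reaches a cloud service.

```python
from dataclasses import dataclass, field

# Hypothetical illustration; the class and field names are invented.
@dataclass
class Provenance:
    device_verified: bool = False   # was device identity actually checked?
    source_protocol: str = ""       # e.g. "modbus-tcp" before translation
    semantic_tag: str = ""          # the meaning inferred for this value
    route: str = ""                 # where the reading was sent, and why
    notes: list = field(default_factory=list)


def annotate(value: float, verified: bool, protocol: str,
             tag: str, route: str) -> dict:
    """Attach an explicit provenance record to one telemetry reading."""
    prov = Provenance(device_verified=verified, source_protocol=protocol,
                      semantic_tag=tag, route=route)
    if not verified:
        prov.notes.append("device trust NOT established at source")
    return {"value": value, "provenance": prov}


# A downstream consumer can now reject or down-weight readings whose
# provenance is incomplete, rather than trusting them by default.
reading = annotate(41.2, verified=True, protocol="modbus-tcp",
                   tag="pump-07.flow_rate.m3h",
                   route="site-gateway -> regional analytics")
assert reading["provenance"].device_verified
```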

Why Digital Twins Become a Control Mechanism, Not Just a Model

In this context, digital twins take on a more fundamental role.

Digital twins are often associated with 3D visualisation or simulation, and those uses remain valuable. However, in regulated operational environments, their deeper value lies in acting as authoritative, structured representations of physical systems.

When AI systems operate through digital twins — rather than directly on raw telemetry — the twin becomes a control surface. It defines what exists, how components relate, what "normal" looks like, and which actions are permitted. This allows AI to be applied without bypassing operational constraints.

Importantly, this definition of a digital twin is less about presentation than about operational truth and enforceable structure.
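
A minimal sketch of the twin as a control surface, with invented asset names and limits: the twin declares topology, normal operating bands, and an allowlist of actions, and an AI proposal is accepted only if it stays inside that declared envelope.

```python
# Illustrative only: assets, relationships, bands, and actions are invented.
TWIN = {
    "pump-07": {
        "relates_to": ["tank-02"],                 # declared topology
        "normal": {"flow_m3h": (30.0, 60.0)},      # expected operating band
        "permitted_actions": {"adjust_speed"},     # actions AI may propose
    },
    "tank-02": {
        "relates_to": ["pump-07"],
        "normal": {"level_pct": (20.0, 90.0)},
        "permitted_actions": set(),                # read-only asset
    },
}


def validate_proposal(asset: str, action: str, metric: str,
                      predicted_value: float) -> bool:
    """Accept an AI proposal only if the twin says it is permissible."""
    spec = TWIN.get(asset)
    if spec is None:                    # the asset must exist in the twin
        return False
    if action not in spec["permitted_actions"]:
        return False                    # action not on the allowlist
    low, high = spec["normal"][metric]  # predicted state must stay normal
    return low <= predicted_value <= high


# A suggestion to speed up pump-07 passes only if the predicted flow
# stays inside the twin's declared normal band; tank-02 is read-only.
print(validate_proposal("pump-07", "adjust_speed", "flow_m3h", 55.0))   # True
print(validate_proposal("tank-02", "adjust_speed", "level_pct", 50.0))  # False
```

Because the AI never touches raw telemetry or actuators directly, the twin's declarations, not the model's confidence, set the limits of what can happen.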

How Altior Enables This Safely and at Scale

Altior was designed specifically to make this kind of deployment viable in the real world.

Rather than replacing digital twin platforms or cloud services, Altior operates beneath them as an operational data and control layer. Its role is to handle the hardest part of regulated AI deployment: integrating heterogeneous, often legacy OT systems and enforcing consistency, validation, and policy before data ever reaches higher-level AI or twin platforms.

Altior enables this by integrating directly with OT devices and protocols common in energy and industrial estates, applying semantic validation and policy controls close to the edge, governing data flows continuously as they move through the system, supporting distributed and sovereign deployment models, and preserving deterministic behaviour where safety requires it.
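
As a generic illustration of what semantic validation close to the edge can look like (this is not Altior's actual API or configuration format; the schema and point names are invented), consider a check that quarantines readings that do not match their declared type, unit, or plausible range before they are forwarded:

```python
# Invented schema for illustration; not Altior's syntax.
POINT_SCHEMA = {
    "pump-07.flow_rate": {"type": float, "unit": "m3/h",
                          "range": (0.0, 120.0)},
    "pump-07.status":    {"type": str, "unit": None,
                          "allowed": {"running", "stopped", "fault"}},
}


def semantically_valid(point: str, value) -> bool:
    """Reject readings that do not match their declared semantics."""
    spec = POINT_SCHEMA.get(point)
    if spec is None or not isinstance(value, spec["type"]):
        return False
    if "range" in spec:
        low, high = spec["range"]
        return low <= value <= high
    return value in spec["allowed"]


def forward(point: str, value):
    """Forward only semantically valid readings to higher-level systems."""
    if semantically_valid(point, value):
        return {"point": point, "value": value, "validated": True}
    return None  # quarantined at the edge for inspection, never forwarded


print(forward("pump-07.flow_rate", 41.2))     # forwarded
print(forward("pump-07.flow_rate", -5.0))     # quarantined: out of range
print(forward("pump-07.status", "exploded"))  # quarantined: unknown state
```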

This allows digital twins — whether 3D simulation-based or operational twins — to be deployed as trusted system components, not fragile overlays. AI can then operate on twins that already reflect regulatory and operational boundaries.

In practice, this turns digital twin deployment into a system-level capability rather than a site-by-site integration exercise.

A Shared Responsibility Across the Ecosystem

Regulators are not anti-AI. They are setting the conditions under which AI can be trusted in environments where failure has real-world consequences.

Cloud providers, digital twin platforms, OT specialists, and infrastructure operators all play a role in meeting those conditions. The organisations that succeed will be those that treat AI, digital twins, and OT security as parts of a single architectural problem.

Altior exists to make that architecture deployable at scale — enabling AI-ready digital twins that align with regulatory expectations, operational reality, and the long-term resilience of critical infrastructure.