Time Is Not a Commodity

Time is often treated as something that is simply there. Like power or connectivity, it is assumed to be available, reliable, and good enough for whatever system happens to need it.

In modern infrastructure, that assumption no longer holds.

Most organisations rely on a familiar combination of satellite-based time sources, standard synchronisation protocols such as NTP or PTP, and delivery over shared network infrastructure. This model is convenient, widely adopted, and easy to deploy.

It is also fundamentally fragile.

Where the Model Breaks Down

The limitations are not tied to a single protocol. They are structural.

Time typically originates from GNSS constellations such as GPS and Galileo. While highly accurate at source, these signals are weak, exposed, and increasingly vulnerable to interference. Jamming and spoofing incidents are no longer theoretical. They are occurring across aviation, telecommunications, and financial systems with growing frequency.

From there, time is distributed across systems using standard protocols and delivered over shared networks. Regardless of whether NTP or PTP is used, accuracy is ultimately constrained by the delivery path. Routing variability, congestion, and path asymmetry introduce jitter and offset errors that cannot be fully eliminated in uncontrolled environments.
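
To make the asymmetry point concrete, the sketch below uses the two-way timestamp exchange that underlies both NTP and PTP: the client’s offset estimate is exact only when the outbound and return paths take equal time, so any asymmetry becomes an error the client cannot detect. The timestamps and delays are invented for illustration.

```python
# Illustrative only: how a client estimates its clock offset from a
# request/response exchange (the same idea underlies NTP and PTP).
#
# t1: client sends request      (client clock)
# t2: server receives request   (server clock)
# t3: server sends reply        (server clock)
# t4: client receives reply     (client clock)

def estimate_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """Classic two-way time-transfer estimate (all values in seconds)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # exact only if paths are symmetric
    delay = (t4 - t1) - (t3 - t2)            # total round-trip network delay
    return offset, delay

# Hypothetical numbers: the server and client clocks actually agree (true
# offset is zero), but the outbound path takes 400 microseconds and the
# return path takes 900.
t1 = 0.000000
t2 = t1 + 0.000400           # request arrives after 400 us
t3 = t2 + 0.000050           # 50 us of server processing
t4 = t3 + 0.000900           # reply arrives after 900 us

offset, delay = estimate_offset_and_delay(t1, t2, t3, t4)
print(f"estimated offset: {offset * 1e6:+.0f} us")   # -250 us, caused purely by asymmetry
print(f"round-trip delay: {delay * 1e6:.0f} us")     # 1300 us
```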

In parallel, most architectures lack a continuous, auditable chain of traceability back to UTC. Time may be “close enough” in practice, but it cannot be consistently proven.

The result is a system where precision exists at the source, but is degraded, distorted, and difficult to verify by the time it reaches applications.

When Precision Becomes a Requirement

For many systems, these limitations remain invisible.

For regulated environments, they do not.

Frameworks such as MiFID II, FINRA CAT, and DORA have raised the bar. It is no longer sufficient for systems to be loosely synchronised.

Organisations must demonstrate:

  • Sub-100-microsecond accuracy

  • Continuous traceability to UTC

  • Evidence of compliance over time

This shifts time from a background utility to something that must be explicitly controlled and evidenced.
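
As a purely illustrative sketch of what “explicitly controlled and evidenced” can look like in practice (not a regulator-mandated format; the threshold, log location, and offset values below are assumptions), the snippet checks each measured offset against a 100-microsecond limit and appends the result to a log that can later be produced as audit evidence.

```python
import csv
from datetime import datetime, timezone

MAX_OFFSET_S = 100e-6                 # illustrative 100-microsecond limit
LOG_PATH = "time_offset_audit.csv"    # hypothetical audit log location

def record_offset(measured_offset_s: float) -> None:
    """Append one offset measurement, with a pass/fail flag, to the audit log."""
    row = [
        datetime.now(timezone.utc).isoformat(),
        f"{measured_offset_s:.9f}",
        "PASS" if abs(measured_offset_s) <= MAX_OFFSET_S else "FAIL",
    ]
    with open(LOG_PATH, "a", newline="") as f:
        csv.writer(f).writerow(row)

# In a real deployment the offsets would come from the timing client itself;
# these three values are made up, and the last one breaches the limit.
for offset_s in (12e-6, 47e-6, 180e-6):
    record_offset(offset_s)
```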

The gap between what most architectures provide and what regulations require is where risk accumulates.

A Different Approach to Time Delivery

Closing that gap requires a different model. Not a different protocol, but a different architecture.

One that treats time as a managed layer of infrastructure rather than a byproduct of other systems.

In practice, that means combining multiple independent sources of UTC, including both GNSS and terrestrial Stratum Zero references, so that no single point of failure defines the system. It means delivering time over controlled, deterministic network paths rather than relying on the variability of shared infrastructure. And it means continuously measuring, recording, and reporting alignment so that accuracy is not assumed, but proven.

This is the model behind Hoptroff’s Time Feed.

By bringing together multi-source timing, dedicated network delivery, and SLA-backed monitoring and reporting, it becomes possible to achieve both the level of precision required and the evidence needed to support it.
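
The snippet below is a deliberately simplified illustration of the multi-source idea, not Hoptroff’s actual selection algorithm: offset readings from several independent references are combined by taking a median, after discarding any source that disagrees sharply with the rest, so that a single jammed or spoofed reference cannot steer the clock on its own. The source names and readings are hypothetical.

```python
from statistics import median

def combine_sources(offsets_us: dict[str, float], reject_us: float = 50.0) -> float:
    """Median-based combination of offset readings (in microseconds) from
    independent time sources, with simple outlier rejection.
    Purely illustrative; real selection algorithms are considerably more involved."""
    first_pass = median(offsets_us.values())
    kept = {name: off for name, off in offsets_us.items()
            if abs(off - first_pass) <= reject_us}
    dropped = set(offsets_us) - set(kept)
    if dropped:
        print(f"ignoring outlier sources: {sorted(dropped)}")
    return median(kept.values())

# Hypothetical readings: one GNSS receiver is being spoofed and reports an
# offset wildly different from the terrestrial references.
readings = {
    "gnss_antenna_roof": 950.0,     # spoofed or faulty
    "terrestrial_feed_a": 8.5,
    "terrestrial_feed_b": 11.0,
    "holdover_oscillator": 14.2,
}
print(f"combined offset: {combine_sources(readings):.1f} us")
```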

Why This Matters More Than It Appears

Timing failures rarely present as immediate outages.

They show up as small inconsistencies. Slight differences in timestamps between systems. Events that do not quite line up. Logs that require interpretation rather than simply being trusted.
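
A small, invented example of how little skew it takes for events not to line up: if one system’s clock runs a few hundred microseconds behind another’s, an event that actually happened second can be logged with the earlier timestamp, and the logs alone cannot say which record to trust.

```python
# Invented numbers: system B's clock runs 300 microseconds behind system A's,
# and system A's clock is taken as correct.
SKEW_B_US = -300

# True sequence on a single reference timeline (microseconds):
order_sent_true_us = 1_000       # event 1: order sent, recorded by system A
fill_received_true_us = 1_200    # event 2: fill received 200 us later, recorded by system B

# What each system writes into its own log:
order_sent_logged_us = order_sent_true_us                     # A logs 1000
fill_received_logged_us = fill_received_true_us + SKEW_B_US   # B logs 900

print(f"order sent logged at    {order_sent_logged_us} us")
print(f"fill received logged at {fill_received_logged_us} us")
if fill_received_logged_us < order_sent_logged_us:
    print("logs suggest the fill arrived before the order was sent")
else:
    print("logs preserve the true order")
```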

Over time, those inconsistencies propagate.

They affect data integrity, transaction ordering, and the reliability of audit trails. And when an incident occurs, those weaknesses are exposed all at once.

At that point, the issue is no longer whether systems were accurate in practice. It is whether that accuracy can be demonstrated with confidence.

In many cases, the inability to prove when something happened is more damaging than the event itself.

From Utility to Infrastructure

This is why more organisations are rethinking how time is handled within their architecture.

Not as a background service that is assumed to be correct, but as a critical layer that must be designed, controlled, and defended in the same way as any other part of the system.

In distributed, high-speed, and regulated environments, accuracy is only part of the equation.

The ability to prove that accuracy, continuously and under scrutiny, is what ultimately defines whether a system can be trusted.

👉 Learn more: Solutions
