Edge AI on High-Altitude Platforms: Practical Architecture Patterns

Daniel Mercer
2026-05-06
24 min read

A practical guide to deploying constrained edge AI on HAPS: packaging, updates, telemetry, and bandwidth-aware design patterns.

High-altitude pseudo-satellites (HAPS) are moving from niche aerospace experiments to real operational infrastructure, and the software stack is rapidly becoming the differentiator. For developers building surveillance, environmental monitoring, or engine diagnostics workloads, the challenge is not just “can we run a model?” but “how do we package, update, validate, and operate constrained AI safely across a stratospheric platform with limited power, bandwidth, and human intervention?” This guide is a technical rundown of practical architecture patterns for edge AI on HAPS, with emphasis on model deployment, onboard inference, telemetry, and the bandwidth/latency trade-offs that shape every design decision. If you’re also thinking about reliability and deployment discipline, the same principles echo in CI/CD and clinical validation for AI-enabled devices, where correctness and traceability matter as much as raw performance.

The market backdrop helps explain why this is becoming urgent. FMI’s 2026–2036 outlook frames HAPS as a specification-driven environment with growing demand for surveillance and reconnaissance payloads, communications, imaging, and environmental sensing. That mix implies AI at the edge will increasingly be expected to filter, classify, and prioritize data onboard rather than streaming everything to the ground. In practical terms, that means model efficiency, update paths, and fault-tolerant packaging matter as much as model accuracy. Similar “operate in harsh conditions with limited payload and high consequence” thinking shows up in hybrid compute strategy for inference and in systems engineering lessons from digital twins and simulation.

1. Why HAPS Changes the Edge AI Design Problem

1.1 HAPS are not just “airborne edge devices”

It is tempting to treat HAPS like a drone with a bigger battery, but that framing breaks down quickly. A HAPS payload may spend days, weeks, or even months aloft, which makes thermal cycles, memory integrity, software update safety, and communications scheduling core architectural concerns rather than afterthoughts. The platform is also subject to changing weather, solar power variability, line-of-sight constraints, and backhaul limitations that create intermittent connectivity patterns far closer to space systems than to ground robotics.

For developers, the real shift is operational: you cannot assume a convenient redeploy when a model misbehaves. Your system must support remote observability, safe rollback, staged rollout, and graceful degradation. That is why lessons from cloud-connected safety devices and fleet-scale endpoint hardening translate surprisingly well to HAPS software, even if the hardware domain is different.

1.2 Surveillance and diagnostics have different latency budgets

Real-time surveillance on a stratospheric platform typically wants low-latency inference for event detection, prioritization, or triage, while engine diagnostics may tolerate slightly higher latency if the output is a predictive health score, anomaly signature, or maintenance recommendation. This distinction should shape the model topology. A video pipeline may use a tiny detector, a region-of-interest cropper, and a secondary classifier, while diagnostics may use time-series embeddings, anomaly scoring, and rule-based guards around the model output.

When you separate these workloads, you can allocate compute more intelligently. The ground truth is that HAPS bandwidth is precious, so every byte sent down is a product decision. This is analogous to how live-event systems manage spikes in demand: you should read proactive feed management strategies and context-aware fan communications to see how prioritization beats brute-force delivery when capacity is constrained.

1.3 The market is rewarding payload efficiency

Source market data points to surveillance and reconnaissance as the dominant payload segment, which signals that platform buyers value actionable intelligence rather than raw data exhaust. That has direct software implications: if your stack only uplinks full-resolution frames, uncompressed sensor logs, or redundant diagnostics, you are probably leaving payload value on the table. The best HAPS architectures maximize signal per bit by pushing filtering, compression, and inference as close to the sensor as possible.

This is the same logic that underpins modern content pipelines, from AI-assisted launch docs to embedding cost controls into AI projects. In both cases, good systems engineering reduces waste before it becomes an operational cost.

2. Reference Architecture: The Practical HAPS Edge AI Stack

2.1 Sensor ingestion, normalization, and time alignment

Every good HAPS AI stack starts with deterministic sensor ingestion. Whether the payload includes EO/IR video, hyperspectral imaging, vibration sensors, turbine telemetry, or environmental measurements, you need a normalization layer that tags each sample with platform time, sensor ID, calibration state, and confidence metadata. Without that, downstream models become difficult to audit, and debugging becomes guesswork when connectivity is intermittent.

A strong pattern is to keep sensor adapters lightweight and stateless, then hand off normalized events into a bounded local queue. That queue should support backpressure, prioritization, and persistence across transient power events. If you are designing the data model, think in terms of “events with lineage,” not “streams of bytes.” Similar traceability principles appear in audit trails for scanned documents and in ML poisoning controls.
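As a concrete illustration, here is a minimal Python sketch of a bounded, priority-aware event queue, assuming a simple in-process design; the SensorEvent fields and the eviction policy are illustrative, not a specific flight framework:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class SensorEvent:
    # Lower number = higher priority; heapq pops the smallest first.
    priority: int
    timestamp: float = field(compare=False)
    sensor_id: str = field(compare=False)
    calibration_state: str = field(compare=False)
    payload: bytes = field(compare=False)

class BoundedEventQueue:
    """Bounded queue that sheds the lowest-priority event under pressure."""

    def __init__(self, max_events: int):
        self.max_events = max_events
        self._heap: list[SensorEvent] = []

    def put(self, event: SensorEvent) -> bool:
        if len(self._heap) < self.max_events:
            heapq.heappush(self._heap, event)
            return True
        # Queue full: reject the incoming event if it is no better than
        # the worst queued event, otherwise evict that worst event.
        worst = max(self._heap)
        if event.priority >= worst.priority:
            return False  # backpressure signal to the caller
        self._heap.remove(worst)
        heapq.heapify(self._heap)
        heapq.heappush(self._heap, event)
        return True

    def get(self) -> SensorEvent | None:
        return heapq.heappop(self._heap) if self._heap else None
```

Under pressure, put() returns False so the sensor adapter can downsample at the source instead of silently dropping data downstream.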

2.2 Onboard inference layer

The onboard inference layer should be designed around inference tiers. A tiny first-stage model can run at sensor ingress to identify candidate events, after which a more expensive model can refine the classification. For surveillance, this might mean a lightweight object detector followed by a tracker or scene classifier. For engine diagnostics, it could mean anomaly detection followed by a compact temporal model that distinguishes transient spikes from true degradation.

Keep the runtime predictable. On HAPS, jitter is often more dangerous than raw latency because it makes scheduling and thermal planning unreliable. Quantized models, TensorRT-optimized graphs, ONNX runtimes, and carefully bounded CPU fallback paths are typically more practical than “best possible” models that depend on dynamic shapes or unsupported ops. If you need a reminder that compute selection should be workload-specific, see when to use GPUs, TPUs, ASICs, or neuromorphic inference.
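A minimal sketch of the two-tier cascade using ONNX Runtime, assuming single-input models; the model paths and the candidate threshold are placeholders, not tuned values:

```python
import numpy as np
import onnxruntime as ort

# Hypothetical model paths and threshold; tune to your payload.
DETECTOR_PATH = "tiny_detector.onnx"
CLASSIFIER_PATH = "refine_classifier.onnx"
CANDIDATE_THRESHOLD = 0.4

detector = ort.InferenceSession(DETECTOR_PATH, providers=["CPUExecutionProvider"])
classifier = ort.InferenceSession(CLASSIFIER_PATH, providers=["CPUExecutionProvider"])

def infer(frame: np.ndarray) -> dict | None:
    """Stage 1: cheap detector on every frame. Stage 2: only on candidates."""
    det_out = detector.run(None, {detector.get_inputs()[0].name: frame})[0]
    score = float(det_out.max())
    if score < CANDIDATE_THRESHOLD:
        return None  # nothing interesting; spend no more compute
    # Stage 2 runs only for the small fraction of frames that matter.
    cls_out = classifier.run(None, {classifier.get_inputs()[0].name: frame})[0]
    return {"candidate_score": score, "class_logits": cls_out.tolist()}
```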

2.3 Command, control, and telemetry plane

The telemetry plane should be separated from the inference plane. That separation makes it easier to prioritize health data, model outputs, logs, and raw sensor snippets independently. A common mistake is to send everything over one channel and assume QoS will save you; in reality, a clean architecture with explicit priorities is easier to operate and safer under degraded links. Telemetry should include model version, feature schema hash, runtime memory pressure, inference latency percentiles, and data quality indicators such as blur, saturation, or sensor drift.
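The shape of such a telemetry record can be sketched in a few lines; the field names and the schema-hash scheme here are assumptions, not a standard:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class ModelTelemetry:
    model_version: str
    schema_hash: str        # hash of the feature schema the model expects
    latency_p50_ms: float
    latency_p95_ms: float
    rss_bytes: int          # runtime memory pressure
    blur_score: float       # data quality indicator, 0..1
    saturation_score: float

def feature_schema_hash(schema: dict) -> str:
    """Stable hash so the ground can detect schema drift without the full schema."""
    canonical = json.dumps(schema, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()[:16]

record = ModelTelemetry(
    model_version="surv-det-2.4.1",
    schema_hash=feature_schema_hash({"frame": "uint8[1080,1920,3]"}),
    latency_p50_ms=18.0, latency_p95_ms=41.0,
    rss_bytes=412_000_000, blur_score=0.07, saturation_score=0.02,
)
# Compact wire encoding: every byte on the telemetry plane is budgeted.
wire_payload = json.dumps(asdict(record), separators=(",", ":")).encode()
```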

For systems that must remain privacy-compliant or security-hardened, keep personally identifiable data off-platform where possible, and use selective transmission. This approach parallels the thinking in cloud video privacy checklists and privacy-aware data handling, where the best security control is often data minimization before transmission.

3. Model Packaging for Constrained Platforms

3.1 Package models as artifacts, not ad hoc files

On HAPS, model deployment must be reproducible and reversible. That means packaging the model with its runtime dependencies, preprocessing logic, calibration constants, and schema metadata as a versioned artifact. A practical bundle includes the model weights, an inference manifest, a checksum, a hardware compatibility matrix, and a fallback policy for unsupported accelerators. If you only ship a .onnx file and hope the payload team remembers the right preprocessor, you are setting yourself up for silent failures.

The packaging format should also support deterministic startup checks. During boot, the platform should verify artifact signatures, confirm that the model matches the expected sensor schema, and ensure that memory and compute budgets are sufficient before activating inference. This resembles the disciplined rollout and validation mindset seen in clinical AI deployment and in endpoint policy enforcement—except here the cost of a bad deploy can be a stranded platform with no easy repair path.
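A minimal sketch of those startup checks, assuming a bundle directory containing a manifest.json; the manifest keys and the schema-hash handshake are illustrative:

```python
import hashlib
import json
from pathlib import Path

# Published by the ingestion layer at boot; value is illustrative.
CURRENT_SENSOR_SCHEMA_HASH = "9f3ac2e41b7d55aa"

def sha256_file(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_artifact(bundle_dir: Path) -> bool:
    """Deterministic startup checks: manifest exists, weights checksum
    matches, and the model's expected schema matches the live sensors."""
    manifest_path = bundle_dir / "manifest.json"
    if not manifest_path.exists():
        return False
    manifest = json.loads(manifest_path.read_text())
    weights = bundle_dir / manifest["weights_file"]
    if not weights.exists() or sha256_file(weights) != manifest["weights_sha256"]:
        return False  # truncated or corrupted artifact: fail closed
    return manifest["sensor_schema_hash"] == CURRENT_SENSOR_SCHEMA_HASH

def activate(candidate: Path, last_known_good: Path) -> Path:
    """Never activate a half-valid bundle; keep the prior version running."""
    return candidate if verify_artifact(candidate) else last_known_good
```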

3.2 Optimize for size, speed, and thermal profile

Model compression should be evaluated in the context of platform thermals and duty cycle, not just benchmark accuracy. Quantization can dramatically reduce memory bandwidth and compute load, but the resulting accuracy delta should be validated on the actual sensor distribution you expect in the stratosphere. Pruning can help, but unstructured sparsity may not deliver if your hardware lacks efficient sparse kernels. Distillation is often the most practical route when you need a small student model that preserves the behavior of a larger teacher.

A useful pattern is to define three tiers: a “safe minimum” model for degraded mode, a “normal operations” model for routine inference, and an “enhanced” model used when conditions permit. That mirrors the decision discipline in live-service systems and response workflows such as rapid response templates and live-service lessons from multiplayer games, where the platform needs a known fallback posture when things get noisy.
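One way to codify that tiering, with illustrative power and thermal budgets standing in for real platform characterization:

```python
from enum import Enum

class Tier(Enum):
    SAFE_MINIMUM = "safe_minimum"  # degraded mode: smallest footprint
    NORMAL = "normal"              # routine operations
    ENHANCED = "enhanced"          # only when conditions permit

# Illustrative budgets; real numbers come from platform characterization.
TIER_BUDGETS = {
    Tier.ENHANCED:     {"min_watts": 18.0, "max_core_temp_c": 55.0},
    Tier.NORMAL:       {"min_watts": 9.0,  "max_core_temp_c": 65.0},
    Tier.SAFE_MINIMUM: {"min_watts": 0.0,  "max_core_temp_c": 75.0},
}

def select_tier(available_watts: float, core_temp_c: float) -> Tier:
    """Pick the richest tier whose power and thermal budgets are satisfied;
    SAFE_MINIMUM always qualifies, so there is a known fallback posture."""
    for tier in (Tier.ENHANCED, Tier.NORMAL, Tier.SAFE_MINIMUM):
        b = TIER_BUDGETS[tier]
        if available_watts >= b["min_watts"] and core_temp_c <= b["max_core_temp_c"]:
            return tier
    return Tier.SAFE_MINIMUM
```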

3.3 Validate packaging under realistic failure modes

Packaging tests should simulate truncated downloads, partial updates, checksum mismatches, and runtime library incompatibilities. The model should fail closed: if the weights are invalid, the system should keep the last-known-good version running instead of attempting a half-initialized activation. This is especially important on HAPS because a failed update path may require waiting for the next maintenance window or ground intervention.

One good practice is to integrate packaging verification into your CI pipeline and your ground-station deployment workflow. The pattern is similar to the kind of operational rigor described in cost-control patterns for AI projects and fleet security policies: make unsafe states hard to reach, not merely detectable afterward.

4. Bandwidth and Latency Trade-Offs: What Actually Moves the Needle

4.1 Default to onboard inference

The simplest rule is also the most important: if the platform can infer locally, it should. In many surveillance workflows, transmitting a tiny event record, cropped imagery, or embedding vector is vastly cheaper than streaming raw video. For diagnostics, an anomaly score, rolling window summary, or fault class can preserve most of the value while reducing link usage by orders of magnitude. That matters because bandwidth on HAPS is often shared among commands, telemetry, payload data, and contingency traffic.
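A back-of-envelope sketch makes the "orders of magnitude" claim concrete; the frame size and event fields are illustrative:

```python
import json

# Illustrative size: a single 1080p RGB frame, uncompressed.
RAW_FRAME_BYTES = 1920 * 1080 * 3  # ~6.2 MB

event = {
    "t": 1767662400.0,              # capture time
    "sensor": "eo_cam_1",
    "cls": "vehicle",
    "conf": 0.91,
    "bbox": [412, 218, 96, 54],     # x, y, w, h in pixels
    "embedding": [0.0] * 128,       # optional compact descriptor
}
event_bytes = len(json.dumps(event).encode())

print(f"raw frame : {RAW_FRAME_BYTES:,} bytes")
print(f"event rec : {event_bytes:,} bytes")
print(f"ratio     : {RAW_FRAME_BYTES / event_bytes:,.0f}x")
```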

This is where edge AI earns its keep. Onboard inference reduces bandwidth, improves response time, and makes the system more resilient to downlink interruptions. It also lets you shape the human workflow: the ground team sees only prioritized events, which lowers cognitive load and improves time-to-action. Similar “signal over volume” strategies appear in organic value measurement and action-oriented impact reporting.

4.2 When to stream raw data anyway

There are cases where raw data still belongs in the architecture. Model drift investigations, incident forensics, and periodic calibration audits may require representative raw samples from a small subset of time windows. The key is to treat raw streaming as an exception path, not the default mode. A good design stores local clips or compressed sensor bursts and uplinks them only when an anomaly is detected, when the link is idle, or when the ground requests a sample.

This hybrid approach offers the best of both worlds: low steady-state bandwidth and high diagnostic fidelity when needed. The same idea shows up in simulation-first capacity planning and in traffic management for high-demand events. Design for selective escalation, not constant verbosity.

4.3 Latency is not just network delay

Many teams define latency only as the time between sensor capture and downlinked result, but on HAPS the more important metric is time-to-decision. If the onboard model can trigger a local action—such as storing a clip, retuning the camera, flagging an engine maintenance ticket, or adjusting a sensor duty cycle—you may not need a ground round trip at all. In those cases, the platform itself becomes the real-time actor, and the ground system becomes the supervisory layer.

This distinction matters when you model budgets. Network latency can be hundreds of milliseconds or seconds, but if the onboard pipeline makes a useful decision in under 50 ms, the system may still meet operational goals. That is why the architecture should separate local actuation from human review. For a useful framing on designing systems that adapt to real conditions rather than ideal ones, see real-time guided experience systems.

5. Update Paths: Safe Model Rollouts in Intermittently Connected Systems

5.1 Use staged, versioned updates

On HAPS, model updates should be staged like firmware updates for mission-critical systems. A robust process typically includes artifact signing, preflight validation on the ground, canary deployment to a subset of inference nodes, and automatic rollback if health metrics degrade. If the platform has multiple compute nodes or partitioned workloads, you can update one path while keeping a known-good fallback active.

Versioning must include more than weights. You should version the feature schema, preprocessing parameters, threshold configuration, and post-processing logic together. If any of those drift independently, your “same model” may become a different system in practice. This kind of dependency discipline is familiar to teams reading shipping AI-enabled medical devices safely or managing operational change in connected safety devices, where traceability is non-negotiable.

5.2 Delta updates reduce bandwidth pressure

When link capacity is tight, delta updates can be more practical than full artifact replacement. You can compress new model weights relative to the previous version, or ship only changed components such as a new threshold pack, a calibrated quantization table, or an updated class map. Delta updates are especially useful for frequent tuning cycles during early deployment, when the model is still being adapted to real-world stratospheric data.

That said, delta updates increase complexity. They require deterministic reconstruction, strict checksum validation, and a robust fallback when the base version is missing or corrupted. For many teams, the safest compromise is a weekly or monthly full snapshot plus smaller configuration deltas between snapshots. The reasoning is similar to the “buy vs. build” trade-off in buy-vs-build decisions: lower bandwidth is attractive, but complexity can erase the benefit if operational confidence drops.
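The checksum discipline can be sketched independently of the patch format; patch_fn below stands in for whatever bsdiff-style tool you adopt, and the fallback behavior is the point:

```python
import hashlib
from pathlib import Path
from typing import Callable

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def apply_delta(
    base: Path,
    delta: Path,
    expected_base_sha: str,
    expected_result_sha: str,
    patch_fn: Callable[[bytes, bytes], bytes],
) -> bytes | None:
    """Deterministic reconstruction: verify the base BEFORE patching and the
    result AFTER, and refuse to activate anything that does not match."""
    base_bytes = base.read_bytes()
    if sha256(base_bytes) != expected_base_sha:
        return None  # base missing or corrupt: fall back to a full snapshot
    result = patch_fn(base_bytes, delta.read_bytes())
    if sha256(result) != expected_result_sha:
        return None  # bad reconstruction: fail closed
    return result
```

If apply_delta returns None, the onboard updater should request a full snapshot rather than retrying the delta blindly.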

5.3 Offline-first update logistics

Because HAPS connectivity can be intermittent, your update pipeline must be offline-first. The onboard system should cache pending updates, verify them locally, and apply them only when prerequisites are met. Ground systems should expose clear status: uploaded, queued, validated, activated, or rolled back. Every stage should emit telemetry so operators can tell whether a stale model is due to transport, validation, or an application fault.
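A minimal sketch of that status model as an explicit state machine; the states mirror the ones listed above, and the telemetry callback is an assumed integration point:

```python
from enum import Enum, auto

class UpdateState(Enum):
    UPLOADED = auto()
    QUEUED = auto()
    VALIDATED = auto()
    ACTIVATED = auto()
    ROLLED_BACK = auto()

# Legal transitions only; anything else is a fault worth alerting on.
TRANSITIONS = {
    UpdateState.UPLOADED:    {UpdateState.QUEUED},
    UpdateState.QUEUED:      {UpdateState.VALIDATED, UpdateState.ROLLED_BACK},
    UpdateState.VALIDATED:   {UpdateState.ACTIVATED, UpdateState.ROLLED_BACK},
    UpdateState.ACTIVATED:   {UpdateState.ROLLED_BACK},
    UpdateState.ROLLED_BACK: set(),
}

class UpdateTracker:
    def __init__(self, artifact_id: str, emit_telemetry):
        self.artifact_id = artifact_id
        self.state = UpdateState.UPLOADED
        self.emit = emit_telemetry

    def advance(self, new_state: UpdateState) -> bool:
        if new_state not in TRANSITIONS[self.state]:
            self.emit({"artifact": self.artifact_id, "fault": "illegal_transition",
                       "from": self.state.name, "to": new_state.name})
            return False
        self.state = new_state
        # Every stage emits telemetry so operators can tell whether a stale
        # model is stuck in transport, validation, or activation.
        self.emit({"artifact": self.artifact_id, "state": new_state.name})
        return True
```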

Good operators also keep human-readable change notes. When the platform returns from a link outage, the team should know exactly which model is active, which data snapshot informed it, and what behavioral differences to expect. This is the same operational clarity emphasized in automation playbooks and AI agent ops guides, where observability turns automation from a black box into a managed system.

6. Telemetry Design: What to Measure and What Not to Send

6.1 Model health metrics

Telemetry should answer three questions: is the model alive, is it useful, and is it still trustworthy? Useful health metrics include inference latency p50/p95, queue depth, accelerator utilization, memory fragmentation, confidence distributions, and label entropy over time. For vision pipelines, also include scene quality indicators such as motion blur, contrast, saturation, and occlusion rate. For engine diagnostics, capture sensor drift, missingness, frequency-domain shifts, and feature stability across rolling windows.
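A small onboard aggregator along these lines might look like the following; the window size and summary fields are illustrative:

```python
import math
from collections import Counter, deque

class HealthWindow:
    """Aggregate locally, uplink summaries: rolling latency percentiles
    plus label entropy as a cheap trustworthiness signal."""

    def __init__(self, max_samples: int = 1024):
        self.latencies: deque[float] = deque(maxlen=max_samples)
        self.labels: deque[str] = deque(maxlen=max_samples)

    def record(self, latency_ms: float, label: str) -> None:
        self.latencies.append(latency_ms)
        self.labels.append(label)

    def _pct(self, p: float) -> float:
        s = sorted(self.latencies)
        return s[min(len(s) - 1, int(len(s) * p / 100))]

    def summary(self) -> dict:
        if not self.latencies:
            return {"count": 0}
        counts = Counter(self.labels)
        total = len(self.labels)
        entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
        return {"count": total, "p50_ms": self._pct(50),
                "p95_ms": self._pct(95), "label_entropy_bits": round(entropy, 3)}
```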

Model health telemetry is not the same as application logs. Avoid dumping every intermediate tensor or high-volume debug output unless you are in a controlled diagnostic session. Instead, aggregate locally and uplink summaries. A comparable principle is discussed in small analytics stacks, where the value comes from the right metrics, not all metrics.

6.2 Operational telemetry

Operational telemetry should capture platform-level constraints: battery state, solar harvest, thermal headroom, storage wear, link quality, and compute throttling. On HAPS, model performance is inseparable from platform health, because a hot accelerator or a power-saving mode can alter inference quality and response time. Operators need visibility into these couplings to distinguish a bad model from a constrained platform.

It is often useful to define health envelopes. For example, a surveillance workload may remain “degraded but acceptable” if p95 latency stays under a threshold and detection recall remains above a minimum, while diagnostic workloads may prioritize precision to avoid unnecessary maintenance escalations. These thresholds should be codified and versioned, not left to operator intuition.
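Codifying an envelope can be as simple as a frozen, versioned config object; the thresholds below are placeholders, not recommendations:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HealthEnvelope:
    """Codified, versioned thresholds; not operator intuition."""
    version: str
    max_p95_latency_ms: float
    min_recall_estimate: float  # proxy metric maintained onboard

    def classify(self, p95_ms: float, recall_est: float) -> str:
        if p95_ms <= self.max_p95_latency_ms and recall_est >= self.min_recall_estimate:
            return "nominal"
        if p95_ms <= 2 * self.max_p95_latency_ms and recall_est >= 0.8 * self.min_recall_estimate:
            return "degraded_but_acceptable"
        return "out_of_envelope"

# Illustrative envelope for a surveillance workload.
SURVEILLANCE_V3 = HealthEnvelope(version="3.1.0",
                                 max_p95_latency_ms=120.0,
                                 min_recall_estimate=0.85)
```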

6.3 Privacy-aware telemetry minimization

Many HAPS use cases intersect with surveillance or sensitive infrastructure, so telemetry minimization is not optional. Send just enough to support operations and compliance, and keep the raw evidence local whenever possible. If the system supports export, the export path should be auditable and policy-driven, with clear retention rules and redaction controls.

That approach aligns with best practices in cloud video privacy and auditable document workflows. Trust is built by reducing unnecessary collection and by making every retained byte explainable.

7. Reliability Patterns for Harsh Operating Conditions

7.1 Degraded mode is a feature, not a failure

Every HAPS deployment should define degraded modes explicitly. If the accelerator fails, the system should switch to a smaller model or a lower duty cycle. If the link is lost, the platform should continue onboard inference and cache outputs until reconnection. If storage approaches capacity, the system should prune low-value artifacts first, not the evidence needed for incident review.
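A sketch of an explicit posture map, with illustrative thresholds; the point is that each fault has a named, testable behavior:

```python
def choose_posture(accelerator_ok: bool, link_ok: bool,
                   storage_frac_used: float) -> dict:
    """Map platform faults to explicit degraded behaviors rather than
    letting failures cascade implicitly. Thresholds are illustrative."""
    posture = {"model": "normal", "duty_cycle": 1.0,
               "cache_outputs": False, "prune_policy": None}
    if not accelerator_ok:
        posture["model"] = "safe_minimum"  # smaller, CPU-friendly model
        posture["duty_cycle"] = 0.25       # lower inference rate
    if not link_ok:
        posture["cache_outputs"] = True    # keep inferring, buffer results
    if storage_frac_used > 0.9:
        posture["prune_policy"] = "low_value_first"  # never evidence first
    return posture
```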

Designing for degradation is one of the clearest ways to improve resilience. It is the same principle behind live-service resilience patterns and safety device fail-safes, where “keep operating safely” is more important than “stay fully featured.”

7.2 Protect against silent drift

Drift on HAPS can be environmental, sensor-related, or operational. Atmospheric changes, seasonal lighting differences, hardware aging, and calibration drift can all alter model behavior. The solution is a drift detection loop that monitors feature distributions, confidence shifts, and downstream operator overrides. When drift crosses a threshold, the platform should trigger a reduced-trust state and request ground review.

Do not wait for accuracy reports weeks later. Put lightweight drift sentinels onboard and heavier analysis on the ground. If you want a system-level analogy, think about how fraud controls detect suspicious feedback loops before they corrupt the entire pipeline.
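One common lightweight sentinel is the population stability index over binned feature values; this sketch assumes reference bin fractions captured at training time, and the 0.25 threshold is a general rule of thumb rather than a HAPS-specific value:

```python
import math

def psi(expected_frac: list[float], observed_frac: list[float],
        eps: float = 1e-6) -> float:
    """Population stability index between a reference (training-time)
    feature distribution and the live onboard distribution."""
    total = 0.0
    for e, o in zip(expected_frac, observed_frac):
        e, o = max(e, eps), max(o, eps)
        total += (o - e) * math.log(o / e)
    return total

# Common rule of thumb: PSI < 0.1 stable, 0.1-0.25 watch, > 0.25 drifted.
REDUCED_TRUST_THRESHOLD = 0.25

def check_drift(expected, observed, request_ground_review) -> str:
    score = psi(expected, observed)
    if score > REDUCED_TRUST_THRESHOLD:
        request_ground_review({"psi": round(score, 3)})
        return "reduced_trust"
    return "nominal"
```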

7.3 Keep rollback simple

Rollback should be a first-class API, not a manual rescue procedure. The platform should keep the last-known-good model resident until the new one has passed its acceptance window. If the new model increases false positives or exceeds latency budgets, rollback should restore prior behavior without requiring operator intervention. Keep rollback metadata, including why it happened, what triggered it, and what data informed the decision.
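A sketch of rollback as a first-class object, assuming illustrative guard thresholds and an hour-long acceptance window:

```python
import time

class RollbackManager:
    """Keep last-known-good resident; promote a candidate only after its
    acceptance window passes without tripping a guard."""

    def __init__(self, lkg_model, candidate, window_s: float = 3600.0):
        self.lkg = lkg_model
        self.candidate = candidate
        self.active = candidate
        self.window_end = time.monotonic() + window_s
        self.history: list[dict] = []  # rollback metadata for the ground team

    def guard(self, p95_ms: float, fp_rate: float,
              max_p95_ms: float = 120.0, max_fp_rate: float = 0.05) -> None:
        if self.active is self.candidate and (p95_ms > max_p95_ms or fp_rate > max_fp_rate):
            self.history.append({
                "event": "auto_rollback", "t": time.time(),
                "trigger": {"p95_ms": p95_ms, "fp_rate": fp_rate},
            })
            self.active = self.lkg  # no operator intervention required

    def maybe_promote(self) -> None:
        if self.active is self.candidate and time.monotonic() >= self.window_end:
            self.lkg = self.candidate  # candidate becomes last-known-good
```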

The simpler the rollback, the more aggressively you can iterate. That agility matters because HAPS deployments often need tuning after real-world exposure. The operational lesson is familiar across domains: good systems are not the ones that never fail; they are the ones that recover predictably.

8. Practical Deployment Patterns by Workload

8.1 Surveillance and reconnaissance

For surveillance, the winning architecture is usually hierarchical. A low-cost detector identifies candidate frames or regions, a tracker reduces redundancy, and a second-stage classifier or anomaly detector ranks severity. This lets you preserve situational awareness without saturating bandwidth. You can also use event-driven capture, where the platform stores pre-roll and post-roll around detections instead of continuous raw streams.
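The pre-roll/post-roll idea reduces to a ring buffer; the frame counts here are illustrative:

```python
from collections import deque

class EventClipBuffer:
    """Continuous ring buffer of recent frames; on detection, bundle
    pre-roll + post-roll instead of streaming continuously."""

    def __init__(self, pre_frames: int = 30, post_frames: int = 30):
        self.pre = deque(maxlen=pre_frames)
        self.post_frames = post_frames
        self._capturing = 0
        self._clip: list[bytes] = []
        self.clips: list[list[bytes]] = []  # ready for store-and-forward

    def push(self, frame: bytes, detected: bool) -> None:
        if self._capturing:
            # Collecting post-roll; detections during capture simply extend
            # the clip's content, not its length, in this simple sketch.
            self._clip.append(frame)
            self._capturing -= 1
            if self._capturing == 0:
                self.clips.append(self._clip)
                self._clip = []
        elif detected:
            self._clip = list(self.pre) + [frame]  # pre-roll context
            self._capturing = self.post_frames
        self.pre.append(frame)
```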

When the mission requires evidence retention, use a policy that bundles cropped frames, confidence, metadata, and a short context window. This gives analysts enough context to validate the event without pulling the entire feed. For more thinking on context-aware delivery, see personalized matchday communications and real-time guided experiences, where the system acts on relevance rather than volume.

8.2 Engine diagnostics

Engine diagnostics benefit from compact temporal models, feature extraction at the edge, and rules that translate anomaly scores into actionable maintenance signals. In many cases, onboard AI should not attempt a full root-cause analysis; it should classify the type of anomaly, estimate confidence, and package the supporting telemetry for later review. That is enough to reduce downtime and allow maintenance planners to prioritize investigation.

For diagnostic pipelines, false positives are expensive because they can trigger unnecessary inspections, while false negatives can hide emerging failures. That means you want calibrated thresholds and perhaps asymmetric decision logic. Teams used to operational monitoring in regulated systems will recognize this style from medical device validation and from fire detection systems.
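Asymmetric decision logic can be as small as two thresholds; the values below are placeholders to be calibrated against real inspection costs:

```python
def maintenance_decision(anomaly_score: float,
                         alert_threshold: float = 0.9,
                         watch_threshold: float = 0.6) -> str:
    """Asymmetric logic: raising an inspection is expensive, so alerts need
    high confidence; 'watch' states are cheap, so they trigger earlier."""
    if anomaly_score >= alert_threshold:
        return "raise_maintenance_ticket"
    if anomaly_score >= watch_threshold:
        return "watch_and_bundle_telemetry"
    return "nominal"
```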

8.3 Environmental and situational sensing

Environmental payloads are a great fit for edge AI because the onboard system can aggregate, de-noise, and summarize signals before downlinking them. For example, a platform may compute air-quality trends, cloud formation features, or storm-cell indicators locally, then transmit compact geospatial summaries. That keeps the downlink focused on decisions and trends rather than raw sensor firehose data.

If you are building multimodal sensing, consider a shared feature bus with workload-specific heads. That structure allows a single normalization layer to support multiple models while preserving modularity. It is also easier to evolve over time than a monolithic sensor pipeline.
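A sketch of the shared-bus-plus-heads structure, assuming a simple registry; the head functions are toy summaries, not real geophysical features:

```python
from typing import Callable
import numpy as np

HEADS: dict[str, Callable[[np.ndarray], dict]] = {}

def head(name: str):
    """Registry decorator so each workload head is independently versioned."""
    def register(fn: Callable[[np.ndarray], dict]):
        HEADS[name] = fn
        return fn
    return register

def normalize(sample: dict) -> np.ndarray:
    """Single shared normalization layer feeding every head."""
    v = np.asarray(sample["values"], dtype=np.float32)
    return (v - v.mean()) / (v.std() + 1e-6)

@head("air_quality_trend")
def air_quality(features: np.ndarray) -> dict:
    return {"trend": float(features[-8:].mean())}  # toy rolling summary

@head("storm_cell_indicator")
def storm_cell(features: np.ndarray) -> dict:
    return {"variance": float(features.var())}

def publish(sample: dict) -> dict:
    """Fan one normalized sample out to all registered heads."""
    features = normalize(sample)
    return {name: fn(features) for name, fn in HEADS.items()}
```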

9. Data Governance, Security, and Compliance

9.1 Minimize data collection at the source

The best compliance posture is often to avoid collecting unnecessary data in the first place. For HAPS, this means local filtering, short retention windows, selective evidence capture, and strict export policies. If a frame, log, or diagnostic trace does not serve an operational purpose, it should not leave the platform. This lowers privacy risk, reduces storage pressure, and simplifies legal review.

Security and privacy controls should be designed into the data plane, not bolted onto the ground station after the fact. That is the same philosophy explored in privacy-aware data navigation and cloud video security checklists.

9.2 Sign everything, log everything important

Artifact signing, secure boot, encrypted telemetry, and immutable deployment logs are essential. If the platform is updating models in the field, you need cryptographic provenance for every artifact and every decision about whether it was activated. Without that, troubleshooting and compliance become unacceptably hard.

Important logs include deployment approvals, update checksums, rollback triggers, model version transitions, and policy changes. Avoid logging raw sensitive payloads unless necessary, and when you do, control access tightly. The governance model should make it easy to answer: what ran, when did it run, what data did it see, and why did it make that decision?

9.3 Design for cross-border and mission-specific constraints

Depending on deployment region and application, the same platform may face different rules around telemetry retention, imagery export, or infrastructure surveillance. Your architecture should separate policy from implementation so the platform can be configured per mission profile. A clean policy layer lets legal, security, and operations teams make bounded changes without rewriting the inference stack.

This policy abstraction is similar to how enterprise teams manage content, communications, and data access across regions in accessibility systems and in merged recognition programs, where local rules shape the final behavior.

10. Implementation Checklist and Comparison Table

10.1 Architecture comparison

The table below compares common HAPS AI deployment patterns so you can quickly choose the right approach for your workload and operating envelope. The right answer is rarely “maximum accuracy”; it is usually the best mix of robustness, bandwidth efficiency, and maintainability. Treat this as a starting point for design reviews and test plans.

| Pattern | Best for | Bandwidth use | Latency | Operational risk | Notes |
|---|---|---|---|---|---|
| Raw stream to ground | Forensics, early R&D | Very high | High | Low technical risk, high link risk | Use only when bandwidth is abundant or during limited test windows. |
| Onboard detect, ground classify | Surveillance triage | Medium | Medium | Moderate | Good balance when onboard compute is limited but event filtering matters. |
| Full onboard inference | Real-time alerts, engine health scoring | Low | Low | Moderate | Requires strong packaging, validation, and rollback discipline. |
| Hierarchical cascade | Complex multimodal missions | Low to medium | Low to medium | Higher complexity | Best when workloads have clear cheap-first and expensive-second stages. |
| Store-and-forward with event clips | Intermittent links | Low | Variable | Low to moderate | Excellent for high-latency or partially disconnected missions. |

10.2 Practical rollout checklist

Before shipping a HAPS model, verify the following: the model is quantized or otherwise size-optimized for the target accelerator; the package includes schema and preprocessing metadata; the deployment path supports signatures and rollback; telemetry distinguishes platform health from model health; and the system can continue operating in degraded mode. Also test recovery after intermittent connectivity, partial update failure, and storage exhaustion.

For teams used to conventional cloud ML, this checklist is a reminder that operational “last mile” work is not optional. If you want to estimate the economics of your pipeline, borrow cost-awareness ideas from cost-control engineering and apply them to link usage, storage churn, and operator time, not just GPU hours.

11. Principles That Hold Up Over Long Missions

11.1 Prioritize deterministic behavior

The most valuable thing in a HAPS AI stack is not a flashy benchmark; it is predictability. Deterministic preprocessing, explicit thresholds, bounded runtimes, and documented fallback states make the system operable over long missions. If the platform behaves consistently, operators can trust it, tune it, and automate around it.

This predictability becomes more important as the system matures. A HAPS platform with multiple mission profiles may carry one software family into surveillance, another into diagnostics, and a third into environmental monitoring. Shared infrastructure should remain stable while the models evolve independently.

11.2 Treat every byte as expensive

Bandwidth budgeting should influence every layer of the stack. Cache locally, compress intelligently, send summaries before raw data, and gate high-volume uploads behind conditions or operator requests. If you do that well, your platform will feel much smarter than it is, because it spends its limited communication budget on the most useful information.

This is the same “signal discipline” that helps other constrained systems, from high-demand feed management to designing reports that drive action. Good engineering focuses attention where it counts.

11.3 Invest in observability before scale

If your fleet will grow, observability should be in place before the first major operational expansion. You need metrics for inference, storage, power, thermal state, and link performance, plus logs that can reconstruct what happened during an incident. Without that foundation, scale turns small mistakes into expensive mysteries.

For teams thinking ahead, the lesson is clear: build the ground tooling, telemetry schemas, and update pipeline with the same seriousness you give model training. In HAPS, the system is the product, not just the network or the model.

12. FAQ: HAPS Edge AI Deployment Questions

What is the best model format for HAPS deployment?

There is no universal winner, but ONNX is often a strong interoperability layer, especially when paired with hardware-specific optimization such as TensorRT, OpenVINO, or vendor runtimes. The important thing is to package the model together with its preprocessing, schema metadata, and compatibility checks so deployment is reproducible. If your hardware is highly specialized, a native runtime may outperform generic formats, but it should still be wrapped in a versioned artifact with rollback support.

Should all inference run onboard?

No. The right split depends on bandwidth, latency requirements, and the cost of false positives or false negatives. Many systems do best with a two-stage design: cheap onboard filtering followed by ground-side review or heavier analysis. Use onboard inference when you need real-time action or want to reduce bandwidth, and keep some ground analysis for explainability, calibration, and forensics.

How often should HAPS models be updated?

Update frequency should be driven by drift, mission type, and connectivity windows. Early in deployment, you may update frequently to tune thresholds and packaging, then settle into slower, scheduled releases once the model stabilizes. The key is to keep update mechanisms safe, cached, and reversible so you can ship improvements without risking mission continuity.

What telemetry is most important?

The most important telemetry usually includes model version, latency percentiles, queue depth, memory pressure, power state, thermal headroom, sensor quality metrics, and drift indicators. Those signals tell you whether the model is healthy, whether the platform can sustain it, and whether performance is degrading because of data, hardware, or software. Avoid over-collecting raw data unless you have a specific diagnostic reason.

How do you handle low-bandwidth or disconnected periods?

Design for store-and-forward operation. Cache high-value evidence locally, send summaries first, and use event-based uploads when connectivity returns. Also ensure the platform can keep running its onboard models in degraded mode, so a temporary link loss does not stop detection or diagnostic scoring.

What is the biggest mistake teams make?

The biggest mistake is treating HAPS like a normal cloud endpoint. In reality, the platform is a long-duration, resource-constrained, intermittently connected system that demands strict packaging, observability, and rollback discipline. Teams that ignore these constraints often end up with models that look good in lab tests but are painful to operate in the stratosphere.


Related Topics

#edge #ai #haps

Daniel Mercer

Senior Technical Editor

Senior editor and content strategist. Writing about technology, design, and the future of digital media. Follow along for deep dives into the industry's moving parts.
