Sovereign Exchange

Diode & Airlock Connectors

Multi-tenant, multi-nation, multi-classification data exchange across public, private, and air-gapped clouds. ITAR, EAR, and NOFOR compliance by architecture. Dual-policy redaction. Three independent enforcement layers. Fail-closed by design at every boundary that matters.

Published 9 April 2026 · 28 min read · Thread: Data & Provenance · Sovereignty & Compliance

TL;DR

Moving data across classification, nationality, and tenancy boundaries is the single most fragile operation in regulated engineering. A partner nation needs access to an unclassified slice of a SECRET programme. A contract manufacturer needs the dimensional drawings but not the ITAR-controlled algorithm. A subcontractor’s PLM needs the engineering BOM but not the Export-Controlled metadata. A peer organisation on the other side of a Cross-Domain Solution needs a filtered feed that is provably incapable of writing back. Today, almost every one of these crossings is handled by email, USB sticks, manual redaction, and enterprise systems integrators billing by the week. The process is slow, inconsistent, auditor-unfriendly, and occasionally catastrophic.

Clarity replaces all of that with two typed, schema-disciplined, infrastructure-enforced connector patterns, jointly known as P-42 Diode & Airlock Cross-Classification Exchange.

  • Diode — one-way ingest into Clarity. Data flows in. Write-back is blocked at the infrastructure layer, independently of application logic, by three independent enforcement layers. Used for pulling evidence from legacy PLM, ERP, MES, MRO, or EAM systems into Clarity without any possibility of corrupting the source, and for transferring data from a higher-classification enclave to a lower-classification one with provable write-back impossibility.
  • Airlock — two-way exchange. Data flows in both directions, but only after both parties independently approve the policy that governs the flow, and either party can suspend it at any time. Used for peer-to-peer federation between Clarity instances, for bilateral data sharing between allied nations, and for any scenario where the exchange must be mutually authorised and mutually revocable.

Six structural properties make the architecture work, and together they are the Clarity USP for sovereign cross-classification exchange:

  1. Three independent enforcement layers for Diode write-back denial. An IAM inline deny on the connector Lambda role. An S3 bucket policy in the source account denying the same role. The complete absence of any EventBridge rule routing data events back toward the source. Each layer blocks write-back on its own, yet all three must be in place; a single layer is treated as insufficient assurance. Defence in depth at the infrastructure layer, independently verifiable by an auditor in minutes.
  2. Dual-policy redaction — Policy A and Policy B in parallel. Policy A is field-level sanitisation: named fields are stripped from the payload entirely before ingest, not masked, not obfuscated, removed. Policy B is classification and export-control filtering: entities are filtered by their @source.exportControl regime and releasability markings before the data ever leaves the source boundary. Stripped fields are recorded in @source.sanitisedFields for audit. Both policies run on every crossing.
  3. Fail-closed policy enforcement. Elsewhere in the Clarity kernel the rule is fail-open: infrastructure utility failures default to allowing writes so that an SSM outage cannot block engineering work. The connector policy enforcer is the single documented exception. Data crossing a classification, sovereignty, or tenancy boundary without a valid active policy is a security failure, not a productivity inconvenience. The enforcer fails closed, always, with no exceptions.
  4. ITAR, EAR, and NOFOR compliance by architecture, not by code path. The three regimes are modelled as structured fields on @source.exportControl and as an overlay on every Lx entity from L0 to L12. Non-compliance is not prevented by defensive code; it is prevented by schema. An entity whose @source.exportControl classification does not satisfy the destination’s policy cannot be ingested. Period.
  5. Identical deployment across commercial, sovereign, and air-gapped clouds. The connector stack is region-agnostic. ARNs are constructed with the deployment partition (aws, aws-us-gov, aws-iso, aws-iso-b) resolved at synth time. SQS is the primary ETL transport — it is available in every partition, including the fully-classified ISO regions. There is no Bedrock dependency in the connector path; the field-mapping layer is rules-based and config-driven, so air-gapped deployments run identical code to commercial ones.
  6. Joint approval, bilateral revocation, immutable audit. Every Airlock policy requires independent approval from both parties before any data moves. The approval is recorded in DynamoDB as an immutable state transition. Either party can suspend the policy from either side, at any time, with effect within a single SQS visibility timeout cycle. Every crossing, every sanitisation, every approval, every suspension is logged with full @source provenance.

If you only read one sentence: controlled cross-classification data exchange is not a compliance feature — it is an infrastructure property, and the infrastructure has to be architected for it on day one, not bolted on afterwards.


The bargain on offer

Every regulated engineering programme eventually reaches a moment where data has to cross a boundary it was never designed to cross. A partner nation joins a programme midway through and needs read-access to a filtered slice of the design baseline. A contract manufacturer is invited to bid on a sub-assembly and needs the ITAR-scrubbed drawings. A peer organisation at another agency has built their own Lx-style model and needs to federate selected parts of it with yours. A subcontractor’s PLM is the authoritative source for the harness design and the integrator needs to pull it into the system of record without any possibility of writing back into the subcontractor’s tool.

In every one of these cases, the question the security officer asks is the same, and it has been the same for forty years: “how do I guarantee that nothing flows the wrong way, that nothing sensitive leaks, that both sides agree to the exchange, and that the auditor can reconstruct exactly what crossed and when?”

In almost every one of these cases, the answer today involves email, USB sticks, a manual redaction pass, a folder on a shared drive, a cover letter signed by three people, and an enterprise systems integrator billing by the week to maintain the whole arrangement. The security officer knows the answer is inadequate. The programme director knows it. The auditor knows it. The regulator knows it. Nobody has a better option, because the commercial tools all solve the wrong problem: they solve access control within a classification, not controlled exchange across classifications.

"Legacy cross-classification data exchange is email,
USB sticks, manual redaction, and enterprise systems
integrators billing by the week.
The security officer knows it is inadequate.
Nobody has a better option."

This whitepaper explains how Clarity approaches the problem differently. It has three sections:

  • Section 1 — why the legacy cross-classification exchange pattern is structurally broken, and the five failure modes every programme the authors have worked inside eventually hits.
  • Section 2 — how the Diode and Airlock connectors work: the three independent enforcement layers, the dual-policy redaction model, joint-approval workflow, fail-closed enforcement, the four-stack CDK topology, and the region-agnostic deployment pattern.
  • Section 3 — what the architecture unlocks in practice: ITAR / EAR / NOFOR compliance by schema, identical deployment across commercial / sovereign / air-gapped clouds, multi-tenant multi-nation multi-classification support by construction, bilateral federation between Clarity instances, and the specific things that cannot be retrofitted into any stack that did not design for them on day one.

Section 1 — Why legacy cross-classification exchange is broken

Controlled data exchange across classification, sovereignty, or tenancy boundaries is not a new problem. Intelligence agencies have been doing it for sixty years. Defence primes have been doing it for fifty. Export-controlled programmes have been doing it for forty. The problem is solved, at national-security scale, by cross-domain solutions — specialised, accredited, eye-wateringly expensive hardware-and-software appliances that enforce one-way or policy-gated flow between classification domains.

The problem is that cross-domain solutions were built for file transfer and message passing, not for structured engineering data with provenance. A cross-domain solution will cheerfully move a PDF from a TOP SECRET network to an UNCLASSIFIED network after the PDF has been manually redacted. It will not traverse a thirteen-layer Lx graph, filter by export-control classification, sanitise the right fields per the destination’s policy, maintain the provenance chain of what was sanitised and why, and hand the result to the downstream engineering tool in a form the downstream tool can consume. Nothing built before the data-first era could do that, because the data-first era did not exist.

1.1 The five failure modes of legacy cross-classification exchange

Every programme the authors have worked inside that needed controlled cross-classification exchange — defence, nuclear, aerospace, allied-nation sharing, export-controlled sub-tier supply — has eventually hit the same five structural failure modes.

Failure mode 1 — Manual redaction at the boundary

The classic pattern is that someone at the high-classification side opens a document, manually redacts the sensitive parts, and saves a lower-classification copy. The redaction is performed by a human, under time pressure, usually against a checklist the human has mostly memorised. The consequences of getting it wrong are catastrophic; the frequency of getting it wrong, in programmes of any scale, is greater than zero.

Manual redaction does not scale to structured engineering data. A typical engineering programme — whether aerospace, defence, nuclear, automotive, medical device, heavy industry, or advanced manufacturing — has tens of thousands of CIs, hundreds of thousands of BOM Items, millions of provenance records, and an ongoing rate of change that is measured in changes per hour at peak. No human team can manually redact that volume at the rate the programme produces it. The result, in practice, is that the cross-classification exchange runs on a small subset of the data — usually the slowest-changing documents, usually the least informative subset — and the rest of the programme either operates on the high side only or drifts into shadow exchanges that never get audited.

Failure mode 2 — File-based transfer with no provenance

When the exchange moves from manual redaction to automated file-based transfer, the provenance problem gets worse, not better. A file lands on the low side. It has a filename. It has a timestamp. It has the name of the human who released it. It does not have a structured record of what was redacted from it, under which policy, with what authority, and how a downstream consumer should treat any claims the file makes. Downstream users read the file, trust it, and act on it. Auditors later try to reconstruct what was actually in the file at the time of the transfer, and the reconstruction is a guess.

A controlled exchange that cannot tell you, structurally and machine-readably, which fields were removed, why, and under which policy, is not a controlled exchange. It is a transfer with a redaction rumour attached.

Failure mode 3 — No infrastructure-level guarantee of one-way flow

Every programme that has relied on “one-way” exchange has, sooner or later, discovered that the one-way-ness was enforced by a policy rather than by infrastructure. A misconfiguration, a helpful administrator, a temporary debug rule, an integration “just for testing”, and suddenly the flow is two-way without anyone noticing. The audit trail shows the two-way period as a gap. The regulator treats the gap as a compromise. The programme pays.

One-way flow that depends on nobody ever misconfiguring anything is not one-way flow. It is a policy statement with no teeth. Real one-way flow requires multiple independent layers of infrastructure enforcement, each one verifiable by an auditor without trusting the others — the architectural principle the Diode pattern implements in §2.3 below.

Failure mode 4 — Bilateral exchanges without joint revocation

When two organisations set up a bilateral data exchange, they normally write a memorandum of understanding, stand up some connectivity, and run. Six months later, a dispute arises, a programme scope changes, or an organisational reshuffle happens, and one side wants to suspend the exchange. The question of how to suspend it turns out to be non-trivial. The connectivity is the joint responsibility of both sides; turning it off requires coordinating network, identity, and audit changes across two organisations that may no longer be on good terms.

The programmes the authors have seen have solved this, when they solved it at all, by phoning each other up and agreeing to turn it off, then praying that both sides actually did. The lack of a structural suspension mechanism — one that either party can trigger unilaterally, that takes effect in seconds, and that leaves an immutable audit trail — means that the “controlled” part of “controlled exchange” is a diplomatic courtesy rather than a technical guarantee.

Failure mode 5 — Every cross-domain integration is a bespoke SI engagement

The commercial consequence of failure modes 1 through 4 is that every cross-classification integration ends up being a bespoke systems-integrator engagement. The SI writes custom adapters, builds custom redaction scripts, configures the cross-domain appliance, runs the accreditation workshops, trains the operators, and then maintains the whole arrangement under a multi-year contract. The cost is large. The reproducibility is zero. The reusability across programmes is near zero. The security officer ends up dependent on two or three people at the SI who actually understand the deployment and who will eventually leave.

The industry has been solving cross-classification exchange one programme at a time for forty years, at full SI rates every time, and is still solving it one programme at a time today. Nobody has ever offered a reusable, schema-disciplined, infrastructure-enforced pattern — because the data-first substrate such a pattern would need has not existed until now.

1.2 The common architectural root cause

Underneath all five failure modes is a single architectural root cause: the legacy cross-classification stack treats data as files, and the control mechanism as a network boundary. Files are opaque. Networks are binary. Neither of those substrates can support typed, field-level, policy-driven, provably-enforced exchange at the rate and volume a real engineering programme produces.

The fix is to move the substrate. Controlled exchange has to happen at the level of the typed data graph, not at the level of file transfer. It has to be enforced at the infrastructure layer, not at the application layer, and not at the policy layer. It has to produce structured provenance for every sanitisation, every approval, every crossing, every suspension. And it has to be deployable identically across commercial, sovereign, and air-gapped environments, because the engineering reality is that the same programme will cross all three.

That is what Diode and Airlock are, and that is what the remainder of this paper describes.


Section 2 — Diode and Airlock: the architecture

Clarity’s cross-classification exchange architecture is two patterns sharing one substrate. Both patterns move data across a boundary — a tenancy boundary, a nationality boundary, a classification boundary, or any combination. Both patterns enforce controlled exchange at the infrastructure layer rather than at the application layer. Both patterns produce structured @source provenance for every crossing. They differ in directionality: Diode is strictly one-way; Airlock is two-way with joint consent.

Together they are catalogued as architectural pattern P-42 Diode & Airlock Cross-Classification Exchange in Clarity’s canonical architectural-patterns catalogue, with overlay-group coverage in Overlay Group 8000 (.08006 Diode, .08007 Airlock, .08008 CrossDomainTraceLink). Every Clarity Lambda that participates in a cross-boundary exchange participates through the P-42 pattern; there is no alternative path.

2.1 The Diode — one-way, infrastructure-enforced

A Diode is a connector that carries data from a source into a target and does nothing else. It is not a general-purpose network pipe. It is a Lambda-based ETL chain running in a dedicated connector account, reading from a staging location in the source account, transforming and sanitising the data through a config-driven field-mapping layer, and writing the result into the target account. The source never sees anything from the target. The connector account writes only forward. The target account has no upstream reachability.

The Diode is used in two principal scenarios:

  • Legacy system ingest. A PLM, ERP, MES, or MRO system is the authoritative source for some slice of the engineering data (the eBOM, the mBOM, the in-service spares, the as-built serials). The Diode pulls that slice into Clarity on a continuous or scheduled basis. The source system is never written to by Clarity. The source remains the system of record for its domain; Clarity becomes the unified source of truth across all domains, derived by diode ingest. This de-risks Clarity adoption entirely: no legacy system has to be changed, no production data is at risk from Clarity misconfiguration, and the integration is provably one-way by infrastructure.
  • Cross-classification one-way transfer. A higher-classification enclave produces a filtered, sanitised slice of its data for a lower-classification consumer. The Diode is the one-way pipe. The higher enclave has no reach into the lower one from the Diode side — the only traffic is the sanitised outbound payload. Audit on the high side can verify by construction that no data from the low side ever arrived.

2.2 The Airlock — two-way, jointly approved, bilaterally revocable

An Airlock is the bidirectional pattern. Two organisations — two tenants, two nations, two classification domains, two allied programmes — stand up a connector account that both of them have visibility into, and define a policy that governs the data flow in both directions. The policy specifies the owners, the scope of data that can cross, the fields that must be sanitised on each direction, the classification constraints, and the approval requirements.

Before any data moves, both owners must independently approve the policy. The approval is a cryptographically-signed CLI action via the approve-connector-policy.sh script, using API-key authentication so that the approval workflow runs without depending on internet-resident identity providers — which matters enormously for air-gapped deployments where no commercial IdP is reachable. Only when both ownerA.approvalStatus and ownerB.approvalStatus are approved does the policy transition to active, and only then does the underlying cross-account IAM grant become effective.

After activation, data moves bidirectionally under the policy. Every payload is validated against the policy’s sanitisation and classification rules before it leaves its source side. Every crossing is logged. Every exception is quarantined to a dead-letter queue for human review, never silently dropped.

At any time, either party can unilaterally suspend the policy. The suspend action is a single CLI call on either side, recorded as an immutable state transition in the policy table in DynamoDB. Within one SQS visibility-timeout cycle — typically a few seconds — the suspension takes effect: no further payloads cross in either direction until the policy is re-approved by both owners. The architecture gives either party a real, unilateral, technically-enforced kill switch. It is the first time, in the authors’ experience, that a bilateral data-sharing agreement has had a working kill switch at all.
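
A minimal sketch of those state transitions, assuming a policy record shaped like the fields named above; the type and function names are illustrative, not Clarity's published schema:

```typescript
type ApprovalStatus = 'pending' | 'approved';
type PolicyStatus = 'draft' | 'active' | 'suspended';

interface AirlockPolicy {
  policyId: string;
  ownerA: { org: string; approvalStatus: ApprovalStatus };
  ownerB: { org: string; approvalStatus: ApprovalStatus };
  status: PolicyStatus;
  suspendedBy?: 'ownerA' | 'ownerB';
}

// Activation requires BOTH independent approvals; only then does the
// underlying cross-account IAM grant become effective.
function tryActivate(policy: AirlockPolicy): AirlockPolicy {
  const bothApproved =
    policy.ownerA.approvalStatus === 'approved' &&
    policy.ownerB.approvalStatus === 'approved';
  return bothApproved ? { ...policy, status: 'active' } : policy;
}

// Either party can suspend unilaterally; re-activation requires both
// approvals again. Each transition is recorded immutably in DynamoDB.
function suspend(policy: AirlockPolicy, by: 'ownerA' | 'ownerB'): AirlockPolicy {
  return {
    ...policy,
    status: 'suspended',
    suspendedBy: by,
    ownerA: { ...policy.ownerA, approvalStatus: 'pending' },
    ownerB: { ...policy.ownerB, approvalStatus: 'pending' },
  };
}
```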

"Every bilateral data-sharing agreement in regulated engineering
needs a kill switch. Until now, the kill switch was a phone call.
The Airlock gives both parties a real, unilateral,
technically-enforced suspension that takes effect in seconds."

2.3 Three independent enforcement layers — the heart of the Diode

The Diode’s one-way guarantee is not a single control. It is the composition of three independent infrastructure layers, each sufficient on its own: write-back would require the simultaneous failure of all three, and any one layer being misconfigured is caught by the other two.

Layer 1 — IAM inline deny on the connector Lambda role

The connector account’s Lambda execution role has an explicit CDK PolicyStatement with Effect: DENY on the actions s3:PutObject, s3:DeleteObject, and s3:PutObjectAcl, scoped to the source staging bucket. IAM deny is absolute — an explicit deny in any attached policy beats any allow anywhere else. The connector Lambda cannot write to the source bucket even if every other policy granted it permission, because the explicit deny always wins.
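
In CDK terms, Layer 1 reduces to one explicit deny statement on the execution role. A minimal sketch, with illustrative names (the real role and bucket identifiers are deployment-specific):

```typescript
import * as cdk from 'aws-cdk-lib';
import * as iam from 'aws-cdk-lib/aws-iam';

declare const connectorRole: iam.Role;   // the ETL Lambda's execution role

// Partition resolved at synth time; bucket name is a placeholder.
const partition = cdk.Stack.of(connectorRole).partition;
const stagingBucketArn = `arn:${partition}:s3:::source-staging-bucket`;

// Explicit DENY beats any Allow in any other attached policy, so the
// connector Lambda cannot write back even if another policy granted it.
connectorRole.addToPolicy(new iam.PolicyStatement({
  effect: iam.Effect.DENY,
  actions: ['s3:PutObject', 's3:DeleteObject', 's3:PutObjectAcl'],
  resources: [stagingBucketArn, `${stagingBucketArn}/*`],
}));
```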

Layer 2 — S3 bucket policy in the source account

The source account independently enforces the same constraint with a bucket policy. The source staging S3 bucket carries a policy statement denying s3:PutObject, s3:DeleteObject, and s3:PutObjectAcl to the principal of the connector Lambda role ARN. The source account does not trust the connector account’s IAM to be correct; it denies the action from its own side, independently. This layer is deployed via a ConnectorCrossAccountStack that runs in the source account at deploy time.

The independence matters. A future change to the connector account’s IAM — whether accidental, malicious, or well-intentioned — could silently grant write-back permission. The source-side bucket policy is a separate circuit breaker that would block the write even if Layer 1 failed.
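
A sketch of the corresponding source-side statement, as the ConnectorCrossAccountStack might express it; identifiers are again illustrative:

```typescript
import * as iam from 'aws-cdk-lib/aws-iam';
import * as s3 from 'aws-cdk-lib/aws-s3';

declare const stagingBucket: s3.Bucket;  // the source-account staging bucket
declare const connectorRoleArn: string;  // ARN of the connector Lambda role

// The source account denies the same actions from its own side; it does
// not trust the connector account's IAM to remain correct.
stagingBucket.addToResourcePolicy(new iam.PolicyStatement({
  effect: iam.Effect.DENY,
  principals: [new iam.ArnPrincipal(connectorRoleArn)],
  actions: ['s3:PutObject', 's3:DeleteObject', 's3:PutObjectAcl'],
  resources: [stagingBucket.bucketArn, stagingBucket.arnForObjects('*')],
}));
```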

Layer 3 — No EventBridge rule targeting the source

The connector account’s EventBridge configuration is audited at CDK review to ensure that no rule has the source account as an event target. The only rules in the connector account are (a) rules that route connector-account events to the target account (the forward data path) and (b) rules that route policy-state-change events within the connector account itself (the approval workflow). Any rule that targets the source is considered a P-42 violation and blocks the CDK PR from merging.

This third layer addresses a subtle attack surface: even if the connector Lambda cannot directly write to the source, an EventBridge rule routing the connector’s output events to a target in the source account could exfiltrate data indirectly. Layer 3 denies that exit by construction.

Verification — three independent checks an auditor can run in minutes

The three layers are all verifiable by an auditor, independently, without trusting the developers or the CI pipeline:

  1. Assume the connector Lambda role credentials. Attempt aws s3 cp to the source staging bucket. Expect AccessDenied. This verifies Layer 1.
  2. Remove Layer 1 in a test environment. Retry the aws s3 cp. Expect AccessDenied still, from the bucket policy. This verifies Layer 2’s independent sufficiency.
  3. Enumerate all EventBridge rules in the connector account. Grep for any rule whose TargetArn contains the source account ID. Expect zero matches. This verifies Layer 3 (a scripted version of this check is sketched below).

All three checks take minutes. None of them requires deep knowledge of the Clarity internals. An external auditor can satisfy themselves of the Diode’s one-way guarantee without reading a single line of the Lambda code.
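
The third check lends itself to automation. A sketch of what such an audit script might look like, using the AWS SDK against the connector account's default event bus; the account ID is a placeholder:

```typescript
import {
  EventBridgeClient,
  ListRulesCommand,
  ListTargetsByRuleCommand,
} from '@aws-sdk/client-eventbridge';

const SOURCE_ACCOUNT_ID = '111111111111'; // placeholder: the source account under audit
const client = new EventBridgeClient({});

// Enumerate every rule on the default event bus and fail if any target
// ARN points into the source account (extend to custom buses as needed).
async function auditLayer3(): Promise<void> {
  const offenders: string[] = [];
  let nextToken: string | undefined;
  do {
    const rules = await client.send(new ListRulesCommand({ NextToken: nextToken }));
    for (const rule of rules.Rules ?? []) {
      if (!rule.Name) continue;
      const targets = await client.send(new ListTargetsByRuleCommand({ Rule: rule.Name }));
      for (const target of targets.Targets ?? []) {
        if (target.Arn?.includes(SOURCE_ACCOUNT_ID)) {
          offenders.push(`${rule.Name} -> ${target.Arn}`);
        }
      }
    }
    nextToken = rules.NextToken;
  } while (nextToken);
  if (offenders.length > 0) {
    throw new Error(`P-42 violation: rules target the source account:\n${offenders.join('\n')}`);
  }
  console.log('Layer 3 verified: no EventBridge rule targets the source account.');
}

auditLayer3().catch((err) => { console.error(err); process.exit(1); });
```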

2.4 Dual-policy redaction — Policy A plus Policy B

Every crossing — Diode or Airlock — applies two redaction policies in parallel.

Policy A — Field-level sanitisation. The connector policy contains a sanitiseFields array listing specific field names to strip from every entity before it crosses the boundary. An example policy might specify "sanitiseFields": ["partNumber", "supplierCode", "unitCost"]. The connector ETL removes those fields from the payload entirely. They do not appear in the loaded entity body on the target side. They are not masked, tokenised, or obfuscated. They are gone. The fields that were stripped are recorded in the target-side @source.sanitisedFields array, so the downstream consumer knows which fields were removed and which policy removed them, but the values themselves never cross.
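
A sketch of what Policy A amounts to in code, with illustrative names rather than Clarity's published schema:

```typescript
interface ConnectorPolicy {
  policyId: string;
  sanitiseFields: string[];  // e.g. ['partNumber', 'supplierCode', 'unitCost']
}

function applyPolicyA(entity: Record<string, unknown>, policy: ConnectorPolicy): void {
  const stripped: string[] = [];
  for (const field of policy.sanitiseFields) {
    if (field in entity) {
      delete entity[field];  // removed entirely: not masked, not tokenised
      stripped.push(field);
    }
  }
  // Record which fields were removed, and by which policy, for the
  // target side's @source.sanitisedFields audit trail.
  const source = (entity['@source'] ?? {}) as Record<string, unknown>;
  source['sanitisedFields'] = stripped;
  source['sanitisedBy'] = policy.policyId; // hypothetical field name
  entity['@source'] = source;
}
```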

Policy B — Classification and export-control filtering. The same connector policy carries a set of classification and export-control constraints. Entities whose @source.exportControl classification does not satisfy the destination’s regime (ITAR-restricted items bound for a non-TAA destination, NOFOR items bound for a foreign recipient, SECRET items bound for an UNCLASS target) are filtered out before the data leaves the source boundary. The filter is applied at the source side of the connector, not the target side — the entities never reach the network between source and target. The entity-count-before versus entity-count-after is logged on every run, so the auditor can see exactly how many entities were excluded by classification filtering on each crossing.
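
And a sketch of Policy B's source-side filter, assuming an exportControl record shaped like the fields described in Section 3.1 (names illustrative; a classification-dominance check would follow the same shape):

```typescript
interface ExportControl {
  regime: 'ITAR' | 'EAR' | 'NOFOR' | 'NONE';
  releasability?: string[];  // e.g. ['REL TO FVEY']
}

interface DestinationPolicy {
  allowedRegimes: string[];  // regimes the destination may receive
  foreignRecipient: boolean;
}

type Entity = { ['@source']: { exportControl: ExportControl } };

// Applied on the SOURCE side of the connector: filtered entities never
// reach the network between source and target.
function passesPolicyB(e: Entity, dest: DestinationPolicy): boolean {
  const ec = e['@source'].exportControl;
  if (!dest.allowedRegimes.includes(ec.regime)) return false;
  if (ec.regime === 'NOFOR' && dest.foreignRecipient) return false;
  return true;
}

function applyPolicyB(entities: Entity[], dest: DestinationPolicy): Entity[] {
  const kept = entities.filter((e) => passesPolicyB(e, dest));
  // Entity counts before and after are logged on every run for audit.
  console.log(`Policy B: ${entities.length} entities in, ${kept.length} out`);
  return kept;
}
```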

Both policies run on every crossing. Both produce structured audit trails. Both are expressed as declarative config in the connector policy object, not as bespoke code — so a change to the sanitisation rules is a config update, reviewable in a single PR, without any Lambda redeployment.

2.5 Fail-closed enforcement — the documented exception to the P-13 rule

The Clarity kernel has a documented pattern, P-13 Fail-Open, which says that infrastructure utility functions (the quiesce check, the RAG resolver, the invariant evaluator) return safe defaults on error, so that an infrastructure outage cannot block engineering work. The rationale is that productivity is more important than perfect enforcement when the thing that has failed is an auxiliary service.

The connector policy enforcer is the single documented exception to P-13. It is fail-closed, always, with no productivity override. Data crossing a classification, sovereignty, or tenancy boundary without a valid active policy is a security failure, not an inconvenience. If the policy table is unreachable, crossings stop. If the policy is expired, crossings stop. If the policy is suspended by either party, crossings stop. If the validation of the payload against the policy fails, the payload is quarantined to a dead-letter queue. Under no circumstance does the connector fall back to “allow the write and figure it out later”.

This is the only place in the entire Clarity architecture where the fail-closed rule dominates the fail-open rule. It is documented explicitly because the rule is unusual and because future engineers need to understand that the Connector pipeline is not subject to the same productivity-preserving defaults the rest of the kernel uses. A connector that crosses a boundary when its policy is unknown is not a connector; it is a hole.
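
A sketch of the fail-closed shape, with hypothetical helpers standing in for the DynamoDB policy read, the payload validator, and the dead-letter queue:

```typescript
declare function loadPolicy(policyId: string): Promise<{ status: string; expiresAt: string } | undefined>;
declare function validateAgainstPolicy(payload: unknown, policy: unknown): boolean;
declare function quarantine(payload: unknown): Promise<void>;

class CrossingBlocked extends Error {}

async function enforce(policyId: string, payload: unknown): Promise<void> {
  let policy;
  try {
    policy = await loadPolicy(policyId);
  } catch {
    // P-13 fail-open does NOT apply here: an unreachable policy table
    // stops the crossing rather than waving it through.
    throw new CrossingBlocked('policy table unreachable; crossing stopped');
  }
  if (!policy || policy.status !== 'active' || new Date(policy.expiresAt) <= new Date()) {
    throw new CrossingBlocked(`policy ${policyId} is missing, inactive, or expired`);
  }
  if (!validateAgainstPolicy(payload, policy)) {
    await quarantine(payload);  // dead-letter queue, never silently dropped
    throw new CrossingBlocked('payload failed policy validation; quarantined');
  }
  // Only now may the payload cross the boundary.
}
```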

2.6 The four-stack CDK topology

The Diode and Airlock patterns are implemented as four interacting CDK stacks, deliberately separated so that cross-account deployment can be staged, reviewed, and audited without mixing concerns.

  • ConnectorPolicyStack — lives in the connector account. Owns the DynamoDB policy table, the approval-workflow Lambda, the CLI-invokable approve / suspend endpoints, and the EventBridge rules for policy state changes.
  • ConnectorCrossAccountStack — lives in the source account. Owns the S3 staging bucket, the source-side bucket policy (Layer 2 of the Diode enforcement), and the KMS alias for source-side encryption. Deployed by the source-account administrator, not by the connector operator.
  • ConnectorEtlStack — lives in the connector account. Owns the ETL Lambda chain (extract → transform → validate → load → trace), the SQS queues that connect the stages, and the dead-letter queues that quarantine policy failures.
  • ConnectorTargetStack — lives in the target account. Owns the target-side S3 bucket, the target-side KMS alias, and the cross-account grant that allows the connector account’s load Lambda to write.

The four-stack separation is itself a control. No single engineer or automation account can deploy the entire topology without co-ordination across three AWS accounts. The source administrator must knowingly deploy the ConnectorCrossAccountStack with its deny policy. The target administrator must knowingly deploy the ConnectorTargetStack with its cross-account grant. The connector operator owns only the two connector-account stacks. The accountability trail is baked into the deployment model.
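
A sketch of how the four stacks might be wired in a CDK app, with placeholder account IDs; the point is that each stack's env deliberately targets a different account:

```typescript
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';

// Hypothetical stack classes standing in for the four described above.
declare class ConnectorPolicyStack extends cdk.Stack { constructor(scope: Construct, id: string, props?: cdk.StackProps); }
declare class ConnectorCrossAccountStack extends cdk.Stack { constructor(scope: Construct, id: string, props?: cdk.StackProps); }
declare class ConnectorEtlStack extends cdk.Stack { constructor(scope: Construct, id: string, props?: cdk.StackProps); }
declare class ConnectorTargetStack extends cdk.Stack { constructor(scope: Construct, id: string, props?: cdk.StackProps); }

const app = new cdk.App();
const region = 'us-gov-west-1'; // placeholder

// Source account: its administrator knowingly deploys the deny policy.
new ConnectorCrossAccountStack(app, 'ConnectorCrossAccount', { env: { account: '111111111111', region } });
// Connector account: policy table, approval workflow, and the ETL chain.
new ConnectorPolicyStack(app, 'ConnectorPolicy', { env: { account: '222222222222', region } });
new ConnectorEtlStack(app, 'ConnectorEtl', { env: { account: '222222222222', region } });
// Target account: its administrator knowingly deploys the cross-account grant.
new ConnectorTargetStack(app, 'ConnectorTarget', { env: { account: '333333333333', region } });

app.synth();
```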

2.7 Region-agnostic deployment — same code, every partition

A sovereign-exchange platform that depends on commercial-cloud services cannot go to air-gap. The Clarity connector stacks are region-agnostic by construction, so that the same code deploys unchanged across every AWS partition that an engineering organisation might need to operate in:

| Partition | Example regions | Deployment mode |
| --- | --- | --- |
| aws | us-east-1, eu-west-2, ap-southeast-2 | Commercial cloud, public-internet connectivity |
| aws-us-gov | us-gov-east-1, us-gov-west-1 | US government sovereign cloud, FedRAMP High |
| aws-iso | us-iso-east-1, us-iso-west-1 | US classified air-gapped regions |
| aws-iso-b | us-isob-east-1 | Further-classified, no-internet regions |

Every ARN the connector stacks construct resolves the partition at CDK synth time via cdk.Stack.of(this).partition. Nothing is hardcoded as arn:aws:. The connector code does not know, and does not care, which partition it is deployed into.
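
A sketch of the pattern inside any connector stack (the queue name is illustrative):

```typescript
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';

class RegionAgnosticStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    // Resolves to aws, aws-us-gov, aws-iso, or aws-iso-b at synth time.
    const partition = cdk.Stack.of(this).partition;
    // Never hardcode arn:aws: — build every ARN from the partition.
    const queueArn = `arn:${partition}:sqs:${this.region}:${this.account}:connector-etl-queue`;
    new cdk.CfnOutput(this, 'EtlQueueArn', { value: queueArn });
  }
}
```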

SQS is the primary ETL transport, deliberately chosen because it is available in every partition, including the fully-classified ISO regions where EventBridge has only partial support. The approval workflow uses EventBridge internally within the connector account (for policy state changes, which work in every partition), but the ETL data path uses SQS end-to-end. This choice is documented explicitly in the WP-202 implementation log as a resolved open question, made specifically for air-gap compatibility.

No Bedrock dependency. The field-mapping and sanitisation layer is rules-based and config-driven, not AI-driven. This matters because Bedrock is not available in GovCloud or ISO regions. An AI-driven connector would be unable to run in the environments where controlled exchange matters most. Clarity’s connectors run in all of them, with identical code, because the mapping is data, not inference.

2.8 Sanitisation provenance — every redaction is recorded

Every sanitised field on every crossing is recorded in the target-side @source.sanitisedFields array on the affected entity. The record includes the field name, the policy identifier that stripped it, the crossing timestamp, and the authority under which the policy was approved. A downstream consumer looking at an entity on the target side can see, immediately and structurally, which fields were removed from the original on the way across and which policy authority removed them.
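
A hypothetical shape for one such record, following the fields named above rather than a published schema:

```typescript
interface SanitisedFieldRecord {
  field: string;             // e.g. 'unitCost'
  policyId: string;          // the connector policy that stripped it
  crossedAt: string;         // ISO-8601 crossing timestamp
  approvalAuthority: string; // authority under which the policy was approved
}
```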

This is the structural property that lets an auditor reconstruct any crossing years later. The reconstruction is not a forensic exercise over log files — it is a query against the target-side @source.sanitisedFields records, joined against the policy state transitions in the DynamoDB policy table. The full history of every crossing is one join away from the data itself.


Section 3 — What this unlocks

The architecture described in Section 2 is not a feature set. It is a substrate. The things it unlocks are operational properties that fall out automatically — things that become trivially possible in Clarity and that are structurally impossible in any legacy stack that did not architect for them at the kernel.

3.1 ITAR, EAR, and NOFOR compliance by schema, not by code

The three export-control regimes that matter most to Western defence and dual-use engineering — ITAR (the United States International Traffic in Arms Regulations), EAR (the United States Export Administration Regulations), and NOFOR (Not Releasable to Foreign Nationals) — are modelled as structured fields on every Lx entity’s @source.exportControl record. The record carries the regime, the classification, the jurisdiction, the agreement reference (TAA, MLA, bilateral treaty), the agreement type, the releasability markings (e.g. REL TO FVEY), the owning organisation, and the agreement expiry date.

Because the fields are structured, compliance becomes a query. “Which CIs in this programme are ITAR-controlled?” is a filter over @source.exportControl.regime = 'ITAR'. “Which of those are releasable to Australia under the current TAA?” is an additional filter by releasability and expiryDate. “Which ones would need to be sanitised before this partner nation can see them?” is the query that the Diode’s Policy B classification filter runs automatically on every crossing.
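
A sketch of what compliance-as-query looks like over an in-memory slice of entities; field names follow the prose, while the store and query syntax are illustrative:

```typescript
interface Ci {
  id: string;
  ['@source']: {
    exportControl: { regime: string; releasability?: string[]; expiryDate?: string };
  };
}
declare const cis: Ci[];

// "Which CIs are ITAR-controlled?" is a filter, not a code path.
const itar = cis.filter((ci) => ci['@source'].exportControl.regime === 'ITAR');

// "Which of those are releasable to Australia under a live agreement?"
const releasableToAus = itar.filter((ci) => {
  const ec = ci['@source'].exportControl;
  return ec.releasability?.includes('REL TO AUS') === true &&
         ec.expiryDate !== undefined && new Date(ec.expiryDate) > new Date();
});
```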

Non-compliance is not prevented by defensive code paths. It is prevented by schema. An entity whose @source.exportControl does not satisfy the destination’s policy cannot be ingested, because the schema validator rejects it at the connector’s load stage. There is no path by which an ITAR-controlled CI can end up on a non-TAA destination’s target bucket without a structured, logged, and explicitly-approved sanitisation. The compliance property is structural, not procedural, and it is therefore auditable in minutes rather than weeks.

3.2 Identical deployment across commercial, sovereign, and air-gapped clouds

The same connector codebase, the same CDK stacks, the same field-mapping configs, and the same policy enforcement rules deploy to every partition. An engineering organisation whose primary environment is commercial cloud can stand up an airlock with a partner whose primary environment is a sovereign cloud, with a downstream consumer in a fully air-gapped classified enclave, and the three deployments share 100% of the code. The partition is a deploy-time variable, not an architectural commitment.

This matters because almost every serious regulated engineering programme now touches all three. A prime contractor runs commercial cloud. Their sovereign-nation customer runs a sovereign cloud. Their end operator runs air-gapped classified. Legacy cross-classification architectures require a different stack in each environment, with bespoke integration at every boundary, and the integration layer is where the programme’s data goes to die. Clarity’s connectors have no such boundary — the boundary is a partition string, and the partition string is resolved at synth time.

3.3 Multi-tenant, multi-nation, multi-classification by construction

Every architectural primitive described in Section 2 — the per-tenant KMS aliases, the per-tenant S3 path scoping, the classification overlay (SE at L0–L12), the @source.exportControl record, the jurisdiction field, the dual-policy redaction — composes. There is no separate mode for single-tenant, another mode for multi-tenant, and a third mode for multi-tenant-multi-classification. The architecture assumes multi-tenancy, multi-nationality, and multi-classification from day one, and the primitives that implement each of those properties are the same primitives that implement all of them.

The consequence is that a programme can scale from single-tenant single-classification single-nation in year one, to multi-tenant bilateral in year two, to multi-nation trilateral classified in year three, without rebuilding the connector layer. The scaling is a configuration change: add a new tenant, add a new KMS alias, add a new SE overlay, add a new connector policy, add a new approved bilateral with a new suspend authority. No code changes. No architectural rebuild. No SI re-engagement.

3.4 Peer-to-peer Clarity-to-Clarity federation

The most powerful consequence of the Airlock pattern is that two Clarity instances can federate. A prime’s instance can share a scoped slice of its Lx data graph with a partner’s instance, in both directions, under a mutually-approved airlock policy, with bilateral suspend authority, with full sanitisation and provenance. The two instances can run under different tenants, in different nations, under different classifications, with different BYOM AI models behind them — and the airlock takes care of the consistency, the filtering, and the audit trail.

This is new. The authors are not aware of any other engineering platform that supports structured multi-graph federation with per-field sanitisation and bilateral suspend. The closest analogues — distributed Git repositories, federated identity, cross-tenant SaaS — all solve different, simpler problems. Federating a typed thirteen-layer engineering data graph with provenance preservation and classification-aware filtering is the thing the industry has been asking for since CALS in the 1980s, and it is the thing Clarity’s Airlock delivers as a structural property.

3.5 Ingest from every legacy system without corruption risk

The Diode pattern makes PLM, ERP, MES, MRO, and EAM ingest into Clarity a de-risked adoption path. The source system is never written to. The three infrastructure enforcement layers are auditable by the source system’s own security team in minutes. No production data is at risk from a Clarity misconfiguration. The integration is provably one-way.

This matters for adoption because it removes the single most common objection to introducing a new engineering platform into an existing tool ecosystem: “we cannot let your new tool touch our production PLM”. The Diode is the answer. The existing PLM stays exactly as it is, remains the system of record for its domain, and continues to serve its existing users. Clarity becomes the unified source of truth for the complete BOM across all sixteen view types (see the companion whitepaper Sixteen BOM Views on One CI Graph), built by diode ingest from the existing tools, with zero write-back risk.

3.6 Structured audit trail for every crossing

Every Diode crossing and every Airlock crossing produces an immutable, structured audit record. The record includes the policy identifier, the approval state of both parties at the time of the crossing, the fields that were stripped under Policy A, the entities that were filtered out under Policy B, the transit timestamp, the source and target account identifiers, and the full @source provenance of every entity that crossed. The audit record is stored in the target account’s tenant-scoped audit trail with the same immutability and tamper-evidence as any other Clarity write.

When a regulator asks “what crossed between these two programmes between these two dates, under what authority, with what sanitisation, and is the current state consistent with the approved policy?”, the answer is a query against the structured audit trail — measured in seconds — rather than a reconstruction from network logs, file-transfer records, and the memory of the people who were on duty that week.

3.7 Why this cannot be retrofitted into a legacy cross-domain stack

Every property in this section depends on architectural decisions that were made at the kernel — the typed data graph, the @source.exportControl field on every entity, the classification overlay at L0–L12, the per-tenant KMS aliases, the region-agnostic CDK, the fail-closed connector enforcer, the four-stack cross-account topology. None of these are features that can be added. They are the shape of the system.

A legacy cross-domain solution built around file transfer cannot become a structured-graph federation platform without being rebuilt from zero. A legacy PLM with an “export control module” bolted on top cannot produce field-level sanitisation provenance, because its data model does not have the fields to carry it. A legacy enterprise integration platform cannot run identically in GovCloud and ISO regions, because its SaaS control plane lives in commercial cloud and cannot follow. A legacy bilateral-exchange tool cannot give either party a real suspend authority, because the suspend would have to cross the very boundary it is supposed to be closing.

The retrofit is architecturally impossible — not because the vendors are incompetent, but because the retrofit would require abandoning the file-transfer substrate and rebuilding on a typed-data substrate with immutable provenance, and no legacy vendor is architected to do that without invalidating every existing customer deployment.


Conclusion — controlled exchange is an infrastructure property, not a feature

Cross-classification data exchange is the single most fragile operation in regulated engineering, and it has been handled, for forty years, by email, USB sticks, manual redaction, bespoke cross-domain appliances, and systems integrators billing by the week. The industry has been paying full SI rates per programme, every programme, because no reusable substrate existed.

Clarity’s Diode and Airlock connectors — together pattern P-42 — replace all of that. One-way ingest enforced by three independent infrastructure layers. Two-way federation gated by joint approval with bilateral suspend authority. Dual-policy redaction with structured provenance on every sanitised field. ITAR, EAR, and NOFOR compliance by schema, not by code path. Fail-closed enforcement at every boundary that matters, deliberately breaking the kernel’s fail-open default because productivity is not the right trade-off when the data is crossing a classification line. Identical deployment across commercial cloud, sovereign cloud, and fully air-gapped classified enclaves, running the same code, using the same configs, producing the same audit trails, at every partition.

The engineering organisations that need this most — defence primes, allied-nation partners, export-controlled dual-use programmes, nuclear regulators, medical-device consortia, long-lifecycle safety-of-life operators — have been asking for a reusable substrate for decades. Clarity is the first platform to provide one, and it provides it as an architectural property of the kernel, not as a module that can be licensed and bolted on.

That is what Diode & Airlock Connectors means. It is not a product feature. It is the shape of the infrastructure, and it is the only shape on which controlled cross-classification exchange can be done at the speed, volume, and auditability that the next twenty years of regulated engineering work is going to demand.

One thread. 13 verticals. 16 BOMs. 25 USPs.

The only complete digital thread for regulated programmes, powered by the patent-pending DeZolve Decision Intelligence Framework. Sovereign deployment under your own AWS account and encryption keys — at 10× less than the enterprise alternatives.