Legacy Orchestration

The Orchestrator — and the Lightweight Replacement

One kernel, two postures. For tier-1 primes with decades of legacy PLM, ERP, MES, MRO, and EAM investment, Clarity sits above the stack as a decision-intelligence and digital-thread orchestrator — consuming legacy data through diodes, protecting the existing investment, delivering the unified evidence chain the legacy stack was never able to produce. For SMBs and Tier 2–4 suppliers who never had the legacy stack, the same kernel replaces it entirely — same-day deployment, zero systems-integrator engagement, a tiny fraction of the total cost of ownership. Same codebase, same data model, same kernel. Two postures — decided at deployment time, not at product-edition time.

Published 9 April 2026 · 30 min read · Thread: Data & Provenance · Commercial Architecture

TL;DR

The engineering-software industry has, for the last forty years, produced exactly two kinds of product. On one end, enterprise PLM, ERP, MES, MRO, and EAM suites — Teamcenter, Windchill, ENOVIA, SAP, Oracle, DOORS, Polarion, FactoryTalk, Opcenter, Trax, Ramco, IFS, Maximo — targeted at tier-1 primes that can absorb hundreds of thousands to millions in first-year total cost of ownership, a twelve-to-twenty-four-month implementation timeline, a standing army of systems-integrator consultants, and dedicated infrastructure. On the other end, a shadow ecosystem of Microsoft Office, SharePoint, Jira, Confluence, Google Drive, and Excel spreadsheets, used by the 60–70% of engineering SMBs and Tier 2–4 suppliers who cannot afford the enterprise suites and therefore do not use them at all.

Nothing has ever bridged the two. The enterprise suites were never architected to run without their full ecosystem of dedicated infrastructure, certified consultants, and bespoke customisation. The shadow ecosystem has no engineering data model, no traceability, no provenance, no decision-intelligence, and no audit trail. The engineering profession has been split in half for four decades — one half paying enterprise rates for systems that underdeliver, the other half paying nothing for systems that cannot deliver — and no vendor has been able to serve both.

Clarity is the one architecture that can. Six properties make this possible, and together they are the Clarity USP for the legacy-orchestration and legacy-replacement markets simultaneously.

  1. One kernel, two postures. Clarity is not an enterprise edition and an SMB edition. It is a single codebase — one Lx data substrate, one event-driven kernel, one DeZolve decision-intelligence layer, one Library subsystem, one set of three algorithmic engines — deployed in two different postures depending on whether the customer has a legacy stack to integrate with.
  2. The orchestrator posture. For tier-1 primes with decades of sunk legacy investment, Clarity sits above the existing PLM, ERP, MES, MRO, and EAM stack. Legacy systems remain the systems of record for their domains. Clarity consumes their data through Diode and Airlock connectors (see the sibling whitepaper Diode & Airlock Connectors), aggregates it into the Lx graph, runs the full DeZolve and BOM and digital-thread machinery over the unified view, and provides the programme-level decision-intelligence layer that no legacy stack has ever been able to deliver.
  3. The lightweight-replacement posture. For SMBs and Tier 2–4 suppliers who never had the enterprise stack in the first place, Clarity replaces it entirely. Same kernel, same Lx schema, same DeZolve, same BOM views, same change management, same audit trail — but running without the legacy systems underneath, at a fraction of the total cost of ownership, with same-day deployment and zero systems-integrator time required.
  4. Data-first from the kernel, not process-first from the workflow engine. The reason Clarity can orchestrate or replace is that its kernel is a typed, versioned, provenance-carrying data graph with event-driven aggregation. Legacy vendors cannot make this move because their kernels are process-first workflow engines built around document management and in-place database updates — which is the architectural category error the sibling whitepaper Event-Driven by Kernel, Not by Feature describes in full.
  5. Deployment topology is the only thing that differs. The orchestrator posture runs in commercial cloud, sovereign cloud, or air-gapped classified environments alongside the customer’s existing infrastructure. The replacement posture runs as multi-tenant SaaS for individual SMBs, as own-account for mid-market customers who want their own AWS deployment, or as sovereign-partner deployments for regulated SMBs. Every topology uses the same kernel, the same Lx schema, and the same decision-intelligence machinery. The customer chooses the topology at deployment time; the product does not change.
  6. Incremental adoption for primes, same-day deployment for SMBs. Primes start with one programme, prove the orchestrator layer, and expand across the portfolio over quarters and years. SMBs deploy in hours, produce their first structured engineering output on day one, and run the full lifecycle on Clarity from the beginning. The contrast is not two products; it is two entry points into the same product, dictated by the customer’s starting conditions.

If you only read one sentence: Clarity is one kernel with two postures — orchestrator for tier-1 primes with sunk legacy investment, lightweight replacement for SMBs without it — and the single-kernel architecture is the architectural move no legacy vendor can match, because their kernels were built process-first for a different era.


The bargain on offer

Every engineering organisation in the world is currently running one of two kinds of technology stack, and both of them are failing their users in different ways.

The tier-1 primes — the defence manufacturers, the commercial aerospace OEMs, the nuclear operators, the automotive platform builders, the medical device companies with regulatory approvals on the line — are running enterprise PLM, ERP, MES, MRO, and EAM suites. They have been running them for ten, twenty, or thirty years. They have sunk hundreds of millions into customisation, integration, training, and systems-integrator engagements. They have certified personnel, audited workflows, compliance attestations, and regulatory approvals baked into the current stack. They cannot rip it out without invalidating decades of work and restarting their entire governance posture from zero.

And yet the stack does not deliver what the primes actually need. It does not close the digital thread from stakeholder intent to disposal. It does not score decision defensibility. It does not produce a unified BOM across sixteen view types. It does not handle cross-classification exchange structurally. It does not tell the programme director, on a Monday morning, which decisions in the current programme are well-defended and which are not. The primes know this, because they have been running the gap assessments and the post-incident reviews, and they have seen the reconstructive narratives their current audit trails actually produce. They want the digital thread, the decision intelligence, the unified BOM, and the through-life governance. They cannot get any of it by replacing the legacy stack, and they cannot get any of it by waiting for the legacy vendors to add it as a feature.

Meanwhile, the other 60–70% of engineering organisations — the SMBs, the Tier 2–4 suppliers, the consultancies, the component manufacturers, the system integrators, the specialist design houses — are running almost none of this. They are running Microsoft Office, SharePoint, shared drives, Jira, Confluence, and a collection of Excel spreadsheets maintained by one or two people who hold the entire engineering data model in their heads. They have no PLM because the per-user-per-year licensing is outside their reach and the mandatory systems-integrator implementation would consume the entire first-year profit of the business. They have no engineering data model because the tools that would provide one are unaffordable. They have no digital thread because they do not have the substrate it would need to run on. They have plausible deniability about regulatory audit requirements because the regulator knows, too, that the tools are unaffordable — and the regulator accepts, reluctantly, that the SMB is doing the best it can with what it has.

"The tier-1 prime has too much legacy stack and too little digital thread.
The SMB has no legacy stack at all and no digital thread either.
The legacy vendors have been unable to serve either case for forty years,
because the first case is unservable without rebuilding the kernel
and the second case is unaffordable at the legacy vendors' pricing."

This whitepaper describes how Clarity solves both cases with a single kernel, deployed in two postures. It has three sections:

  • Section 1 — the structural failure of the legacy market split, the five specific pain patterns that primes experience with their enterprise stack, and the five specific pain patterns that SMBs experience with the shadow ecosystem.
  • Section 2 — how the Clarity kernel works for both cases: the orchestrator posture for primes, the lightweight-replacement posture for SMBs, the single codebase underneath both, and the deployment topologies that distinguish them.
  • Section 3 — why the single-kernel-two-postures architecture is the thing no legacy vendor can match, why it is the only answer to the forty-year market split, and how it bookends the entire Clarity technical series that the previous six whitepapers have laid out.

Section 1 — The forty-year split and why neither side works

The engineering-software market has been split into two irreconcilable halves since the PLM, ERP, MES, MRO, and EAM product categories emerged in the 1980s and 1990s. The split is not accidental. It is the result of a specific architectural choice that every major vendor made early and has been unable to unmake since.

1.1 Why the legacy vendors built for the primes

The commercial history is straightforward. In the 1980s and 1990s, the only customers who could afford the early PLM and ERP suites were the very largest manufacturers — the aerospace primes, the automotive OEMs, the defence integrators. These customers had deep pockets, multi-year programmes, dedicated IT departments, and a willingness to fund long implementation projects because the alternative was worse. The legacy vendors, rationally, built their products for these customers. The products became deeply customisable, infinitely configurable, and architecturally committed to a certain shape: a process-first workflow engine sitting on top of a relational database, with a document-management module, a change-management module, a bill-of-materials module, and a permissions model, all bolted together with the assumption that a systems integrator would spend twelve to twenty-four months turning the product into something the specific customer could actually use.

This architecture worked for the primes because the primes could afford it. Per-user-per-year licensing in the thousands, plus mandatory systems-integrator time, plus dedicated infrastructure, plus ongoing customisation and support, was a rounding error against the primes’ engineering budgets. A tier-1 defence prime running a multi-billion-pound weapons platform programme does not care about a mid-six-figure first-year PLM TCO; they care about whether the PLM is sufficiently customisable to fit their specific workflow, and they can fund the systems integrator to make it so.

The problem is that the architecture the vendors committed to does not work for anyone who cannot afford the first-year TCO. And the 60–70% of the engineering market that cannot afford the first-year TCO is where most of the actual engineering work — and most of the next generation of engineering innovation — is being done.

1.2 Five pain patterns at the tier-1 primes

The primes are not happy with their legacy stack either, despite being the customer base it was designed for. The pain is different from the SMB pain — the primes can afford the tools, and the tools do work for the processes they were customised to handle — but the pain is real and it is structural. Five specific patterns recur across every tier-1 prime the authors have worked inside over three decades.

Sixteen tools, no unified view

A modern tier-1 prime runs at least a dozen commercial engineering tools: a PLM (Teamcenter, Windchill, or ENOVIA), an ERP (SAP or Oracle), a requirements tool (DOORS or Polarion or Jama), an MBSE tool (Cameo or Capella), a project-management tool (Jira or similar), a configuration-management tool, a document-management tool, a compliance tool, a quality-management system, a maintenance-management tool, and half a dozen specialist analysis and simulation tools. Each tool has its own data model. Each tool’s integration to the others was bespoke-built by systems integrators. The unified view across the tools does not exist. When the programme director asks “what is the current state of the product design across all sixteen tools?”, the answer is a multi-week reconciliation exercise, and the answer is already stale before it is delivered.

The digital thread is a marketing claim, not a property

Every legacy vendor claims to deliver “the digital thread”. None of them actually do, at least not in the structural sense described in the sibling whitepaper Thirteen Lifecycle Phases, One Graph. What they deliver is a set of pre-built integrations between their own modules, and an invitation to spend systems-integrator time building integrations to the other vendors’ modules. The primes have been paying for digital-thread delivery for twenty years and have not received it, because the architecture the legacy vendors built cannot produce it.

Decision audit is reconstructive, not structural

Every serious prime has been through a contract dispute, a post-incident investigation, or a regulatory audit where the question was “which decision was this, on what evidence, when, by whom?” and the answer took months to reconstruct. The reconstructive cost is an order of magnitude larger than the running cost of the decision-management tools, and the reconstructive answer is always partial. The primes know this. They have resigned themselves to it because the legacy stack cannot produce a structural answer.

Cross-classification exchange is manual and bespoke

Primes running defence, nuclear, aerospace, or dual-use programmes need to exchange engineering data across classification boundaries, across allied nations, and across prime-to-subcontractor boundaries. The legacy stack has no structural answer for this. The current practice is manual redaction, USB-stick transfers, email attachments, and a bespoke systems-integrator engagement per programme, per boundary, per partner. The sibling whitepaper Diode & Airlock Connectors describes the structural alternative; the point here is that no legacy vendor has ever provided it.

SI dependence is permanent

Every customisation the prime has ever made to their legacy stack is tied to the specific systems-integrator personnel who built it. Those personnel leave, the SI company reorganises, the original configurations are lost to institutional memory, and the prime is locked into ongoing SI engagements just to keep the existing customisations running. The per-user-per-year licensing is not actually the largest cost of owning the legacy stack. The ongoing SI dependence is, and it is a cost the primes cannot exit.

1.3 Five pain patterns at the SMBs and Tier 2–4 suppliers

The other 60–70% of the engineering market lives a different kind of pain. The SMBs and Tier 2–4 suppliers are not running the legacy stack at all. They are running spreadsheets on shared drives, SharePoint sites with uncontrolled version history, Jira tickets that serve as ad-hoc requirements repositories, and email threads that function as change records. Every one of them knows this is inadequate. None of them can afford the alternative.

The enterprise stack is flatly unaffordable

Per-user-per-year licensing in the thousands is not a negotiation; it is a market-exclusion price. A thirty-engineer SMB with a single-digit operating margin cannot absorb a per-user-per-year licensing bill in the thousands, plus a first-year systems-integrator engagement in the hundreds of thousands to millions, plus dedicated infrastructure costs, plus ongoing training. The total first-year cost of legacy PLM adoption is, for most SMBs, larger than the entire annual engineering department budget. The SMB does not adopt the legacy stack because adoption would bankrupt it.

The shadow ecosystem has no data model

SharePoint is a file-and-folder store. Excel is a spreadsheet. Jira is an issue tracker. None of them is an engineering data model. None of them knows what a configuration item is, what a bill of materials is, what a requirement is, what a decision record is, what an @source provenance record is. The SMB cannot retrofit an engineering data model onto SharePoint and Excel because SharePoint and Excel do not have the primitives to support one. The data model lives in the heads of one or two senior engineers, and the SMB is betting its future on those engineers never leaving.

Audit is aspirational

When an SMB pitches for a regulated-market contract — a sub-tier supply into a defence prime, a component supply into a medical-device manufacturer, a software component into a nuclear instrumentation programme — the prime asks for a requirements traceability matrix, a configuration management plan, a change history, an audit trail, and a decision record. The SMB cobbles these together manually from spreadsheets and email threads, delivers a document pack that the prime accepts reluctantly, and hopes the contract closes before the next audit cycle. The SMB knows the document pack is theatrical. The prime knows it too. The relationship depends on both sides pretending otherwise.

Every new programme starts from scratch

There is no template library at the SMB. There is no reusable decision journey. There is no problem-space template. Every new programme starts by rebuilding the same spreadsheet structure, the same folder hierarchy, the same requirements list, the same BOM template, from memory or from the last programme’s working files. The efficiency-through-reuse principle — NEED4 from the original 2012 DeZolve whitepapers cited in the sibling whitepaper The DeZolve Decision Intelligence Framework — is structurally unavailable to the SMB because the SMB has no substrate to reuse on.

The SMB cannot hire the specialists the legacy stack demands

Even if the SMB could afford the per-user-per-year licensing and the first-year SI engagement, they could not staff the ongoing operation. Legacy PLM requires dedicated configuration managers, PLM administrators, change-management specialists, and certified users. A thirty-engineer SMB cannot justify any of these headcount lines. The legacy vendors built their products assuming every customer had a dedicated PLM team. The SMB does not, and cannot, and therefore the legacy product is structurally unadoptable.

1.4 The forty-year gap nobody has filled

Both pain patterns — the prime’s five and the SMB’s five — have existed in essentially their current form for forty years. Every major consulting firm, every enterprise software analyst, every industry association, every regulator has catalogued the gap repeatedly. The gap is well understood. The reason nobody has filled it is that filling it requires building a single product that can run in both the orchestrator posture and the lightweight-replacement posture without becoming two different products, and every attempt to do this from the legacy-vendor side has failed for the same architectural reason.

The legacy vendors cannot run a lightweight replacement because their kernels are process-first workflow engines that require the full customisation apparatus to function at all. They cannot build a lightweight version without either ripping out the customisation layer (which would alienate their existing prime customers) or maintaining two codebases (which would cannibalise their SI revenue stream). Neither option is acceptable inside a legacy vendor’s commercial model, and so neither option has ever been attempted seriously.

The SMB-side shadow-ecosystem vendors — Microsoft, Google, Atlassian — cannot run an orchestrator either, because their products have no engineering data model to orchestrate with. SharePoint cannot ingest and reconcile data from sixteen specialist engineering tools and produce a unified thread, because SharePoint does not know what an engineering thread is. Google Drive cannot compute a DeZolve truth vector across a legacy decision chain because Google Drive does not have the primitives to represent one.

The gap has remained unfilled because no existing commercial architecture can fill it. Filling it requires a new kernel — a data-first kernel with an explicit engineering schema, event-driven aggregation, provenance at every field, and a deployment model that scales from a single-seat SMB laptop to an air-gapped classified enclave without changing shape. That kernel is the Clarity platform, and Section 2 describes how it delivers both postures from the same codebase.


Section 2 — One kernel, two postures

The architectural claim of this whitepaper — and of the entire seven-part Clarity technical series — is that a single data-first kernel can deliver the orchestrator posture and the lightweight-replacement posture simultaneously, from the same codebase, with the same data model, the same event bus, the same DeZolve decision-intelligence layer, and the same Library subsystem. The kernel does not know, and does not need to know, which posture it is running in. The posture is a property of the deployment topology, not of the product.

This section describes how each posture works, what the kernel underneath them actually is, and why the architectural move is one no legacy vendor can match.

2.1 The kernel — what it actually is

Before describing the two postures, it is worth stating what the Clarity kernel consists of. The six previously published whitepapers in this series have each described one of its primitives; taken together, they describe the complete kernel.

  • The Lx data substrate — a thirteen-layer typed engineering data graph spanning stakeholder intent (L0) through disposal (L12), with forward dependency and evidence back-propagation. Described in Thirteen Lifecycle Phases, One Graph.
  • The event-driven kernel — immutable JSON writes, EventBridge-debounced aggregation, typed schemas, @source provenance on every field, and the absence of a workflow engine. Described in Event-Driven by Kernel, Not by Feature.
  • The DIKW architecture — thirteen lifecycle verticals, ten overlay groups, and a structural answer to the Data-Information-Knowledge-Wisdom gap that legacy stacks cannot climb. Described in Breaking the DIKW Ceiling.
  • The sixteen-BOM CI graph — eleven stored BOM view types across L6–L12 plus five filter/aggregation modes, all running as queries over a single configuration-item graph. Described in Sixteen BOM Views on One CI Graph.
  • The DeZolve Decision Intelligence Framework — a fifteen-node, twenty-six-edge directed graph with a reverse-traversal truth-vector evaluator, scoring every committed decision for defensibility at the moment of commit. Described in The DeZolve Decision Intelligence Framework.
  • The 325 bounded AI agents — a narrow, schema-constrained approach to AI participation with BYOM support for air-gapped deployments, lineage tagging at every field, and structural visibility of AI participation. Described in 325 AI Agents, Bounded by DeZolve.
  • The Diode and Airlock connector framework — one-way ingest from legacy systems and two-way federation between Clarity instances, with three independent enforcement layers and dual-policy redaction. Described in Diode & Airlock Connectors.

These are not seven separate products. They are seven views of one kernel. The kernel is the substrate. Every one of them runs in both postures, because the posture is a property of the topology, not of the kernel.
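The write primitive the kernel papers describe — immutable, typed JSON events carrying @source provenance on every field — can be sketched in miniature. The names below (the event shape, the field wrapper, the layer label) are illustrative assumptions, not the actual Clarity API; the point is only the shape: append-only writes, with provenance attached per field rather than per record.

```python
import json
from datetime import datetime, timezone

# Hedged sketch: an append-only, typed event whose every field carries
# @source provenance. Shape and names are assumptions, not the Clarity API.

def make_event(layer: str, record_type: str, fields: dict, source: dict) -> dict:
    """Wrap a typed record as an immutable event; each field gets @source."""
    return {
        "layer": layer,                       # e.g. "L4" for a change record
        "type": record_type,
        "written_at": datetime.now(timezone.utc).isoformat(),
        "fields": {
            name: {"value": value, "@source": source}
            for name, value in fields.items()
        },
    }

event = make_event(
    layer="L4",
    record_type="ChangeRecord",
    fields={"title": "Replace fastener spec", "status": "open"},
    source={"system": "native", "record": "CR-0042", "version": 1},
)

# Append-only: serialise into a log and never mutate in place.
log = [json.dumps(event)]
assert json.loads(log[0])["fields"]["title"]["@source"]["record"] == "CR-0042"
```

Because the provenance sits on the field rather than the record, a downstream aggregate built from many sources can still answer "where did this specific value come from?" without a join back to an external metadata table.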

2.2 The orchestrator posture — for tier-1 primes with sunk legacy investment

The orchestrator posture is Clarity deployed above an existing tier-1 legacy stack. The prime’s existing PLM (Teamcenter, Windchill, or ENOVIA) remains in place. Their ERP (SAP, Oracle) remains in place. Their MES, their MRO, their EAM, their requirements tool (DOORS, Polarion, Jama), their MBSE tool (Cameo, Capella), their project-management tool, their document-management system — all of it remains in place, unchanged, as the customer’s current system of record for its domain.

Clarity sits above the stack and does three things the legacy stack cannot:

Consume legacy data through Diodes

Every legacy source system is connected to Clarity via a Diode connector (see Diode & Airlock Connectors). The Diode is one-way: data flows from the legacy system into Clarity, and write-back is blocked at the infrastructure layer by three independent enforcement layers. This is critical for prime adoption, because it means Clarity cannot corrupt the legacy system — not by misconfiguration, not by software bug, not by malicious action, not by any combination of the three. The prime’s existing engineering data is provably safe. The IT governance team can confirm this in minutes by inspecting the three enforcement layers. This is what makes the orchestrator posture adoptable in environments where “touching the production PLM” is a career-ending phrase.

Each legacy source becomes a typed ingest into the Lx graph. Teamcenter ECRs land at L4 as change records. Teamcenter eBOMs land at L6 as engineering BOMs. SAP mBOMs land at L7 as manufacturing BOMs. DOORS requirements land at L0 as needs and requirements, with their original document-management metadata preserved in @source. Jira tickets land at L4 as change candidates awaiting classification. GitHub pull requests land at L6 or L7 depending on whether they are design-plane or implementation-plane changes. Each ingest carries @source provenance pointing back to the originating legacy system, so that downstream consumers can always trace any field in the Clarity graph back to the specific source system and the specific version of the source record.
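The routing described above — each legacy record type landing at a fixed Lx layer with @source pointing back to its origin — is essentially a lookup table plus a wrapper. A minimal sketch, with the layer targets taken from the mapping in the paragraph and all function and system names being assumptions for illustration:

```python
# Hedged sketch of Diode ingest routing. Layer targets follow the mapping
# described in the text; identifiers are illustrative, not the Clarity API.

LAYER_MAP = {
    ("teamcenter", "ECR"): "L4",     # change records
    ("teamcenter", "eBOM"): "L6",    # engineering BOMs
    ("sap", "mBOM"): "L7",           # manufacturing BOMs
    ("doors", "requirement"): "L0",  # needs and requirements
    ("jira", "ticket"): "L4",        # change candidates awaiting classification
}

def diode_ingest(system: str, kind: str, record: dict) -> dict:
    """One-way ingest: produce a typed Lx node; never write back to the source."""
    layer = LAYER_MAP[(system, kind)]
    return {
        "layer": layer,
        "payload": record,
        "@source": {"system": system, "kind": kind, "id": record["id"]},
    }

node = diode_ingest("doors", "requirement", {"id": "REQ-101", "text": "The unit shall..."})
assert node["layer"] == "L0"
assert node["@source"]["system"] == "doors"
```

Note the structural property the one-way Diode gives you for free: the ingest function returns a new Lx node and has no code path back to the source system, so write-back is impossible at this layer by construction, before the infrastructure-level enforcement layers are even considered.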

Aggregate into the unified Lx graph

Once the legacy data has landed in the Lx graph, every other Clarity primitive — the DeZolve truth-vector evaluator, the sixteen BOM views, the three algorithmic engines, the Library’s auto-classification pipeline, the AI agents bounded by the world model — operates on it as if it had been authored natively in Clarity. The prime’s engineering team sees, for the first time, a unified view across all sixteen of their legacy tools. The programme director asks “what is the current state of the design across all tools?” and gets a real answer in seconds. The regulator asks “which decisions depended on this supplier’s qualification report?” and gets a structural answer computed from the Lx graph, not reconstructed from sixteen separate data stores over six weeks.
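The regulator's question above — "which decisions depended on this supplier's qualification report?" — is, once the data is in one graph, a reverse traversal over dependency edges. A minimal sketch under assumed names and a toy three-node graph; the real Lx graph is far larger, but the query shape is the same:

```python
from collections import deque

# Hedged sketch: traceability as reverse traversal over the unified graph.
# Node names and edge direction are illustrative assumptions.

# depends_on[x] = the set of nodes that x rests on
depends_on = {
    "decision-7": {"qual-report-3", "analysis-9"},
    "decision-8": {"analysis-9"},
    "analysis-9": {"qual-report-3"},
}

def dependants_of(evidence: str) -> set:
    """Walk dependency edges backwards: everything resting on `evidence`."""
    reverse = {}
    for node, deps in depends_on.items():
        for dep in deps:
            reverse.setdefault(dep, set()).add(node)
    found, queue = set(), deque([evidence])
    while queue:
        for node in reverse.get(queue.popleft(), ()):
            if node not in found:
                found.add(node)
                queue.append(node)
    return found

# Everything downstream of the qualification report, direct or transitive:
assert dependants_of("qual-report-3") == {"decision-7", "decision-8", "analysis-9"}
```

The contrast with the legacy posture is that this query runs in one store in one traversal; the six-week reconstruction exercise exists only because the edges are scattered across sixteen systems that share no graph.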

Run decision intelligence and digital thread on the unified view

The DeZolve truth vector, computed over the legacy-ingested data, gives the prime — for the first time — a structural measure of how well each of their decisions is actually defended by the evidence their legacy stack contains. The sixteen BOM views, computed over the ingested CIs, give them a single queryable BOM across all the legacy BOM stores. The three-ring invariant enforcement catches violations that the legacy stack could not detect because the invariants had nowhere to live in the legacy data model. The digital thread — forward dependency from L0 intent to L12 disposal, evidence back-propagation from L10 telemetry to L5 decisions — runs structurally, because the Lx graph has the primitives to support it.
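The truth-vector computation can be shown in reduced form. The real DeZolve graph has fifteen node types and twenty-six edge types (see the sibling whitepaper); the three-evidence-kind sketch below only illustrates the shape — for each committed decision, reverse-traverse to its required evidence and report, per kind, whether that evidence actually exists. All identifiers are assumptions for illustration.

```python
# Hedged, heavily reduced sketch of truth-vector scoring. The production
# DeZolve evaluator runs over a 15-node, 26-edge graph; this shows only
# the reverse-traversal shape. All names are illustrative assumptions.

graph = {
    "decision-12": {"requires": ["requirement", "analysis", "verification"]},
}
evidence_present = {
    ("decision-12", "requirement"): True,
    ("decision-12", "analysis"): True,
    ("decision-12", "verification"): False,   # the gap the vector exposes
}

def truth_vector(decision: str) -> dict:
    """One boolean per required evidence kind, plus an aggregate score."""
    required = graph[decision]["requires"]
    vector = {kind: evidence_present.get((decision, kind), False)
              for kind in required}
    vector["score"] = sum(vector.values()) / len(required)
    return vector

v = truth_vector("decision-12")
assert v["verification"] is False
assert v["score"] == 2 / 3
```

The Monday-morning question — "which decisions are well-defended and which are not?" — then reduces to sorting committed decisions by score and surfacing the per-kind gaps, which is only possible because the evidence edges are structural rather than reconstructed.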

The prime does not replace anything. The legacy stack remains the system of record. Clarity becomes the unified source of truth for the complete lifecycle, built entirely by Diode ingest from the tools the prime already has. The prime’s sunk investment is protected. Their existing workflows continue. Their certified personnel continue to operate the legacy tools. Their regulatory attestations are not invalidated. What changes is that the prime now has the digital thread and the decision intelligence they have been paying for (and not receiving) for twenty years.

2.3 The lightweight-replacement posture — for SMBs and Tier 2–4 suppliers

The replacement posture is Clarity deployed without a legacy stack underneath it. For an SMB, a Tier 2–4 supplier, a consultancy, or any organisation that does not already have an enterprise PLM, ERP, MES, MRO, or EAM stack, Clarity is the stack. The same Lx graph holds the engineering data directly. The same DeZolve scores the decisions directly. The same sixteen BOM views render from the same CI graph. The same Library holds the imported standards and the reusable templates. The same event-driven kernel runs the pipeline. There is no legacy system to Diode from, because there is no legacy system.

Five properties make this posture affordable and adoptable for SMBs:

Same-day deployment, zero systems integrator

The replacement posture deploys from a browser. The first engineering artefact — the first Problem Space, the first option set, the first BOM, the first requirement — can be produced on the same day the account is created. There is no twelve-to-twenty-four-month implementation timeline because there is no customisation layer to build. The kernel is already built. The customer uses it directly.

SaaS pricing in the low per-user-per-year range

The Clarity SaaS pricing for SMBs lives in a range that is a small fraction of the per-user-per-year cost of legacy PLM, without a mandatory systems-integrator engagement, without dedicated infrastructure, and without the standing headcount lines (configuration managers, PLM administrators, certified power users) that legacy deployments demand. The pricing range is deliberately positioned in the gap between the per-user-per-year cost of commercial productivity suites (which have no engineering data model) and the per-user-per-year cost of legacy PLM (which has the data model but requires the full expensive implementation apparatus to function). That gap has been unoccupied for forty years. Clarity occupies it.


No specialist staff required

The SMB does not need a dedicated PLM administrator, a dedicated configuration manager, or a dedicated change-management team to run Clarity. The Lx graph, the DeZolve scoring, the BOM views, and the change management all run structurally — the engineers authoring the content are simultaneously the users of the governance layer, because the governance layer is a property of the data, not a separate workflow that someone has to operate. The same engineers who would otherwise be maintaining spreadsheets on a shared drive are now authoring directly into the Lx graph, with every write carrying @source provenance and every decision carrying a DeZolve truth vector.

Templates and reuse from the Library on day one

The Clarity Library (see the DeZolve whitepaper §2.8) ships with canonical taxonomies for the major engineering frameworks (ISO 15288, TOGAF ADM, DoDAF, SAFe, CMMI, MIL-STD-882) and with template Problem Spaces for common decision journeys. A new SMB starting on Clarity does not begin with an empty graph; they begin with the Library’s templates applied to their specific programme, and they can adopt the frameworks their regulator requires by turning on the relevant overlays. The forty-year SMB problem of “we have to rebuild the same engineering scaffolding from scratch for every new programme” is structurally solved, because the scaffolding is already in the Library.

Full governance from day one

This is the most important property, and the one that makes Clarity’s replacement posture different from every other SMB-market attempt. The Clarity kernel is the same kernel that runs in air-gapped classified environments for defence primes. The full DeZolve decision intelligence, the full sixteen-BOM CI graph, the full three-ring invariant enforcement, the full @source provenance chain, the full audit trail, and the full cross-boundary Diode and Airlock connectors are all available to the SMB on day one. The SMB does not get a stripped-down version of Clarity. They get the same kernel the primes get. The only thing the SMB lacks is the legacy stack that the orchestrator posture ingests from — and they did not have that legacy stack anyway.

When the SMB later wins a contract with a tier-1 prime and needs to exchange engineering data across the prime’s classification boundary, the Airlock connector is already there. When the regulator asks for a requirements traceability matrix, the traceability is already structural. When the auditor asks for a decision history, the DeZolve truth vectors are already computed. The SMB that started on Clarity’s replacement posture does not outgrow the platform when they move into regulated markets — they already had the full platform from day one.

2.4 The topology is what differs — not the product

The critical architectural point is that nothing in the kernel changes between the two postures. The same Lambda functions run. The same Lx schema applies. The same DeZolve evaluator computes the same truth vectors. The same Library holds the same taxonomies. The same event-driven pipeline aggregates the same writes. The difference between the orchestrator posture and the replacement posture is which sources the Diode connectors are reading from, and which deployment topology the customer chose.

Four topologies are supported, and every customer selects one at deployment time:

Topology | Environment | Typical customer | AI model availability
Multi-tenant SaaS | Clarity-hosted AWS, commercial cloud | SMBs, consultancies, solo engineers | Full Bedrock access
Own-account enterprise | Customer's own AWS account, commercial cloud | Mid-market, regulated Tier 1/2 | Full Bedrock access or customer-chosen model
Sovereign partner | Customer AWS via accredited sovereign partner | Government, regulated primes | Customer-chosen model; no commercial Bedrock dependency
Multi-classification air-gapped | Air-gapped AWS regions (US GovCloud, UK Secret, Australian Protected) | Defence, intelligence, Five Eyes | Customer-bundled or BYOM (no Bedrock)

Every topology runs the same kernel. Every topology supports both the orchestrator posture (with Diodes pointing at the customer’s legacy systems) and the replacement posture (with no Diodes at all). Every topology delivers the full DeZolve, full sixteen BOM views, full invariant enforcement, full @source provenance, full Lx graph, and full AI agent participation. A customer who starts in multi-tenant SaaS can move to own-account and then to sovereign partner as their classification requirements evolve, without changing the product, the data model, or the engineering content. The migration is a deployment re-topology, not a product re-platforming.
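The posture-as-topology claim can be pictured as a deployment descriptor rather than a product flag. The sketch below is illustrative only: every name in it (`Topology`, `Deployment`, `diode_sources`) is a hypothetical stand-in, not Clarity's actual API. It shows the structural point, which is that the replacement posture is simply a deployment whose Diode list is empty.

```python
from dataclasses import dataclass, field
from enum import Enum

class Topology(Enum):
    MULTI_TENANT_SAAS = "multi-tenant-saas"
    OWN_ACCOUNT = "own-account"
    SOVEREIGN_PARTNER = "sovereign-partner"
    AIR_GAPPED = "air-gapped"

@dataclass
class Deployment:
    topology: Topology
    # Legacy systems the Diode connectors ingest from; empty for greenfield SMBs.
    diode_sources: list = field(default_factory=list)

    @property
    def posture(self) -> str:
        # The posture is derived from the deployment, not stored as a product
        # edition: Diodes present means orchestrator, none means replacement.
        return "orchestrator" if self.diode_sources else "replacement"

prime = Deployment(Topology.OWN_ACCOUNT, diode_sources=["teamcenter", "sap-erp"])
smb = Deployment(Topology.MULTI_TENANT_SAAS)

print(prime.posture)  # orchestrator
print(smb.posture)    # replacement
```

Under this framing, moving a customer from multi-tenant SaaS to a sovereign partner changes only the `topology` value; the kernel, the data model, and the derived posture are untouched.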

"Clarity is not an enterprise edition and an SMB edition.
It is one codebase, one data model, one kernel,
deployed in two postures at the customer's choice.
The posture is a topology decision, not a product decision.
That distinction is what no legacy vendor can match."

2.5 Incremental adoption for primes — the orchestrator path

A tier-1 prime adopting Clarity in the orchestrator posture does not rip and replace anything. They start with one programme, usually one that is already in some kind of trouble — a gate review that is struggling, a contract dispute that is looming, a post-incident investigation that cannot reconstruct the evidence chain, a cross-classification exchange that is becoming unmanageable. They stand up the orchestrator posture for that one programme, point the Diodes at the specific legacy sources they need, and run Clarity’s decision intelligence and digital-thread machinery over the unified view.

The first programme is the proof point. The prime’s engineering team sees the DeZolve truth vector for the troubled decisions, sees the sixteen BOM views over their own legacy-ingested data, sees the @source provenance chain that connects the programme’s intent at L0 to its implementation at L6, and recognises that they have never seen any of this before. The legacy stack, which they have been running for two decades, has not been producing it.

From that proof point, the prime expands. Another programme is added. Another Diode is configured. The Library starts accumulating templates that the prime’s internal programmes can reuse. The DeZolve truth vectors of the early programmes become reference data for the later ones. The digital thread, which began as a single-programme demonstration, becomes a portfolio-level capability. Within a year, the prime is running decision intelligence and unified BOM governance across their entire tier-1 programme portfolio, without having replaced a single line of their legacy stack.

This is the only adoption path that works for a customer with sunk legacy investment. Every attempt to replace legacy enterprise software by ripping and replacing has failed catastrophically — the history is littered with examples, some of which are referenced in the sibling whitepaper Event-Driven by Kernel (the Birmingham City Council Oracle transformation, the Super Seasprite naval programme). The orchestrator posture side-steps the rip-and-replace failure mode entirely by not replacing anything. The prime keeps the legacy stack. Clarity runs above it. Nothing breaks.

2.6 Same-day adoption for SMBs — the replacement path

An SMB adopting Clarity in the replacement posture follows an entirely different path, measured in hours rather than quarters. They sign up for the multi-tenant SaaS. They create their first Problem Space. They pull a template from the Library that matches their industry and their regulatory posture. They import their existing engineering content from whatever spreadsheets, SharePoint sites, and shared drives they have been using — the import does not need to be complete on day one, because the Lx graph is incremental and the SMB can keep adding content as they work.

By the end of the first day, the SMB has a functioning Lx graph with their first Problem Space populated, their first option set drafted, their first requirements imported, their first BOM view rendered, and their first decision record ready to commit. On day two, they start authoring new engineering content directly in Clarity. On day three, the old spreadsheets become read-only references. Within a week, Clarity is the primary engineering substrate for the SMB, and the shadow ecosystem has been reduced to an archive.

The time-to-value for the SMB is hours, not months. The first-year total cost of ownership is a small fraction of what a legacy PLM first-year implementation would cost. The ongoing operating cost is the SaaS subscription, with no mandatory systems-integrator (SI) engagement, no dedicated headcount, no infrastructure management, and no ongoing customisation. The SMB gets the same kernel the defence primes get — just deployed in a different topology, with no legacy stack underneath.

2.7 Why this is an architectural move, not a pricing move

It is tempting to describe the dual-posture capability as a “pricing move” — Clarity charges SMB rates for SMB deployments and enterprise rates for enterprise deployments, and therefore serves both markets. This framing is wrong in a specific and important way.

The dual-posture capability is architectural, not commercial. Legacy vendors could, in principle, discount their pricing to reach the SMB market at any time. Several of them have tried. None has succeeded, because the product cannot actually run affordably at SMB scale. The mandatory systems-integrator engagement, the dedicated infrastructure, the specialist staffing, and the ongoing customisation apparatus are not pricing choices — they are architectural necessities of the legacy-vendor product. Strip them out and the product stops working. Keep them in and the first-year TCO remains out of the SMB’s reach.

Clarity’s data-first kernel, by contrast, does not require any of these things. The customisation apparatus does not exist because the kernel is already configurable by schema rather than by SI implementation. The dedicated infrastructure does not exist because the platform runs on event-driven serverless components that scale from zero. The specialist staffing does not exist because the governance layer is a property of the data rather than a workflow someone has to operate. The ongoing customisation expense does not exist because the framework overlays are Library-managed taxonomies that customers turn on and off.

Remove the architectural necessities, and the pricing naturally settles in the gap between shadow-ecosystem free tools and enterprise PLM. That is not a pricing decision. It is the direct cost consequence of the kernel architecture, and it is why Clarity can make the move that no legacy vendor can.


Section 3 — Why this is the series bookend

This is the closing whitepaper in the Clarity technical series. The previous seven have each described one architectural primitive of the platform: the DIKW climb from Data and Information to Knowledge and Wisdom; the sovereign-AI bounding of 325 narrow agents by the DeZolve framework; the event-driven kernel that makes the digital thread possible; the thirteen Lx lifecycle phases from stakeholder intent to disposal; the sixteen BOM views on one configuration-item graph; the Diode and Airlock connectors for multi-classification exchange; and the DeZolve Decision Intelligence Framework itself, with fifteen years of cross-domain research distilled into a fifteen-node directed graph.

This section explains how those seven primitives compose in the dual-posture architecture, and why the Orchestrator-and-Replacement framing is the natural closing point for the eight-paper series.

3.1 The seven primitives as the kernel of both postures

Every one of the seven architectural primitives described in the previous whitepapers runs in both postures. The kernel is identical; the topology is the only thing that differs. Stated explicitly:

  • The Lx data substrate — runs in both. Three ingest paths populate it, and both user types use all three in practice. The first path is Diode ingest from legacy systems — primarily used by primes with an existing PLM, ERP, MES, MRO, or EAM stack to consume, but also available to SMBs who happen to have a single legacy tool they want to pull from. The second path is direct native authoring inside the Clarity UI, where Seekers, Solvers, and Deciders create Lx entities directly — primarily used by SMBs working greenfield, but also used by primes for programme-level decisions that were never authored in their legacy stack. The third path, and the one most often overlooked, is Library-driven extraction from uploaded documents: both user types can upload existing engineering content — ConOps documents, requirements specifications, supplier datasheets, standards, prior-programme archives, test reports, interface control documents, compliance findings, CAD assembly trees, spreadsheet BOMs, and anything else that carries structured or semi-structured engineering information — into the Clarity Library, and the Library’s auto-classification and extraction pipeline parses the content, applies the relevant taxonomies, identifies candidate Lx entities (needs, requirements, options, interfaces, parameters, change records, decisions), and populates the Lx layers directly with human-in-the-loop review. An SMB with a folder of legacy Word documents and Excel spreadsheets does not have to re-author any of it manually; they upload it to the Library and Clarity extracts the engineering content into the Lx graph. A prime with an archive of retired-programme documentation can do the same thing, populating an L0 lessons-learnt context or an L6 inherited-content baseline without writing a single ingest adapter. Same schema, same event-driven aggregation, same provenance model — three ingest paths, one graph.
  • The event-driven kernel — runs in both. Every write is an immutable JSON file, every aggregation is EventBridge-debounced, every pipeline is quiesce-protected, regardless of whether the writes originated in Teamcenter (orchestrator) or in the SMB engineer’s browser (replacement).
  • The DIKW Wisdom layer — runs in both. The thirteen verticals, the ten overlay groups, the @source back-propagation, and the lessons-learnt knowledge graph are all structural properties of the Lx graph, and they operate the same way whether the graph was built by legacy ingest or by native authoring.
  • The sixteen BOM views — run in both. The CI graph is the same CI graph. The eleven stored BOM view types across L6–L12 and the five filter/aggregation modes render the same way regardless of whether the CIs came from a PLM Diode or from an SMB’s direct import of their parts catalogue.
  • The DeZolve truth vector — runs in both. A prime’s committed decision at L5 gets the same fifteen-node, twenty-six-edge traversal as an SMB’s committed decision. The evidence chain is traversed the same way. The trust categories (verified, inferred, transitive, gap) apply the same structural tests. The classification states (good, incomplete, bad, conflicting) have the same thresholds.
  • The 325 bounded AI agents — run in both. The prime may use commercial Bedrock models in their own-account topology, or a customer-supplied model in their air-gapped topology. The SMB uses the shared Bedrock pool via multi-tenant SaaS. The Lx guardrails are identical across all cases. The validation pipeline, the @source lineage tagging, and the agent bounding are identical.
  • The Diode and Airlock connectors — run in both, but with different usage patterns. Primes use Diodes heavily for legacy ingest and Airlocks for cross-classification federation. SMBs initially use neither — but as soon as the SMB wins a contract that requires cross-boundary data exchange with a prime, the Airlock connector is already available in the platform they are already running. No migration, no retooling, no additional vendor.
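The write pattern running through the bullets above (immutable JSON writes, @source provenance on every write, debounced aggregation) can be sketched in a few lines. This is an in-memory approximation under invented names (`LxWrite`, `record_write`, `debounce_ready`); the real kernel uses EventBridge and durable immutable storage, which this sketch does not attempt to model.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass(frozen=True)   # frozen: the write is immutable once created
class LxWrite:
    entity_id: str
    layer: str            # "L0" (intent) through "L12" (disposal)
    payload: dict
    source: str           # @source provenance: "diode:teamcenter", "native:ui", "library:extract"
    written_at: float

def record_write(log: list, entity_id: str, layer: str,
                 payload: dict, source: str) -> LxWrite:
    """Append-only: a write is serialised to JSON and never updated in place."""
    w = LxWrite(entity_id, layer, payload, source, time.time())
    log.append(json.dumps(asdict(w)))
    return w

def debounce_ready(write_times: list, quiet_seconds: float, now: float) -> bool:
    """Aggregation fires only after the write stream has gone quiet."""
    return bool(write_times) and (now - max(write_times)) >= quiet_seconds

log: list = []
w1 = record_write(log, "REQ-001", "L0", {"text": "shall operate at -40C"}, "diode:doors")
w2 = record_write(log, "REQ-001", "L0", {"text": "shall operate at -40C to +70C"}, "native:ui")
```

Because the record is frozen and the log is append-only, the second write does not overwrite the first; the graph's current state is an aggregation over both, each carrying its own @source.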

The seven primitives compose into a single coherent kernel, and the single kernel runs both postures. This is what no legacy vendor can match, and this is what the seven preceding whitepapers have been building toward.

3.2 What the prime gets that the legacy stack could never deliver

For the tier-1 prime in the orchestrator posture, Clarity delivers a specific set of capabilities that the legacy stack has been unable to provide in twenty years of operation:

  • Unified view across all legacy tools — the first time the programme director can ask “what is the current state of the design?” and get a single answer in seconds rather than a multi-week reconciliation exercise.
  • Structural decision-intelligence scoring — DeZolve truth vectors computed over the legacy-ingested data, giving the prime the first real measure of how well each of their decisions is defended by the evidence their own legacy stack actually contains.
  • Digital thread forward and backward — L0 intent propagated forward to L12 disposal via structural references, L10 telemetry and L11 in-service feedback back-propagated to the originating L5 decisions via @source links.
  • Sixteen BOM views on unified CI graph — eBOM from the PLM, mBOM from the ERP, dBOM from the MES or deployment tracker, sBOM from the software repo, all rendered as filtered queries over a single CI graph rather than as separate trees in separate tools.
  • Invariant enforcement across tool boundaries — three-ring invariants (annotation, approval, solver) operating on the unified Lx graph, catching violations that no individual legacy tool could detect because the invariant depended on data spanning several tools.
  • Cross-classification exchange as a structural property — Airlock connectors handling cross-nation, cross-classification, cross-tenant federation with three independent enforcement layers, replacing the manual redaction and USB-stick processes that currently dominate.
  • Decision audit that is structural, not reconstructive — any decision’s DeZolve truth vector is available in minutes, not months, for regulatory audit or post-incident investigation.
  • AI participation that is structurally visible — every AI-generated contribution carries lineage tagging, and the truth vector reflects AI participation at its correct weight, giving safety reviewers a queryable view of where AI has contributed to safety-critical decisions.
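The structural-audit claim can be made concrete with a toy classifier. The trust categories (verified, inferred, transitive, gap) and classification states (good, incomplete, bad, conflicting) come from the text; the mapping rule below is invented for illustration and is not the DeZolve algorithm, which traverses a fifteen-node, twenty-six-edge graph rather than a flat edge list.

```python
def classify_decision(edges: list) -> str:
    """Toy mapping from evidence edges to a classification state.

    Each edge is a (trust_category, supports_decision) pair, where
    trust_category is one of: verified, inferred, transitive, gap.
    The rule below is an illustrative placeholder, not DeZolve's.
    """
    trusts = {trust for trust, _ in edges}
    supports = [s for _, s in edges]
    if "gap" in trusts or not edges:
        return "incomplete"      # some evidence edge could not be resolved
    if any(supports) and not all(supports):
        return "conflicting"     # evidence both supports and contradicts
    if not any(supports):
        return "bad"             # every resolved edge contradicts the decision
    return "good"                # every edge resolved and supporting

# A decision fully backed by resolved, supporting evidence:
print(classify_decision([("verified", True), ("inferred", True)]))  # good
```

The point of the sketch is the shape of the computation: the state is a function of the graph, so it is available in minutes for any decision, with no reconstructive narrative in the loop.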

None of these are available in any legacy PLM, ERP, MES, MRO, or EAM stack, from any vendor, at any price. The prime has been paying for twenty years of software that cannot produce them. Clarity produces them from day one of the orchestrator deployment, without touching the legacy stack.

3.3 What the SMB gets that the shadow ecosystem could never deliver

For the SMB in the replacement posture, Clarity delivers a different but equally structural set of capabilities that the shadow ecosystem has been unable to provide:

  • An actual engineering data model — Lx graph with thirteen typed layers, not SharePoint folders.
  • Structural requirements traceability — every requirement is a first-class node in L0 with forward links to L2 options and backward links to L0 needs, computed as a query rather than maintained by hand.
  • BOM management that scales — sixteen BOM views over a single CI graph from day one, even for a ten-person SMB.
  • Change management that cannot be bypassed — every change is an immutable write with @source provenance, and the three-ring invariant enforcement catches violations at annotation time rather than reconstruction time.
  • Audit trail admissible to tier-1 primes — the same audit trail the primes use, available to the SMB at SMB prices, so that the SMB can credibly pitch into regulated supply chains.
  • Template-driven programme startup — the Library’s shared templates, taxonomies, and framework overlays mean a new SMB programme starts from a populated scaffold rather than an empty spreadsheet.
  • Bounded AI participation — the same 325-agent architecture the defence primes use, available in multi-tenant SaaS for the SMB, with the same structural bounding and the same provenance visibility.
  • An upgrade path to regulated-market participation — when the SMB wins a contract with a tier-1 prime, the Airlock connector is already available in the platform they are already running, with no migration, no retooling, and no additional vendor engagement.
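The "computed as a query" property in the traceability bullet is worth making concrete, since it is the difference between a matrix someone maintains and a matrix that falls out of the graph. A minimal sketch, with invented node IDs and an invented edge list rather than Clarity's Lx schema:

```python
from collections import defaultdict

# Toy Lx graph: edges from needs and requirements forward to options.
# In the real platform these links carry @source provenance; here they are bare.
edges = [
    ("NEED-01", "REQ-001"),
    ("REQ-001", "OPT-A"),
    ("REQ-001", "OPT-B"),
    ("NEED-02", "REQ-002"),
]

forward = defaultdict(set)
backward = defaultdict(set)
for src, dst in edges:
    forward[src].add(dst)   # forward trace: need -> requirement -> option
    backward[dst].add(src)  # backward trace, derived from the same edges

def trace_forward(node: str) -> set:
    """Transitive closure downstream of a node: the RTM as a query."""
    seen, stack = set(), [node]
    while stack:
        for nxt in forward[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(trace_forward("NEED-01"))   # {'REQ-001', 'OPT-A', 'OPT-B'}
print(backward["REQ-001"])        # {'NEED-01'}
```

Because both directions are derived from the same edge set, the forward trace and the backward trace cannot drift out of sync, which is exactly the failure mode of a hand-maintained spreadsheet RTM.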

None of these are available in SharePoint, Excel, Jira, Confluence, or any combination of them. The SMB has been running the shadow ecosystem for forty years because the alternative was unaffordable. Clarity is the first alternative that is simultaneously affordable and structurally complete.

3.4 The commercial consequence — a market nobody has served

The commercial consequence of the dual-posture architecture is that Clarity addresses two markets simultaneously that no previous vendor has addressed with a single product:

  • The tier-1 prime market — currently served by Teamcenter, Windchill, ENOVIA, SAP, Oracle, DOORS, Polarion, Jama, and the specialist tools — where Clarity enters as a non-replacing orchestrator above the existing stack. The prime keeps their sunk investment, their certified personnel, their audited workflows, and their regulatory attestations. Clarity delivers the decision-intelligence, digital-thread, and unified-BOM capabilities the legacy stack has failed to produce in two decades.
  • The SMB and Tier 2–4 supplier market — currently served by SharePoint, Excel, shared drives, and Jira because the enterprise suites are structurally unaffordable — where Clarity enters as a same-day-deployment replacement at a fraction of the legacy first-year total cost of ownership. The SMB gets the same kernel the primes get, with no mandatory SI, no dedicated infrastructure, and no specialist headcount lines.

The tier-1 prime market has been well-served commercially for forty years — at the cost of architectural stasis and a growing gap between what the primes need and what the legacy stack produces. The SMB market has been commercially under-served for forty years because no vendor has been able to make the economics work. Clarity is the first product in the history of the category to address both markets with a single codebase, because Clarity is the first product in the history of the category to build the right kernel for the job.

3.5 Why legacy vendors cannot respond

This section is deliberately blunt, because the architectural point is load-bearing. The legacy enterprise-software vendors cannot respond to the dual-posture architecture for five specific structural reasons, each of which is a consequence of the process-first kernel decision they made in the 1980s and 1990s.

  • They cannot shed the customisation layer. The customisation apparatus — the configuration files, the workflow definitions, the specialist tooling, the SI engagement — is structurally load-bearing in their kernel. Removing it would break the product. Keeping it pricing-accessible to SMBs is impossible because the SI cost is the largest line item.
  • They cannot shed the systems-integrator dependency. The SI engagement is not a cost the vendor charges; it is a cost the customer pays to a third party, and it is the primary commercial channel through which the vendor sells into tier-1 primes. Disintermediating the SI channel would collapse the vendor’s enterprise sales motion.
  • They cannot run affordably at SMB scale because their infrastructure footprint is too large. Legacy vendors’ products assume dedicated infrastructure, dedicated staff, and dedicated training. None of those assumptions hold at SMB scale. The product’s minimum operating cost is above the SMB’s total engineering-software budget.
  • They cannot retrofit a data-first kernel underneath their process-first product. The sibling whitepaper Event-Driven by Kernel, Not by Feature describes this in full. The retrofit would require invalidating every customisation every existing customer has ever made, which is a commercial impossibility for a vendor with a tier-1 prime install base.
  • They cannot maintain two codebases indefinitely. Several legacy vendors have attempted to launch “lightweight” or “cloud-native” editions of their products for the SMB market. Every attempt has either converged back to the heavyweight kernel (because the lightweight version could not deliver the data model) or has been cancelled (because the two-codebase maintenance burden was unsustainable). Neither outcome solves the market-split problem.

The five reasons are not criticisms of the legacy vendors’ engineering teams. The engineers at the legacy vendors are competent; many of them are excellent. The reasons are structural consequences of a 1980s architectural decision that cannot be reversed without abandoning the customer base the vendors depend on. Clarity did not have that constraint, because Clarity was built post-COVID with forty years of hindsight about what the right kernel actually looks like. The kernel is the difference, and the kernel is what the seven preceding whitepapers have been describing.

3.6 Closing the series

The seven preceding whitepapers in the Clarity technical series were not written in an arbitrary order. They were written to build an argument, primitive by primitive, toward the commercial conclusion this paper presents. Each primitive stood on its own and could be read in isolation. Together they describe a single coherent kernel — the Lx substrate, the event-driven architecture, the DIKW climb to Knowledge and Wisdom, the sixteen-BOM CI graph, the thirteen lifecycle phases, the Diode and Airlock connectors, the DeZolve Decision Intelligence Framework, and the 325 bounded AI agents — and the argument of the series is that this kernel, and only this kernel, can deliver the dual-posture architecture that the forty-year split market requires.

Every prior whitepaper has pointed at the same structural answer in different language. Breaking the DIKW Ceiling argued that the engineering-software industry has been stuck at the Data and Information layers for four decades and that the Knowledge and Wisdom layers require a different kernel. Thirteen Lifecycle Phases, One Graph argued that phase-gate governance has to sit on typed data, not on workflow. Sixteen BOM Views on One CI Graph argued that the sixteen BOMs every programme needs are queries, not stored trees. Event-Driven by Kernel, Not by Feature argued that the digital thread is an architectural property of an event-driven kernel, not a module bolted onto a process-first stack. 325 AI Agents, Bounded by DeZolve argued that sovereign AI requires a typed data substrate to bound against. Diode & Airlock Connectors argued that cross-classification exchange is an infrastructure property, not a compliance feature. The DeZolve Decision Intelligence Framework argued that decision defensibility has to be a structural graph traversal, not a reconstructive narrative.

Every one of those arguments converges on the same conclusion: the kernel has to be rebuilt from zero, data-first, with the right primitives, and the kernel has to run in both postures simultaneously to serve both halves of the engineering market. Clarity is the rebuild. This whitepaper is the closing argument. And the eight-paper series, taken as a whole, is the public record of what the next twenty years of engineering software is actually going to look like, for primes and SMBs alike.


Conclusion — one kernel, two postures, one market finally whole

For forty years the engineering-software industry has been split in half. The tier-1 primes have had enterprise PLM, ERP, MES, MRO, and EAM suites that cost hundreds of thousands to millions of first-year TCO, locked them into twelve-to-twenty-four-month implementation timelines, required standing armies of certified consultants, and failed to deliver the digital thread, the decision intelligence, and the unified evidence chain the primes have been paying for. The SMBs and Tier 2–4 suppliers have had nothing — no engineering data model, no structural governance, no audit admissibility, no template reuse, no cross-boundary exchange, because the enterprise suites were unaffordable and the shadow ecosystem was structurally incapable.

Nothing has bridged the two. Every attempt from the legacy-vendor side has failed for the same architectural reason: their kernels were built process-first in the 1980s and 1990s, and the process-first kernel cannot run affordably at SMB scale or flexibly enough to orchestrate above a tier-1 legacy stack without becoming a second product line.

Clarity is the architectural alternative. One kernel — the Lx data substrate, the event-driven pipeline, the DeZolve decision-intelligence layer, the Library subsystem, the three algorithmic engines, the 325 bounded AI agents, the Diode and Airlock connectors, the sixteen BOM views, the thirteen lifecycle phases — deployed in two postures. The orchestrator posture for tier-1 primes with sunk legacy investment, running alongside and above the existing PLM / ERP / MES / MRO / EAM stack, consuming legacy data through Diodes, delivering the digital thread and decision intelligence the legacy stack cannot produce. The lightweight-replacement posture for SMBs and Tier 2–4 suppliers without a legacy stack, running without any legacy systems underneath, same-day deployment, zero systems-integrator, at a tiny fraction of the legacy first-year total cost of ownership, and delivering the same kernel the defence primes get.

The two postures are not two products. They are one codebase, one data model, one kernel, one Lx substrate — deployed in two topologies at the customer’s choice. The kernel does not know which posture it is running in. The customer chooses the topology at deployment time, and Clarity runs. The architectural move is impossible for legacy vendors to replicate because the process-first kernel they built in the 1980s cannot be retrofitted into data-first shape without invalidating every customer they have, and the commercial model they sell through cannot be disintermediated without collapsing the enterprise sales motion. Clarity has neither constraint. Clarity was built post-COVID, with forty years of hindsight, and the kernel was built for exactly this job.

That is what The Orchestrator — and the Lightweight Replacement means. It is not a pricing strategy. It is not a product-line decision. It is the natural consequence of the data-first kernel the previous seven whitepapers in this series have each described from a different angle. One kernel, two postures, one market finally whole — and the eight-paper series is the public record of how Clarity got there and what the next twenty years of engineering software is going to look like for every organisation in the engineering profession, at any tier, in any industry, in any jurisdiction.

The market has been split for forty years. Clarity closes the split. The series ends here.

One thread. 13 verticals. 16 BOMs. 25 USPs.

The only complete digital thread for regulated programmes, powered by the patent-pending DeZolve Decision Intelligence Framework. Sovereign deployment under your own AWS account and encryption keys, at one-tenth the cost of the enterprise alternatives.