Digital Thread

How Programmes Actually Work: Non-Linear Entry and the Lx Chain

Phase gate frameworks assume programmes start at requirements. Real programmes start in the middle, inherit decisions they didn't make, and must trace evidence both forward and backward simultaneously. Three decades of delivery across defence, nuclear, automotive, advanced manufacturing, and government shaped a data model designed for this reality — not the textbook version of it.

Published 26 April 2026 · 28 min read · Thread: Process & Lifecycle · Data & Provenance

TL;DR

Every phase gate framework in use today — the three-year and four-year automotive models (MDS, CDS, MMDS), defence through-life governance models, the Integrated Support Business Model at a major UK prime, commercial aerospace production gating, nuclear commissioning sequences, and dozens of programme-specific variants — encodes the same founding assumption: the programme starts at requirements.

That assumption is correct for approximately zero per cent of the programmes that Clarity is actually used on.

Real programmes start where they are. A brownfield system upgrade starts at L7 (as-built) because the product already exists. An AUKUS capability programme starts at L5 (imposed political decision) because the decision to acquire has already been made above the engineering team’s pay grade. A defence supply chain supplier starts at L2 (options study) because the prime’s request for proposal specifies the interface and asks for compliant options, not a blank-sheet design. A medical device manufacturer facing a field failure starts at L10 (as-operated telemetry) because reality has produced evidence that the L0 requirements and the L2 design trade study need revisiting.

The Lx chain was designed for every one of these entry patterns — seven canonical patterns in all — simultaneously. Entry at any layer is structurally supported. Evidence propagates both forward (from entry point toward L12) and backward (from entry point toward L0, retrospectively tracing the evidence chain). The overlay system applies at every layer and every entry point — cost, schedule, TRL, regulatory, supplier, security, quality — because real programmes need all of those dimensions regardless of where the engineering thread begins.

The seven canonical non-linear entry patterns documented in this whitepaper are not edge cases. They are the modal case. Sequential L0-to-L12 programmes exist — but they are less common than any phase gate framework’s documentation acknowledges.

If you only read one sentence: the question is not “what phase is the programme in?” — it is “what does the programme know, and how do we build the evidence chain forward and backward from there?”


The bargain on offer

Thirty years of programme delivery across defence, nuclear, automotive, ecommerce, advanced manufacturing, and government on three continents produced one inescapable conclusion: the problem is never a shortage of tools. It is always a shortage of connection.

Eighteen simultaneous defence programmes on mandated PLM — 800 data objects, 2,000-plus requirements across 30 stakeholder groups — with users resorting to nightly spreadsheet exports because the tool could not generate the reports the business needed. A military systems integration project generating 13,000 nodes and relationships in under two hours using an MBSE tool that produced a beautiful model disconnected from every operational system the model was supposed to govern. A two-billion-dollar-per-year government ERP migration, assessed as too risky to touch, where the real risk was not in the systems themselves but in the absence of a connected model that could show what the systems depended on and what depended on them. An ecommerce digital thread at a Fortune 500 company where every answer was another fragmented dashboard built on untrusted data, and reverse-engineering the broken data model exposed formulas so convoluted that no individual had understood them for years.

In every one of these programmes, the structural problem was the same: no single model connected what was known to what needed to be decided, and no tool could build that connection from wherever the programme happened to be at the moment the model was needed.

“The lesson from thirty years is not which tool to use.
It is that the tool must meet the programme where it is —
not where the tool's documentation assumes it should be.”

The Lx chain was designed from the first commit to meet programmes where they are. This whitepaper has three sections:

  • Section 1 — the seven canonical non-linear entry patterns: what they are, which programmes they affect, and why every legacy tool fails them.
  • Section 2 — the Lx chain properties that make non-linear entry tractable: forward dependency, evidence back-propagation, the overlay system, and the structural difference between a data model and a process model.
  • Section 3 — three decades of field evidence: the PLM ceiling, the MBSE isolation problem, the ERP/MES/MRO/EAM vertical-slice trap, and the DIKW elevation that every legacy tool fails to make.

Section 1 — The seven canonical non-linear entry patterns

The following seven patterns cover the modal cases for non-linear programme entry. They are drawn from documented programme experience across defence, nuclear, aerospace, automotive, medical devices, ecommerce, and government. Each is a standard structural challenge that legacy tools handle badly — not because their developers were incompetent, but because they were designed for sequential entry and non-linear entry is genuinely architecturally different.

Programme pattern | Entry layer | Lx chain direction | Who it affects | The legacy failure mode
----------------- | ----------- | ------------------ | -------------- | -----------------------
Through-life support concept in bid | L2 option | L0 → L2 → L3 → L5 → L10 | Support contracting; platform ops teams | L10 (as-operated) is a separate system with no connection to the design model; support concept cannot be traced to design decisions
Decommissioning plan mandated at design | L2 option | L0 → L2 → L3 → L4 → L5 → L12 | Regulators; nuclear, offshore, and defence disposal planning | L12 (as-disposed) does not exist in any legacy tool; regulatory constraints on disposal are disconnected from the design trade study
Long-lead item for future upgrade | L2 option | L0 → L2 → L3 → L5 → L11 | Supply chain; upgrade programme leads | Cannot commit procurement on bounded incomplete analysis; legacy tools assume full design must precede any procurement commitment
Mandated-buy COTS component or GFE | L5 imposed decision | L5 → L2 → L1 → L0 → L3 → L4 → L6/L10/L11 | Integration architects; sustainment planners | External decisions imposed above the engineering team’s level are untraceable in legacy tools; no model for backwards flow from a political or procurement mandate
Incremental deployment capability gates | L2 option | L0 → L2 → L3 → L4 → L5 → L9 → L10 | Gate planners; operational release teams | IOC/FOC boundary is a process milestone, not a data entity; no structural model for incremental deployment readiness
Field failure feeding back to design revision | L10 observed state | L10 → L7 → L6 → L4 → L0 → L2 → L3 → L5 | In-service support teams; design authority | Feedback loop is severed; as-operated data lives in MRO/EAM systems that have no connection to the design authority’s requirements management tool
Inherited decision with mandated baseline (AUKUS) | L5 political decision | L5 → L4 (mandated) → L2 → L0 (retrospective) → L3 (constrained) | Strategic programme offices; alliance partners | Legacy MBSE tools assume L0 precedes L5; cannot model a programme where the highest-level decision (L5) precedes and constrains all lower-level engineering (L0–L4)

Pattern 1 — Through-life support concept in bid

A defence prime submitting a bid for a major platform upgrade is required by the procuring authority to include a through-life support concept as part of the tender. The platform has not been designed yet. The as-operated state — what the platform will look like in year 15 of its operational life, what its maintenance schedule will be, what its obsolescence risks are — does not exist. The design team is working at L2 (options) and L3 (analysis). The support team is working at L10 (operational model).

In a legacy PLM system, these are two separate workstreams with no structural connection. The support concept is a Word document that references the design trade study by name, not by data. Changes to the design trade study do not propagate to the support concept automatically. The final bid document is reconciled manually.

In Clarity, the L2 option carries l10SupportConcept attributes as part of the overlay assessment system. The support team’s operational model references the same L2 entities that the design team is populating. When the design trade study changes — when L2.3 (a service option) is replaced by L2.7 (an alternative with a different maintenance interval) — the support concept view updates automatically because it is computed from the same entities, not assembled from a separate document.
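The computed-view idea can be sketched in a few lines. The entity and field names below (L2Option, maintenanceIntervalMonths, supportConceptView) are illustrative assumptions for this sketch, not the actual Clarity schema:

```typescript
// Hypothetical Lx entity shape — illustrative only, not the real Clarity schema.
interface L2Option {
  id: string;                        // e.g. "L2.3"
  selected: boolean;
  maintenanceIntervalMonths: number; // feeds the support concept view
}

// The support concept is computed from the live L2 option set,
// so a change to the selected option changes the view with no manual step.
function supportConceptView(options: L2Option[]): { optionId: string; interval: number }[] {
  return options
    .filter(o => o.selected)
    .map(o => ({ optionId: o.id, interval: o.maintenanceIntervalMonths }));
}

const optA: L2Option = { id: "L2.3", selected: true, maintenanceIntervalMonths: 6 };
const optB: L2Option = { id: "L2.7", selected: false, maintenanceIntervalMonths: 12 };
const options = [optA, optB];

// Design team swaps L2.3 for L2.7: the support view follows automatically.
optA.selected = false;
optB.selected = true;
```

The point of the sketch is the absence of a reconciliation step: the support view is a function of the option set, never a copy of it.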

Pattern 2 — Decommissioning plan mandated at design

A nuclear new-build programme is required by the regulator to include a decommissioning plan as a formal deliverable in the design phase. The plant is forty years from decommissioning. The materials, waste streams, and disposal methods that will be relevant in forty years are partially uncertain. But the regulator requires the plan at design, and the plan must be traceable to the design decisions that create the decommissioning obligation.

No legacy PLM or MBSE tool has a native concept of L12 (as-disposed). Decommissioning plans are managed as documents, tagged to programme phases, and stored in a document management system that has no structural connection to the design model. If a design decision changes the decommissioning obligation — if the selected material changes from Steel A to Steel B and the waste classification changes as a result — updating the decommissioning plan requires a manual notification process, not a propagated change.

In Clarity, L12 is a first-class layer with its own typed schema. The @source provenance chain traces from the L12 disposal obligation back to the L2 design option that created it and the L0 regulatory constraint that defined the obligation. When the L2 design choice changes, the L12 disposal assessment is flagged for review — structurally, not by email.

Pattern 3 — Long-lead item for future upgrade

A complex electronic system is under development. The programme lead has identified that a specific ASIC — with a 36-month lead time — must be committed to procurement before the full system design is complete, or the programme will slip by nine months. The decision to commit procurement cannot wait for the L3 analysis to be completed, because the L3 analysis depends on subsystem designs that are still in progress.

Legacy tools cannot model this scenario because they assume procurement follows design. The procurement commitment is a decision (L5) that precedes the analysis (L3) that would normally justify it. The commitment constrains the remaining design options (L2) — removing any option that is incompatible with the committed ASIC. And the commitment creates a future upgrade obligation (L11) that must be tracked against the ASIC’s planned obsolescence date.

In Clarity, this is modelled as an L5 decision with a constraintType: 'procurement-commitment' attribute, linked to the L2 option set as a constraint, linked to the L11 update plan as a forward obligation. The incomplete analysis is represented by the truth vector coverage — the L5 decision record shows that it was made with partial L3 coverage and documents the residual risk explicitly. This is not a workaround. It is what L5 was designed for: decisions made under uncertainty, with their evidence coverage explicitly recorded.
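As a sketch, a decision record of this kind might look like the following — constraintType comes from the text above, but every other field name and value here is a hypothetical stand-in:

```typescript
// Hypothetical L5 decision-record shape; field names are illustrative.
interface L5Decision {
  id: string;
  constraintType: "procurement-commitment";
  constrainsOptions: string[];   // L2 options removed by the commitment
  forwardObligation: string;     // link to the L11 update plan
  l3Coverage: number;            // fraction of supporting analysis complete, 0..1
  residualRisk: string;
}

const asicCommit: L5Decision = {
  id: "L5.14",
  constraintType: "procurement-commitment",
  constrainsOptions: ["L2.2", "L2.5"],          // options incompatible with the committed ASIC
  forwardObligation: "L11.3-asic-obsolescence-review",
  l3Coverage: 0.6,                              // decision made with partial analysis
  residualRisk: "subsystem loading unverified for remaining L3 scope",
};

// The evidence gap is a computed property, not a narrative footnote.
const coverageGap = 1 - asicCommit.l3Coverage;
```

The design choice this illustrates: the incompleteness of the analysis is a first-class attribute of the decision, so the residual risk survives as data rather than as a sentence in a meeting record.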

Pattern 4 — Mandated-buy COTS component (GFE)

A satellite bus programme is told by the procuring authority that it must use a specific power management unit (PMU) as Government-Furnished Equipment. The PMU was selected for political reasons — it is manufactured in a Five Eyes nation that required a work-share commitment. The engineering team had not included this PMU in any of their L2 option studies. Its interface characteristics are now a fixed constraint on the rest of the design.

This is the most common form of non-linear entry in defence and government programmes. The L5 decision (the PMU commitment) precedes and constrains the L2 trade study, the L1 architecture, and the L0 interface requirements. In a legacy MBSE tool that assumes sequential entry, this scenario is handled by retroactively creating requirements that describe the PMU’s characteristics and adding them to the requirements database as if they had been there from the start. The decision that created those requirements — the political commitment — is not captured anywhere in the model. It lives in a meeting record that nobody will be able to find in ten years.

In Clarity, the L5 imposed decision is a first-class entity. It is traceable to its source (the procuring authority’s directive), it carries the authority weight that makes it non-negotiable, and it propagates forward through the L1, L2, and L0 layers as a set of typed constraints. The retrospective L0 requirements that formalise the PMU’s interface characteristics are explicitly linked to the L5 decision that created them — the provenance is structural, not reconstructed.

Pattern 5 — Incremental deployment capability gates

A defence platform is planned for incremental deployment: Initial Operating Capability (IOC) at 24 months with core functions, Full Operating Capability (FOC) at 48 months with full functionality. The procuring authority treats IOC and FOC as programme milestones. The engineering team treats them as separate configurations of the platform — each with its own as-deployed state (L9), its own operational configuration (L10), and its own configuration baseline (L4).

Legacy gate frameworks treat IOC and FOC as phase milestones — deliverable dates in a project plan. They do not treat them as distinct system configurations with their own data models. The result is that evidence for IOC is assembled from the same documents as evidence for FOC, with flags indicating which evidence items apply to which gate. The structural connection between the IOC configuration, its as-deployed state, and its operational profile is not captured anywhere.

In Clarity, IOC and FOC are separate L4 baseline records, each with their own lxLayersCoverage matrix recording which Lx layers were captured at each gate. The L9 (as-deployed) records for IOC and FOC are separate instances linked to their respective L4 baselines. The coverage gap between IOC and FOC — the functionality that is planned but not yet deployed — is visible in the L4 comparison view without any manual reconstruction.
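The IOC/FOC comparison reduces to a set difference over the coverage matrices. In this sketch, lxLayersCoverage is the name used in the text; its shape as a set of layer identifiers is an assumption:

```typescript
// Hypothetical baseline shape: which Lx layers each gate's baseline covers.
interface L4Baseline {
  gate: "IOC" | "FOC";
  lxLayersCoverage: Set<string>;
}

// The gap between two gates is a set difference over the coverage matrices,
// not a manually reconciled document comparison.
function coverageGapBetween(a: L4Baseline, b: L4Baseline): string[] {
  return Array.from(b.lxLayersCoverage).filter(layer => !a.lxLayersCoverage.has(layer));
}

const ioc: L4Baseline = { gate: "IOC", lxLayersCoverage: new Set(["L0", "L2", "L4", "L9"]) };
const foc: L4Baseline = { gate: "FOC", lxLayersCoverage: new Set(["L0", "L2", "L4", "L9", "L10", "L11"]) };
```

Because both gates are records over the same layer vocabulary, the "planned but not yet deployed" set is always computable, at any point between the two gates.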

Pattern 6 — Field failure feeding back to design revision

An in-service maritime platform experiences a recurring bearing failure in the propulsion system. The in-service support team logs the failure in the MRO system. The maintenance data identifies the bearing as a standard component specified in the L6 (as-designed) configuration. The design authority holds the L0 requirement that specified the bearing’s load rating. The configuration management team holds the L4 baseline that records the design decision to use this bearing.

In a legacy toolchain, these four pieces of information are in four separate systems — MRO, PLM, DOORS, and change management — with no structural connection. Reconstructing the evidence chain from the field failure to the original design decision takes weeks of investigative work. The person who made the original design decision may no longer be with the organisation. The design decision may exist only in a meeting record, not in the requirements database.

In Clarity, the @source provenance chain on the L6 as-designed record traces back to the L4 baseline that captured the specification, the L5 decision that approved the specification, and the L0 requirement that defined the load rating. The field failure is logged as an L10 observed-state record linked to the L7 as-built record for the affected component. The gap between the L10 observed state (failure under load X) and the L0 requirement (specified for load X) is visible in the traceability view without any manual investigation. The design review team can see, immediately, what the original design intent was and whether it was correctly specified.
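Mechanically, that trace is a walk over the @source links. The record identifiers below are invented for the sketch; the one-step-back provenance shape is an assumption about how the chain is stored:

```typescript
// Hypothetical provenance map: each record's @source points one step back.
const source: Record<string, string | undefined> = {
  "L10.failure-0042":   "L7.bearing-asbuilt",
  "L7.bearing-asbuilt": "L6.bearing-spec",
  "L6.bearing-spec":    "L4.baseline-12",
  "L4.baseline-12":     "L5.decision-7",
  "L5.decision-7":      "L0.req-load-rating",
  "L0.req-load-rating": undefined,   // chain terminates at the originating requirement
};

// Walk the chain from the field failure back to its originating requirement.
function traceBack(id: string): string[] {
  const chain = [id];
  let next = source[id];
  while (next !== undefined) {
    chain.push(next);
    next = source[next];
  }
  return chain;
}
```

The weeks of investigative work in the legacy toolchain collapse into this traversal precisely because every record carries its back-link at write time, not at investigation time.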

Pattern 7 — Inherited decision with mandated baseline (AUKUS)

The AUKUS submarine programme begins with a political commitment at the highest level: the three nations will work together to deliver nuclear-powered submarines to Australia. This is an L5 decision of sovereign scope. It precedes any engineering analysis, any options trade study, any requirements document, and any architecture model. Everything that follows — the L4 mandated baseline (the selected submarine design), the L2 options (Australia’s specific capability configurations), the L0 requirements (the interface standards imposed by the alliance agreement), and the L3 analysis (the trade studies bounded by the mandated design) — is constrained by the L5 decision.

No existing MBSE tool can model this programme structure. They all assume L0 precedes L5. The founding design decision of the programme precedes the requirements that should have informed it. The engineering team’s job is not to determine whether the decision was correct — it is to build the evidence chain that demonstrates the decision is executable under the imposed constraints. That evidence chain must propagate backward (from L5 through L4 to the mandated baseline) and forward simultaneously (from L5 through L2, L1, L0 to the derived requirements) — and both directions must remain live and updateable as the programme evolves over decades.

In Clarity, the L5 political decision is the root entity. Every L2 option, every L1 architecture decision, every L0 requirement that follows is traceable to this root. The programme’s structural integrity — the evidence that every engineering choice is within the L5 mandate — is computable from the Lx model at any point in the programme’s life. In 2040, when the original L5 decision has been followed by fifteen years of L4 baseline changes, the question “is the current configuration still within the original mandate?” is answerable from the model, not from institutional memory.


Section 2 — The Lx chain properties that make non-linear entry tractable

2.1 Forward dependency and evidence back-propagation

The Lx chain has two structural properties that together make non-linear entry tractable. Forward dependency means that every entity in Lx layer N can declare dependencies on entities in Lx layers N+1 through N+k — creating a live dependency graph that propagates downstream when upstream entities change. Evidence back-propagation means that every entity in Lx layer N can carry @source links to the originating entities in layers below N — tracing the evidence path back to its origin regardless of whether the programme started there or arrived there retrospectively.

These two properties together allow a programme that starts at L5 (Pattern 7) to build its evidence chain in both directions simultaneously: forward through the L6–L12 implementation layers as the programme executes, and backward through L4–L0 as the retrospective analysis justifies the inherited decision. The evidence chain does not have to be complete to be useful. An L5 decision with partial L3 coverage is better than an undocumented decision with no coverage, because the partial coverage is visible, the gaps are identified, and the residual risk is structurally recorded rather than hidden in the narrative.
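Forward propagation can be sketched as a breadth-first traversal of the dependency graph: changing one entity flags everything downstream of it for review. The entity names and edge list below are hypothetical:

```typescript
// Hypothetical dependency edges: entity -> entities that depend on it.
const dependents: Record<string, string[]> = {
  "L5.mandate":  ["L4.baseline", "L2.options"],
  "L4.baseline": ["L6.design"],
  "L2.options":  ["L3.analysis"],
};

// Forward propagation: a change to one entity flags all transitive dependents.
function flagForReview(changed: string): Set<string> {
  const flagged = new Set<string>();
  const queue = [...(dependents[changed] ?? [])];
  while (queue.length > 0) {
    const id = queue.shift()!;
    if (!flagged.has(id)) {
      flagged.add(id);
      queue.push(...(dependents[id] ?? []));
    }
  }
  return flagged;
}
```

Back-propagation is the same traversal run over the @source links in the opposite direction; together the two traversals give the "both directions simultaneously" behaviour the pattern requires.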

2.2 The overlay system at every layer and entry point

Every entry pattern benefits from the same set of cross-cutting assessment dimensions. A programme that starts at L7 (brownfield as-built) needs cost assessment (Lx.10), supply chain risk assessment (Lx.5), TRL assessment (Lx.4), and regulatory compliance mapping (Lx.2) — not just design plane data. These overlay dimensions apply regardless of the entry layer because they are orthogonal to the Lx chain, not embedded within any specific layer.

The overlay system’s orthogonality is the architectural property that makes it work for non-linear entry. An overlay assessment is not a property of the L2 options study — it is a property of any entity in any layer at any point in the programme’s life. A field failure at L10 carries a cost overlay (what is the cost of the failure and the correction?), a supply chain overlay (is the replacement component still in production?), a regulatory overlay (does the failure trigger a mandatory incident report?), and a programme management overlay (does the correction require a programme schedule change?). All four dimensions are available immediately, at the point of entry, without waiting for the full L0–L5 design plane to be populated.
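Structurally, orthogonality just means the overlay list hangs off any entity regardless of its layer. A minimal sketch, with invented identifiers and assessment text:

```typescript
// Overlays attach to any entity at any layer — orthogonal to the Lx chain itself.
interface Overlay {
  dimension: string;
  assessment: string;
}

interface LxEntity {
  id: string;
  layer: string;
  overlays: Overlay[];
}

// A field failure at L10 carries all four dimensions at the point of entry,
// with no requirement that the L0–L5 design plane be populated first.
const fieldFailure: LxEntity = {
  id: "L10.failure-0042",
  layer: "L10",
  overlays: [
    { dimension: "cost",         assessment: "correction cost estimated, pending approval" },
    { dimension: "supply-chain", assessment: "replacement component end-of-life risk" },
    { dimension: "regulatory",   assessment: "mandatory incident report triggered" },
    { dimension: "programme",    assessment: "schedule change under review" },
  ],
};

const dims = fieldFailure.overlays.map(o => o.dimension);
```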

2.3 Data-first, not process-first

The most important structural distinction between the Lx chain and a phase gate framework is that the Lx chain is a data model, not a process model. A phase gate framework defines states (phases), transitions (gates), and deliverables (artefacts that must exist before the gate can pass). It is a workflow engine. The data that the workflow engine governs lives somewhere else — in PLM, in DOORS, in the configuration management system, in SharePoint.

The Lx chain does not define states, transitions, or gate predicates. It defines typed entities with typed relationships, @source provenance on every field, and an overlay system for cross-cutting assessment. Phase gate frameworks — automotive CDS, defence ISBM, aerospace DO-178C, nuclear commissioning sequences — are views over the Lx data, not the data itself. A CDS gate passes when the evidence in the Lx model satisfies the gate predicate. The predicate is evaluated against the data, not against a milestone date or a deliverable folder.
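A gate predicate evaluated against data, rather than against a folder, can be sketched as a function over the evidence state. The coverage thresholds below are invented for illustration; they are not a real CDS gate definition:

```typescript
// The evidence state the predicate is evaluated against.
interface EvidenceState {
  layerCoverage: Record<string, number>; // fraction of each Lx layer populated, 0..1
  openHighRisks: number;
}

type GatePredicate = (s: EvidenceState) => boolean;

// Hypothetical CDS-style design gate: thresholds are illustrative assumptions.
const designGate: GatePredicate = s =>
  (s.layerCoverage["L0"] ?? 0) >= 1.0 &&
  (s.layerCoverage["L2"] ?? 0) >= 0.8 &&
  s.openHighRisks === 0;

const today: EvidenceState = { layerCoverage: { L0: 1.0, L2: 0.75 }, openHighRisks: 0 };
```

Because the predicate is a pure function of the data, "would the gate pass today?" is answerable at any time, not only at the review date.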

This inversion — from process-as-primary to data-as-primary — is what enables non-linear entry. A process model blocks entry at any phase other than Phase 1. A data model has no such constraint: you enter by contributing data at whatever layer is appropriate, and the model computes what is known, what is missing, and what can be inferred from what is present.

2.4 Framework support without framework dependency

Because phase gate frameworks are views over the Lx data — not the data itself — Clarity can support any number of frameworks simultaneously without depending on any of them. The same Lx model can generate:

  • A CDS gate deliverable pack for the automotive procuring authority
  • A STANAG-aligned programme review for the NATO programme office
  • An ISBM evidence pack for the UK prime’s internal governance
  • An AS9100 traceability matrix for the quality management system
  • An Lx.2 regulatory overlay assessment for the data protection authority


All from the same data. All current. All traceable to the same @source provenance. None requiring a separate data entry operation.

This is the distinction between framework support and framework dependency. A system that depends on CDS can only support CDS programmes. A system that supports CDS as a view over a data model can support every other framework that the data model can describe — which is every framework, because every framework is ultimately a set of questions about the programme’s evidence state.


Section 3 — Three decades of field evidence

3.1 PLM — the configuration management problem, frozen in 1985

The experience that shaped the most fundamental insight behind the Lx chain was eighteen simultaneous defence programmes running on mandated PLM. Eight hundred data objects. Two thousand-plus requirements across thirty stakeholder groups. Users who had been trained on the system, who wanted to do the right thing, who were resorting to nightly spreadsheet exports because the tool could not generate the reports the business actually needed.

The PLM vendor’s response, when the limitation was identified, was to schedule a custom report development engagement. The engagement would take six months and cost more than the annual licence. The reports would be static views over the current state — they would not propagate when data changed. The underlying data model had not materially changed since the 1980s, when PLM was designed to solve the configuration management problem of the era: tracking engineering change orders against a bill of materials in a hierarchical tree. It solved that problem well. It was not designed for a world in which four digital threads — engineering, manufacturing, spares, and bidirectional change — needed four separate BOM views that the tree architecture had no way to unify.

A corporate mechatronics integration engagement years later exposed the same structural truth: PLM’s hierarchical tree is designed for CAD file management, not for programme intelligence. Variants require parallel trees. ECAD and software artefacts do not fit the tree model. Bulk items become phantom assemblies. Reconciling the engineering BOM with the manufacturing BOM with the spares BOM required a programme of its own — a reconciliation programme whose sole purpose was to extract the truth from three separate tree structures that had been populated by three separate teams with three separate conventions.

The Lx chain sidesteps the tree entirely. Every configuration item exists once. All 16 BOM views — from eBOM through decommissioning BOM — are queries on a single CI graph. Variants are graph filters, not parallel trees. ECAD and software artefacts are typed nodes with softwareType and hardwareType attributes — not attempted fits into a tree structure designed for mechanical parts. Bulk items link directly to ERP via the pBOM query — no phantom assemblies, no manual reconciliation step.
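The "BOM views as queries" claim can be illustrated with a toy CI graph — node shape, flags, and item names here are all invented for the sketch:

```typescript
// Hypothetical configuration-item node; every CI exists exactly once in the graph.
interface CI {
  id: string;
  kind: "mechanical" | "ecad" | "software" | "bulk";
  inEBOM: boolean;
  inMBOM: boolean;
}

const graph: CI[] = [
  { id: "bearing-01", kind: "mechanical", inEBOM: true,  inMBOM: true },
  { id: "fpga-fw",    kind: "software",   inEBOM: true,  inMBOM: false },
  { id: "grease-std", kind: "bulk",       inEBOM: false, inMBOM: true },
];

// Each BOM view is a filter over the same graph — no parallel trees to reconcile.
const eBOM = graph.filter(ci => ci.inEBOM).map(ci => ci.id);
const mBOM = graph.filter(ci => ci.inMBOM).map(ci => ci.id);
```

The reconciliation programme disappears because there is nothing to reconcile: eBOM and mBOM can never disagree about an item's identity when both are projections of one node set.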

A global tech engagement that required reverse-engineering a PLM database for reporting — because the PLM vendor’s BI module had never been purchased, and when it finally was, it was slower than the spreadsheets it replaced — confirmed the same pattern. PLM solved the configuration management problem it was designed to solve in the 1980s. Adding AI to a PLM system does not change its data model. It makes it a faster PLM system. The DIKW level does not change.

3.2 MBSE — the right methodology, the wrong instrument

The methodology is sound. A military systems integration project generated 13,000 nodes and relationships in under two hours — Army force structures across twelve scenarios, eight capability groups, and eighteen stakeholders, completed in five weeks and briefed to the Chief of Army. The problem was not the notation. It was the isolation: a model in a specialist MBSE tool that could not speak to the DOORS requirements database, the PLM configuration record, or the operational systems that the model was supposed to govern.

The model was accurate and complete as an artefact. It was useless as a live operational tool because it had no structural connection to the systems that would consume its outputs. The moment the briefing ended, the model began to drift from reality. Changes to the force structure went into the DOORS database. Changes to the platform specifications went into PLM. The MBSE model was not updated because updating it required specialist tooling and a modeller who had been trained on it. By the next milestone, the model was a historical record of what had been true six months ago.

Delivering a master’s programme with 1,350 engineering students over four years confirmed the same pattern at smaller scale. The top quartile of students thought in systems — they saw the connections between requirements, architecture, options, and decisions and navigated them intuitively. The bottom quartile could not ask for help because the tool gave them no model to ask against. They knew the pieces existed but had no structural view of how the pieces connected. Removing the MBSE tool and replacing it with a connected data model — requirements as entities, architecture as typed relationships, options as alternative configurations — immediately improved the bottom quartile’s engagement because they could now see what they were navigating.

MBSE tools do not replace the need for a live, connected model. They produce a static representation of one that is perpetually out of date from the moment it is drawn. The Lx chain is a live model. Every L1 architecture diagram in Clarity is a view over the current state of the L1 entities — it cannot drift because it is not drawn; it is computed.

3.3 ERP, MES, MRO, and EAM — vertical slices with no shared floor

De-risking a two-billion-dollar-per-year government ERP migration that had previously been assessed as too risky to touch revealed the same structural problem at enterprise scale: a migration that was dangerous not because the systems were technically complex, but because nobody had a model of what the systems actually did. The stakeholders’ descriptions of the system were inconsistent. The system’s own documentation was inconsistent. Reverse-engineering the data model from the live database revealed formulas so convoluted that no individual understood them — they had accumulated over fifteen years of incremental customisation, each change made in isolation by a developer who understood only the specific change being made.

The consequence: the migration could not be de-risked by analysing the code. It had to be de-risked by building a model of what the code did — a model that connected inputs, transformations, outputs, and dependencies into a structured view that stakeholders could reason about. That model was the prototype of the Lx chain applied to an existing operational system: a representation not of the code, but of the knowledge encoded in the code, traceable back to the decisions that created it.

Owning the global ecommerce digital thread at a Fortune 500 company — where every answer was another fragmented dashboard built on untrusted data — produced the same insight at a different scale. SAP gave one BOM view. The MES gave one BOM view. The MRO gave one BOM view. They were not the same BOM. The entities did not share identity. Reconciling them required integration programmes that generated more complexity than the systems they connected. The closure of the operational loop — a field failure surfacing as active risk against the originating design requirement, in real time, without a meeting — did not exist in any of them. It was structurally impossible in a world where no two systems shared a data model.

3.4 The DIKW elevation — what every legacy tool fails to make

Co-authoring the AWS Cloud Adoption Framework and creating the Operational Excellence pillar for the Well-Architected Framework confirmed that the same structural failure exists in cloud architecture: organisations accumulate tools rather than building models. The absence of a shared knowledge layer means every team builds its own version of the truth. Adding AI to a filing cabinet makes a faster filing cabinet. It does not move the platform up the DIKW hierarchy.

The lesson was learned in a 1998 conversation with the Challenger engineer who had warned of the O-ring failure. It was confirmed in a uranium facility where the traceability model proved non-compliance while the project documentation claimed the opposite. Confirmed again in a naval programme where safety case approval ran to 214 days until the model shortened it to 31. Confirmed in every PLM deployment, every ERP migration, and every ecommerce digital thread encountered across three decades:

Decisions fail not because evidence does not exist — but because it is scattered across disconnected tools, buried in documents nobody reads, and assembled too late for anyone to change the outcome.

The non-linear entry patterns documented in Section 1 are the most common structural form this failure takes. A programme that enters at L5 (an inherited decision) and has no model for tracing that decision backward through L0 requirements and forward through L6 implementation will make a series of sub-optimal downstream decisions because it cannot see how the inherited decision constrains the space of valid options. A programme that enters at L10 (a field failure) and has no model for tracing that failure backward to its L0 originating requirement will fix the symptom and miss the root cause.

The Lx chain provides the model. Non-linear entry is not the Lx chain’s exception case. It is the case the Lx chain was designed for.


The programme intelligence case — what changes when the model meets the programme where it is

The traditional project management conversation assumes sequence: requirements first, architecture second, design third, build fourth, operate fifth. Every phase gate framework encodes this assumption. Every professional certification in programme management teaches it. Every project plan template assumes it.

The Lx chain assumes nothing. It provides a typed data model, a provenance chain, an overlay system, and a set of relationship primitives that can represent any programme — sequential or non-linear, greenfield or brownfield, top-down or bottom-up, domestic or multi-national, single-phase or incremental — from wherever that programme is at the moment the model is applied.

The seven patterns in Section 1 are not workarounds for the Lx chain’s limitations. They are use cases the Lx chain was designed to support from the first commit. The field evidence in Section 3 is not anecdote. It is the thirty-year dataset that proved those patterns are the modal case, not the edge case.

“Phase gate frameworks govern the performance of programme management.
The Lx chain governs the evidence that determines whether the performance is real.
One cannot substitute for the other.”

For a programme entering the Lx chain at any of the seven patterns described in this whitepaper, the question to answer is not “which phase are we in?” — it is “what do we know, what do we not know, and what is the cost of the gap between them?” The Lx chain answers all three structurally. The phase gate framework answers none of them — it only answers “is the deliverable folder complete?”

The difference between those two questions is the difference between programme intelligence and programme compliance theatre. Clarity provides programme intelligence. The phase gate folder can be generated from it — as a computed view over the evidence the programme has actually built, not as an artefact assembled for the review.

That is the bargain on offer. Every programme already does the work. Most programmes do not keep the evidence. Clarity keeps it — structurally, from wherever the programme started, for as long as the programme runs.

One thread. 13 verticals. 16 BOMs. 25 USPs.

The only complete digital thread for regulated programmes, powered by the patent-pending DeZolve Decision Intelligence Framework. Sovereign deployment under your own AWS account and encryption keys — at one-tenth the cost of the enterprise alternatives.