Digital Thread

Thirteen Lifecycle Phases, One Graph

The Lx model, from stakeholder intent (L0) to disposal (L12): forward dependency, evidence back-propagation, and why non-linear entry is the norm, not the exception. Clarity supports every phase gate framework, agile flavour, and architecture meta-model — and depends on none of them.

Published 9 April 2026 · 32 min read · Thread: Data & Provenance · Lifecycle & Governance

TL;DR

For forty years, engineering programmes have been governed by phase gate frameworks — the three-year and four-year automotive phase gate frameworks of the transatlantic OEM lineage (MDS, CDS, MMDS), the through-life Integrated Support Business Model (ISBM) at a major UK aerospace and defence prime, the lifecycle model used by a major Australian defence contractor, commercial-aerospace production gating, nuclear reactor commissioning gates, and dozens of programme-specific variants. They exist because long-running, repeatable, safety-critical programmes at scale cannot be run any other way. They are not optional. They are not replaceable by agile ceremonies or architecture meta-models. They are the governance instrument for billion-dollar, decade-long, regulator-facing work.

But every phase gate framework has the same architectural weakness: it is a process model pretending to be a data model. Stages, gates, deliverables, and reviews are encoded as workflow artefacts in isolated tools — some in one major PLM, some in another, some in a requirements-management tool, some in an ERP change register, some in SharePoint, most in Excel. The data that the gates are supposed to govern lives somewhere else. When reality produces exceptions — and on a defence or nuclear programme reality always produces exceptions — the gate-holders reconstruct a defensible narrative from scattered evidence, after the fact, at each review.

Clarity inverts the arrangement. Phase gates, architecture frameworks, agile sprints, waterfall milestones, and regulator check-points are all overlays over a single authoritative Lx data graph. The graph has thirteen layers (L0 stakeholder intent through L12 disposal), typed entities, explicit relationships, full @source provenance on every field, and event-driven aggregation. A CDS gate, a SAFe PI, a DoDAF viewpoint, and a through-life ISBM evidence pack are views over the same data — not competing systems of record.

Six properties make this possible, and together they are the Clarity USP for lifecycle governance:

  1. Thirteen layers, not one pipeline. L0–L5 is the design plane (intent through decisions); L6–L12 is the implementation plane (as-designed through disposal). Every layer has its own typed schema, its own invariants, and its own provenance.
  2. Forward dependency and evidence back-propagation. L0 needs drive L2 options drive L3 analyses drive L5 decisions drive L6 as-designed drive L9 as-deployed. When L10 telemetry or L11 in-service feedback reveals a design flaw, the evidence back-propagates through @source links to the originating L4 change or L5 decision. The loop closes automatically.
  3. Non-linear entry is first-class. Real programmes do not start at L0 and march to L12. They start where the customer needs them to start — at L7 as-built for a brownfield audit, at L2 for a capability options study, at L6 for a library import, at L9 for deployment readiness — and propagate outward and backward. Clarity was designed for non-linear entry; every legacy framework treats it as an exception.
  4. Framework support without framework dependency. CDS phase stages, SAFe program increments, DoDAF viewpoints, TOGAF ADM phases, and ISO 15288 processes are all overlay mappings onto the Lx graph. None of them is required. All of them are supported. Swapping between them is a configuration choice, not a migration.
  5. Data-first, not process-first. The phase gate is not a state machine that blocks writes. It is a computed view over the data, evaluated against predicate-level invariants. If the evidence exists, the gate passes. If it does not, the missing evidence is surfaced structurally, not reconstructed at the review.
  6. Sixteen BOM views on one configuration-item graph. Eleven stored types across L6–L12 (eBOM, HBOM, SBOM, FBOM, OBOM, mBOM, tBOM, dBOM, cBOM, rBOM, deBOM) plus five filter / aggregation modes queryable at any layer (vBOM, aBOM, xBOM, iBOM, pBOM) — all are views over a single Lx-linked configuration-item graph. (This whitepaper touches on the 16-BOM architecture; a dedicated whitepaper, Sixteen BOM Views on One CI Graph, describes it in full.)

If you only read one sentence: phase gate frameworks are the right idea executed on the wrong substrate; the right substrate is a thirteen-layer typed data graph with provenance on every field, and every framework in the world becomes a view over it.


The bargain on offer

Every engineering manager running a serious programme has been here. The phase gate review is on Friday. The gate deliverable is a folder of PDFs, a PowerPoint, a dozen linked Excel workbooks, a PLM baseline export, a requirements-management-tool export, an ERP change-order register, a handful of supplier certificates, and somebody’s email confirming a verbal decision from the last Integrated Product Team meeting. None of those artefacts references the others by anything more reliable than a file-name convention and a loosely shared configuration-item numbering scheme. Nobody can answer, in less than a day, the question “which of these evidence items actually satisfies which of the gate criteria?” — and the auditor on the other side of the table knows it.

The gate passes anyway. Not because the evidence is airtight, but because the programme has had twenty people spend the week reconstructing a narrative that sounds airtight, and the review board does not have time to audit the narrative. Everybody on both sides of the table knows the exercise. Everybody performs it with a straight face. The data that actually closes the loop — the real evidence that the design at L2 still satisfies the intent at L0, that the analysis at L3 still holds against the as-built at L7, that the decision at L5 is still defensible given the telemetry at L10 — is either missing, stale, fragmented, or trapped in a tool whose licence will expire next quarter.

"Every phase gate review is a performance.
The question is not whether the narrative is defensible.
The question is whether the data underneath it is real."

Clarity’s founders have run, governed, audited, and recovered programmes (in two documented cases rescuing them from outright collapse) inside MDS, CDS, MMDS, the ISBM at a major UK aerospace and defence prime, the naval safety certification programme at a US defence prime’s Australian subsidiary, and a long tail of aerospace-and-defence PLM and requirements-management deployments. This whitepaper is the distilled conclusion of that experience.

It has three sections:

  • Section 1 — the lifecycle-governance landscape: what phase gate frameworks actually are, why they are non-negotiable for regulated work at scale, where agile and waterfall fit (and where they do not), what architecture meta-models (TOGAF, SAFe, DoDAF, MODAF, ArchiMate, ISO 15288) contribute, and what the Silicon Valley workarounds got right and wrong.
  • Section 2 — Clarity’s Lx model: thirteen layers, forward dependency, evidence back-propagation, non-linear entry, framework-as-overlay, and the anchoring of every phase gate to data rather than process.
  • Section 3 — why anchoring to data wins: the operational properties that fall out automatically, the specific failure modes it eliminates, and why no legacy phase gate tool can be retrofitted into this shape.

Section 1 — The lifecycle-governance landscape

Long-running, safety-critical engineering programmes are governed by phase gates, not by ceremonies. This is not a matter of taste. It is a matter of regulator acceptance, insurance pricing, contractual obligation, and the historical record of what happens when phase gates are abandoned. Section 1 maps the landscape: the phase gate frameworks in mainstream use, the honest limits of agile and waterfall, the architecture meta-models that overlap with lifecycle governance, and the patterns Silicon Valley evolved to try to escape the problem.

1.1 Phase gate frameworks — de rigueur for repeatable work at scale

A phase gate framework is a time-ordered sequence of stages, with explicit gate criteria between stages, entry and exit checklists, mandated reviews, named role-holders, and contracted deliverables. The stages have names that vary by industry — Concept, Development, Production, Operations, Retirement in some; Initial Operating Capability, Full Operating Capability, Through-Life Support, Disposal in others — but the structural shape is always the same. You cannot exit stage n until you have produced the artefacts that prove the system is ready for stage n+1, and a named governance body has to sign off the transition.

The frameworks that have shaped the modern engineering industry, and that Clarity’s founders have worked directly inside, include:

MDS — the parent framework

MDS is a four-year vehicle development framework from a premier European premium-segment automotive manufacturer, and the parent of the entire transatlantic automotive phase gate lineage of the late 1990s and early 2000s. It is the reference model from which both CDS (successfully, in the United States) and MMDS (unsuccessfully, in Japan) were derived. Its gate structure — with explicit specification-version control, cross-functional sign-off boards, regulatory conformance gates embedded in the flow, and a strong culture of evidence discipline at each review — set the standard that other premium-segment OEMs calibrated themselves against throughout that era. Clarity’s founder worked inside MDS at the European parent during the transatlantic automotive merger era, including on driving-simulator correlation work at a premier European simulator facility, SPMM-based kinematic and compliance library development, and the cross-Atlantic programme integration work that exposed exactly how differently the MDS pattern could land depending on the organisation adopting it.

CDS — the successful US implementation of MDS

CDS is the implementation of MDS adopted by a major US automotive manufacturer during the transatlantic automotive merger era, tuned for three-year US vehicle development cycles from concept through start-of-production. CDS is the case study of what happens when a phase gate framework is adopted with genuine organisational commitment: the US engineering organisation internalised the MDS evidence discipline, translated it into its own gate cadence, and used it as the operational backbone for the high-volume SUV and pickup platforms of the era. It was the framework that governed the $8B vehicle programme on which Clarity’s founder diagnosed a fundamentally flawed rear suspension design late in the cycle and executed a $35M emergency redesign at a specialist high-velocity prototyping and motorsport facility in a compressed window — preserving the phase gate integrity while compressing a twelve-month redesign into months and avoiding programme cancellation. The lesson from CDS is that a rigorous phase gate framework is not an obstacle to rapid recovery; it is the enabler of rapid recovery, because everyone in the recovery team knows exactly what deliverable each gate demands and exactly which evidence the gate cannot accept without.

MMDS — the failed Japanese implementation of MDS

MMDS was the implementation of MDS adopted by a major Japanese automotive manufacturer after partnership and equity relationships with the transatlantic automotive group introduced the framework into its engineering culture. Unlike CDS, MMDS was a failed implementation — the gate framework was adopted in form but not in evidence discipline, and the verification machinery behind each gate was allowed to drift from the current specification baseline. In 2002–2004, Clarity’s founder, working across the Japanese parent and its Australian subsidiary, conducted a forensic review of Australian-market vehicle releases and exposed a systemic pattern in which the Japanese parent was submitting Australian vehicles against backdated specification versions — exploiting phase gate verification gaps to pass Australian Design Rule (ADR) compliance that the current specifications would have failed. The finding halted the practice, forced adoption of current specs, and strengthened the MMDS phase gate framework by closing the verification loophole, but the deeper lesson was structural: the same parent framework (MDS) produced a successful implementation in one organisation (CDS) and a failed implementation in another (MMDS) because the success of a phase gate framework is never the framework itself — it is the data and evidence discipline underneath it. A framework cannot enforce what the data layer cannot represent. Clarity’s founder had the rare opportunity to work inside all three of MDS, CDS, and MMDS, and the contrast between them is one of the formative experiences behind Clarity’s position that lifecycle governance must be anchored to data, not to process.

ISBM — an excellent model with no implementation

A major UK aerospace and defence prime’s internal lifecycle framework is a through-life support (TLS) model designed for weapons platforms with 30–60 year operational horizons. The ISBM — the Integrated Support Business Model — was the version of this framework that Clarity’s founder, as a senior engineering manager inside the prime’s Australian subsidiary in the mid-2000s, iterated through seven versions and deployed as the winning differentiator in approximately A$450M of successfully captured defence bids, including validation against a live $30M/year trainer-aircraft through-life support programme. As an intellectual model of through-life support — mapping the design gate at L2, the test gate at L8, the deployment gate at L9, and the disposal gate at L12 onto a single mutually consistent evidence trail — the ISBM was excellent. It was one of the clearest conceptual pictures of what through-life engineering governance ought to look like that the industry has produced.

The ISBM was also, however, a model without an implementation. It was not an operational system. It was not a data layer. It was not a set of tools that a programme could install and run. It was a book — literally, a set of documented processes — and a bid-winning apparatus used to demonstrate to customers what through-life support could look like if the organisation ever operationalised it. The organisation did not. In the authors’ direct experience, of more than 1,350 documented business processes at the Australian subsidiary, the only process that could not be written down and agreed was the engineering design process itself — because no agreement could ever be reached across the engineering community on what a common engineering design process should be. Every other process in the organisation — logistics, procurement, training, finance, compliance, operations, disposal — was documented and controlled. The one at the structural heart of the business was the one nobody could agree on.

That gap is the most instructive single data point in the history of phase gate lifecycle governance. It says, as clearly as anything can, that the engineering design process is the one thing that cannot be standardised as a sequence of workflow steps, because it is not a process — it is a data graph of decisions, evidence, options, analyses, and trade-offs that different sub-disciplines walk in different orders for different reasons. Any attempt to write it down as a linear documented process fails, because the real shape of the work is a graph, not a sequence. The ISBM correctly modelled every other dimension of through-life support and ran into the same wall every framework runs into when it tries to encode the engineering design loop as a workflow.

Clarity’s Lx model does not inherit from the ISBM as an implementation, because there was no implementation to inherit from. It inherits from the ISBM as a specification of the problem — a clear articulation of what lifecycle governance should produce as an output — and then provides the data substrate that the ISBM itself could never find: a typed, versioned, provenance-carrying graph on which the engineering design graph can actually live, without having to pretend to be a linear documented process. The irony is neat: the one thing the prime could not document is the thing Clarity makes tractable, by refusing to document it as a process at all and modelling it as data instead.

An Australian defence contractor — naval armour specification and IP governance

A major Australian defence contractor (subsequently absorbed into a larger aerospace and defence prime) ran major Australian naval and land systems programmes under Australian Defence Standard and DEFCON contractual governance. In 2002–2003, Clarity’s founder, then inside that contractor, led the technical and commercial resolution of a $200M contract dispute with a continental European prime equipment supplier, authored the definitive hard-armour specification that became the baseline acceptance-test criterion for the programme, and negotiated the intellectual property and future-access rights that protected the long-term viability of the Australian-built platform. The contractor’s lifecycle framework itself — with its explicit SEIT (Systems Engineering and Integration Team) review governance, its PDR/CDR/SAR formal review gates, and its SAT/FAT acceptance events — is another instance of the pattern this whitepaper is about: a rigorous gate framework doing the right work on top of a data layer (specifications, IP records, test evidence) that was never unified across the programme’s tool stack.

In 2011–2012, Clarity’s founder, then operating as an independent consultant, recovered a stalled naval safety certification programme at the Australian subsidiary of a US defence prime. The programme was 90% through its budget with the work incomplete, governed under MIL-STD-882 System Safety and Australian Defence Standard. The recovery implemented a Master Data Management layer, re-engineered the safety certification workflows to align with MIL-STD-882 intent without the non-value-adding documentation overhead, and introduced real-time progress tracking. The certification lifecycle collapsed from 214 days to 21 days — a 90% reduction — and delivered within the original (nearly exhausted) budget. Two reusable Books of Knowledge (for ethanol/methanol handling and battery systems) were formally recognised by the Navy as best practice in safety engineering, enabling consistent 1-to-many referencing across hundreds of Hazmat and Health Hazard Analysis assessments. The lesson: the phase gate framework was not the problem. The data architecture under it was the problem. When the data was reorganised around a single authoritative source of truth, the same phase gate framework that had been strangling the programme became the instrument of its recovery.

An aerospace-and-defence PLM and the degree-symbol outage

A major aerospace-and-defence PLM platform was the backbone for eighteen major defence programmes that Clarity’s founder governed between 2005 and 2007. The daily reporting layer built on top of it extracted the meta-model into a single nightly master report, distributed to every programme by 9:00 AM each morning, and eliminated multi-day manual data gathering across engineering, programme management, and contract deliverables. That same PLM platform is also the source of one of the most instructive data-modelling failures in industry history: an outage caused by a degree symbol (°) in the ambient-temperature specification field that the underlying relational database could not encode correctly, which took down a production PLM deployment and produced days of reconstructive work. The lesson that every engineering data architect should carry forever is that encoding, typing, and character-set handling are not cosmetic concerns. They are load-bearing architectural decisions, and a tool that collapses on a degree symbol is a tool whose data model was never really typed at all.

Others

The frameworks named above are a handful of many. Other major commercial-aerospace primes operate their own integrated product development systems, other defence primes run their own V&V phase gating, propulsion OEMs maintain their own through-life engineering processes, nuclear regulators mandate their own commissioning gates, and a long list of national-defence-specific variants (the UK’s CADMID, Australia’s Defence Capability Lifecycle, the US DoD 5000 series) layer further governance on top. All of them share the same structural shape. Clarity’s founders have worked inside or against most of them. Every one of them encodes the same core insight — you cannot govern a long, safety-critical, multi-supplier, regulator-facing programme without explicit gates and explicit evidence — and every one of them encodes that insight on top of a process layer that has no principled data model underneath it.

"Phase gate frameworks are the right idea,
executed on the wrong substrate.
The idea is non-negotiable for regulated work.
The substrate is the reason nothing closes the loop."

1.2 Why phase gates are non-negotiable for long-running repeatable work

Critics of phase gate frameworks — mostly from the agile and lean-startup end of the field — argue that gates encode rigidity, slow velocity, and suppress learning. In consumer software development on a greenfield codebase, those critics are often right. In a regulated engineering programme delivering a nuclear instrumentation suite, a naval combat system, a medical implant, an aviation propulsion platform, or a large-scale industrial control system, they are wrong, and the reason they are wrong is worth being explicit about.

Five structural properties of regulated engineering work make phase gates mandatory rather than optional:

  • Multi-supplier, multi-authority, multi-jurisdiction. A naval programme involves a prime, ten–thirty major sub-contractors, government owners, regulators, airworthiness authorities, export-control authorities, insurance underwriters, and operator communities. None of these parties can work from shared understanding without explicit synchronisation points — and those synchronisation points are phase gates, whether they are called that or not.
  • Decade-plus time horizons. Decisions taken at L5 in year two must remain defensible in year fifteen when the system is being modified, in year twenty-five when the operator changes doctrine, and in year forty-five when the platform is being retired. Without gates and evidence packs, the rationale is lost to staff turnover and tool obsolescence within a decade.
  • Safety-of-life consequences. When a defect kills people, regulators and courts do not accept “we iterated to the best answer” as a defence. They accept “we made this decision at this gate, on this evidence, with these reviewers, and the evidence supported the decision at the time”. The second defence is what phase gates produce. The first defence is what agile ceremonies produce. One of the two is admissible in an inquiry.
  • Cost-of-rework asymmetry. Rework at L6 (as-designed) costs tens. Rework at L7 (as-built) costs hundreds. Rework at L9 (as-deployed) costs thousands. Rework at L11 (in-service) costs tens of thousands, in both dollars and operator dissatisfaction. Phase gates exist because the cost curve is exponential, and the best place to catch a defect is at the earliest gate where the evidence could have shown it.
  • Regulator and auditor expectation. The regulator does not audit your sprint retrospectives. The auditor does not read your Jira tickets. They audit your gate packs, your requirements traceability matrices, your verification and validation evidence, your change-control records, and the chain of approvals that connects them. If the evidence is not in phase-gate-shaped form, the programme cannot pass audit. Period.

Phase gates, then, are not a preference. They are the minimum governance for serious engineering work at scale. The question is never whether to have phase gates. The question is what substrate the gates sit on, and whether the evidence they demand can actually be produced without twenty people reconstructing it from spreadsheets the night before.

1.3 Agile and waterfall — the methodology wars, honestly assessed

The decades-long methodology war between agile (Scrum, XP, Kanban, SAFe, LeSS, DSDM, DAD) and waterfall is largely a category error. Agile and waterfall are work-organisation methods operating inside phase gate frameworks, not alternatives to them. A programme can run agile iterations inside a CDS stage, produce waterfall-style signed deliverables at the CDS gate, and neither choice changes the gate criteria or the evidence the gate demands. The choice is about how you organise the work between gates, not whether the gates exist.

Where the methodology wars went wrong, and where the scars are still fresh, is in two specific failure modes:

Agile at enterprise scale — the SAFe compromise

The Scaled Agile Framework (SAFe) is the most widely adopted enterprise scaling model, with Large-Scale Scrum (LeSS), Disciplined Agile Delivery (DAD), and Nexus as credible alternatives. SAFe’s genuine contribution is to acknowledge that enterprise programmes need cross-team synchronisation, release trains, program increments, and portfolio-level governance — in other words, it acknowledges that some of the phase gate apparatus is unavoidable at scale. Its structural weakness is that SAFe’s ceremonies (PI planning, system demos, inspect-and-adapt workshops) produce mostly process artefacts — meeting outputs, Jira states, Confluence pages, slide decks — and not the typed, provenance-carrying, regulator-admissible evidence that a real phase gate demands. SAFe works as long as the outputs are consumed by a workflow downstream; it does not work as a substitute for phase gate evidence when the regulator shows up.

Several of the high-profile enterprise agile failures of the last fifteen years have been failures of the evidence layer, not the ceremony layer. The work was done. The reviews happened. The retrospectives produced honest findings. But when the gate review arrived, the evidence pack was a screenshot of a Jira board, not a traceable claim substantiated by signed artefacts. A regulator cannot audit a screenshot.

Waterfall as a strawman

Waterfall — the linear requirements → design → build → test → deploy model — is usually set up as the villain in the methodology war, and it deserves some of the criticism. Rigid waterfall treats requirements as frozen at the start, prevents learning from reaching back into earlier stages, and punishes exception handling. But much of what is blamed on waterfall is really the absence of feedback loops, not the presence of sequential stages. A phase gate framework with strong back-propagation (L10 telemetry flowing back into L2 parameters, L11 in-service feedback flowing back into L4 change records, L7 as-built deviations flowing back into L5 decisions) looks nothing like the caricature of waterfall, even when the outward shape is sequential.

The honest assessment is this: agile without phase gates is ungovernable for regulated work; waterfall without back-propagation is brittle under real-world exceptions; neither is a substitute for a data architecture that lets evidence flow forward through dependencies and backward through provenance in the same graph.

1.4 Architecture meta-models — TOGAF, SAFe, DoDAF, MODAF, ArchiMate, ISO 15288

Parallel to the phase gate and methodology traditions, the enterprise-architecture community has produced a series of meta-frameworks that attempt to describe the thing being built rather than the process of building it. Each contributes something. None of them is a lifecycle framework. All of them sit above or beside the phase gate apparatus and inherit the same data-layer weaknesses.

TOGAF — The Open Group Architecture Framework

TOGAF is the most widely deployed enterprise architecture framework, with its Architecture Development Method (ADM) providing an iterative cycle of eight phases (architecture vision, business architecture, information systems architectures, technology architecture, opportunities and solutions, migration planning, implementation governance, and architecture change management), preceded by a preliminary phase. Clarity’s founder used TOGAF as the categorisation framework during a 2008 global PLM benchmarking tour across seven major defence-prime sites in the United States and United Kingdom, capturing over 1,000 observations across technical, process, and cultural dimensions to inform an Australian defence subsidiary’s enterprise PLM strategy. TOGAF is genuinely useful as a classification scheme for architectural concerns. Its limitation is that it has no native data layer of its own — TOGAF outputs are usually stored in generic diagramming tools or office document formats, with all the provenance and traceability problems that implies.

DoDAF and MODAF — defence architecture frameworks

The US Department of Defense Architecture Framework (DoDAF) and the UK Ministry of Defence Architecture Framework (MODAF) are defence-specific meta-models that define a set of viewpoints (All Viewpoint, Capability Viewpoint, Operational Viewpoint, Systems Viewpoint, Services Viewpoint, Standards Viewpoint, Data and Information Viewpoint, Project Viewpoint) and the products that populate each viewpoint. They are mandatory on many large defence acquisition programmes. Their strength is the rigour of the viewpoint model. Their weakness is that the viewpoints are expressed as documents and diagrams produced by specialist architects in specialist tools (often IBM Rational System Architect or equivalents), with no shared data layer underneath. Two DoDAF views of the same system, produced by two different architects, will quietly disagree on the underlying facts, and nobody notices until the integration review fails.

ArchiMate — the modelling notation

ArchiMate is an open standard modelling notation published by The Open Group, providing a visual language for describing enterprise architecture across business, application, and technology layers. It is widely used inside TOGAF. It is a notation, not a data model — meaning the same critique applies: the rigour is in the diagrams, and the diagrams live in tools whose provenance and traceability story is weak.

ISO/IEC/IEEE 15288 — the systems engineering process standard

ISO 15288 is the internationally ratified systems engineering process standard, defining technical processes (stakeholder needs, requirements, architecture, design, implementation, integration, verification, validation, operation, maintenance, disposal), technical management processes, and agreement processes. It is the reference standard most other frameworks map onto. Clarity’s Lx model is directly compatible with ISO 15288 at the process level — L0 corresponds to stakeholder needs, L1 to context / boundary, L2 to architecture options, L3 to design analysis, L4 to configuration baselining, L5 to decisions, and L6–L12 to the implementation and operation processes. ISO 15288 is the specification of the problem. Clarity is one implementation of a solution that satisfies the specification by making every process an overlay on a shared data graph.

The common limitation

Every one of these architecture meta-models — TOGAF, SAFe, DoDAF, MODAF, ArchiMate, ISO 15288 — is a process and viewpoint description, not a data architecture. They describe what should be produced, by whom, at which stages, in which viewpoints. They do not describe how the resulting artefacts should be stored, related, versioned, or provenance-tracked. Every deployment of any of these frameworks, in every organisation in the world, has to solve the data-layer problem independently, and almost all of them solve it badly — with a tool stack of Visio, Excel, SharePoint, a PLM, a requirements tool, a workflow engine, and a small army of consultants maintaining the glue.

Clarity’s contribution is to provide the missing data layer — one that every architecture meta-model can overlay, without requiring any of them.

1.5 Silicon Valley workarounds — what the hyperscalers learned

The Silicon Valley response to the heavy-weight phase gate and architecture-framework traditions was to try to escape them altogether. The site-reliability and software-engineering cultures of the major hyperscalers, the working-backwards and two-pizza-team methods that emerged from the leading consumer-internet platforms, the chaos-engineering and loosely-coupled-services patterns popularised by streaming-media operators, and the broader DevOps and platform-engineering movements all share an underlying premise: if the product is internet-scale consumer software, you can replace phase gates with continuous delivery, fast rollback, A/B testing, feature flags, and automated observability.

For internet-scale consumer software, they are mostly right. For regulated engineering work, they are mostly wrong — but the techniques they developed are genuinely useful, and the mistake is to treat them as alternatives to phase gates rather than as operational primitives that phase gate frameworks can and should adopt.

What worked — and belongs in every engineering stack

  • Continuous integration and continuous deployment at the software layer, adapted with appropriate review gates for firmware on safety-critical systems.
  • Feature flags and progressive rollout for deploying new capability to subsets of the operator community before full release.
  • Observability as a first-class architectural concern, with telemetry, logging, and tracing designed in rather than bolted on.
  • Chaos engineering and resilience testing to prove systems survive the exceptions the specification never anticipated.
  • Customer-obsessed working-backwards documents (the PRFAQ discipline pioneered at a major consumer-internet platform) to clarify intent before committing engineering resources. Clarity’s founder used the PRFAQ mechanism to author a comprehensive strategic blueprint for a national digital-courts proposal to an Eastern European government in 2022 during active conflict conditions, and the mechanism works for regulated engineering as cleanly as it does for consumer software.
  • Standardised platform engineering so product teams are not each re-inventing infrastructure. Clarity’s founder was involved in authoring and deploying an enterprise cloud-adoption framework, a migration-readiness assessment methodology, the operational-excellence pillar of a widely-adopted cloud-architecture framework, and the global consulting training curriculum that scaled a major hyperscaler’s professional services practice from $50M to $350M across 1,200 consultants in 30 countries — all of which are platform-engineering mechanisms that translate cleanly into regulated-engineering governance.

What failed — and should not be copied into regulated work

  • The myth that phase gates are always bureaucratic overhead. Phase gates exist because of exponential rework costs and regulator expectation. Silicon Valley’s ability to A/B test a button colour does not translate to a submarine propulsion control system.
  • The myth that evidence can be reconstructed from logs after the fact. Logs are operational observability. They are not traceable provenance from L0 stakeholder intent to L11 in-service feedback, and they never will be.
  • The myth that distributed, loosely-coupled services remove the need for shared semantic models. They do not. Every distributed system eventually re-invents a shared schema, and the ones that refuse to do it up-front end up doing it badly, in production, under pressure.
  • The myth that move fast and break things is a governance model. It is a start-up slogan from a specific era of consumer software, not a strategy that survives exposure to safety-of-life regulators.

What was never attempted

Silicon Valley never attempted to build a data-first lifecycle governance platform for regulated engineering. It never needed to. Its customers were consumer-software platforms, not defence primes or nuclear operators. The gap between “we have Jira and a Confluence page” and “we have a 13-layer typed graph with provenance on every field and evidence back-propagation from L11 to L4” is the gap Clarity exists to fill, and it is a gap no hyperscaler has stepped into because the hyperscaler business model does not reward filling it.

1.6 The common thread — every framework has half the answer

Step back from the landscape and the pattern is clear. Every one of these frameworks — phase gates, agile variants, architecture meta-models, hyperscaler platform engineering — has half the answer, and the halves do not compose.

  • Phase gate frameworks have the governance half. They know what evidence a gate demands. They do not have the data half: the evidence is scattered across tools, reconstructed at each review, and provenance-free.
  • Agile and SAFe have the work-organisation half. They know how to keep teams moving and synchronised. They do not have the governance half: their outputs are ceremonies and process artefacts, not regulator-admissible evidence.
  • Architecture meta-models (TOGAF, DoDAF, MODAF, ArchiMate, ISO 15288) have the description half. They know what viewpoints a system requires. They do not have the data half: the viewpoints live in document tools with no shared backbone.
  • Hyperscaler platform engineering has the operational-primitives half. It knows how to deploy, observe, and recover software at scale. It does not have the lifecycle half: it was never designed for 30–60 year platforms with safety-of-life regulators.

Clarity’s position is that the only composition that works is the one that moves every framework onto a shared data layer with forward dependency, evidence back-propagation, provenance on every field, and thirteen explicit lifecycle layers. Section 2 describes how that data layer works.


Section 2 — The Lx model: thirteen layers, one graph

Clarity’s answer to the lifecycle-governance problem is an explicit, typed, versioned, provenance-carrying data graph with thirteen layers. The layers are not a process prescription. They are a semantic decomposition of the kinds of things an engineering programme needs to reason about, from stakeholder intent to final disposal, with relationships running forward (dependency) and backward (evidence). Every phase gate framework, every agile flavour, every architecture meta-model, and every hyperscaler operational primitive becomes a view or an overlay on this graph. None of them is required; all of them are supported.

2.1 The thirteen layers, end to end

The Lx model has two planes. The design plane (L0–L5) covers the reasoning from stakeholder intent through to formal decisions. The implementation plane (L6–L12) covers the physical and operational reality from as-designed through to disposal. The two planes are connected by @source provenance links that run in both directions, so a design decision at L5 can be traced forward to its as-built evidence at L7 and its in-service telemetry at L10, and a service-life failure at L11 can be traced backward to the originating assumption at L0.

| Layer | Name | Purpose | Key entity types |
|-------|------|---------|------------------|
| L0 | Stakeholder intent | Needs, goals, invariants, assumptions, constraints, questions, facts, risks — everything the stakeholders asserted before the engineering started | needs, goals, invariants, assumptions, constraints, risks |
| L1 | System context | The system boundary and its relationship to the external world: external actors, interfaces to the world, environmental conditions | boundary, actors, external interfaces |
| L2 | Architecture options | Option sets, options, internal interfaces, parameters (Measures of Performance) | option sets, options, interfaces, parameters |
| L3 | Scenarios & analyses | What-if scenarios, Monte Carlo, Pareto, MCDA, analyses, Measures of Effectiveness overrides | scenarios, analyses, MoE overrides |
| L4 | Change baselines | Engineering change requests, engineering change notices, configuration items, baselines | ECRs, ECNs, CIs, baselines |
| L5 | Decisions | Formal decision records, evidence, trustworthiness, Measures of Success | decisions, evidence, trust vectors |
| L6 | As-designed | The design baseline: eBOM, HBOM, SBOM, FBOM, OBOM, interfaces, qualification intent | CIs, eBOM/HBOM/SBOM, acquisition modes |
| L7 | As-built | The physical realisation: mBOM, build deviations, serial numbers, manufacturing evidence | built items, mBOM, deviations |
| L8 | As-validated | V&V results, qualification test outcomes, certification evidence | test results, qualifications, certificates |
| L9 | As-deployed | dBOM per site, deployment variants, commissioning records | deployed items, dBOM, sites |
| L10 | As-operated | Telemetry, Measures of Achievement, operational deviations from design | telemetry, MoA deltas |
| L11 | As-updated | In-service BOM (sBOM), repair BOM (rBOM), change-on-change (cBOM), upgrade records | in-service items, sBOM, upgrades |
| L12 | As-disposed | Disposal manifests, deBOM, retirement records, lessons-learnt harvest back to L0 | disposal records, deBOM, closed loops |

Every layer has an explicit typed schema, a set of allowed relationships, a set of overlays (financial, supply chain, technology readiness, regulatory, security, quality, risk, lifecycle, external systems, export control), and a set of invariants enforced at three rings (annotation, approval, solver). Every field in every entity carries a thirteen-field @source provenance record: lineage (human/ai/algorithm/import), timestamp, author, confidence, evidence references, tool attribution, validation status, supersession, export-control status, design-authority provenance, comments, knowledge-graph validation, and crowd signals.
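
To make the provenance claim concrete, the sketch below renders the thirteen-field @source record as a plain Python dictionary. The field names are illustrative renderings of the thirteen dimensions listed above, not Clarity’s published schema; treat it as a minimal sketch of the shape, nothing more.

```python
# Illustrative sketch only: a plausible JSON shape for the thirteen-field
# @source provenance record described above. Every field name here is a
# hypothetical rendering of the listed dimension, not a published schema.
example_source_record = {
    "lineage": "human",                     # human | ai | algorithm | import
    "timestamp": "2026-04-09T08:14:22Z",    # when the value was written
    "author": "j.smith@example.com",        # who (or what) wrote it
    "confidence": 0.95,                     # writer-asserted confidence
    "evidence": ["analysis/L3/monte-carlo-042.json"],  # evidence references
    "tool": "clarity-ui/4.2",               # tool attribution
    "validation": "human-verified",         # validation status
    "supersedes": None,                     # supersession link, if any
    "exportControl": "ITAR",                # export-control status
    "designAuthority": "chief-engineer",    # design-authority provenance
    "comments": "Margin agreed at SDR.",    # free-text commentary
    "kgValidation": "consistent",           # knowledge-graph validation
    "crowdSignals": {"confirmations": 3},   # crowd signals
}
```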

2.2 Forward dependency — how intent propagates down the stack

Every entity in the Lx graph has explicit forward-dependency edges to the entities at later layers that it shapes. A stakeholder need at L0 shapes a set of capabilities at L0, which shape a set of requirements at L1, which shape a set of interfaces at L1 and options at L2, which shape a set of analyses at L3, which shape a set of decisions at L5, which shape a set of as-designed configuration items at L6, which shape a set of build instructions at L7, which shape a set of deployment packages at L9, which shape a set of operational procedures at L10, which shape a set of in-service updates at L11, which shape a set of disposal records at L12.

Forward dependency is not a diagram. It is a property of the graph. Every edge is a JSON reference with its own @source record, so the lineage from any late-layer entity back to the L0 intent that shaped it can be traversed in seconds — without joins across tools, without reconstructive spreadsheets, and without consulting the programme director’s memory.
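
For readers who think in code, here is a minimal sketch of that traversal, assuming each entity is a JSON document whose reverse-dependency edges live in a hypothetical shapedBy list and that load_entity() fetches a document by id. Both names are assumptions for illustration, not Clarity’s API.

```python
# A minimal sketch of lineage traversal over forward-dependency edges.
# Assumptions (not Clarity's published API): each entity is a JSON document,
# reverse-dependency edges live in a "shapedBy" list, and load_entity()
# fetches an entity document by id (e.g. reading its JSON file from S3).
def trace_to_intent(entity_id, load_entity, path=None):
    """Walk reverse-dependency edges from a late-layer entity to its L0 roots."""
    path = (path or []) + [entity_id]
    parents = load_entity(entity_id).get("shapedBy", [])
    if not parents:
        return [path]                      # reached a root (e.g. an L0 need)
    routes = []
    for edge in parents:                   # each edge carries its own @source
        routes.extend(trace_to_intent(edge["ref"], load_entity, path))
    return routes
```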

2.3 Evidence back-propagation — how reality flows back up the stack

The dual of forward dependency is evidence back-propagation. When something happens at a late layer — a test result at L8, a deployment deviation at L9, a telemetry anomaly at L10, an in-service failure at L11 — the evidence back-propagates through the @source graph to every earlier-layer entity that depended on the condition being different.

Back-propagation is the property that closes the loop:

  • An L8 qualification test that reveals a design margin was too tight back-propagates to the L5 decision that approved the margin and the L2 option it sat inside, surfacing the margin failure as a structural event rather than an email.
  • An L10 operational telemetry anomaly that shows Measure-of-Achievement delta from the L3 Measure-of-Effectiveness prediction back-propagates to the L3 analysis, the L2 parameters, and the L0 invariants that constrained them, proposing a re-calibration rather than a silent drift.
  • An L11 in-service mod that fixes a recurring field fault back-propagates to the L4 change record, the L6 as-designed entity it modifies, and the L5 decision that accepted the original design, creating a traceable rationale for the mod that a future auditor can follow.
  • An L12 disposal record that identifies a recyclable-material compliance gap back-propagates to the L0 lessons-learnt knowledge graph, so the next programme’s L0 stakeholder intent incorporates the lesson automatically.

Back-propagation is the thing every legacy phase gate framework fails at. A CDS gate can tell you that a decision was taken. It cannot tell you, fifteen years later, that a subsequent operational telemetry anomaly has invalidated the evidence the decision sat on. Clarity’s back-propagation model does — as a structural property, not a feature.
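
Under the same assumed edge shape, back-propagation is the reverse walk: a late-layer event fans out backward over the dependency edges and surfaces itself against every upstream entity. A minimal sketch follows; the flag() callback and field names are hypothetical.

```python
# A minimal sketch of evidence back-propagation, under the same assumed
# "shapedBy" edge shape as the traversal sketch above. flag() is a
# hypothetical callback that records the new evidence against an upstream
# entity structurally, with provenance, rather than as an email.
def back_propagate(event_entity_id, load_entity, flag):
    seen, frontier = set(), [event_entity_id]
    while frontier:
        current = frontier.pop()
        if current in seen:
            continue
        seen.add(current)
        for edge in load_entity(current).get("shapedBy", []):
            upstream = edge["ref"]
            # Surface the late-layer evidence against the upstream entity,
            # e.g. the L5 decision or L2 option whose validity it affects.
            flag(upstream, evidence=event_entity_id, via=edge.get("@source"))
            frontier.append(upstream)
```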

"Forward dependency is how intent propagates down the stack.
Evidence back-propagation is how reality flows back up.
The loop closes because both run in the same graph,
over the same provenance, with no reconstruction required."

2.4 Non-linear entry — the norm, not the exception

Clarity was designed for the fact that real programmes do not start at L0 and march sequentially to L12. They start wherever the customer needs them to start, and they propagate in both directions.

Concrete non-linear entry patterns that Clarity supports natively:

Entry at L7 — brownfield audit of an existing build

A programme that inherits a physical system — a twenty-year-old naval platform, a legacy manufacturing line, an acquired facility — can start at L7 by importing the as-built configuration, then back-propagate through L6 as-designed, L4 baselines, L5 decisions, L3 analyses, L2 options, and L0 intent to reconstruct the decision rationale for future audit and modification purposes.

Entry at L2 — capability options study

A pre-concept options study that the customer asks for before any formal L0 intent exists can start at L2 with a candidate set of architecture options, propagate forward into L3 analyses to explore trade-offs, and back-propagate into L0 to tease out the implicit stakeholder needs from the option space.

Entry at L6 — library import

A supplier-provided parts library, a CAD-imported assembly tree, or a long-lead procurement BOM can enter at L6 as a standalone BOM (via the lx-bom.json sidecar architecture), before any L2 option set references it, and propagate forward into the option sets once the architecture catches up.
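
As an illustration of what a standalone L6 entry point could look like, here is a hypothetical minimal sidecar rendered as a Python dictionary. The real lx-bom.json schema is not reproduced here; every field name below is an assumption.

```python
# A hypothetical standalone-BOM sidecar for a supplier library entering at L6
# before any L2 option references it. Illustrative field names only; this is
# not the published lx-bom.json format.
STANDALONE_BOM_SIDECAR = {
    "layer": "L6",
    "bomType": "eBOM",
    "origin": {"lineage": "import", "tool": "supplier-catalogue-export"},
    "items": [
        {
            "ciId": "CI-PUMP-0042",         # hypothetical configuration item
            "name": "Coolant pump, 24 V",
            "children": [],
            "@source": {"lineage": "import", "validation": "unverified"},
        },
    ],
    # Empty until the architecture catches up: L2 option sets attach
    # forward-dependency edges to these CIs later, with their own @source.
    "upstreamRefs": [],
}
```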

Entry at L9 — deployment readiness

An operator preparing to deploy an already-designed system into a new site can start at L9 with site-specific dBOM variants, pull forward the as-designed evidence from L6 and the as-validated evidence from L8, and use the combination to produce a deployment readiness pack that the regulator will accept.

Entry at L10 — digital twin bootstrap

An existing fleet that has never been digitally modelled can be bootstrapped at L10 by ingesting telemetry, building up the Measures of Achievement, and back-propagating to L3 to build the analyses and L2 to build the parameters that describe the fleet’s operational reality.

Entry at L12 — lessons-learnt capture

A disposal or retirement programme can start at L12, harvest the lessons-learnt into the L0 knowledge graph, and use the harvest to seed the next programme’s intent.

Every one of these entry patterns is a first-class operation in Clarity. In a legacy phase gate tool, every one of them would require bending the tool against its intended workflow, fighting the state machine that assumes sequential stage progression, and reconstructing evidence in the gap between where the data is and where the tool expects it to be.

2.5 Frameworks as overlays — supported without dependency

Because the Lx graph has explicit typed entities, explicit typed relationships, and explicit @source provenance, every phase gate framework, every agile scaling model, and every architecture meta-model can be expressed as an overlay — a mapping from the framework’s concepts onto Lx entities and relationships — without modifying the underlying graph.

| Framework | How it overlays the Lx graph |
|-----------|------------------------------|
| CDS | CDS stages map to gate-views over L0–L6; CDS deliverables map to evidence packs filtered from the @source records |
| MDS / MMDS | Same as CDS with four-year cadence; specification version control maps to Lx change-baseline L4 records |
| ISBM | The ISBM’s through-life view specification renders directly as L6–L12 evidence-pack views over the Lx graph; TLS support cost models overlay via the financial overlay group. The ISBM always described the right output — Clarity is the first substrate that can actually produce it |
| ISO 15288 | Technical processes map directly onto Lx layers (stakeholder needs → L0, requirements → L1, architecture → L2, design → L6, integration → L6–L7, verification → L8, validation → L8, operation → L10, maintenance → L11, disposal → L12) |
| TOGAF ADM | ADM phases map to Lx layers (business architecture → L0/L1, information systems architecture → L2, technology architecture → L2/L6, implementation governance → L4/L5) |
| DoDAF / MODAF | Viewpoints map to filtered views over Lx entities (Capability Viewpoint → L0 capabilities, Operational Viewpoint → L1–L3, Systems Viewpoint → L2–L6, Standards Viewpoint → overlay references, Services Viewpoint → L1 external interfaces) |
| SAFe | Program Increments map to time-window filters over L2–L5 activity; ARTs map to team-scoped views; Inspect-and-Adapt outputs map to L0 lessons-learnt harvest |
| ArchiMate | Business, application, and technology layers map to L0–L1, L2–L5, and L6–L9 respectively; ArchiMate notation becomes a renderer over the Lx data |
| CMMI | Process areas map to overlay groups; practices map to invariant predicates at L0; maturity evidence maps to filtered @source queries |
| MIL-STD-882 System Safety | Hazard analyses and safety requirements map to L0 invariants; hazard controls map to L2 parameters; safety certification maps to L8 validation evidence |

The critical property is that none of these frameworks is embedded in the Lx schema. They are all overlays that Clarity knows how to produce on demand. A customer running CDS can switch to SAFe without migrating any data; a customer running TOGAF can add a DoDAF overlay for a defence programme without forking their deployment; a customer with an ISO 15288 compliance mandate can produce the ISO 15288 view and the CMMI view and the MIL-STD-882 view from the same Lx graph in parallel.
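
Because none of this is embedded in the schema, an overlay can be expressed as pure configuration. The sketch below renders the ISO 15288 row of the table above as data plus a one-line view function; the structure is illustrative, not a published configuration format.

```python
# A framework overlay expressed as pure configuration: the ISO 15288 mapping
# from the table above, rendered as data. Illustrative structure only.
ISO_15288_OVERLAY = {
    "framework": "ISO/IEC/IEEE 15288",
    "mapping": {
        "stakeholder_needs": ["L0"],
        "requirements":      ["L1"],
        "architecture":      ["L2"],
        "design":            ["L6"],
        "integration":       ["L6", "L7"],
        "verification":      ["L8"],
        "validation":        ["L8"],
        "operation":         ["L10"],
        "maintenance":       ["L11"],
        "disposal":          ["L12"],
    },
}

def framework_view(entities, overlay, process):
    """Return the entities a given framework process governs: a filter over
    the one graph, not a migration of it."""
    layers = set(overlay["mapping"][process])
    return [e for e in entities if e.get("layer") in layers]
```

Swapping CDS for SAFe, on this picture, means swapping the mapping dictionary; the entities never move.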

2.6 The event-driven kernel underneath

The Lx layer model sits on top of the event-driven kernel described in the companion whitepaper, Event-Driven by Kernel, Not by Feature. Every write is an immutable JSON file in tenant-isolated S3. Every aggregation is debounced through EventBridge. Every schema is typed and versioned. Every field carries @source provenance. The quiesce pattern prevents burst-write inconsistency. The three rings of invariant enforcement catch violations at annotation, at change-approval, and at solver time.

This matters for lifecycle governance because it means the Lx graph has no workflow engine underneath it. Phase gates are not state machines that block writes. They are computed views over the data, evaluated against predicate-level invariants. If the evidence exists and the invariants hold, the gate is green. If they do not, the missing evidence or the violated invariant is surfaced structurally — with its full provenance chain — rather than reconstructed at the review. The gate-holder can then decide whether to accept the gap, escalate it, or send the programme back to close it. The decision is always a human decision. The evidence supporting the decision is always structural.

2.7 Sixteen BOM views on one configuration-item graph

Among the sharpest examples of the Lx model’s framework-as-overlay property is the BOM story. Engineering programmes do not need one bill of materials. They need sixteen, each answering a different question about the same underlying configuration items.

Clarity’s canonical set of sixteen BOM view types is split into two structural groups: stored types that live in per-layer BOM files across L6–L12, and filter / aggregation modes that are queried on top of the stored graph at any layer. The distinction matters because filter modes are not duplicated data — they are views the platform can render on demand, with no extra storage, no extra write paths, and no reconciliation problem.

Stored types (per-layer BOM files)

  • eBOM — Engineering BOM (L6, as-designed)
  • HBOM — Hardware BOM (L6 discipline partition)
  • SBOM — Software BOM (L6 discipline partition; aligned with SPDX and CycloneDX cybersecurity SBOMs)
  • FBOM — Firmware BOM (L6 discipline partition)
  • OBOM — Operational / Support BOM (L6, the as-acquired / as-inherited long-lead and support spine)
  • mBOM — Manufacturing BOM (L7, as-built)
  • tBOM — Test / Qualification BOM (L7 / L8, the qualified-configuration view)
  • dBOM — Deployment BOM (L9, as-deployed per site)
  • cBOM — Calibration / Configuration BOM (L11, the in-service configuration state)
  • rBOM — Repair / Overhaul BOM (L11, sparing and overhaul kits)
  • deBOM — Decommissioning BOM (L12, disposal and retirement manifests)

Filter / aggregation modes (queryable at any layer)

  • vBOM — Variant BOM (filter on variant-applicability edges)
  • aBOM — Alternate / Substitute BOM (filter on alternate-for and substitute-for edges)
  • xBOM — Export-Controlled BOM (filter on @source.exportControl classification for ITAR / EAR / NOFOR handling)
  • iBOM — Inherited / Gap BOM (aggregation surfacing inherited supplier content and unresolved gaps)
  • pBOM — Procurement / As-Ordered BOM (aggregation by CI; resolves the supernode problem for fasteners and bulk items)

Every one of these sixteen BOMs is a view over the same underlying Clarity configuration-item graph, with layer-appropriate filtering and overlay application. No separate database. No re-ingested data. No manual reconciliation between eBOM and mBOM. The eBOM lives at L6, the mBOM at L7, the tBOM at L7/L8, the dBOM at L9, the rBOM/cBOM at L11, and the deBOM at L12 — all linked through the same configuration items, all traceable by @source provenance from design intent to disposal. The five filter modes render at query time against that same graph without touching any stored data.
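
As a concrete illustration of a filter mode, the sketch below renders an xBOM at query time from the @source export-control field described above. Function and field names are assumptions; the point is that nothing extra is stored.

```python
# A filter-mode BOM rendered at query time: the xBOM as a filter over the
# CI graph's @source.exportControl field. Illustrative names; no stored xBOM
# exists anywhere.
def xbom(ci_graph, classifications=("ITAR", "EAR", "NOFOR")):
    """Render the Export-Controlled BOM as a view over the one CI graph."""
    return [ci for ci in ci_graph
            if ci.get("@source", {}).get("exportControl") in classifications]
```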

A full technical treatment of the sixteen-BOM architecture, the lx-bom.json sidecar schema, the forward dependency and evidence back-propagation paths across BOM views, and the anti-patterns it replaces in legacy PLM/ERP stacks is the subject of a dedicated companion whitepaper, Sixteen BOM Views on One CI Graph, which is in preparation. For the purposes of this whitepaper the key point is the same point that runs through the rest of the Clarity architecture: the BOM is a view, not a system of record. The system of record is the Lx graph.


Section 3 — Why anchoring to data wins

Sections 1 and 2 laid out the landscape and the architecture. Section 3 answers the harder question: what specifically do you gain by anchoring lifecycle governance to a typed data graph rather than to a process workflow, and why can no legacy tool be retrofitted into this shape?

3.1 Every phase gate becomes computable from evidence

The first and most important consequence is that gate status becomes computable. Rather than reconstructing a narrative at each gate review, the gate criteria are expressed as predicates over the Lx graph, evaluated continuously, and shown in the sidebar with their full provenance chain.

A concrete example. An L2 System Definition Review gate might demand:

  • Every L0 stakeholder need has at least one traceable L1 requirement.
  • Every L1 requirement has at least one L2 option that claims to satisfy it.
  • Every L2 option has at least one L2 parameter with human-verified @source lineage.
  • No active L0 invariant is violated at Ring 1 annotation.
  • Every L2 interface has an L1 boundary reference.

All five of these are predicate expressions over the Lx graph. The review board walks into the meeting with the evaluated predicates on the screen, the failing ones expanded to show the specific entities and the missing evidence, and the provenance chain for every passing one available in one click. The review takes an hour, not a day, and everybody leaves knowing exactly what was green, what was red, and why.
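
A minimal sketch of what those five criteria look like as computed predicates follows. The graph accessors (needs, trace, ring1_violations, and so on) are hypothetical helpers, not Clarity’s API; what matters is that each criterion is a boolean expression over the data, with failures traceable to specific entities.

```python
# A sketch of the five SDR gate criteria above as computed predicates. The
# graph accessors (needs, requirements, options, parameters, interfaces,
# trace, satisfied_by, ring1_violations) are hypothetical helpers.
def sdr_gate_status(graph):
    checks = {
        "every L0 need traces to at least one L1 requirement":
            all(graph.trace(need, to_layer="L1") for need in graph.needs()),
        "every L1 requirement has at least one satisfying L2 option":
            all(graph.satisfied_by(req, layer="L2")
                for req in graph.requirements()),
        "every L2 option has a human-verified parameter":
            all(any(p["@source"]["validation"] == "human-verified"
                    for p in graph.parameters(opt))
                for opt in graph.options()),
        "no active L0 invariant is violated at Ring 1":
            not graph.ring1_violations(),
        "every L2 interface references an L1 boundary":
            all(iface.get("boundaryRef")
                for iface in graph.interfaces(layer="L2")),
    }
    return checks   # gate is green iff all(checks.values())
```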

3.2 Regulator-admissible evidence packs are a filter, not a production

The second consequence is that regulator-admissible evidence packs stop being a production and become a filter. The auditor arrives, Clarity renders the evidence pack the auditor’s framework demands — ISO 15288, DoDAF, CMMI, MIL-STD-882, DO-178C, NQA-1, IEC 61508, whatever — and the pack is a provenance-tracked filter over the existing data, not a re-authored PDF. The artefacts are the same artefacts the engineering team has been working with continuously. The difference is that the framework overlay is computed at render time rather than maintained by hand.

This changes the economics of audit dramatically. A typical defence programme spends one to three person-years per audit cycle preparing evidence packs. That work disappears, because the evidence was never decoupled from the data in the first place.

3.3 Non-linear programmes stop being exceptions

The third consequence is that every non-linear entry pattern stops being an exception to be worked around. Brownfield audits, mid-lifecycle acquisitions, library imports, digital-twin bootstraps, disposal harvests — all of them are first-class operations against the Lx graph. No workflow engine to fight. No state machine to bypass. No tool-chain gymnastics to fit a real programme into the idealised L0-to-L12 sequence.
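
One such entry pattern, the brownfield audit at L7, can be sketched under assumed record shapes (import_as_built and the gap records are our illustration):

    def import_as_built(graph, survey_rows):
        """Enter the lifecycle at L7: write as-built CIs directly, and record
        missing upstream lineage as explicit gaps rather than violations."""
        gaps = []
        for row in survey_rows:
            graph.append({"id": row["tag"], "kind": "L7.AsBuilt",
                          "source": {"origin": "site-survey",
                                     "sheet": row["sheet"]}})
            if row.get("design_ref") is None:     # no L6 as-designed link yet
                gaps.append({"ci": row["tag"], "missing": "L6.AsDesigned"})
        return gaps                               # a work queue, not an error

    graph = []
    gaps = import_as_built(graph, [
        {"tag": "VLV-104", "sheet": "S-7", "design_ref": None},
        {"tag": "PMP-008", "sheet": "S-9", "design_ref": "DWG-2231"},
    ])
    # VLV-104 is in the graph immediately; its missing L6 lineage is tracked
    # as a gap to resolve outward and backward, not a state-machine breach.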

This matters because, in the authors’ collective experience across defence, automotive, aerospace, and manufacturing, most real programmes are non-linear. The linear L0→L12 pattern is the demonstration case in the vendor glossy. The actual work is a brownfield system with a partial rebuild in progress, an inherited supplier library, a pending retrofit, an operational fleet, and a disposal obligation, all running concurrently. Clarity assumes the real pattern.

3.4 Framework swap becomes a configuration choice

The fourth consequence is that switching or combining frameworks becomes a configuration choice, not a migration. A customer running CDS who acquires a division running SAFe does not migrate data. They turn on the SAFe overlay and the CDS overlay on the same Lx graph and both teams see their preferred view. A customer mandated to move from MODAF to NATO Architecture Framework (NAF) does not re-author any viewpoints. They swap the overlay. A customer whose regulator demands ISO 15288 evidence one year and MIL-STD-882 safety evidence the next renders both from the same data, continuously, at no incremental cost.
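
What "a configuration choice" can look like, sketched with hypothetical stage names and a deliberately simplified mapping shape:

    overlays = {
        "CDS": {                          # phase-gate view, incumbent teams
            "Concept":    ["L0", "L1", "L2"],
            "Definition": ["L3", "L4", "L5"],
            "Delivery":   ["L6", "L7", "L8", "L9"],
        },
        "SAFe": {                         # increment view, acquired division
            "PI-1": ["L1", "L2"],
            "PI-2": ["L2", "L3", "L5"],
        },
    }

    def view(entities, overlay, bucket):
        """Project the same entities into whichever framework bucket is asked."""
        layers = set(overlay[bucket])
        return [e for e in entities if e["kind"].split(".")[0] in layers]

    entities = [{"id": "R-1", "kind": "L1.Requirement"},
                {"id": "O-1", "kind": "L2.Option"}]
    cds_concept = view(entities, overlays["CDS"], "Concept")   # both views,
    safe_pi1    = view(entities, overlays["SAFe"], "PI-1")     # one graph

Turning on a second overlay adds a dictionary entry; no entity moves and no data migrates.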

This is the property that every legacy enterprise-architecture deployment wants and none can deliver, because none of them separated the framework from the data.

3.5 Through-life evidence becomes the default

The fifth consequence is that through-life evidence becomes the default, not a separate practice. The ISBM model — which Clarity’s founder iterated through seven versions and used to win A$450M of defence bids — correctly articulated the premise that a supplier’s value proposition for a 30–60 year platform is the evidence they can still produce in year forty-five, not the evidence they produced in year two. The ISBM could not operationalise that premise because the data layer to support it did not exist. Clarity’s Lx model provides the data layer the ISBM always needed: every @source record is carried from the day it was written, so an L2 parameter set in year two, with human-verified lineage back to the L0 need it served, is still queryable in year forty-five as a first-class operation. The through-life story is not a feature. It is the data model — and it is the first time the through-life evidence promise has had a substrate that could actually keep it.
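
The year-forty-five query can be sketched with a hypothetical record shape in which @source carries a derives_from link:

    records = {
        "P-77": {"kind": "L2.Parameter",
                 "source": {"written": "2027-03-01", "verified_by": "j.smith",
                            "derives_from": "O-12"}},
        "O-12": {"kind": "L2.Option",      "source": {"derives_from": "R-3"}},
        "R-3":  {"kind": "L1.Requirement", "source": {"derives_from": "N-1"}},
        "N-1":  {"kind": "L0.Need",        "source": {"origin": "stakeholder-wg"}},
    }

    def provenance_chain(records, start):
        """Walk derives_from links until the chain bottoms out at L0."""
        chain, cur = [], start
        while cur is not None:
            chain.append((cur, records[cur]["kind"]))
            cur = records[cur]["source"].get("derives_from")
        return chain

    # The same one-call query in year two and in year forty-five, because
    # the records are immutable and the chain was never discarded.
    assert provenance_chain(records, "P-77")[-1] == ("N-1", "L0.Need")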

3.6 Why no legacy phase gate tool can be retrofitted

The sixth consequence is the negative one: no legacy phase gate tool can be retrofitted into this shape. The reason is the same reason the Event-Driven by Kernel whitepaper gave for legacy enterprise stacks: the decisions that make the Clarity architecture work are kernel-level decisions, not feature-level ones. Immutable writes. Typed schemas. @source on every field. Event-driven aggregation. Three rings of invariants. Workflow as view, not as cage. A legacy PLM, ERP, or workflow engine would have to rebuild all of those at the kernel to get there, and the rebuild would invalidate every customisation and every integration that sits on top — which is the business model the vendor lives on.

A legacy phase gate tool with an AI assistant bolted on top is still a legacy phase gate tool. A legacy PLM with a “digital thread module” bought by acquisition is still a legacy PLM. The module cannot do what Clarity does, because the data underneath the module still lives in the same business objects, the same in-place updates, and the same provenance-free fields as before. The bolt-on can produce a dashboard. It cannot answer the question “which L10 telemetry anomaly back-propagated through which L5 decision through which L2 option through which L0 invariant, with full provenance, in under one second” — because the back-propagation was never in the data model.

3.7 The Clarity USP in one sentence

Every point in Section 3 cashes out into a single positioning claim that the Clarity product can make honestly, in front of any phase gate auditor, any architecture reviewer, any regulator, and any procurement committee:

Clarity is the only lifecycle-governance platform that anchors phase gates, architecture frameworks, and through-life evidence to a typed data graph — thirteen explicit layers from stakeholder intent to disposal, with forward dependency and evidence back-propagation in the same graph, full @source provenance on every field, and every framework in the world supported as an overlay without being depended on.

That is the Clarity USP for lifecycle governance. It is not a slogan. It is a structural consequence of the decisions described in Section 2, and every other platform on the market either does not make those decisions or makes them on the wrong side of the line.


Conclusion — thirteen layers, one graph, every framework supported

Phase gate frameworks will remain non-negotiable for regulated engineering work. They exist for good reasons: multi-supplier synchronisation, multi-decade time horizons, safety-of-life consequences, exponential rework costs, and regulator expectation. No amount of agile ceremony, architecture viewpointing, or hyperscaler platform engineering removes the need for explicit gates with explicit evidence.

What has to change is the substrate. The phase gate cannot sit on top of a workflow engine that stores the evidence in business objects and reconstructs the narrative at the review. It has to sit on top of a typed, versioned, provenance-carrying data graph that knows what the evidence is, where it came from, what it depends on, and what depends on it — across all thirteen lifecycle layers, from stakeholder intent through disposal, with forward dependency and evidence back-propagation running in the same graph.

Clarity is that substrate. It supports every phase gate framework the authors have worked inside — MDS, CDS, MMDS, the through-life lifecycle framework at a major UK aerospace and defence prime, the ISBM, the naval safety certification model at the Australian subsidiary of a US defence prime, the reporting discipline built on top of a major aerospace-and-defence PLM, other major commercial PLM and requirements-management platforms, and the long tail of national defence and nuclear frameworks — as overlays on a shared data layer. It supports agile work organisation inside the gates, it supports architecture meta-models (TOGAF, SAFe, DoDAF, MODAF, ArchiMate, ISO 15288, CMMI, MIL-STD-882) as overlay mappings, and it supports the hyperscaler operational primitives (observability, progressive rollout, chaos engineering, working-backwards documents) as first-class practices. It depends on none of them.

The result is a lifecycle-governance platform on which the phase gate is computable, the regulator-admissible evidence pack is a filter, the non-linear programme is the default, the framework swap is a configuration choice, the through-life evidence is continuous, and the digital thread closes because — at the kernel, with thirteen layers, on one graph — it was never open in the first place.

That is what thirteen lifecycle phases, one graph means. It is not a diagram on a slide. It is the architecture every regulated engineering programme of the next thirty years is going to need, and it is the one Clarity was built from day one to provide.

This whitepaper forms part of the Clarity technical series. See also: Breaking the DIKW Ceiling, Event-Driven by Kernel, Not by Feature, 325 AI Agents, Bounded by DeZolve, and the 25-USP matrix. A dedicated companion whitepaper, Sixteen BOM Views on One CI Graph, is in preparation.
