Decision Intelligence

The DeZolve Decision Intelligence Framework

Fifteen years of cross-domain research — from the original 2012 DeZolve whitepapers through to the live Clarity implementation — distilled into a fifteen-node, twenty-six-edge directed graph that scores decision trustworthiness from committed decision back to original evidence. Four engines, three phases, twenty issues, four needs, four trust categories, and why legacy audit trails are reconstructed fictions. Patent pending.

Published 9 April 2026 · 35 min read · Thread: AI & Decision Intelligence · Data & Provenance

TL;DR

In 2012 the founder of Clarity published two Commercial-in-Confidence whitepapers under the Consult4you banner, titled “Why Can’t We Make Good Decisions” and “How Can We Make Better Decisions Faster”. The first paper identified twenty structural issues in real-world decision making, observed across automotive vehicle dynamics, defence programmes, academic engineering education, and industry consulting work going back to the mid-1990s. The second paper proposed DeZolve as the integrated response — a decision-making paradigm built around four needs, four algorithmic engines (three of which remain as engines in the current implementation, with the fourth subsumed by the Library subsystem), three lifecycle phases, eight named roles, and a clean separation of Decision Space, Problem Space, and Solution Space.

Fifteen years later, DeZolve is no longer a specification. It is a live, deployed, patent-pending decision-intelligence framework embedded at the kernel of the Clarity platform. The 2012 vision has been validated and operationalised across three additional domains the original papers did not reach — enterprise cloud infrastructure, large-scale AI agent bounding, and long-lifecycle regulated engineering — and implemented as a fifteen-node, twenty-six-edge directed graph with a reverse-traversal truth-vector evaluator that computes decision defensibility in real time from a committed decision back through its supporting evidence chain.

Six structural properties make the current implementation work, and together they are the Clarity USP for decision intelligence.

  1. The original 2012 needs are the current needs. NEED1 (decision quality), NEED2 (decision speed), NEED3 (evidence-based decision making), and NEED4 (efficiency through reuse) remain the four architectural goals the framework is built to satisfy. Every property below is an implementation of one or more of the four needs.
  2. Decision Space / Problem Space / Solution Space separation is enforced structurally at the Lx layer boundaries. The Problem Space lives in L0–L1 (intent, needs, requirements, context, questions). The Solution Space lives in L2–L3 (options, analyses, conclusions). The Decision Space is the union of the two plus L4–L5 (baselines, evidence, decisions). The three spaces share a graph but segregate the work, so that discovery cannot bias analysis and analysis cannot pre-empt selection.
  3. Fifteen canonical node types across the Lx layers. Need, Goal, Requirement, Assumption, Fact, Question, Challenge, Idea, Option, Context, Analysis, Conclusion, Evidence, Decision, and Data. Every engineering artefact an auditor cares about has a canonical home in the taxonomy, and every instance is simultaneously an Lx entity and a DeZolve node.
  4. Twenty-six typed edges with four trust categories. The edges describe relationships (Assumption validated by Fact, Decision requires Options, Requirement verified by Evidence, Analysis consumes Data). Each hop in a traversal is scored as verified (an explicit field link exists), inferred (a structural connection is derivable), transitive (an indirect path is present), or gap (a required link is missing). The categories are categorical, not continuous — they pass a structural test or they do not.
  5. Three-phase decision journey — Discovery, Analysis, Selection — with named roles. The 2012 role taxonomy (Seeker, Solver, Decider, Subject Matter Expert, Moderator, Mentor, Observer, Gardener) is preserved as first-class RBAC roles in the current Clarity platform. Seekers drive Discovery in the Problem Space. Solvers drive Analysis in the Solution Space. Deciders commit in the Decision Space. SMEs, Mentors, Moderators, Observers, and Gardeners support across all three phases.
  6. Three algorithmic engines plus the Library subsystem. A Prioritisation engine (a patent-protected decision-analysis method with a novel Confident / Unsure / Don’t Know belief model that tolerates gaps and partial pair-compare matrices). A Wickedness engine that classifies each problem as Tame, Partially Wicked, or Wicked, with different treatment paths for each. A Complexity engine that manages information visibility and tracks the decision space as it matures from disorder to clarity. And the Clarity Library — a full cross-cutting subsystem that subsumes the 2012 Structure engine with substantially more functionality: a typed document catalogue, a taxonomy repository, a template library for reusable Problem Spaces, an auto-classification pipeline, an ICD extraction layer for BOM harmonisation, and the RAG feed underneath every AI agent in the platform. The three engines are patent-protected in their specific implementations; the Library is the cross-cutting data substrate that all three engines and every other Clarity component consume.

If you only read one sentence: the decision chain is a graph, the graph has to be authored at decision time, and the only reason the industry has not been doing this for forty years is that no one had built the substrate — Clarity is that substrate, and DeZolve is the scoring layer that runs on top of it.


The bargain on offer

Every engineering manager who has ever sat through a contract dispute, a gate review, or a post-incident investigation has watched the same scene unfold. An auditor, a regulator, a lawyer, or an inquiry chair asks a deceptively simple question: “which decision was this, when was it taken, on what evidence, by whom, against which requirements, and why did it appear defensible at the time?”

The room goes quiet. Somebody fetches a folder. The folder contains a signed decision record with a date, a list of attendees, and a three-paragraph summary. The summary references “the design review”, “the analysis”, and “the supporting evidence”. None of those references is a link. None of them is a pointer. None of them is a query. They are words on a page, and the words were written by a human under time pressure two weeks after the decision was taken, based on that human’s memory of a meeting and their interpretation of documents that may or may not still exist in the form they were in at the time.

The team then spends the next three to six months — or three to six years, depending on how serious the inquiry is — reconstructing the evidence chain by hand. They pull documents from a PLM that has been upgraded twice since the decision. They pull analyses from an analyst’s laptop that was decommissioned when she left the company. They pull emails from an archive that only retains inbound messages. They piece together a narrative. The narrative is defensible. The narrative is mostly true. The narrative was not the narrative that the decision was actually made on, and nobody in the room can tell the difference anymore, because the people who made the decision have rotated, retired, or forgotten.

"Legacy audit trails are reconstructive narratives.
They are assembled from a pile of documents
that were never designed to connect to each other,
usually days before a gate review or years after an incident.
They are not verifiable, because the evidence chain
was never recorded as a chain."

The industry has accepted this for forty years. It is the way engineering audit has always worked, and the cost — in disputed programmes, successful lawsuits, failed inquiries, and programmes cancelled because their evidence chains could not be reconstructed — has been baked into the price of doing business.

Clarity does not accept it. The DeZolve Decision Intelligence Framework is the structural alternative. It was first formally articulated in two 2012 whitepapers by Clarity’s founder, has been refined and validated across fifteen years of cross-domain engineering practice, and is now live and patent pending inside the Clarity platform. The rest of this paper explains how it works and where it came from.

The paper has three sections:

  • Section 1 — the 2012 foundation: the twenty issues in real-world decision making, the four needs of decision makers, the wicked/tame/partially-wicked classification, the cognitive science underneath, and the fifteen-year arc that took the 2012 research from specification to live production platform.
  • Section 2 — the DeZolve framework as implemented today: the Decision Space / Problem Space / Solution Space separation; the fifteen canonical node types and twenty-six typed edges; the reverse-traversal engine from Decision to Data; the four trust categories and classification states; the three-phase Discovery / Analysis / Selection journey with the eight named roles; the three surviving algorithmic engines and the Library subsystem that replaced the original Structure engine; and the patent boundary that distinguishes what is publicly described from what is patent-protected.
  • Section 3 — what the framework unlocks: decisions scored on defensibility rather than outcome, structural visibility of AI participation, evidence coverage visible before the decision is taken, computable gate reviews, post-incident investigations that become queries, and through-life decision traceability at decade-plus horizons.

Section 1 — The 2012 foundation and the fifteen-year arc

1.1 Where DeZolve came from

In April 2012, operating under the Consult4you consultancy, Clarity’s founder published the first of two Commercial-in-Confidence whitepapers titled “Why Can’t We Make Good Decisions”. The paper opened with a question the author had been unable to stop asking since the late 1990s: why, given the extraordinary investment the engineering profession has made in tools, techniques, standards, and training, do we continue to produce outrageous and far-reaching decision-making failures?

The paper used the Space Shuttle Challenger disaster, the 2008 Global Financial Crisis, the Airbus A380 delivery delays, and the bushfire-related deaths in Victoria in 2009 as touchstones, but the author’s real interest was the recurring pattern underneath. Every one of these failures had, in retrospect, a decision chain whose evidence was either not captured at the time, not connected to the decision, or both. Every one of them had a reconstructive post-mortem that reached sensible conclusions years too late. Every one of them had, at the moment the critical decision was taken, evidence in existence that, if it had been visible and queryable, would probably have prevented the outcome.

The most instructive case in the 2012 paper was Roger Boisjoly’s attempt to stop the Challenger launch the night before the accident. Boisjoly had correct, quantitative, hard-won evidence about the O-rings in the solid rocket boosters failing at low temperatures. The evidence existed. The chain that would have connected it to the launch-decision authority did not. The decision was taken at a level of the organisation that did not see the data in the form Boisjoly had prepared it. The post-accident inquiry reconstructed the whole story in detail. The real-time decision chain the night before the accident did not exist in any form the decision-makers could query.

The 2012 paper identified this as the founding pattern. The decision chain had to be a structural property of the decision, captured the moment the decision was taken, not a narrative assembled afterwards. Everything else — tools, workflows, gate reviews, audit practices — was secondary. The chain was primary, and if you could not compute it at decision time, you did not have it at all. Fifteen years of subsequent work have not dislodged that insight; they have only deepened the evidence for it.

1.2 The twenty issues of real-world decision making

The 2012 paper did not stop at the Challenger framing. It distilled the recurring causes of poor decision making into a structured set of twenty issues, teased out across sections on complexity, systems thinking, wicked problems, frameworks, information management, cognitive science, and human behaviour. The twenty issues are worth re-stating because every one of them is still operative, fifteen years later, in the engineering programmes the authors still encounter weekly.

The twenty issues, preserved from the original 2012 numbering:

  1. The human view of complexity does not equate with the system measure of complexity.
  2. Emergent properties have a high probability of occurrence and impact but a low probability of being observable before a decision is made.
  3. Too many tame systems are allowed to degenerate into wicked systems.
  4. Decision-making tools and techniques are not intuitive to use.
  5. Decision-making tools and techniques are not well suited to real-time or iterative and evolving decision making.
  6. Legislative compliance is not an enabler of good decision making.
  7. Decision making as a process has not been successfully codified.
  8. Many frameworks exist that may be adapted for tame planning and decision-making activities — but none for wicked ones.
  9. A lack of consistent terminology leads to incorrect assumptions during collaborative decision making.
  10. Decision making is more than simply rational or logical thought. Emotions play an important role.
  11. Human behaviour often affects the decision-making process in a non-optimum manner.
  12. It is more intuitive to make evolving decisions than to have to commit to early decisions where future-state information is not yet available.
  13. Decision information is ephemeral and difficult to manage within existing tools. Recall is frequently impossible.
  14. Decision makers — and most people — do not intuitively archive information in ways that facilitate reuse and recall.
  15. Important information may or may not be captured.
  16. Stored data is not archived, categorised, or related to other information in a consistent manner.
  17. Lack of appropriate stakeholder engagement leads to non-optimum decisions.
  18. Too much analysis — analysis paralysis — contributes to poor decision making.
  19. Cost, as a decision discriminator, should not be considered until the benefits and utilities of the options are known.
  20. Information can be used in very different ways than the information creator intended.

Any engineer who has ever run a real programme will recognise every one of these. They are not theoretical concerns; they are the recurring friction of engineering decision work. The 2012 paper’s contribution was to name them, count them, categorise them, and propose a framework specifically designed to address all twenty simultaneously rather than piecemeal. That framework was DeZolve, and the second 2012 paper described how it was intended to work.

1.3 The four needs of decision makers

The second 2012 paper, “How Can We Make Better Decisions Faster”, took the twenty issues and mapped them to a hierarchy of four needs that the proposed DeZolve platform was designed to satisfy. The four needs are the architectural anchors of the framework, and they remain the anchors of the current Clarity implementation.

NEED1 — Improve Decision Quality

Decisions have to be demonstrably better on the metrics that matter: correctness under real-world conditions, alignment with stakeholder value, robustness to emerging information, and defensibility under later scrutiny. Quality is not outcome-dependent — a good decision can have a bad outcome and vice versa — but quality is measurable in terms of the evidence chain that supported the decision at the moment it was taken.

NEED2 — Make Decisions Faster

Decision speed is almost always the binding constraint in real-world engineering work. Every failure mode the twenty issues describe is made worse by time pressure. The framework has to make decisions faster, not just better — and specifically, it has to get the decision-maker from problem recognition to committed action on a timescale that respects the real pace of engineering work, not the idealised pace of academic decision theory.

NEED3 — Evidence-Based Decision Making

The decision chain has to be captured, structurally, at the moment the decision is taken, with enough provenance that the chain survives staff turnover, tool obsolescence, and institutional memory loss. Evidence has to be reusable across decisions, queryable in real time, and attributable to its original authority source.

NEED4 — Efficiency Through Reuse

Every engineering programme rediscovers the same patterns, the same risks, the same mitigations, and the same lessons that every previous programme already learned. The framework has to make the reuse of prior decision work — problem spaces, templates, taxonomies, lessons-learnt, and known-good evidence chains — a first-class property, not a retrospective archive.

Mapping the twenty issues to the four needs produced an explicit design constraint for the original 2012 DeZolve framework: every issue had to be addressed, and every need had to be satisfied, by a single integrated architecture. Not by a committee of bolt-on tools. Not by a workflow engine with a dashboard. By a coherent decision-intelligence substrate that treated the twenty issues as connected problems with a shared structural cause, and that treated the four needs as a hierarchy whose satisfaction required all four, not just the popular ones.

1.4 Tame, partially wicked, and wicked problems

The 2012 paper drew heavily on Horst Rittel’s work on wicked problems — problems that cannot be solved by the standard step-wise problem-solving techniques taught in schools and applied in most organisations. Rittel’s ten criteria for wickedness (no definitive formulation, no stopping rule, solutions are better-or-worse not true-or-false, every solution is a one-shot operation, every wicked problem is unique, every wicked problem is a symptom of another problem, and so on) describe the decision conditions every serious engineering programme eventually faces.

The 2012 paper combined Rittel’s work with the Cynefin framework of David Snowden to produce a three-way classification of every problem a decision-maker might face:

  • Tame. Cause and effect are clear. Step-wise techniques work. The problem is solvable by an individual or a small team using established methods. Example: calculating the square root of an integer.
  • Partially wicked. Some aspects are tame; others are not. The relationship between cause and effect can only be perceived in retrospect, not in advance. The decision-maker can make progress but cannot guarantee correctness. Example: most real engineering trade studies.
  • Wicked. Cause and effect are disconnected at the systems level. The problem cannot be fully formulated before it is attempted. Every solution is a one-shot operation with significant consequences. The decision-maker has no right to be wrong. Example: achieving peace in the Middle East; the global financial crisis; most strategic engineering decisions under uncertainty.

The original 2012 framework treated the wickedness classification as a first-class field on every problem entering the decision space. A tame problem could be handled quickly by an individual. A partially wicked problem required structured engagement with subject-matter experts and mentors. A wicked problem required the full collaborative Discovery / Analysis / Selection journey described in §2.6 below, with explicit tooling for managing the stakeholder conflict that wicked problems always produce.

The wickedness classification is one of the specific elements of the DeZolve framework that is patent-protected and not surfaced on the public Clarity website. The classification itself is describable publicly (Tame / Partially Wicked / Wicked is a standard frame from the academic literature); the DeZolve mechanism that detects, classifies, and routes problems across the three categories is implementation detail, held under NDA with customers whose use cases require it.

1.5 Cognitive science — primitive brain and modern brain

The 2012 paper devoted significant space to the cognitive science underneath decision making, because the authors had realised — across fifteen years of running decision workshops with real engineers on real programmes — that the human brain does not make decisions in the way decision theory says it does.

The simplified model the paper proposed distinguished between the primitive brain (evolutionarily older, pattern-matching, fast, massive parallel processing, subconscious, capable of Kasparov-beats-Deep-Blue intuitive pattern recognition) and the modern brain (evolutionarily younger, rational, slow, bandwidth-limited to Miller’s seven-plus-or-minus-two rule, conscious, capable of deliberate calm under pressure but not of the pattern-matching feats the primitive brain performs effortlessly).

The practical implication was important: decision-making tools that require the decision-maker to operate in their modern brain alone are fundamentally mis-calibrated for the task. The modern brain cannot hold enough parameters in working memory to reason about a real engineering decision. The primitive brain can, but only when it is supplied with enough structured context to recognise the patterns it already knows. A framework that ignores the primitive brain — that treats decision making as pure rational analysis over tables of numbers — is a framework that will produce exhausted, biased, and wrong decisions under time pressure, every time.

This is the cognitive-science grounding for what the 2012 paper called the gut-data-gut principle: decisions emerge from intuition, are sharpened by data, and are committed by intuition. The framework’s job is to make the data phase efficient enough that the two intuition phases can dominate the time budget, because it is in the intuition phases that high-quality decisions actually happen. Pure analysis — what the 2012 paper named analysis paralysis — is the failure mode, not the goal.

1.6 The fifteen-year validation arc

The 2012 papers were a specification and a research programme, not a product. The author explicitly stated at the time that a prototype existed but that significant further work was required to turn the framework into commercial reality. The “program of work currently under development” referenced in the 2012 acknowledgements was the starting gun for the fifteen-year arc that followed.

That arc, in brief, running through the domains the authors worked inside over the period 2012–2026:

  • Continued defence and academic research (2012–2014). University teaching of the framework to engineering undergraduates. Consulting engagements with defence primes on safety-case acceleration, contract dispute resolution, and enterprise tool consolidation. Multiple iterations of the DeZolve prototype code under the Consult4you banner.
  • Original patent filing (2012). The first formal DeZolve patent was filed during this period, establishing priority on the core innovations — the pair-compare belief model, the wickedness engine, the Decision Space / Problem Space / Solution Space separation, and the role-taxonomy architecture.
  • Cloud infrastructure and enterprise transformation (2014–2023). A decade inside the world’s largest cloud provider, running enterprise migration programmes, co-authoring the Cloud Adoption Framework Maturity Model, authoring the Well-Architected Framework’s fifth pillar (Operational Excellence), and training over a thousand APJC consultants and instructors. The DeZolve ideas did not have a commercial home during this period, but the practical observation of how real enterprise decisions were being made — or not being made — at hyperscaler scale dramatically refined the framework’s requirements.
  • AI and foundation-model emergence (2020–2024). The arrival of capable foundation models changed the DeZolve requirements in two ways. First, the provenance problem became acute: distinguishing AI-authored content from human-verified content is now a structural necessity, not a policy preference. Second, the AI models themselves became available as an implementation tool for the DeZolve engines, particularly for the Structure and Wickedness engines where classification at scale had previously required heroic manual effort.
  • Clarity platform development (2023–present). The 2012 framework was finally given its proper substrate: the Lx model, the event-driven kernel, the @source provenance architecture, and the typed engineering data graph that DeZolve had always needed. The framework was re-implemented from the ground up against the Lx substrate, the patent was refreshed and extended to the new architecture, and the result is what is now running in production inside the Clarity platform.

Fifteen years. Six domains. The same structural observation repeated at every stage: the evidence chain is what matters, the evidence chain has to be authored at decision time, and no existing tool stack had the substrate to make that possible. The 2012 framework was the specification. The 2026 Clarity implementation is the realisation. The rest of this paper describes how the implementation works.

1.7 DIKW and the Wisdom layer gap

Before leaving the historical section, it is worth explicitly restating a point the authors made in the 2012 papers and have restated every year since: the DeZolve framework is best understood as the Wisdom layer of the Data-Information-Knowledge-Wisdom hierarchy that information science has used as a descriptive framework for decades.

Data is raw measurements. Information is structured data with metadata. Knowledge is causal relationships between pieces of information. Wisdom is ethical framing, comparative analysis, and cross-domain pattern recognition applied to knowledge to produce decisions.

The engineering-software industry has been excellent at the Data and Information layers and close to absent at the Knowledge and Wisdom layers. Every commercial PLM, ERP, MES, MRO, and EAM system is a Data or Information tool. They store artefacts, index records, apply metadata, and return search results. None of them build a typed graph of causal relationships between the stored items. None of them score decisions on trustworthiness. None of them maintain a structural distinction between a human-verified fact and an AI-inferred suggestion. None of them can answer the auditor’s question in real time, at the moment the decision is taken, with any structural confidence.

DeZolve is the Wisdom layer that the industry has been missing. The sibling whitepaper Breaking the DIKW Ceiling describes the framing in full. The positioning is the same here: the first four decades of engineering software built the bottom of the hierarchy. The next decade is about the top of it, and the top of the hierarchy is where the decision-intelligence value is concentrated.


Section 2 — The DeZolve framework as implemented today

The DeZolve Decision Intelligence Framework, as running in the Clarity platform today, is a directed graph model with fifteen node types, twenty-six typed edges, four trust categories, a reverse-traversal evaluator, a three-phase decision journey, eight named roles, three algorithmic engines, and a cross-cutting Library subsystem that replaced the original 2012 Structure engine with substantially more functionality. The taxonomy is canonical and proprietary. Several of the algorithmic specifics are patent-protected. The structural shape is describable publicly, and that is what this section describes.

2.1 Decision Space, Problem Space, Solution Space — the 2012 separation, enforced by the Lx substrate

The single most architecturally important move from the 2012 framework was the explicit separation of three conceptual spaces: the Decision Space (the overall container), the Problem Space (where the problem is discovered and formulated), and the Solution Space (where options are analysed and trade-offs evaluated).

The separation matters for four specific reasons, each of which corresponds to a structural failure mode the 2012 paper had observed in practice:

  • Discovery cannot bias analysis. If the people discovering the problem are also the people analysing the options, the analysis will be biased toward the options that justify their discovery framing. The separation enforces a handover between roles at a specific phase boundary.
  • Analysis cannot pre-empt selection. If the people analysing the options are also the people committing the decision, the analysis will be truncated at the first option that looks acceptable to the commit authority. The separation keeps Solvers and Deciders in distinct roles.
  • Problem spaces can be reused across decisions. Because the Problem Space does not contain the Decision (only the needs, requirements, context, and questions), a well-formed Problem Space can be saved as a template and instantiated in a new Decision Space for a similar future problem. This is the NEED4 (efficiency through reuse) property made structural.
  • Evidence can be moved, archived, and audited without violating integrity. The separation means the history of decisions can be preserved independently of the problems and solutions that fed them, so that auditors can query the decision record without traversing the full problem formulation, and vice versa.

In the current Clarity implementation, the three spaces map directly onto the Lx layer hierarchy. The Problem Space lives in L0 and L1 — intent, needs, requirements, context, external boundary, questions, assumptions, goals, and the L1 system boundary against which options will be evaluated. The Solution Space lives in L2 and L3 — option sets, options, internal interfaces, parameters, scenarios, analyses, conclusions, and the Measures of Effectiveness that emerge from running analyses against scenarios. The Decision Space is the complete span L0–L5, adding L4 (change baselines and evidence records) and L5 (formal decision records with the DeZolve truth vector attached).
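
To make the mapping concrete, the sketch below expresses the space-to-layer relationship as a small typed constant. It is illustrative only: the layer lists follow the paragraph above, while the identifier names (SPACE_LAYERS, isProblemSpaceLayer) are invented for this sketch rather than taken from the Clarity schema.

```typescript
// Illustrative sketch of the space-to-layer mapping described above.
// Identifier names are assumptions for this sketch, not the Clarity schema.
type LxLayer = "L0" | "L1" | "L2" | "L3" | "L4" | "L5";
type Space = "problem" | "solution" | "decision";

const SPACE_LAYERS: Record<Space, LxLayer[]> = {
  problem:  ["L0", "L1"],                          // intent, needs, requirements, context, questions
  solution: ["L2", "L3"],                          // options, analyses, conclusions
  decision: ["L0", "L1", "L2", "L3", "L4", "L5"],  // the full span, adding baselines, evidence, decisions
};

// The kind of structural check the separation enables: content authored during
// Discovery must sit in a Problem Space layer, so it cannot leak into analysis layers.
function isProblemSpaceLayer(layer: LxLayer): boolean {
  return SPACE_LAYERS.problem.includes(layer);
}
```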

The 2012 conceptual separation is now an architectural property of the data substrate, enforced by the Lx schema, not by a workflow engine on top of the data. That is the difference between a specification and an implementation, and that is the difference between the 2012 papers and the 2026 Clarity platform.

2.2 The fifteen canonical node types

Every engineering artefact that can participate in a decision chain maps to one of fifteen canonical DeZolve node types. The taxonomy was developed during the original 2012 patent work and has been refined across the fifteen-year cross-domain arc described in Section 1.

The fifteen nodes group naturally into the three spaces of the 2012 framework: the Problem Space (where the problem is discovered and formulated), the Solution Space (where options are analysed), and the Decision Space (where committed decisions, their supporting evidence, and their underlying source data are recorded). The grouping matches the three-phase Discovery / Analysis / Selection journey described in §2.6 and the role-separation of Seekers / Solvers / Deciders described in §2.7.

A critical point for understanding the taxonomy: most node types are not confined to a single Lx layer. The Lx mapping given for each node below indicates the primary home — the layer at which the node is most commonly authored and consumed — but most nodes can legitimately exist at several layers. An assumption expressed at L0 by a stakeholder is a different artefact from an assumption expressed at L7 by an integrator about a supplier’s as-built tolerances, but both are assumptions in the DeZolve sense, both carry human lineage in their @source, and both participate in the same truth-vector traversal. The same is true of most of the other fourteen node types.

Problem Space — seven nodes authored during Discovery

The seven Problem Space nodes describe the problem the programme is trying to solve, not the options being considered or the decisions being taken. They are authored predominantly by Seekers and Subject Matter Experts during the Discovery phase.

  1. Need — a stakeholder’s expression of what they want, why, and at what priority. Primary home L0; refined expressions at L1 context and L3 scenario level are also valid when a need only becomes meaningful in a specific context. Always human-authored.
  2. Goal — a higher-level outcome that one or more needs roll up to. Primary home L0; goals are generally programme-level intent and rarely decomposed below L1.
  3. Requirement — a formal, verifiable statement derived from needs and goals. Primary home L0; related requirement-like statements also live at L1 (interface requirements), L2 (performance parameters expressed as requirements), and L3 (scenario-specific constraints that behave as requirements). Usually human-authored or algorithm-extracted from source documents and then human-verified before becoming authoritative.
  4. Assumption — a claim taken as true without direct evidence, pending validation. Always human-authored. Can live at any Lx layer from L0 through L12 — a stakeholder’s assumption about desired outcomes at L0, an architect’s assumption about an interface behaviour at L2, an analyst’s assumption about a scenario condition at L3, an integrator’s assumption about as-built tolerances at L7, an operator’s assumption about a telemetry baseline at L10. Every assumption carries human lineage in its @source record, and its journey from assumption to fact via validation against evidence is one of the most important structural signals in the DeZolve truth vector.
  5. Fact — a claim that has been validated against evidence. Can live at any Lx layer — L0 foundational facts (from standards or prior work), L3 analytical facts (derived from analysis results), L8 validation facts (confirmed by V&V tests), L10 operational facts (observed in telemetry). Lineage can be human, algorithm, or AI-derived-and-human-approved. Every fact must have at least one Evidence node substantiating it; without evidence, it is still an assumption.
  6. Question — an unresolved point that the programme needs to answer. Can live at any Lx layer from L0 through L12 — stakeholder questions at L0, architecture questions at L2, analysis questions at L3, change questions at L4, decision questions at L5, implementation questions at L6 through L12. Questions are the most layer-agnostic node type in the taxonomy; they exist wherever something is not yet known.
  7. Challenge — a known obstacle, risk, or objection to an assumption, requirement, option, or decision. Can live at any Lx layer where an existing node is being challenged — most commonly L0 risks and L3 analytical challenges, but also L7 build challenges, L8 qualification challenges, L10 operational challenges, and L11 in-service challenges.

Solution Space — five nodes authored during Analysis

The five Solution Space nodes describe the candidate solutions being considered and the analytical work performed to evaluate them. They are authored predominantly by Solvers and SMEs during the Analysis phase.

  1. Idea — an early-stage candidate solution or approach, not yet formalised as an option. Can appear at any layer where novel thinking is happening — most commonly L0 (problem-space brainstorming) and L2 (early option exploration), but ideas can emerge at L6 (alternative designs), L10 (operational improvements), or L11 (in-service modification proposals) as well. Usually human-authored, often in collaboration with AI-drafted candidates that human Solvers review and promote.
  2. Option — a formalised candidate architecture, configuration, or approach that a decision will choose among. Primary home L2 as design-plane options; also L6 (as-designed alternatives), L7 (as-built variants), L9 (deployment-site-specific options), L11 (in-service upgrade options), and L12 (disposal-route options). An option is distinct from an idea in that it has been structured well enough to be analysed formally.
  3. Context — the scenario, situation, variant, or configuration under which an option is being evaluated. Primary home L3 as analytical scenarios; also L1 (system context), L9 (deployment context), and L10 (operating context). A context is not just a label — it is a structured set of conditions that an Analysis consumes.
  4. Analysis — a structured evaluation of an option in a context, producing a conclusion. Primary home L3; also L8 (as-validated analyses using real test data rather than analytical models) and L10 (operational analyses of telemetry data against modelled predictions). Always has at least one Evidence or Data node as input and at least one Conclusion as output.
  5. Conclusion — the output of an analysis, supporting or refuting a claim about an option. Primary home L3; also L8 (validation conclusions) and L10 (operational conclusions). Every conclusion is attached to the analysis that produced it and to the option it is evaluating.

Decision Space — three nodes of the committed record

The three Decision Space nodes are the committed record: the decision itself, the evidence that supports it, and the raw data underneath the evidence. They are authored during the Selection phase by Deciders (for the Decision itself) and accumulated throughout the decision journey from Library imports, test results, supplier deliveries, and operational observations (for Evidence and Data).

  1. Evidence — a formal artefact (test result, qualification certificate, baseline snapshot, regulatory finding, supplier certification, audit report, commissioning record, operational telemetry summary) that substantiates a Fact or validates a Requirement. Lives at many Lx layers depending on the type of evidence: L4 change-baseline snapshots, L5 decision-attached evidence packs, L7 as-built qualification certificates, L8 V&V test results (the primary home for validation evidence), L9 deployment commissioning evidence, L10 operational telemetry summaries, L11 in-service audit findings, L12 disposal verification records. The Lx layer of an evidence record indicates when in the lifecycle the evidence was produced, not where it is being consumed. A piece of L8 validation evidence can support a decision at L5, a change at L4, a parameter at L2, and an original requirement at L0, all simultaneously, via different traversal paths.
  2. Decision — a formal, committed choice among options, with a record of the chosen option, the rationale, the supporting evidence chain, and the approving authority. Primary home L5 for formal programme decisions; also L4 (change approvals), L7 (build deviation decisions), L9 (deployment decisions), L11 (in-service modification decisions), and L12 (disposal decisions). Every decision carries its own DeZolve truth vector, computed at the moment of commit.
  3. Data — the raw source material from which facts, evidence, and analyses are derived. Primary home the Library layer (ConOps documents, standards, supplier datasheets, regulatory filings, prior-programme archives), but also L7 (as-built sensor data and inspection records), L10 (operational telemetry streams), and L12 (disposal manifests and environmental reports). Data is the lowest-level artefact in the taxonomy — the raw material that substantiates Evidence, and the point at which the reverse traversal eventually bottoms out.

Every one of these fifteen node types has an explicit home in the Lx schema. Every entity in a Clarity programme — every need, every requirement, every option, every analysis, every decision — is simultaneously an instance of its Lx entity type and a node in the DeZolve graph. The two representations are the same data, viewed through two lenses.
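
For readers who think in schemas, a minimal sketch of the taxonomy as a closed type may help. The union below lists the fifteen node types grouped by space; the surrounding node shape and its field names (lxLayer, source, lineage) are simplifications invented for illustration, not the Lx schema itself.

```typescript
// Illustrative sketch: the fifteen canonical node types as a closed union, grouped by space.
// The node shape below is a simplification; field names are assumptions, not the Lx schema.
type DeZolveNodeType =
  // Problem Space
  | "Need" | "Goal" | "Requirement" | "Assumption" | "Fact" | "Question" | "Challenge"
  // Solution Space
  | "Idea" | "Option" | "Context" | "Analysis" | "Conclusion"
  // Decision Space
  | "Evidence" | "Decision" | "Data";

type Lineage = "human" | "algorithm" | "ai";

interface DeZolveNode {
  id: string;
  type: DeZolveNodeType;
  lxLayer: string;                // primary home, e.g. "L0"; most types can appear at several layers
  source: { lineage: Lineage };   // simplified stand-in for the @source provenance record
}
```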

The three-space grouping is not cosmetic. It maps directly onto the three-phase Discovery / Analysis / Selection journey (§2.6), onto the role separation of Seekers, Solvers, and Deciders (§2.7), and onto the Lx layer hierarchy (the Problem Space primarily anchored at L0–L1, the Solution Space primarily at L2–L3, the Decision Space primarily at L4–L5 but with Evidence and Data spanning L6–L12 and the Library layer). The separation is enforced structurally, not by convention, and it is the single most important architectural move the 2012 framework made — and the one the current Clarity implementation preserves.

2.3 Twenty-six typed edges — the graph structure

The relationships between node types are described by twenty-six typed edges. Each edge has a source node type, a relationship descriptor, and a target node type. The edges are not free-text annotations; they are canonical relationship types whose semantics are defined in the DeZolve taxonomy and whose instances are first-class entities in the Lx graph with their own provenance records.

The edges fall into several categories:

  • Intent edges — Need is relevant to a particular Goal; Goal is formally defined by Requirements; Decision defines a use for a Need. These edges connect the intent plane to the decision plane.
  • Verification edges — Assumption is validated by Fact; Requirement is verified by Evidence; Fact is substantiated by Evidence; Evidence is based on Data. These edges connect claims to their evidence.
  • Analysis edges — Analysis leads to Conclusions; Analysis consumes Evidence; Analysis consumes or produces Data; Analysis verifies Requirement; Analysis builds Option. These edges connect the analytical work to its inputs and outputs.
  • Context edges — Question is based on a particular Context; Context is derived from Conclusion; Context has a value relevant to a particular Option. These edges situate a decision in a scenario.
  • Decision edges — Decision requires Options; Option is a potential instantiation of a Decision; Option is validated by Requirement. These edges connect a decision to the options it chose among and the requirements they had to satisfy.
  • Epistemic edges — Question generates Ideas; Idea can be relevant in a Decision; Idea may result in or from a Challenge; Challenge refutes Assumption; Challenge leads to Assumption. These edges describe the process by which unknowns become knowns.

The full twenty-six edges are enumerated in the canonical DeZolve taxonomy registry, and the registry is part of the proprietary payload that travels with every Clarity deployment. The taxonomy is not marketing material and is not surfaced in public artefacts at the individual-edge level, but the categorical structure above is sufficient for a reader to understand how the graph is shaped.
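
Because the individual edges are proprietary, only the shape can be sketched. The example below shows a typed edge as a first-class entity carrying its own provenance, with a handful of relationship descriptors paraphrased from the categories above; the field names are assumptions made for illustration.

```typescript
// Illustrative sketch of a typed edge as a first-class entity with its own provenance.
// Field names are assumptions; the relationship descriptors below paraphrase examples
// quoted in the text and are not the canonical twenty-six-edge registry.
interface DeZolveEdge {
  id: string;
  sourceType: string;      // one of the fifteen node types, e.g. "Requirement"
  relationship: string;    // e.g. "verified by"
  targetType: string;      // e.g. "Evidence"
  sourceNodeId: string;
  targetNodeId: string;
  provenance: { lineage: "human" | "algorithm" | "ai"; authoredAt: string };
}

// Paraphrased examples drawn from the edge categories above.
const exampleEdgeShapes: Array<[string, string, string]> = [
  ["Assumption", "validated by", "Fact"],
  ["Requirement", "verified by", "Evidence"],
  ["Analysis", "leads to", "Conclusion"],
  ["Decision", "requires", "Option"],
];
```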

2.4 Reverse traversal — from Decision to Data

The DeZolve truth-vector evaluator always starts at a committed decision (an L5 Decision node) and traverses backwards through the graph. The direction matters: the question the framework exists to answer is “on what basis was this decision made?”, and the only way to answer that question is to start at the decision and walk back through its supporting chain.

A typical traversal, for a generic engineering decision:

  1. Start at the Decision node at L5. Read its optionId field — the option the decision chose.
  2. Walk to the Option node at L2 that the decision selected. Read the option’s requirementIds — the requirements the option claims to satisfy.
  3. For each requirement, walk to the Requirement node at L0. Read the requirement’s analysisIds — the analyses that verified it.
  4. For each analysis, walk to the Analysis node at L3. Read the analysis’s contextId (the scenario the analysis ran under), its evidenceIds (the evidence it consumed), and its conclusionId (the conclusion it produced).
  5. For each evidence record, walk to the Evidence node at L4. Read the evidence’s dataSourceIds — the raw data the evidence was derived from.
  6. For each data source, walk to the Data node in the Library layer. Confirm the source document, its version, its authoring authority, and its provenance.

At each hop, the traversal checks whether the expected link is present, whether it resolves to a real node, whether the resolved node’s content supports the claim the previous hop was making, and whether any @source.lineage record indicates the hop was authored by a human, an AI agent, or an algorithm. Every hop produces a trust classification: verified, inferred, transitive, or gap.
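
The walk described above can be sketched as a small traversal over the first two hops. The entity shapes follow the field names quoted in the steps (optionId, requirementIds, and so on); the graph-lookup interface and the classification call are hypothetical stand-ins for the real evaluator, not the Clarity API.

```typescript
// Minimal sketch of the reverse traversal, assuming the field names quoted in the steps above.
// GraphApi and classifyHop are hypothetical stand-ins, not the Clarity evaluator's API.
type HopTrust = "verified" | "inferred" | "transitive" | "gap";

interface Hop { fromId: string; toId: string | null; trust: HopTrust }

interface GraphApi {
  getOption(id: string): { id: string; requirementIds?: string[] } | null;
  getRequirement(id: string): { id: string; analysisIds?: string[] } | null;
  classifyHop(fromId: string, toId: string): HopTrust;
}

function traverseDecision(decision: { id: string; optionId?: string }, graph: GraphApi): Hop[] {
  const hops: Hop[] = [];

  // Hop 1: Decision -> Option. A resolvable optionId field is a direct link, hence "verified".
  const option = decision.optionId ? graph.getOption(decision.optionId) : null;
  hops.push({ fromId: decision.id, toId: option?.id ?? null, trust: option ? "verified" : "gap" });
  if (!option) return hops;

  // Hop 2: Option -> Requirement(s). Later hops (Analysis, Evidence, Data) follow the same pattern.
  for (const reqId of option.requirementIds ?? []) {
    const req = graph.getRequirement(reqId);
    hops.push({
      fromId: option.id,
      toId: req?.id ?? null,
      trust: req ? graph.classifyHop(option.id, req.id) : "gap",
    });
  }
  return hops;
}
```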

2.5 The four trust categories

Every hop in a DeZolve traversal is classified into exactly one of four categories. The categories are not continuous scores and they are not fuzzy judgements. They are categorical, and each category has a structural test that decides which one applies.

Verified

An explicit, direct field link exists in the Lx data graph. The previous hop claimed that an Option satisfies a Requirement; the Option’s requirementIds field contains the Requirement’s ID; the Requirement exists and its status is active. This is the strongest trust category — the link is in the data, not derived or inferred.

Inferred

The direct field link does not exist, but a structural connection is derivable from adjacent entity relationships. For example, the Option’s requirementIds field is empty, but the Analysis that consumed the Option references a Requirement, and the Requirement’s scope overlaps the Option’s scope. The link can be inferred from the surrounding graph structure. This is weaker than verified but stronger than transitive.

Transitive

The connection exists only through one or more intermediate nodes, without a direct or structurally-inferable link between the two endpoints. For example, the Option is linked to an Analysis that is linked to a Context that mentions the Requirement in prose, but no typed edge connects the Option to the Requirement directly or via a single structural inference. The evidence of a connection is present but diffuse.

Gap

The expected link is missing. No direct field, no structural inference, no transitive path. A hop that is required for the traversal to make sense cannot be made. This is the most important category for practical purposes, because gaps are the thing a decision-maker needs to see before the decision is committed, not after.

The four categories are categorical and auditable. A reviewer looking at a DeZolve truth vector can see, for every hop, which classification applied and why — not as a confidence percentage but as a structural claim. “The Option-to-Requirement link was verified because the Option’s requirementIds field contains the Requirement’s ID.” “The Requirement-to-Evidence link was a gap because no Evidence entity references this Requirement.” The auditor does not need to trust the score; they can trust the trace.
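
The test order is the important structural point, and it can be sketched directly. The predicate names below are hypothetical stand-ins for queries against the Lx graph; the real structural tests live inside the patent-protected evaluator.

```typescript
// Illustrative sketch of the categorical test order. The predicates are hypothetical
// stand-ins for graph queries; the real structural tests are patent-protected.
type HopTrust = "verified" | "inferred" | "transitive" | "gap";

interface HopQuery {
  hasDirectFieldLink(fromId: string, toId: string): boolean;    // e.g. requirementIds contains the target's ID
  canInferStructurally(fromId: string, toId: string): boolean;  // derivable from adjacent entity relationships
  hasTransitivePath(fromId: string, toId: string): boolean;     // connected only through intermediate nodes
}

function classifyHop(fromId: string, toId: string, q: HopQuery): HopTrust {
  if (q.hasDirectFieldLink(fromId, toId)) return "verified";
  if (q.canInferStructurally(fromId, toId)) return "inferred";
  if (q.hasTransitivePath(fromId, toId)) return "transitive";
  return "gap";
}
```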

2.6 Discovery, Analysis, Selection — the three-phase decision journey

The 2012 framework organised every decision journey into three phases, and the same three phases are implemented in the current Clarity platform as first-class lifecycle states for every decision.

Discovery — in the Problem Space

Seekers enter the Problem Space with a problem they cannot immediately solve. They ask questions. They articulate needs and goals. They capture assumptions, facts, and challenges. Subject Matter Experts join the conversation and contribute their domain knowledge. Mentors guide the seeker through the structural shape of the problem. Gardeners keep the emerging Problem Space organised and free of clutter. Moderators resolve disputes. The phase is deliberately open, generative, and tolerant of ambiguity — the goal is not to reach an answer but to reach a well-formed problem.

The Discovery phase exits when the Problem Space has enough structural coverage for analysis to begin. The wickedness classification (Tame / Partially Wicked / Wicked) is computed at the exit gate; tame problems may move straight to Selection, partially wicked problems require Analysis, and wicked problems require the full collaborative journey.

Analysis — in the Solution Space

Solvers take the structured Problem Space and generate candidate Options. SMEs contribute domain-specific analyses. Contexts and Scenarios are authored for the options to be evaluated against. Analyses consume Evidence and produce Conclusions. The Solution Space grows until every candidate Option has been analysed against every relevant Context, and the Conclusions have been compared, ranked, and debated.

The Analysis phase exits when the Solution Space has sufficient fidelity for a decision to be committed. Specifically: every candidate Option has at least one analysis against at least one context; every Conclusion has an evidence chain that is structurally traversable; and the DeZolve truth vector can be computed for each candidate Option with a classification at or above the threshold for the decision’s risk level.
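
As a rough sketch, the exit test reads naturally as a predicate over the candidate options. The shapes below are illustrative assumptions, and the minimum-class comparison stands in for the patent-protected scoring layer rather than reproducing it.

```typescript
// Illustrative sketch of the Analysis-phase exit test described above. The shapes and the
// threshold comparison are assumptions; the real gate uses the patent-protected scoring layer.
type TruthVectorClass = "good" | "incomplete" | "bad" | "conflicting";

interface CandidateOption {
  id: string;
  analyses: Array<{ contextId: string; conclusionChainTraversable: boolean }>;
  truthVectorClass: TruthVectorClass;
}

function analysisPhaseCanExit(
  options: CandidateOption[],
  minimumClass: "good" | "incomplete"   // set by the decision's risk level
): boolean {
  return options.every(opt =>
    opt.analyses.length > 0 &&                                  // at least one analysis in at least one context
    opt.analyses.every(a => a.conclusionChainTraversable) &&    // every conclusion's evidence chain traverses
    (opt.truthVectorClass === "good" ||
      (minimumClass === "incomplete" && opt.truthVectorClass === "incomplete"))
  );
}
```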

Selection — in the Decision Space

The Decider — a user with explicit authority to commit the decision — reviews the fully-populated Solution Space, sees the DeZolve truth vectors for every candidate Option, examines the gaps that were surfaced during Analysis, and commits the Decision. The commit is a single structural act: a new Decision entity at L5 references the chosen Option, carries the evidence chain as provenance, and triggers the real-time computation of the final truth vector for the committed decision.

Selection is deliberately atomic. Once the decision is committed, the phase does not reopen for further analysis; later changes require a new Decision, not a revision of the existing one. This is how the audit trail stays immutable over the long lifecycle of an engineering programme.

2.7 The eight roles

The 2012 framework identified eight distinct roles that every collaborative decision journey requires, and the current Clarity platform implements each of them as first-class RBAC roles with specific permissions, UI affordances, and audit-trail contributions.

Primary roles

  • Seeker — the user who brings the problem. Has full write access to the Problem Space; is the primary author of Needs, Questions, Goals, and Assumptions. Cannot commit decisions.
  • Solver — the user who owns the analysis work. Has full write access to the Solution Space; authors Options, runs Analyses, consumes Evidence, produces Conclusions. Cannot author Problem Space content (separation of concerns) and cannot commit decisions.
  • Decider — the user with authority to commit. Has read access to the full Decision Space and write access only to the Decision entity itself. The single structural act of committing a Decision is the Decider’s exclusive responsibility.

Secondary roles

  • Subject Matter Expert (SME) — domain specialist. Can contribute Facts, validate Assumptions, supply Evidence, and review Analyses. Works across Problem and Solution Spaces.
  • Moderator — dispute resolver. Can suspend contested edits, surface conflicting claims for review, and trigger escalation to human consensus-building. Works across the full Decision Space.
  • Mentor — process coach. Guides Seekers through Discovery, Solvers through Analysis, and Deciders through Selection. Does not author content directly but can comment on and suggest improvements to existing content.

Tertiary roles

  • Observer — read-only participant. Typically a stakeholder who needs visibility into the decision journey without the authority to contribute. Often a precursor role for future Seekers or Deciders who are learning the framework.
  • Gardener — graph hygiene. Keeps the emerging Decision Space organised, identifies and resolves duplication, tags content for reuse, and maintains the health of shared templates. Critical for Problem Space reuse (NEED4).

Every contribution to a Decision Space carries the role the user held at the time of contribution, preserved in the @source provenance record. An auditor querying a decision five years later can see, for every hop in the truth vector, which role each contribution came from — and the truth vector weights the contribution accordingly.
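
A small sketch shows how the role separation and the role-carrying provenance record fit together. Only the commit rule is stated unambiguously in the text above; the other permission check and all field names are simplifying assumptions made for this sketch.

```typescript
// Illustrative sketch: the eight roles, a contribution record that preserves the role held
// at authoring time, and the one rule the text states directly (only a Decider commits).
// The Problem Space check and the field names are assumptions made for this sketch.
type Role =
  | "Seeker" | "Solver" | "Decider"
  | "SME" | "Moderator" | "Mentor"
  | "Observer" | "Gardener";

interface Contribution {
  nodeId: string;
  authorId: string;
  role: Role;          // preserved in the @source provenance record
  authoredAt: string;
}

function canCommitDecision(role: Role): boolean {
  return role === "Decider";
}

function canAuthorProblemSpace(role: Role): boolean {
  // Seekers drive Discovery; SMEs contribute facts and validation; Gardeners curate (simplified).
  return role === "Seeker" || role === "SME" || role === "Gardener";
}
```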

2.8 Three algorithmic engines — and the Library

The 2012 framework identified four distinct algorithmic engines that the DeZolve platform needed to run under the surface: Prioritisation, Structure, Wickedness, and Complexity. Fifteen years of implementation experience have kept three of them as algorithmic engines in the current Clarity implementation. The fourth — the Structure engine — has been subsumed by the Clarity Library, a full subsystem that provides everything the 2012 Structure engine was specified to do and significantly more. The migration from engine to subsystem is one of the most important architectural refinements the framework has seen since the original papers were published.

The three surviving algorithmic engines remain patent-protected in their specific implementations; their existence and interfaces are describable publicly.

  • Prioritisation engine. A patent-protected decision-analysis method built around pair-compare ranking, extended with a novel Confident / Unsure / Don’t Know belief model that allows users to contribute partial pair-compare matrices without forcing the “expert-only” assumption that conventional decision-analysis methods make. Incomplete matrices, disagreement between users, and varying levels of confidence are all handled as structural features rather than as obstacles. The engine produces ranked options and consistency measures that Solvers and Deciders can use to understand where the decision space is robust and where it is still uncertain.
  • Wickedness engine. Classifies every problem entering the Decision Space as Tame, Partially Wicked, or Wicked, using a set of detection heuristics that examine the Problem Space’s completeness, the disagreement among Seekers and SMEs, and the presence of conflicting Requirements or Constraints. The output is a routing decision — tame problems go fast-track to Selection, partially wicked problems enter the full Analysis phase, and wicked problems trigger the collaborative consensus-building workflow with Moderators and Mentors actively engaged (a sketch of this routing follows this list).
  • Complexity engine. Tracks the structural complexity of the Decision Space as it matures, using measures from the Cynefin framework (Simple / Complicated / Complex / Chaotic / Disorder) to surface the state of the space to all participants. The engine’s output drives the Discovery-to-Analysis-to-Selection phase transitions and provides the progress signal that tells Seekers, Solvers, and Deciders whether their work is converging or drifting.
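
The routing behaviour of the Wickedness engine, as distinct from its patent-protected detection heuristics, can be sketched directly from the description above. The names below are assumptions made for illustration.

```typescript
// Illustrative sketch of the Wickedness engine's routing output. Only the publicly described
// classify-then-route behaviour is shown; the detection heuristics are patent-protected.
type Wickedness = "tame" | "partiallyWicked" | "wicked";

type Route =
  | { path: "fastTrackToSelection" }
  | { path: "fullAnalysisPhase" }
  | { path: "collaborativeConsensus"; engage: Array<"Moderator" | "Mentor"> };

function routeProblem(classification: Wickedness): Route {
  if (classification === "tame") return { path: "fastTrackToSelection" };
  if (classification === "partiallyWicked") return { path: "fullAnalysisPhase" };
  return { path: "collaborativeConsensus", engage: ["Moderator", "Mentor"] };
}
```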

The Library — what the Structure engine became

The 2012 Structure engine was specified as an algorithmic component that would organise the growing Decision Space by applying typed taxonomies — a Question Taxonomy, an Option Taxonomy, a Decision Taxonomy, domain-specific taxonomies imported from external frameworks — along with free-form tag clouds for emergent structure. It was a single engine with a focused mandate.

In the current Clarity implementation that mandate has grown into the Library — a full cross-cutting subsystem orthogonal to the L0–L12 design plane, with substantially more functionality than the original Structure engine was specified to provide. The Library in Clarity is not a document store. It is:

  • A typed document catalogue with auto-tagged, classification-aware, provenance-carrying entries for every piece of source material a programme imports (ConOps documents, standards, supplier datasheets, regulatory filings, prior-programme archives, ICDs, test reports).
  • A taxonomy repository carrying every framework overlay the platform supports — DeZolve’s own canonical taxonomies (the fifteen nodes and twenty-six edges), the ten overlay groups (financial, supply chain, technology readiness, regulatory, lifecycle, security, risk, external interaction, quality, external systems), the phase-gate framework overlays (CDS, MDS, MMDS, ISBM, ISO 15288, TOGAF ADM, DoDAF / MODAF, SAFe, CMMI, MIL-STD-882), and any customer-specific taxonomy uploaded under NDA.
  • A template library for reusable Problem Spaces, Solution Spaces, and complete decision journeys from prior programmes — the structural mechanism that makes NEED4 (efficiency through reuse) work across an entire organisation’s portfolio.
  • An auto-classification and structural-tagging pipeline that applies AI-assisted tagging to newly-imported documents, links them to the canonical taxonomies, cross-references them against the lessons-learnt knowledge graph, and makes the full corpus queryable by Seekers, SMEs, and Solvers as RAG context for every generation pipeline in the platform.
  • An ICD extraction layer that reads formal Interface Control Documents and produces structured interface specifications, feeding the BOM harmonisation machinery described in the Sixteen BOM Views whitepaper at the highest authority weight (0.95).
  • A Gardener-curated shared-template subsystem that lets the Gardener role (see §2.7) promote validated Problem Spaces to the library for reuse by other programmes, complete with provenance pointing to the original decision journey that produced them.

Every one of the properties the 2012 Structure engine was specified to provide is now a property of the Library. The typed taxonomies live in the Library. The free-form tagging lives in the Library. The domain-specific taxonomy imports live in the Library. The template reuse lives in the Library. What changed is that the Library’s mandate grew far beyond organising the graph — it also had to be the provenance-carrying source of the raw Data nodes that sit at the bottom of every DeZolve traversal, the RAG feed for every AI agent in the platform, the ICD authority source for BOM harmonisation, and the template repository for Problem Space reuse. All of that is one subsystem now, not several. It is a cleaner architecture than the 2012 Structure engine would have produced, and it is one of the most important things the fifteen-year implementation arc taught the framework’s designers.

The three algorithmic engines and the Library work together, not separately. A new Problem Space starts in the Disorder state of the Complexity engine, with no wickedness classification and no prioritisation. As Seekers and SMEs contribute, the Library organises the content and applies the relevant taxonomies, the Wickedness engine classifies the emerging problem, the Complexity engine tracks the maturing state of the space, and the Prioritisation engine begins to rank the emerging options. By the time the Discovery phase exits, the Library and all three engines have converged on a coherent picture of the problem — and by the time the Analysis phase exits, they have converged on a committable decision.

2.9 Classification thresholds and the conflicting state

A truth vector aggregates the hop classifications into an overall decision classification. The aggregation produces one of four states:

  • Good — the evidence chain is almost entirely verified, with few or no gaps. The decision is well-defended. An auditor asking “was this decision taken on solid evidence?” gets a structural yes.
  • Incomplete — the evidence chain has identifiable gaps or transitive hops that matter. The decision may still be correct, but its defensibility is reduced by the missing links. The decision-maker has the choice to close the gaps before committing or to commit anyway with an explicit acknowledgement of the exposure.
  • Bad — the evidence chain cannot be traversed. Too many gaps, too few verified hops, or critical nodes missing entirely. The decision is undefended by the graph at the time it was taken, and any later attempt to defend it will be reconstructive.
  • Conflicting — the traversal detects an active invariant violation. A hard rule of the programme (from the L0 invariants layer) is being breached by the decision. The classification is set to conflicting regardless of the coverage score, because a decision that breaches an invariant is not a question of defensibility — it is a hard block until the invariant is addressed.

The exact thresholds that separate good from incomplete from bad are part of the patent-protected scoring layer. What is publicly describable is that the classifications are structural, categorical, and computable in real time — not narrative summaries, not confidence percentages, and not produced after the fact by an auditor.
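
As a concrete illustration of the shape of that aggregation (not of the protected scoring itself), the Python sketch below maps a list of hop classifications and an invariant flag onto the four states. The 0.9 and 0.5 coverage thresholds are invented placeholders; the production thresholds are patent-protected.

```python
from enum import Enum

class Hop(Enum):
    VERIFIED = "verified"
    INFERRED = "inferred"
    TRANSITIVE = "transitive"
    GAP = "gap"

class Classification(Enum):
    GOOD = "good"
    INCOMPLETE = "incomplete"
    BAD = "bad"
    CONFLICTING = "conflicting"

def classify(hops: list[Hop], invariant_violated: bool) -> Classification:
    """Aggregate hop classifications into a decision classification.
    Thresholds below are placeholders for illustration only."""
    if invariant_violated:
        return Classification.CONFLICTING  # hard block, regardless of coverage
    if not hops:
        return Classification.BAD          # nothing to traverse
    verified = sum(1 for h in hops if h is Hop.VERIFIED)
    gaps = sum(1 for h in hops if h is Hop.GAP)
    coverage = verified / len(hops)
    if coverage >= 0.9 and gaps == 0:
        return Classification.GOOD
    if coverage >= 0.5:
        return Classification.INCOMPLETE
    return Classification.BAD
```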

"DeZolve does not ask whether a decision was right.
It asks whether a decision was defensible at the time it was taken,
given the evidence available and the chain by which
that evidence reached the decision-maker.
The two questions are different,
and legacy audit trails have been confusing them
for forty years."

2.10 Provenance at every hop and the AI participation problem

Every hop in a DeZolve traversal carries a lineage tag distinguishing three kinds of authorship:

  • Algorithm — the node or edge was produced by deterministic code (an aggregator, a rule-based solver, a schema validator). Trust is high because the operation is reproducible and inspectable.
  • AI — the node or edge was produced by a foundation-model-based agent (see the sibling whitepaper 325 AI Agents, Bounded by DeZolve for the agent architecture). Trust is moderate because the output is probabilistic, and the human review status of the field matters.
  • Human — the node or edge was authored or approved by a named human user. Trust is highest for human-verified claims, because the human’s approval is itself the evidence of provenance.

The lineage tags compose with the trust categories. A verified link authored by an algorithm is stronger than a verified link authored by an AI agent without human review. An AI-drafted option that has been human-approved flips its lineage to human and re-computes the contribution to the truth vector. An AI-drafted option that has not been human-approved contributes under the AI lineage, and the truth vector reflects that the programme is relying on AI-generated content without human verification — a fact the auditor is usually more interested in than the number itself.
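
A minimal sketch of how lineage and trust could compose, assuming invented field names and weights (the production edge weights are patent-protected):

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative lineage weights only; the real weights are not public.
LINEAGE_WEIGHT = {"algorithm": 0.9, "ai": 0.6, "human": 1.0}
TRUST_BASE = {"verified": 1.0, "inferred": 0.7, "transitive": 0.4, "gap": 0.0}

@dataclass
class EdgeRecord:
    trust_category: str               # "verified" | "inferred" | "transitive" | "gap"
    lineage: str                      # "algorithm" | "ai" | "human"
    approved_by: Optional[str] = None

    def approve(self, reviewer: str) -> None:
        """Human approval flips the lineage to 'human' and records the reviewer;
        the truth-vector contribution is re-computed on the next traversal."""
        self.lineage = "human"
        self.approved_by = reviewer

    def contribution(self) -> float:
        """Hypothetical contribution: trust category scaled by lineage weight."""
        return TRUST_BASE[self.trust_category] * LINEAGE_WEIGHT[self.lineage]
```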

This is the structural property that prevents AI-generated content from masquerading as human-verified content. The distinction is not a policy. It is a field on every node and every edge, and the DeZolve evaluator enforces the distinction on every traversal.

2.11 Real-time evaluation — the audit trail is the graph

A DeZolve truth vector is not a report. It is not a gate-review artefact. It is not a PDF produced by an auditor. It is a live structural computation performed against the Lx data graph at the moment a decision is committed, cached as an immutable record attached to the decision, and available for query by any authorised consumer at any future time.

The implementation consequence is important. A decision made on 14 April 2026 has a truth vector computed on 14 April 2026 against the state of the evidence graph as it existed on 14 April 2026. Ten years later, in 2036, an auditor asking “what was the state of the evidence this decision relied on?” does not have to reconstruct anything. The truth vector is already there. It was cached when the decision was taken. The hop-by-hop breakdown, the lineage tags, the classification, and the full provenance chain are all in the cache.

If the underlying evidence graph has changed since the decision — because later analyses have superseded earlier ones, because evidence has been withdrawn, because an invariant was violated by a subsequent event — the DeZolve framework can compute a new truth vector against the current state of the graph and compare the two. The difference between the decision-time truth vector and the current-state truth vector is itself a first-class finding, and it surfaces as a decision drift signal that the programme can use to prioritise which decisions should be re-examined.
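
A hedged sketch of that comparison, with an invented record shape standing in for the real truth-vector schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TruthVector:
    classification: str      # "good" | "incomplete" | "bad" | "conflicting"
    coverage: float          # share of verified hops at evaluation time
    gaps: tuple[str, ...]    # identifiers of missing links

def decision_drift(decision_time: TruthVector, current: TruthVector) -> dict:
    """Compare the immutable decision-time vector with one re-computed against
    today's graph. Field names are illustrative, not the Clarity schema."""
    return {
        "classification_changed": decision_time.classification != current.classification,
        "coverage_delta": current.coverage - decision_time.coverage,
        "new_gaps": sorted(set(current.gaps) - set(decision_time.gaps)),
        "closed_gaps": sorted(set(decision_time.gaps) - set(current.gaps)),
    }
```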

No legacy audit trail can do this. A legacy audit trail is a static narrative. It cannot be re-computed against a new state, because it was never a computation in the first place.

2.12 The patent boundary — what is public, what is protected

The DeZolve Decision Intelligence Framework is patent pending, building on the original 2012 DeZolve patent filed by Clarity’s founder during the framework’s first-era development. The patent covers the specific algorithmic innovations that make the framework work at scale — the scoring formula that aggregates hop classifications into a single classification, the wickedness taxonomy that classifies problems as Tame, Partially Wicked, or Wicked, the pair-compare decision-analysis method with its Confident / Unsure / Don’t Know belief categories and partial-matrix handling, the guardian-signal detection logic that flags decisions for role-based review, and the specific numerical weights assigned to different edge types.

What is publicly describable — and what this whitepaper therefore describes — is the structural shape of the framework:

  • The four needs (quality, speed, evidence, reuse) and the twenty issues from the 2012 papers
  • The Decision Space / Problem Space / Solution Space separation and its mapping onto the Lx layer hierarchy
  • The fifteen canonical node types and their Lx layer mappings
  • The twenty-six typed edges and their categorical structure
  • The reverse-traversal direction from Decision to Data
  • The four trust categories and their structural tests
  • The four classification states (good / incomplete / bad / conflicting)
  • The three-phase Discovery / Analysis / Selection lifecycle
  • The eight-role taxonomy (Seeker / Solver / Decider / SME / Moderator / Mentor / Observer / Gardener)
  • The three surviving algorithmic engines (Prioritisation, Wickedness, Complexity) and the Library subsystem that replaced the original 2012 Structure engine, and their interfaces
  • The lineage taxonomy (algorithm / AI / human) and its role in the truth vector
  • The real-time evaluation model and the decision-drift signal

What is not publicly described — and what remains protected by the patent-pending status — is the specific algorithmic implementation: how hops are weighted, how the classifications are aggregated, how the wickedness taxonomy modulates the score, how the Confident / Unsure / Don’t Know belief model handles incomplete pair-compare matrices, and how the guardian-signal detection uses the truth-vector output to trigger role-based review workflows.

The distinction matters because an engineer reading this paper should be able to reason about what DeZolve is and what it does, without being handed the proprietary recipe. That is the right balance for a whitepaper that needs to inform technical decision-makers without publishing the intellectual property the framework depends on. Customers who need the deeper layer — the scoring specifics, the wickedness model, the guardian-signal catalogue, the pair-compare belief mechanics — engage under NDA.


Section 3 — What DeZolve unlocks in practice

The framework described in Section 2 is a structural substrate. Section 3 walks through the specific operational properties that fall out of it — the things that become trivially possible under DeZolve and that are structurally impossible in any legacy audit-trail architecture. Each of these maps back to one or more of the four 2012 needs and addresses a specific subset of the twenty 2012 issues.

3.1 Decisions scored on defensibility, not outcome (NEED1 — Quality)

The most important property DeZolve unlocks is the separation of defensibility from outcome. These are two different questions, and legacy audit practice routinely conflates them.

Defensibility is the question “was the evidence chain available at the time of the decision sufficient to justify it, given the information the decision-maker had?”. A decision-maker who had complete, verified, human-approved evidence and chose the option the evidence supported made a defensible decision, regardless of whether the option turned out well or badly in the field. A decision-maker who had incomplete evidence and chose anyway, without acknowledging the gap, made an indefensible decision, regardless of whether the option turned out well or badly.

Outcome is the question “did the choice turn out to be correct in hindsight?”. Outcomes are driven by factors that the decision-maker could not know at the time — supplier failures, environmental variance, adversary actions, black swans. A well-defended decision can still have a bad outcome; a poorly-defended decision can still have a good outcome. Conflating the two produces unfair post-incident blame and unfair gate-review rubber-stamping.

DeZolve scores defensibility. An auditor asking “was this decision defensible at the time?” gets a structural answer immediately. An auditor asking “did this decision lead to a good outcome?” gets a different analysis over a different time window against different data. The framework keeps the two questions separate, and that separation is the precondition for honest post-incident investigation.

3.2 Structural visibility of AI participation (NEED1 — Quality)

As AI-generated content enters engineering workflows at increasing scale, the distinction between AI-drafted artefacts and human-verified artefacts becomes operationally critical. A regulator, an auditor, or a safety reviewer needs to know — structurally, not as a policy claim — which parts of a decision chain were produced by an AI agent and which were reviewed or authored by a human.

DeZolve makes this visible as a first-class property of every hop. An Option with AI-drafted parameters that have not been human-approved appears in the truth vector with its lineage tagged as AI. The overall trust score reflects the AI contribution at its correct weight. A reviewer looking at the truth vector can see, at a glance, which nodes in the chain have been human-verified and which are still AI-drafted. Human approval on any node flips its lineage to human and re-computes the truth vector contribution upward.

This is the only architecturally-sound way to manage AI participation in safety-critical decision workflows. Every alternative — policy-based review, tool-based flags, separate AI-content databases — relies on processes the reviewer has to trust rather than data the reviewer can traverse. DeZolve provides the data, and the data is always queryable.

3.3 Evidence coverage visible before the decision is taken (NEED2 — Speed)

A DeZolve truth vector can be computed before a decision is committed, not only after. A decision-maker facing a pending L5 decision can ask the framework to evaluate the truth vector against the current state of the evidence graph, see the classification and the hop breakdown, and use the result to decide whether to commit now or close identified gaps first.
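
A minimal sketch of the commit-or-close-gaps choice this enables, assuming the classification and gap list have already been produced by the evaluator:

```python
def pre_commit_check(classification: str, gaps: list[str]) -> tuple[bool, list[str]]:
    """Illustrative pre-commit gate over a freshly evaluated truth vector.
    Returns (ok_to_commit, outstanding_gaps); names are assumptions for the sketch."""
    if classification == "conflicting":
        return False, ["active invariant violation: hard block until resolved"]
    if classification == "good":
        return True, []
    # incomplete or bad: surface the gaps so the decision-maker can close them
    # first, or commit anyway with an explicit acknowledgement of the exposure
    return False, gaps
```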

This changes the gate-review process from “after-the-fact narrative defence” to “before-the-fact evidence coverage check”. The reviewers walk into the review with the truth vector already computed. They spend the meeting discussing the specific gaps that were identified and how to close them, rather than spending it arguing about whether the narrative is complete. The outcome of the review is either “proceed — the chain is sufficient” or “hold — these gaps must be closed”, and the hold is always accompanied by a specific list of the gaps that need closing.

No legacy review process works this way, because no legacy system computes the evidence chain in time for the review. The reviewers discover the gaps in the meeting, usually through intuition and experience, and the closing of gaps is a retrospective activity. DeZolve makes it prospective — and that prospective visibility is what lets decision-makers move faster, not slower, under rigorous governance.

3.4 Gate reviews become computable (NEED2 — Speed, NEED3 — Evidence)

The logical extension of §3.3 is that gate reviews themselves become computable. A gate’s pass criteria can be expressed as a set of predicates over the DeZolve truth vector:

  • All critical requirements must have verified hops from the Requirement node to at least one Evidence node.
  • No active invariant must be in the conflicting state.
  • The overall classification must be good or incomplete, with explicit sign-off on any incomplete state.
  • No more than 10% of hops in the chain may be AI-lineage without human approval.
  • Every evidence record must have a lineage that resolves to a Library Data node with an active authority source.

These predicates are expressible as queries against the DeZolve taxonomy and the Lx graph. They are evaluated automatically, in real time, before the review meeting starts. The meeting then exists to discuss the specific predicates that failed, not to re-create the truth vector from scratch. The gate becomes an evidence-coverage checkpoint, not a narrative-defence performance.
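
As an illustration of what “expressible as queries” means in practice, the sketch below encodes the example criteria above as predicates over an invented truth-vector record. The field names are assumptions for the sketch, and the 10% figure comes from the example list, not from a fixed Clarity policy.

```python
def gate_passes(tv: dict) -> tuple[bool, list[str]]:
    """Evaluate a gate as predicates over a truth-vector record (illustrative shape)."""
    failures = []
    if any(r["critical"] and not r["verified_evidence"] for r in tv["requirements"]):
        failures.append("critical requirement without a verified Evidence hop")
    if tv["classification"] == "conflicting":
        failures.append("active invariant in the conflicting state")
    if tv["classification"] == "bad":
        failures.append("overall classification is bad")
    if tv["classification"] == "incomplete" and not tv.get("incomplete_signed_off"):
        failures.append("incomplete classification without explicit sign-off")
    ai_unreviewed = sum(1 for h in tv["hops"]
                        if h["lineage"] == "ai" and not h["human_approved"])
    if tv["hops"] and ai_unreviewed / len(tv["hops"]) > 0.10:
        failures.append("more than 10% of hops are AI-lineage without human approval")
    if any(not e.get("library_data_node") for e in tv["evidence"]):
        failures.append("evidence record does not resolve to a Library Data node")
    return (not failures), failures
```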

The companion whitepaper Thirteen Lifecycle Phases, One Graph describes the broader lifecycle-governance implications of making gate reviews computable. DeZolve is the evaluation engine underneath that computability.

3.5 Post-incident investigations become queries, not excavations (NEED3 — Evidence)

When an engineering incident happens and an investigation is launched, the investigation’s job is to determine whether the decisions that led to the incident were defensible at the time, and if not, what should be learned. In a legacy audit environment this takes months — months of document retrieval, interviews, reconstruction, and eventually a narrative that is usually partial, contested, and prone to hindsight bias.

In a DeZolve-equipped environment, the first question the investigator asks is structural: “show me the truth vectors for every decision that touched this subsystem, ordered by date”. The result is a list of decisions, each with its decision-time classification, its hop breakdown, and the specific gaps that existed when the decision was taken. The investigator can identify, in minutes rather than months, which decisions were well-defended and which were not. The well-defended decisions get cleared; the investigation focuses on the poorly-defended ones. The narrative the investigator eventually writes is grounded in structural evidence, not reconstructive guesswork.
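
A sketch of that first structural query, assuming the cached decision-time truth vectors are available as records with invented field names:

```python
def decisions_touching(subsystem: str, cached_vectors: list[dict]) -> list[dict]:
    """Every cached decision-time truth vector that touched a subsystem,
    ordered by the date the decision was committed. Illustrative schema only."""
    hits = [v for v in cached_vectors if subsystem in v["subsystems"]]
    return sorted(hits, key=lambda v: v["committed_on"])

def poorly_defended(vectors: list[dict]) -> list[dict]:
    """Narrow the investigation to decisions that were not well defended at the time."""
    return [v for v in vectors if v["classification"] in ("bad", "incomplete", "conflicting")]
```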

This does not eliminate the investigator’s judgement — human judgement is still required to interpret the structural findings, weigh their significance, and recommend corrective action. What it eliminates is the reconstructive part of the investigator’s work, which is the part that takes the longest and is the most prone to error. An investigation that would have taken six months now takes six weeks, and the six weeks are spent on judgement rather than archaeology.

3.6 Problem Space reuse across programmes (NEED4 — Reuse)

The 2012 framework’s emphasis on Problem Space reuse is one of its most operationally valuable properties, and the current Clarity implementation preserves it structurally. A well-formed Problem Space — a set of Needs, Requirements, Contexts, Questions, and Assumptions, without the Options, Analyses, or Decisions that would make it programme-specific — can be saved as a template and instantiated in a new Decision Space whenever a similar problem arises.
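
A sketch of what such a template might look like as a data structure, with invented field names; the real template format belongs to the Library subsystem and is not published here.

```python
from dataclasses import dataclass

@dataclass
class ProblemSpaceTemplate:
    """Reusable Problem Space content only: no Options, Analyses, or Decisions,
    plus provenance pointing back to the decision journey that produced it."""
    needs: list[str]
    requirements: list[str]
    contexts: list[str]
    questions: list[str]
    assumptions: list[str]
    source_journey: str  # provenance pointer to the originating decision journey

    def instantiate(self, decision_space_id: str) -> dict:
        """Copy the template into a new Decision Space; the Solution Space
        content is authored from scratch by the new programme."""
        return {
            "decision_space": decision_space_id,
            "problem_space": {
                "needs": list(self.needs),
                "requirements": list(self.requirements),
                "contexts": list(self.contexts),
                "questions": list(self.questions),
                "assumptions": list(self.assumptions),
            },
            "template_provenance": self.source_journey,
        }
```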

This is how the accumulated decision intelligence of a programme, an organisation, or an industry becomes a first-class asset rather than a retrospective archive. Every decision journey contributes to the template library. Every template instantiation benefits from the accumulated learnings of prior journeys. The Wickedness engine remembers that a particular kind of Problem Space has previously required a particular kind of collaborative treatment, and routes new instances accordingly. The Library applies the same taxonomies across reused templates, so that the Seekers, Solvers, and Deciders on a new programme are working with the shared vocabulary that their predecessors refined.

Legacy audit trails do not reuse. Every programme re-creates the same structural scaffolding from scratch, because there is no shared substrate to carry it forward. DeZolve eliminates the re-creation work, and the eliminated work is the single largest source of schedule risk in early-phase engineering programmes.

3.7 Through-life decision traceability at decade-plus horizons (NEED3 — Evidence)

Long-lifecycle engineering platforms — defence weapons systems, nuclear reactors, medical devices, aerospace platforms, civil infrastructure — operate on horizons of thirty to sixty years. A decision taken in year two of such a programme may be audited in year forty-five, by people who were not born when it was taken, using tools that did not exist when it was committed, against regulatory regimes that did not exist at the time.

Legacy audit trails do not survive forty-five years. The documents are in obsolete formats. The tools have been replaced four times. The people have retired. The programme office has been reorganised twice. The reconstructive narrative is, at year forty-five, a historical exercise rather than an audit.

DeZolve truth vectors do survive, because they are structured JSON records in immutable storage, with provenance preserved from the day they were written. A truth vector written in 2026 is readable in 2071 with the same tooling, against the same schema, with the same classification rules. The data that the 2026 evidence chain referenced is still in the Library layer with its original provenance. The hop-by-hop breakdown is still computable. The auditor in 2071 can ask “was this decision defensible at the time?” and get a structural answer, not a reconstruction.
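
For illustration, a cached record of this kind might look like the following; the identifiers and field names are invented for the sketch and are not the published schema.

```python
# Illustrative shape of a cached decision-time truth vector record.
cached_truth_vector = {
    "decision_id": "DEC-0412",          # hypothetical identifier
    "committed_on": "2026-04-14",
    "classification": "incomplete",
    "hops": [
        {"edge": "Decision requires Options",        "trust": "verified",   "lineage": "human"},
        {"edge": "Option supported by Analysis",     "trust": "verified",   "lineage": "ai"},
        {"edge": "Analysis consumes Data",           "trust": "transitive", "lineage": "algorithm"},
        {"edge": "Requirement verified by Evidence", "trust": "gap",        "lineage": None},
    ],
    "gaps": ["REQ-117 has no verified Evidence hop"],  # hypothetical example
}
```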

This is what long-lifecycle engineering actually requires, and it is what no legacy audit architecture has ever delivered.

3.8 Why this cannot be retrofitted into a legacy stack

Every property in §3.1 through §3.7 depends on architectural decisions that were made at the substrate: the typed Lx data graph, the fifteen-node DeZolve taxonomy, the twenty-six canonical edges, the immutable provenance model, the lineage tagging on every field, the real-time traversal evaluator, the Decision Space / Problem Space / Solution Space separation, the eight-role taxonomy, the three-phase lifecycle, the three algorithmic engines, and the Library subsystem underneath them. None of these are features that can be added. They are the shape of the system.

A legacy PLM with an “audit module” bolted on top cannot produce a DeZolve truth vector, because the underlying data model does not carry the edges the traversal would need to walk. A document-management system with an “AI-powered audit” feature cannot distinguish human from AI authorship at the field level, because the data model has no lineage tags. A “digital thread” module sold as an upgrade to an existing stack cannot retroactively record the evidence chain at decision time, because the decision time was years ago and the chain was never captured. A workflow engine cannot enforce the Decision Space / Problem Space / Solution Space separation, because workflow engines are built around state machines that route artefacts, not around typed graphs that carry evidence.

The retrofit is architecturally impossible. The fix is to move the substrate — to put a typed Lx graph underneath the engineering work from day one, and to make the DeZolve framework a structural property of that graph rather than an analytical layer on top of it. That is what Clarity does, and it is why the DeZolve framework works in Clarity and nowhere else the authors are aware of.


Conclusion — fifteen years from specification to substrate

In April 2012, a consulting engineer with fifteen years of experience across automotive vehicle dynamics, defence naval programmes, academic engineering education, and enterprise IT transformation published two Commercial-in-Confidence whitepapers identifying twenty structural issues in real-world decision making and proposing DeZolve as the integrated response. The framework had four needs, four engines, three phases, eight roles, a Decision Space / Problem Space / Solution Space separation, and a wickedness classification that routed different problems through different collaborative pathways. The papers described a specification, a prototype, and a research programme. They did not describe a shipping product, because the substrate DeZolve needed — a typed, versioned, provenance-carrying engineering data graph — did not yet exist in any form the framework could be built on.

Fifteen years later, the substrate exists. It is called the Lx model, and it is the foundation of the Clarity platform. The DeZolve framework has been re-implemented against that substrate as a fifteen-node, twenty-six-edge directed graph with a reverse-traversal truth-vector evaluator, a three-phase decision journey, eight first-class RBAC roles, three algorithmic engines, the Clarity Library as the cross-cutting data subsystem that replaced the original 2012 Structure engine, and real-time evaluation against every committed decision. The four 2012 needs are the architectural anchors of the current implementation. The twenty 2012 issues are each addressed by a specific property of the current architecture. The wickedness classification, the pair-compare belief model, the scoring formula, and the guardian-signal detection logic remain patent-protected and available only under NDA. The structural shape — everything this paper has described — is publicly readable, because the structural shape is what customers and auditors need to understand in order to trust the framework, and trusting the framework is the first step toward using it.

DeZolve is the Wisdom layer the engineering-software industry has been missing for the entire Data-and-Information era of the last forty years. It is the reason every other whitepaper in the Clarity series — Breaking the DIKW Ceiling, 325 AI Agents, Bounded by DeZolve, Event-Driven by Kernel, Thirteen Lifecycle Phases, Sixteen BOM Views, Diode & Airlock Connectors — repeatedly points at the same structural answer: that the audit trail has to be the graph, and the graph has to be authored at decision time, and the only way to get there is to rebuild the substrate from the kernel.

That is what DeZolve means, now, in 2026. It is not a report. It is not a module. It is not a 2012 specification any more. It is the shape of a typed engineering decision graph with provenance and trust scoring built in, running in production inside Clarity, backed by fifteen years of cross-domain validation and a patent portfolio that spans the original 2012 filing and the substantial refinements that fifteen years of implementation experience have contributed. It is the one thing the next thirty years of long-lifecycle regulated engineering work is going to depend on, and it is the framework the authors have been waiting, patiently and not so patiently, for the industry to be ready for.

The industry is ready now. DeZolve is live.

One thread. 13 verticals. 16 BOMs. 25 USPs.

The only complete digital thread for regulated programmes, powered by the patent-pending DeZolve Decision Intelligence Framework. Sovereign deployment under your own AWS account and encryption keys — at one-tenth the cost of the enterprise alternatives.