How ConstructiVision traces every bug and every feature back to a pre-committed risk register — with measurable prediction accuracy that almost no SDLC achieves.
Every software project has a bug tracker. Most have a risk log of some kind. Almost none have them structurally connected — where every bug is required to reference a specific risk register entry, the risk register is measurably validated by the bugs that emerge from testing, and the failure mode predictions can be scored for accuracy after the fact.
The DFMEA (failure mode analysis) is written before any development session begins. Every bug discovered in testing must match a DFMEA entry — or trigger a new one, permanently expanding the model's coverage. The result: a risk register that is empirically validated by real test data, with a measurable accuracy rate of 86% across 21 bugs logged over 3 build iterations.
Most SDLC tools operate with three disconnected worlds: a feature backlog (what we're building), a bug tracker (what went wrong), and a risk register (theoretical hazards, reviewed at kickoff and archived). In ConstructiVision, these three form a single connected graph — navigable in any direction, version-controlled, and subject to a coverage metric you almost never see in software engineering practice.
DFMEA (Design Failure Mode & Effects Analysis) is a structured pre-development analysis borrowed from manufacturing engineering. For each function the product performs, you systematically ask: what could fail here? How severe would that be? How often might it occur? How hard would it be to detect before it reaches the customer?
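In standard FMEA practice, those three ratings (Severity, Occurrence, Detection, each scored 1–10) combine into a Risk Priority Number, RPN = S × O × D. A minimal sketch of that scoring, with illustrative ratings (the S/O/D decomposition shown is an assumption, not ConstructiVision's actual schema):

```python
# Standard FMEA scoring: RPN = S * O * D, each rated 1..10.
# Field names and example ratings are illustrative only.
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # S: 1 (negligible) .. 10 (safety-critical)
    occurrence: int  # O: 1 (rare) .. 10 (frequent)
    detection: int   # D: 1 (certain to catch) .. 10 (effectively undetectable)

    @property
    def rpn(self) -> int:
        """Risk Priority Number: S * O * D, range 1..1000."""
        return self.severity * self.occurrence * self.detection

# One hypothetical decomposition that yields the 360 seen later in the table:
fm = FailureMode("Weld dialog - wrong hardware type", severity=10, occurrence=6, detection=6)
print(fm.rpn)  # 360
```

Note that a high Detection rating (hard to catch) inflates RPN even for moderate severities, which is why adding observable controls to a fix lowers D and therefore RPN.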
| Tool | Typical Usage (Industry) | ConstructiVision Usage |
|---|---|---|
| Bug Tracker | Issues logged reactively after discovery; closed on fix; no structural linkage to risk model | Every bug must cite a DFMEA ID and confirmed RPN. Bug cannot close without the DFMEA cross-reference being updated in doc 32. |
| Risk Register | Generated at project kickoff; rarely updated; treated as a compliance artifact and archived | Living document — each new bug that doesn't match an existing entry creates a new DFMEA row with S/O/D ratings. Detection (D) improves when a fix adds observable controls. |
| Feature Backlog | Separate from risk; feature "done" when code ships and tests pass | Features are the subjects of DFMEA rows. A feature shipped without DFMEA coverage for its failure modes is definitionally incomplete — tracked as tech debt. |
| GitHub Issue | Title, description, labels, assignee — no required fields beyond title | Enforced template includes a ## DFMEA Reference section with DFMEA ID, RPN, and match classification (Yes / NEW). Missing this section = issue not properly filed. |
Every significant failure mode in ConstructiVision travels a documented chain from design intent to closed defect. The chain can be followed forward (feature → risk → bug → fix) or backward (bug → risk → feature). Both directions are always populated because filing a bug without a DFMEA link is not a valid final state.
Starting from Bug #12 leads immediately to FM-07 and the weld connection feature. Starting from the feature leads to all associated DFMEA rows. The risk register is always a live, navigable map of the codebase's known failure modes — not a snapshot frozen at project start.
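That bidirectional navigation can be modeled as a small graph. A sketch using the IDs from the example above (the dict schema is hypothetical; ConstructiVision's actual storage is its version-controlled documents):

```python
# Traceability graph sketch: feature <-> DFMEA entry <-> bug.
# IDs mirror the Bug #12 / FM-07 / weld-connection example in the text;
# the data structures themselves are illustrative.
dfmea = {"FM-07": {"feature": "weld-connection", "bugs": ["BUG-12"]}}
bugs = {"BUG-12": {"dfmea": "FM-07"}}
features = {"weld-connection": {"dfmea": ["FM-07"]}}

def backward(bug_id: str) -> tuple[str, str]:
    """bug -> risk -> feature."""
    fm = bugs[bug_id]["dfmea"]
    return fm, dfmea[fm]["feature"]

def forward(feature_id: str) -> list[str]:
    """feature -> risks -> bugs."""
    return [b for fm in features[feature_id]["dfmea"] for b in dfmea[fm]["bugs"]]

print(backward("BUG-12"))          # ('FM-07', 'weld-connection')
print(forward("weld-connection"))  # ['BUG-12']
```

The invariant that makes both directions navigable is exactly the filing rule: a bug record with no `dfmea` key is not a valid final state.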
The DFMEA is not written after bugs are found. It is written at the start of a design phase, before code editing begins. This is a deliberate engineering constraint — not process overhead.
An AI agent optimizing toward task completion will tend to minimize the perceived severity of obstacles. If allowed to retroactively define what constitutes a significant defect, an AI could downgrade a severe failure into a minor issue, reclassify a blocking bug as cosmetic, and declare the work complete.
Pre-committed DFMEA entries — with S=10 locked in version control before the first line of code was written — prevent this rationalization completely. The severity was defined by a human engineer analyzing the physical consequences. It is not a field the AI session can redefine.
These are real entries from the project DFMEA, not illustrative examples. ConstructiVision runs inside AutoCAD 2000 to generate structural specifications and panel books for concrete tilt-up construction. The top failure modes reflect that physical context directly.
| RPN | Failure Mode | Worst-Case Effect | Current Controls |
|---|---|---|---|
| 500 | Windows Help subsystem removed from Win10/Win11 | All users on modern Windows lose in-application help — critical for field engineers who cannot pause construction activity for web searches | Identified at project start; requires alternative help delivery in modernization. No mitigation currently deployed. Action required before release. |
| 360 | Weld connection dialog — wrong hardware type specified | Incorrect weld hardware specification reaches fabrication drawings → physical structural failure during concrete panel tilt on job site | AutoIT+OCR validation verifies dialog content on every build. UI-level hardware combination validation under active development in wc_dlg.lsp. |
| 315 | Dimensional data entry — uncaught input error propagation | Dimension typo propagates silently through panel book to fabrication → defective physical panel shipped to job site | mp_dlg validation callbacks via chrchk.lsp; range checks at dialog level. No downstream cross-check at drawing-generation level — Detection (D) currently elevated. |
| ≥200 (pre-fix) | Startup timing race — plugin loads before AutoCAD subsystem ready | AutoCAD crash (0xC0000005) at LocalizeReservedPlotStyleStrings+533 on application start — product completely unusable on affected VM configuration | Fixed — Bug 20. VLX load timing adjusted. RPN now below 200 threshold. Confirmed by full four-VM validation matrix run. |
| Closed | Missing Menus\Group1\Pop13 registry keys | setvars crashes on load; all CSV menu items fail; product non-functional from first launch | Fixed — Bug 19. Registry keys added to Configure-ConstructiVision.ps1. Detection improved — deployment script now validates key existence. Confirmed VM 102, 104, 108, 109. |
| Closed | Missing Project Settings\CV\RefSearchPath registry key | Site drawing file not found on project load — partial product failure; Edit Existing Drawing workflow broken | Fixed — Bug 21. Key added to deployment script; verified on VM 104 with SSH reg query comparison against VM 102 golden baseline. |
Any failure mode with RPN ≥ 200 requires a documented Recommended Action in the DFMEA table. That action must be completed and the Detection (D) rating updated before the build can be declared releasable. This is the mechanism that prevents a "known issue, low priority" backlog from holding safety-adjacent items indefinitely — the threshold is defined in the risk model, not negotiated at release time.
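A release gate like the one described is a one-line filter over the DFMEA table. A sketch, with the ≥ 200 threshold from the text and a hypothetical row schema:

```python
# Release-gate sketch: any failure mode with RPN >= 200 must have a completed
# Recommended Action before the build is releasable. Row schema is illustrative.
RPN_THRESHOLD = 200

def release_blockers(dfmea_rows: list[dict]) -> list[str]:
    """Return the blocking DFMEA IDs; an empty list means releasable."""
    return [
        row["id"]
        for row in dfmea_rows
        if row["rpn"] >= RPN_THRESHOLD and not row.get("action_completed")
    ]

rows = [
    {"id": "FM-01", "rpn": 500, "action_completed": False},  # blocks release
    {"id": "FM-07", "rpn": 360, "action_completed": True},   # action done
    {"id": "FM-12", "rpn": 120, "action_completed": False},  # below threshold
]
print(release_blockers(rows))  # ['FM-01']
```

Because the threshold lives in the risk model rather than in release-meeting judgment, the gate cannot be argued down item by item.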
The requirement isn't documented in a style guide that can be ignored. It is embedded in the AI agent's mandatory operating instructions — the file that governs every AI-assisted development session. A GitHub Issue that lacks a ## DFMEA Reference section has not satisfied the definition of "properly filed."
**Symptom:** setvars fails with "Function cancelled" immediately after CSV menu launch. All menu functions fail; product non-functional on this configuration.

**Root cause:** missing registry keys `HKCU\SOFTWARE\Autodesk\AutoCAD\R15.0\ACAD-1:409\Profiles\<<Unnamed Profile>>\Menus\Group1` and `HKCU\...\Menus\Pop13`, identified via reg query diff.

**Fix:** keys added to `Configure-ConstructiVision.ps1`. Script now creates the keys during VM setup.

**Verification:** passing on VM 102, 104, 108, 109 via full OCR comparison against VM 103 golden baseline.
Because every bug must link to a DFMEA entry, it is possible to ask a question that most projects never can: what fraction of the bugs that actually appeared were predicted before testing began?
The answer, across 21 bugs logged over 3 build iterations: 86% prediction coverage.
The DFMEA was written before any code was edited. Fourteen percent of the bugs that showed up were genuinely not anticipated — failure modes the analysis had not predicted. Each of those 3 bugs created a new DFMEA row, permanently improving the model's coverage for future development cycles. A risk register that updates when it's wrong is fundamentally different from one that gets archived after project kickoff. The 86% figure is not a target — it is a measurement of how well the initial risk analysis characterized the actual failure space of the product.
When testing discovers a bug with no matching DFMEA entry, the protocol is explicit and mandatory: the bug's DFMEA Reference section is classified NEW, and a new DFMEA row with S/O/D ratings is created before the bug can close.
This means the risk model grows more accurate over time. A product that has run 3 validation campaigns across 4 VM configurations, discovered 21 bugs, and maintained 86% prediction coverage is producing a DFMEA that is empirically validated — not a theoretical exercise from a pre-release planning meeting.
The DFMEA methodology is a core tool of Design for Six Sigma (DFSS) — specifically the CDOV process (Concept → Design → Optimize → Verify) from Creveling, Slutsky & Antis (2003). DFSS treats quality as a measurable engineering property: Y = f(X), where Y is "the product generates structurally correct specifications" and X is the set of parameters under control. The DFMEA is the mechanism that identifies which X values carry S=10 consequences when they fail — so those parameters receive engineering-grade controls, not just hope. The Quality & Validation page covers the complementary detection infrastructure.
SimpleStruct — Powered by ConstructiVision | Quality-First AI-Assisted Engineering