AI Governance Control-Pack
Deterministic governance, locked by regression.
NIST AI 600-1 · EU AI Act · OWASP LLM Top 10 · SOC 2 CC · ISO 27001 · ISO 42001

Deterministic AI Governance. Evidence-First. Audit-Ready.

Repeatable evaluation runs that produce machine-readable artifacts, defensible evidence, and audit-ready reports.

Same inputs, same pack, same result - every time.

Current Pack Status
114 Executable Controls · 385 Gold Regression Cases · 6 Compliance Frameworks
The Problem

AI Governance Today Is Broken

Current approaches verify intentions, not implementations. That gap is where risk lives.

Design-Only Review

Audits review static documentation, not runtime behavior. Design docs don't prove what's actually implemented.

Silent Drift & Failures

LLMs drift, hallucinate, or regress after changes without detection. Risk signals scatter across repos, configs, and tooling.

No Repeatable Verification

Teams lack repeatable verification and defensible evidence artifacts. Audits are slow, inconsistent, and hard to reproduce.

How It Works

From Messy Inputs to Defensible Evidence

Three engines run in sequence. Each consumes structured input and produces named, versioned artifacts. The pipeline is deterministic - same inputs, same pack, same result.
ICB
Input Contract Builder
Takes a GitHub repo, file upload, or preset source and normalizes it into a structured declaration contract. System and inventory declarations are converted into a manifest schema with pinned version metadata. Produces deterministic "what's missing" lists - fields and evidence paths required for evaluation. Low-confidence indicators are explicitly declared, not guessed.
repo / upload / preset -> manifest.json, gap_list, stubs[]
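The deterministic "what's missing" list can be sketched as a fixed-order walk over required fields. This is a hypothetical illustration; the field names and the REQUIRED_FIELDS list are assumptions, not the actual ICB manifest schema:

```python
# Hypothetical sketch: derive a deterministic gap list from a
# declaration manifest. Field paths are illustrative, not the real schema.
REQUIRED_FIELDS = [
    "system.name",
    "system.risk_tier",
    "network.mode",
    "tools.enabled",
]

def gap_list(manifest: dict) -> list[str]:
    """Return required fields absent from the manifest, in a fixed order."""
    missing = []
    for dotted in REQUIRED_FIELDS:
        node = manifest
        for key in dotted.split("."):
            if not isinstance(node, dict) or key not in node:
                missing.append(dotted)
                break
            node = node[key]
    return missing  # same input -> same list, every run

print(gap_list({"system": {"name": "demo"}, "network": {"mode": "none"}}))
# -> ['system.risk_tier', 'tools.enabled']
```

Because the field order is pinned, two runs over the same manifest always emit an identical gap list, which is what makes the output diffable across runs.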
SIG
Repo Signals Scan
Static analysis scans the codebase for code-pattern indicators: network calls, execution primitives, credential handling, data flows. Each signal carries a severity level (HIGH / MEDIUM / LOW), match count, and evidence lines with path:lineno snippets. Signal counts are injected directly into the manifest, enriching it for evaluation.
manifest.json -> repo_signals.json, manifest.json (enriched)
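A minimal sketch of how such a scan could produce severity-tagged, path:lineno evidence. The signal IDs, severities, and regex patterns here are illustrative assumptions, not the real signal pack:

```python
# Hypothetical sketch: a static scan for two signal families. Signal IDs,
# severities, and patterns are illustrative, not the actual pack.
import re

SIGNALS = {
    "SIG-NETWORK": {"severity": "HIGH",
                    "pattern": re.compile(r"\b(requests\.(get|post)|urlopen|socket\.connect)\b")},
    "SIG-EXEC":    {"severity": "HIGH",
                    "pattern": re.compile(r"\b(subprocess\.|os\.system|eval\()")},
}

def scan_file(path: str, text: str) -> list[dict]:
    """Return signal hits with path:lineno evidence snippets."""
    hits = []
    for sig_id, spec in SIGNALS.items():
        for lineno, line in enumerate(text.splitlines(), start=1):
            if spec["pattern"].search(line):
                hits.append({
                    "signal": sig_id,
                    "severity": spec["severity"],
                    "evidence": f"{path}:{lineno}",
                    "snippet": line.strip(),
                })
    return hits

sample = "import requests\nresp = requests.get(url)\n"
for hit in scan_file("app/client.py", sample):
    print(hit["signal"], hit["evidence"], hit["snippet"])
```

Each hit carries exactly what the auditor question asks for: which pattern fired, how severe it is, and the file and line where it was seen.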
This is what your auditor wants: "show me what you saw and where." Not runtime proof, but strong review guidance with evidence samples.
FDY
Foundry Evaluation (Policy Pack)
The enriched manifest is evaluated against 114 executable controls. Each control produces a deterministic outcome - MEETS, REVIEW, or FAIL - with rationale and required evidence paths. The engine compares declared posture against observed indicators: if you declare tools disabled but repo signals show execution primitives, the result is REVIEW, not a false pass.
manifest.json (enriched) -> decision.json, enriched.sarif.json, citations.json
Credibility feature: when network is declared "none" but SIG-NETWORK count is 3, the outcome is intentionally "requires review." We don't overclaim governance.
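The declared-vs-observed check described above can be sketched as follows. The control ID CP-CONSIST-NET-001 and the SIG-NETWORK counter mirror the gold case shown in The Gold Suite section; the function itself is an illustrative assumption, not the engine's actual API:

```python
# Hypothetical sketch of the declared-vs-observed consistency check.
# A contradiction between declaration and signals is never a pass.
def check_network_consistency(manifest: dict) -> dict:
    declared = manifest.get("network", {}).get("mode")
    observed = manifest.get("repo_signals_counts", {}).get("SIG-NETWORK", 0)
    if declared == "none" and observed > 0:
        return {"control_id": "CP-CONSIST-NET-001",
                "status": "manual_review",
                "rationale": f"network declared 'none' but SIG-NETWORK count is {observed}"}
    return {"control_id": "CP-CONSIST-NET-001",
            "status": "meets",
            "rationale": "declared posture consistent with observed signals"}

result = check_network_consistency({
    "network": {"mode": "none"},
    "repo_signals_counts": {"SIG-NETWORK": 3},
})
print(result["status"])  # -> manual_review
```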
RPT
Report Generation
All artifacts are assembled into an audit-ready DOCX with table of contents, executive summary, evidence tables, and manual review list. Posture is declared as GREEN / YELLOW / RED / INCOMPLETE. If contract declarations are missing, the report explicitly states posture cannot be asserted - no false confidence.
decision.json + signals + meta -> Report v2 (DOCX), meta.json
Evaluation Outcomes

Three Statuses - No Ambiguity

Every control produces one of three outcomes. REVIEW means human verification is needed - never an automated pass.
MEETS

No inconsistency detected between declared posture and observed indicators.

REVIEW

Potential inconsistency detected. Requires human verification before posture can be asserted.

FAIL

Hard failure against required evidence or policy threshold. Blocks green posture.

Source of truth: decision.json - deterministic and reproducible on every run
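Under these rules, the roll-up from per-control outcomes to reported posture can be sketched as a simple precedence check (FAIL beats REVIEW beats MEETS; missing declarations force INCOMPLETE). This function illustrates the stated semantics; it is not the engine's code:

```python
# Hypothetical sketch: aggregate per-control outcomes into the posture
# the report declares (GREEN / YELLOW / RED / INCOMPLETE).
def overall_posture(outcomes: list[str], declarations_complete: bool = True) -> str:
    if not declarations_complete:
        return "INCOMPLETE"   # posture cannot be asserted; no false confidence
    if "FAIL" in outcomes:
        return "RED"          # hard failure blocks green posture
    if "REVIEW" in outcomes:
        return "YELLOW"       # human verification required, never an auto-pass
    return "GREEN"            # no inconsistency detected anywhere

print(overall_posture(["MEETS", "REVIEW", "MEETS"]))  # -> YELLOW
```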
Run Artifacts

What Auditors Actually Get

Every run produces a complete set of machine-readable, run-scoped artifacts. Nothing is ephemeral - everything is reviewable and re-executable.
manifest.json
Normalized declaration contract with pinned schema version
repo_signals.json
Signal evidence: severity, match counts, path:lineno snippets
decision.json
114 control outcomes with rationale and required evidence paths
enriched.sarif.json
SARIF findings with crosswalk mapping overlays per framework
citations.json / meta.json
Full provenance chain, run metadata, and version pinning
Report v2 (DOCX)
TOC, executive summary, evidence tables, manual review list
Crosswalk Library

Traceability, Not Certification

Current Control-Pack
Framework                  | Targets | Controls Mapped
---------------------------|---------|----------------
NIST AI 600-1              | 13      | 114
OWASP LLM Top 10 (2025)    | 10      | 47
EU AI Act                  | 30      | 62
SOC 2 (CC subset)          | 21      | 77
ISO 27001 Annex A (2022)   | 93      | 74
ISO 42001                  | 16      | 84
The Gold Suite

Regression-Tested Policy Engine

Current Control-Pack
This isn't a checklist. It's a tested policy engine. We can add controls fast without breaking prior decisions.
385 Gold Test Cases · 114 Controls Covered · 6 Frameworks Mapped
Input manifest with structured evidence and declarations
Expected audit outcome - overall status + per-control statuses
PASS + FAIL/REVIEW scenarios for every control
Edge cases: prod vs non-prod, risk tiers, tools enabled/disabled
Run on every change to prevent policy drift
Every policy change is validated against all 385 cases before release.
// Gold Case - consistency check triggers REVIEW
{
  "case_id": "CASE-GOLD-COV-NET-MISMATCH",
  "input": {
    "manifest": {
      "repo_signals_counts": { "SIG-NETWORK": 3, "SIG-NETWORK-RESTRICTED": 1 },
      "network": { "mode": "none" }
    }
  },
  "expected": {
    "controls": [
      { "control_id": "CP-CONSIST-NET-001", "status": "manual_review" }
    ],
    "overall": "yellow"
  }
}
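A minimal sketch of how a gold case like this could be replayed on every change. The evaluator below is a stub covering only the network-consistency control; the real engine evaluates all 114 controls:

```python
# Hypothetical sketch of a gold-suite regression runner. The evaluator is
# a stub standing in for the policy engine; the case shape matches the
# gold case shown above.
def evaluate(manifest: dict) -> dict:
    """Stub evaluator covering only the network-consistency control."""
    declared = manifest.get("network", {}).get("mode")
    observed = manifest.get("repo_signals_counts", {}).get("SIG-NETWORK", 0)
    mismatch = declared == "none" and observed > 0
    status = "manual_review" if mismatch else "meets"
    return {"controls": [{"control_id": "CP-CONSIST-NET-001", "status": status}],
            "overall": "yellow" if mismatch else "green"}

def run_gold_case(case: dict) -> bool:
    """Replay one gold case and compare actual vs expected outcome."""
    actual = evaluate(case["input"]["manifest"])
    return actual == case["expected"]

gold_case = {
    "case_id": "CASE-GOLD-COV-NET-MISMATCH",
    "input": {"manifest": {
        "repo_signals_counts": {"SIG-NETWORK": 3, "SIG-NETWORK-RESTRICTED": 1},
        "network": {"mode": "none"},
    }},
    "expected": {
        "controls": [{"control_id": "CP-CONSIST-NET-001", "status": "manual_review"}],
        "overall": "yellow",
    },
}
assert run_gold_case(gold_case)
print("gold case passed")
```

Running every gold case on every policy change is what keeps new controls from silently altering prior decisions.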

Deterministic. Evidence-First. Audit-Ready.

Currently loaded pack