Initialize repository snapshot

This commit is contained in:
Vault Sovereign
2025-12-27 00:10:32 +00:00
commit 110d644e10
281 changed files with 40331 additions and 0 deletions

# GitLab → Console Integration Setup
This guide walks through wiring a real GitLab project to VaultMesh Console.
## Prerequisites
1. **VaultMesh Console HTTP bridge running**:
```bash
cd /root/work/vaultmesh
python3 scripts/console_receipts_server.py &
```
2. **Network access** from GitLab runners to your Console bridge
- If runners can't reach your host directly, expose via Tailscale/ngrok/etc.
## Step 1: GitLab CI/CD Variables
In your GitLab project: **Settings → CI/CD → Variables**
| Variable | Value | Example |
|----------|-------|---------|
| `VAULTMESH_CONSOLE_BASE` | Console bridge URL | `http://your-host:9110/v1/console` |
| `VAULTMESH_CALLER_DID` | GitLab service DID | `did:vm:service:gitlab-ci` |
| `VAULTMESH_APPROVER_DID` | Default approver | `did:vm:human:karol` |
| `VM_ENV` | Environment | `dev`, `staging`, or `prod` |
## Step 2: Add Helper Script
Copy `scripts/gitlab_console_session.sh` to your repository:
```bash
cp scripts/gitlab_console_session.sh /path/to/your/repo/scripts/
chmod +x /path/to/your/repo/scripts/gitlab_console_session.sh
git add scripts/gitlab_console_session.sh
git commit -m "Add VaultMesh Console helper"
```
## Step 3: Update .gitlab-ci.yml
Add Console session jobs to your pipeline:
```yaml
stages:
- console
- build
- test
- deploy
- console-end
# Session start (first job)
console:session-start:
stage: console
script:
- ./scripts/gitlab_console_session.sh start
# Your existing jobs...
build:
stage: build
script:
- ./scripts/gitlab_console_session.sh cmd build 0
- make build # your actual build
test:
stage: test
script:
- ./scripts/gitlab_console_session.sh cmd test 0
- make test # your actual tests
# Gated deploy
deploy:prod:
stage: deploy
when: manual
script:
- ./scripts/gitlab_console_session.sh request_approval deploy_prod
# If we get here, approval was already granted
- ./scripts/deploy.sh prod
# Session end (always runs)
console:session-end:
stage: console-end
when: always
script:
- ./scripts/gitlab_console_session.sh end
```
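`gitlab_console_session.sh` itself is not reproduced here. As a rough sketch of the `request_approval` contract the pipeline relies on (the response fields below are assumptions for illustration, not the bridge's documented API), the job's exit code follows from the bridge's reply:

```python
def request_approval_exit_code(action: str, response: dict) -> int:
    """Map a (hypothetical) Console approval response to a CI job exit code.

    `response` stands in for the JSON body the bridge might return; the
    field names used here are illustrative, not the real API.
    """
    if response.get("status") == "approved":
        return 0  # approval already granted: the deploy step may proceed
    # Pending: surface the approval ID so an operator can approve, then retry
    print(f"approval pending: {response.get('approval_id')} for {action}")
    return 1  # non-zero fails the job, so the gated deploy stops here
```

This is why `deploy:prod` can assume approval was already granted whenever the helper script returns 0.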
## Step 4: (Optional) GitLab Webhooks
For richer event tracking (MRs, pushes), add a webhook:
**GitLab → Settings → Webhooks**
- URL: `http://your-host:9110/gitlab/webhook`
- Triggers: Push events, Merge request events, Pipeline events
## Step 5: Verify
Run a pipeline and check Console:
```bash
# List sessions
vm console sessions
# See pipeline story
vm console story gitlab-pipeline-<id>
# Check dashboard
open http://127.0.0.1:9110/console/dashboard
```
## Approval Flow
When a deploy job requests approval:
1. Job calls `request_approval deploy_prod`
2. The job fails and prints the pending approval ID
3. You approve:
```bash
export VAULTMESH_ACTOR_DID="did:vm:human:karol"
vm console approvals
vm console approve approval-XXXX --reason "Deploy approved"
```
4. Retry the deploy job in GitLab UI
## Environment-Based Policies
Set `VM_ENV` per job or globally:
| Environment | Requires Approval For |
|-------------|----------------------|
| `dev` | `git_force_push`, `rm -rf` |
| `staging` | Above + `deploy_staging`, `db:migrate` |
| `prod` | Above + `deploy_prod`, `docker push`, and all other dangerous operations |
Override per-job:
```yaml
deploy:staging:
variables:
VM_ENV: staging
script:
- ./scripts/gitlab_console_session.sh request_approval deploy_staging
```
## Troubleshooting
**Bridge unreachable from runner**:
- Check firewall rules
- Try `curl $VAULTMESH_CONSOLE_BASE/health` from runner
**Approvals not working**:
- Verify `VAULTMESH_APPROVER_DID` matches your actor DID
- Check `vm console approvals` shows the pending request
**Dashboard not updating**:
- Bridge may need restart after code changes
- Check `/tmp/console_bridge.log` for errors

# VAULTMESH-AUTOMATION-ENGINE.md
**Civilization Ledger Workflow Primitive**
> *Every workflow has a contract. Every execution has a receipt.*
Automation is VaultMesh's orchestration layer — managing n8n workflows, scheduled jobs, event-driven triggers, and multi-step processes with complete audit trails and cryptographic evidence of execution.
---
## 1. Scroll Definition
| Property | Value |
| --------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Scroll Name** | `Automation` |
| **JSONL Path** | `receipts/automation/automation_events.jsonl` |
| **Root File** | `ROOT.automation.txt` |
| **Receipt Types** | `auto_workflow_register`, `auto_workflow_execute`, `auto_workflow_complete`, `auto_schedule_create`, `auto_trigger_fire`, `auto_approval_request`, `auto_approval_decision` |
---
## 2. Core Concepts
### 2.1 Workflows
A **workflow** is a defined sequence of automated steps that can be triggered manually, on schedule, or by events.
```json
{
"workflow_id": "wf:daily-compliance-check",
"name": "Daily Compliance Check",
"description": "Run Oracle compliance queries and alert on gaps",
"version": 3,
"status": "active",
"created_at": "2025-10-01T00:00:00Z",
"updated_at": "2025-12-01T00:00:00Z",
"created_by": "did:vm:user:sovereign",
"trigger": {
"type": "schedule",
"cron": "0 6 * * *",
"timezone": "Europe/Dublin"
},
"steps": [
{
"step_id": "step-1",
"name": "Query Oracle for GDPR compliance",
"type": "mcp_tool",
"tool": "oracle_compliance_answer",
"params": {
"question": "What is our current GDPR compliance status?",
"frameworks": ["GDPR"]
}
},
{
"step_id": "step-2",
"name": "Query Oracle for AI Act compliance",
"type": "mcp_tool",
"tool": "oracle_compliance_answer",
"params": {
"question": "What is our current EU AI Act compliance status?",
"frameworks": ["EU_AI_ACT"]
}
},
{
"step_id": "step-3",
"name": "Analyze gaps",
"type": "condition",
"condition": "steps['step-1'].result.gaps.length > 0 OR steps['step-2'].result.gaps.length > 0",
"on_true": "step-4",
"on_false": "step-5"
},
{
"step_id": "step-4",
"name": "Alert on compliance gaps",
"type": "notification",
"channels": ["slack:compliance-alerts", "email:compliance-team"],
"template": "compliance_gap_alert"
},
{
"step_id": "step-5",
"name": "Log success",
"type": "log",
"level": "info",
"message": "Daily compliance check passed"
}
],
"error_handling": {
"on_step_failure": "continue",
"max_retries": 3,
"retry_delay": "5m",
"notify_on_failure": ["slack:ops-alerts"]
},
"metadata": {
"category": "compliance",
"tags": ["daily", "gdpr", "ai-act", "oracle"],
"owner": "compliance-team"
}
}
```
**Workflow types**:
- `scheduled` — cron-based execution
- `event_triggered` — fires on system events
- `manual` — operator-initiated
- `webhook` — external HTTP triggers
- `chained` — triggered by other workflow completion
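The `condition` step in the example above branches on whether either Oracle query reported gaps. A minimal sketch of that evaluation, assuming step results are plain dicts (the engine's actual expression language is not specified here):

```python
def gaps_found(steps: dict) -> bool:
    """Evaluate the step-3 condition: true if either Oracle query found gaps."""
    return any(
        len(steps[sid]["result"]["gaps"]) > 0
        for sid in ("step-1", "step-2")
    )

def next_step(steps: dict) -> str:
    # on_true -> step-4 (alert), on_false -> step-5 (log success)
    return "step-4" if gaps_found(steps) else "step-5"
```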
### 2.2 Executions
An **execution** is a single run of a workflow with full context and results.
```json
{
"execution_id": "exec-2025-12-06-001",
"workflow_id": "wf:daily-compliance-check",
"workflow_version": 3,
"status": "completed",
"triggered_by": "schedule",
"triggered_at": "2025-12-06T06:00:00Z",
"started_at": "2025-12-06T06:00:01Z",
"completed_at": "2025-12-06T06:02:34Z",
"duration_ms": 153000,
"steps": [
{
"step_id": "step-1",
"status": "completed",
"started_at": "2025-12-06T06:00:01Z",
"completed_at": "2025-12-06T06:01:15Z",
"duration_ms": 74000,
"result": {
"compliance_score": 0.94,
"gaps": ["Missing DPO appointment documentation"]
}
},
{
"step_id": "step-2",
"status": "completed",
"started_at": "2025-12-06T06:01:15Z",
"completed_at": "2025-12-06T06:02:20Z",
"duration_ms": 65000,
"result": {
"compliance_score": 0.87,
"gaps": ["Risk assessment incomplete for high-risk AI system"]
}
},
{
"step_id": "step-3",
"status": "completed",
"result": {"condition_result": true, "next_step": "step-4"}
},
{
"step_id": "step-4",
"status": "completed",
"started_at": "2025-12-06T06:02:21Z",
"completed_at": "2025-12-06T06:02:34Z",
"result": {
"notifications_sent": ["slack:compliance-alerts", "email:compliance-team"]
}
}
],
"input": {},
"output": {
"gdpr_score": 0.94,
"ai_act_score": 0.87,
"total_gaps": 2,
"alert_sent": true
},
"context": {
"node": "did:vm:node:brick-01",
"environment": "production"
}
}
```
### 2.3 Schedules
**Schedules** define when workflows should run automatically.
```json
{
"schedule_id": "sched:daily-compliance",
"workflow_id": "wf:daily-compliance-check",
"cron": "0 6 * * *",
"timezone": "Europe/Dublin",
"enabled": true,
"created_at": "2025-10-01T00:00:00Z",
"created_by": "did:vm:user:sovereign",
"next_run": "2025-12-07T06:00:00Z",
"last_run": "2025-12-06T06:00:00Z",
"last_status": "completed",
"run_count": 67,
"failure_count": 2,
"constraints": {
"max_concurrent": 1,
"skip_if_running": true,
"maintenance_window_skip": true
}
}
```
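`next_run` follows from the cron spec. For the daily `0 6 * * *` shape used here, the computation can be sketched as below; a real scheduler would use a full cron parser (the croniter library, for example) and honour the `timezone` field, so this hand-rolled version is an illustration only:

```python
from datetime import datetime, timedelta

def next_daily_run(now: datetime, hour: int = 6) -> datetime:
    """Next fire time for a daily cron like `0 6 * * *` (minute 0, fixed hour)."""
    candidate = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if candidate <= now:
        candidate += timedelta(days=1)  # today's slot already passed
    return candidate
```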
### 2.4 Triggers
**Triggers** define event-driven workflow activation.
```json
{
"trigger_id": "trig:security-incident",
"name": "Security Incident Response",
"workflow_id": "wf:incident-response-initial",
"trigger_type": "event",
"event_source": "offsec",
"event_filter": {
"type": "offsec_incident",
"severity": ["critical", "high"]
},
"enabled": true,
"created_at": "2025-11-15T00:00:00Z",
"created_by": "did:vm:user:sovereign",
"fire_count": 3,
"last_fired": "2025-12-06T03:47:00Z",
"debounce": {
"enabled": true,
"window": "5m",
"group_by": ["incident_id"]
}
}
```
**Trigger types**:
- `event` — fires on VaultMesh events (receipts, alerts, etc.)
- `webhook` — fires on external HTTP POST
- `file_watch` — fires on file system changes
- `mesh_event` — fires on mesh topology changes
- `approval` — fires when approval is granted/denied
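The `debounce` block above suppresses repeat fires of the same incident within a 5-minute window. A sketch of that logic, keyed by the `group_by` value:

```python
from datetime import datetime, timedelta

class Debouncer:
    """Suppress repeat trigger fires within a window, grouped by a key."""

    def __init__(self, window: timedelta):
        self.window = window
        self._last: dict[str, datetime] = {}

    def should_fire(self, key: str, now: datetime) -> bool:
        last = self._last.get(key)
        if last is not None and now - last < self.window:
            return False  # debounce applied: same group fired too recently
        self._last[key] = now
        return True
```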
### 2.5 Approvals
**Approvals** gate workflow continuation on human decisions.
```json
{
"approval_id": "approval-2025-12-06-001",
"workflow_id": "wf:production-deploy",
"execution_id": "exec-2025-12-06-002",
"step_id": "step-3-deploy",
"title": "Approve Production Deployment",
"description": "Deploy Guardian v2.1.0 to production nodes",
"status": "pending",
"requested_at": "2025-12-06T10:00:00Z",
"requested_by": "did:vm:service:ci-pipeline",
"required_approvers": 2,
"approvers": ["did:vm:user:sovereign", "did:vm:user:operator-alpha"],
"current_approvals": [],
"current_rejections": [],
"expires_at": "2025-12-06T18:00:00Z",
"context": {
"version": "2.1.0",
"commit": "abc123...",
"changelog": "https://github.com/vaultmesh/guardian/releases/v2.1.0",
"test_results": "all passed",
"affected_nodes": ["brick-01", "brick-02", "brick-03"]
},
"notification_channels": ["slack:approvals", "email:approvers"]
}
```
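Resolving an approval record reduces to a few checks: any rejection blocks, quorum approves, expiry times out. A sketch over the field names used above (the precedence order here is an assumption, not the engine's documented rule):

```python
def approval_status(required: int, approvals: list, rejections: list,
                    now_iso: str, expires_at: str) -> str:
    """Resolve an approval record to pending/approved/rejected/expired."""
    if rejections:
        return "rejected"
    if len(approvals) >= required:
        return "approved"
    if now_iso >= expires_at:  # same-format ISO 8601 strings compare lexically
        return "expired"
    return "pending"
```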
---
## 3. Mapping to Eternal Pattern
### 3.1 Experience Layer (L1)
**CLI** (`vm-auto`):
```bash
# Workflow management
vm-auto workflow list
vm-auto workflow show wf:daily-compliance-check
vm-auto workflow create --from workflow-def.json
vm-auto workflow update wf:daily-compliance-check --from workflow-def-v2.json
vm-auto workflow enable wf:daily-compliance-check
vm-auto workflow disable wf:daily-compliance-check --reason "maintenance"
vm-auto workflow delete wf:deprecated-workflow
# Manual execution
vm-auto run wf:daily-compliance-check
vm-auto run wf:onboarding --input '{"user": "new-operator"}'
# Execution monitoring
vm-auto exec list --workflow wf:daily-compliance-check --last 10
vm-auto exec show exec-2025-12-06-001
vm-auto exec logs exec-2025-12-06-001
vm-auto exec cancel exec-2025-12-06-003 --reason "testing"
# Schedules
vm-auto schedule list
vm-auto schedule show sched:daily-compliance
vm-auto schedule pause sched:daily-compliance --until "2025-12-10"
vm-auto schedule resume sched:daily-compliance
# Triggers
vm-auto trigger list
vm-auto trigger show trig:security-incident
vm-auto trigger test trig:security-incident --event test-event.json
# Approvals
vm-auto approval list --status pending
vm-auto approval show approval-2025-12-06-001
vm-auto approval approve approval-2025-12-06-001 --comment "Reviewed and approved"
vm-auto approval reject approval-2025-12-06-001 --reason "Not ready for production"
# History
vm-auto history --workflow wf:daily-compliance-check --from 2025-12-01
vm-auto history --status failed --last 7d
```
**MCP Tools**:
- `auto_workflow_list` — list workflows
- `auto_workflow_run` — execute workflow
- `auto_execution_status` — get execution status
- `auto_approval_pending` — list pending approvals
- `auto_approval_decide` — approve/reject
- `auto_schedule_next` — next scheduled runs
**Portal HTTP**:
- `GET /auto/workflows` — list workflows
- `POST /auto/workflows` — create workflow
- `GET /auto/workflows/{id}` — workflow details
- `PUT /auto/workflows/{id}` — update workflow
- `POST /auto/workflows/{id}/run` — execute workflow
- `GET /auto/executions` — list executions
- `GET /auto/executions/{id}` — execution details
- `POST /auto/executions/{id}/cancel` — cancel execution
- `GET /auto/schedules` — list schedules
- `GET /auto/triggers` — list triggers
- `GET /auto/approvals` — list approvals
- `POST /auto/approvals/{id}/approve` — approve
- `POST /auto/approvals/{id}/reject` — reject
---
### 3.2 Engine Layer (L2)
#### Step 1 — Plan → `automation_workflow_contract.json`
**Workflow Registration Contract**:
```json
{
"operation_id": "auto-op-2025-12-06-001",
"operation_type": "workflow_register",
"initiated_by": "did:vm:user:sovereign",
"initiated_at": "2025-12-06T09:00:00Z",
"workflow": {
"id": "wf:treasury-reconciliation",
"name": "Treasury Reconciliation",
"version": 1,
"steps": ["..."],
"trigger": {
"type": "schedule",
"cron": "0 0 * * *"
}
},
"validation": {
"syntax_valid": true,
"steps_valid": true,
"permissions_valid": true
},
"requires_approval": false
}
```
**Execution Contract** (for complex/sensitive workflows):
```json
{
"operation_id": "auto-op-2025-12-06-002",
"operation_type": "workflow_execute",
"workflow_id": "wf:production-deploy",
"workflow_version": 5,
"triggered_by": "did:vm:service:ci-pipeline",
"triggered_at": "2025-12-06T10:00:00Z",
"trigger_type": "webhook",
"input": {
"version": "2.1.0",
"commit": "abc123...",
"target_nodes": ["brick-01", "brick-02", "brick-03"]
},
"requires_approval": true,
"approval_config": {
"required_approvers": 2,
"approver_pool": ["did:vm:user:sovereign", "did:vm:user:operator-alpha", "did:vm:user:operator-bravo"],
"timeout": "8h"
},
"risk_assessment": {
"impact": "high",
"reversibility": "medium",
"affected_services": ["guardian"]
}
}
```
#### Step 2 — Execute → `automation_execution_state.json`
```json
{
"execution_id": "exec-2025-12-06-002",
"workflow_id": "wf:production-deploy",
"status": "awaiting_approval",
"created_at": "2025-12-06T10:00:00Z",
"updated_at": "2025-12-06T10:30:00Z",
"steps": [
{
"step_id": "step-1-build",
"name": "Build artifacts",
"status": "completed",
"started_at": "2025-12-06T10:00:01Z",
"completed_at": "2025-12-06T10:05:00Z",
"result": {
"artifact_hash": "blake3:abc123...",
"artifact_path": "builds/guardian-2.1.0.tar.gz"
}
},
{
"step_id": "step-2-test",
"name": "Run integration tests",
"status": "completed",
"started_at": "2025-12-06T10:05:01Z",
"completed_at": "2025-12-06T10:15:00Z",
"result": {
"tests_passed": 147,
"tests_failed": 0,
"coverage": 0.89
}
},
{
"step_id": "step-3-deploy",
"name": "Deploy to production",
"status": "awaiting_approval",
"approval_id": "approval-2025-12-06-001",
"started_at": "2025-12-06T10:15:01Z"
},
{
"step_id": "step-4-verify",
"name": "Verify deployment",
"status": "pending"
},
{
"step_id": "step-5-notify",
"name": "Notify stakeholders",
"status": "pending"
}
],
"approval_status": {
"approval_id": "approval-2025-12-06-001",
"required": 2,
"received": 1,
"approvals": [
{
"approver": "did:vm:user:sovereign",
"decision": "approve",
"timestamp": "2025-12-06T10:30:00Z",
"comment": "Tests passed, changelog reviewed"
}
]
},
"context": {
"node": "did:vm:node:brick-01",
"trace_id": "trace-xyz..."
}
}
```
**Execution status transitions**:
```
pending → running → completed
            ↘ failed → (retry) → running
            ↘ awaiting_approval → approved → running
                                ↘ rejected → cancelled
            ↘ cancelled
            ↘ timed_out
```
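The diagram can be transcribed into a transition table for validation; terminal states (`completed`, `cancelled`, `timed_out`) have no outgoing edges. A sketch:

```python
# Legal execution-status transitions, transcribed from the diagram above.
TRANSITIONS = {
    "pending":           {"running", "cancelled"},
    "running":           {"completed", "failed", "awaiting_approval",
                          "cancelled", "timed_out"},
    "failed":            {"running"},               # retry
    "awaiting_approval": {"running", "cancelled"},  # approved resumes; rejected cancels
}

def can_transition(src: str, dst: str) -> bool:
    """True if the status change src -> dst is legal."""
    return dst in TRANSITIONS.get(src, set())
```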
#### Step 3 — Seal → Receipts
**Workflow Registration Receipt**:
```json
{
"type": "auto_workflow_register",
"workflow_id": "wf:treasury-reconciliation",
"workflow_name": "Treasury Reconciliation",
"version": 1,
"timestamp": "2025-12-06T09:00:00Z",
"registered_by": "did:vm:user:sovereign",
"step_count": 5,
"trigger_type": "schedule",
"workflow_hash": "blake3:aaa111...",
"tags": ["automation", "workflow", "register", "treasury"],
"root_hash": "blake3:bbb222..."
}
```
**Workflow Execution Start Receipt**:
```json
{
"type": "auto_workflow_execute",
"execution_id": "exec-2025-12-06-002",
"workflow_id": "wf:production-deploy",
"workflow_version": 5,
"timestamp": "2025-12-06T10:00:00Z",
"triggered_by": "did:vm:service:ci-pipeline",
"trigger_type": "webhook",
"input_hash": "blake3:ccc333...",
"node": "did:vm:node:brick-01",
"tags": ["automation", "execution", "start", "deploy"],
"root_hash": "blake3:ddd444..."
}
```
**Workflow Execution Complete Receipt**:
```json
{
"type": "auto_workflow_complete",
"execution_id": "exec-2025-12-06-002",
"workflow_id": "wf:production-deploy",
"workflow_version": 5,
"timestamp_started": "2025-12-06T10:00:00Z",
"timestamp_completed": "2025-12-06T11:30:00Z",
"duration_ms": 5400000,
"status": "completed",
"steps_total": 5,
"steps_completed": 5,
"steps_failed": 0,
"output_hash": "blake3:eee555...",
"approvals_required": 2,
"approvals_received": 2,
"tags": ["automation", "execution", "complete", "deploy", "success"],
"root_hash": "blake3:fff666..."
}
```
**Schedule Creation Receipt**:
```json
{
"type": "auto_schedule_create",
"schedule_id": "sched:treasury-reconciliation",
"workflow_id": "wf:treasury-reconciliation",
"timestamp": "2025-12-06T09:00:00Z",
"created_by": "did:vm:user:sovereign",
"cron": "0 0 * * *",
"timezone": "UTC",
"first_run": "2025-12-07T00:00:00Z",
"tags": ["automation", "schedule", "create"],
"root_hash": "blake3:ggg777..."
}
```
**Trigger Fire Receipt**:
```json
{
"type": "auto_trigger_fire",
"trigger_id": "trig:security-incident",
"workflow_id": "wf:incident-response-initial",
"execution_id": "exec-2025-12-06-003",
"timestamp": "2025-12-06T03:47:00Z",
"event_type": "offsec_incident",
"event_id": "INC-2025-12-001",
"event_severity": "high",
"debounce_applied": false,
"tags": ["automation", "trigger", "fire", "incident"],
"root_hash": "blake3:hhh888..."
}
```
**Approval Request Receipt**:
```json
{
"type": "auto_approval_request",
"approval_id": "approval-2025-12-06-001",
"workflow_id": "wf:production-deploy",
"execution_id": "exec-2025-12-06-002",
"step_id": "step-3-deploy",
"timestamp": "2025-12-06T10:15:01Z",
"title": "Approve Production Deployment",
"required_approvers": 2,
"approver_pool": ["did:vm:user:sovereign", "did:vm:user:operator-alpha", "did:vm:user:operator-bravo"],
"expires_at": "2025-12-06T18:00:00Z",
"context_hash": "blake3:iii999...",
"tags": ["automation", "approval", "request", "deploy"],
"root_hash": "blake3:jjj000..."
}
```
**Approval Decision Receipt**:
```json
{
"type": "auto_approval_decision",
"approval_id": "approval-2025-12-06-001",
"execution_id": "exec-2025-12-06-002",
"timestamp": "2025-12-06T10:45:00Z",
"decision": "approved",
"approvers": [
{
"did": "did:vm:user:sovereign",
"decision": "approve",
"timestamp": "2025-12-06T10:30:00Z"
},
{
"did": "did:vm:user:operator-alpha",
"decision": "approve",
"timestamp": "2025-12-06T10:45:00Z"
}
],
"quorum_met": true,
"workflow_resumed": true,
"tags": ["automation", "approval", "decision", "approved"],
"root_hash": "blake3:kkk111..."
}
```
---
### 3.3 Ledger Layer (L3)
**Receipt Types**:
| Type | When Emitted |
| ------------------------- | ------------------------------- |
| `auto_workflow_register` | Workflow created/updated |
| `auto_workflow_execute` | Execution started |
| `auto_workflow_complete` | Execution completed (any status)|
| `auto_schedule_create` | Schedule created/modified |
| `auto_trigger_fire` | Trigger activated |
| `auto_approval_request` | Approval requested |
| `auto_approval_decision` | Approval granted/denied |
**Merkle Coverage**:
- All receipts append to `receipts/automation/automation_events.jsonl`
- `ROOT.automation.txt` updated after each append
- Guardian anchors Automation root in anchor cycles
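The root computation over the JSONL scroll can be sketched as a standard Merkle fold over per-line hashes. SHA-256 stands in here for the BLAKE3 the real ledger uses, and the odd-node pairing rule (duplicate the tail) is an assumption:

```python
import hashlib

def merkle_root(lines: list[bytes]) -> str:
    """Merkle root over per-line hashes of a JSONL scroll (sketch)."""
    if not lines:
        return hashlib.sha256(b"").hexdigest()
    level = [hashlib.sha256(line).digest() for line in lines]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the odd tail node
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0].hex()
```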
---
## 4. Query Interface
`automation_query_events.py`:
```bash
# Workflow history
vm-auto query --workflow wf:daily-compliance-check
# Failed executions
vm-auto query --type workflow_complete --filter "status == 'failed'"
# Approvals by user
vm-auto query --type approval_decision --filter "approvers[].did == 'did:vm:user:sovereign'"
# Trigger fires by event type
vm-auto query --type trigger_fire --filter "event_type == 'offsec_incident'"
# Date range
vm-auto query --from 2025-12-01 --to 2025-12-06
# By workflow category
vm-auto query --tag compliance
# Export for analysis
vm-auto query --from 2025-01-01 --format csv > automation_2025.csv
```
**Execution Timeline**:
```bash
# Show execution timeline with all steps
vm-auto timeline exec-2025-12-06-002
# Output:
# exec-2025-12-06-002: wf:production-deploy v5
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
# 10:00:00 ▶ STARTED (triggered by ci-pipeline via webhook)
# 10:00:01 ├─ step-1-build: STARTED
# 10:05:00 ├─ step-1-build: COMPLETED (5m) ✓
# 10:05:01 ├─ step-2-test: STARTED
# 10:15:00 ├─ step-2-test: COMPLETED (10m) ✓
# 10:15:01 ├─ step-3-deploy: AWAITING APPROVAL
# 10:30:00 │ └─ sovereign: APPROVED
# 10:45:00 │ └─ operator-alpha: APPROVED (quorum met)
# 10:45:01 ├─ step-3-deploy: STARTED
# 11:15:00 ├─ step-3-deploy: COMPLETED (30m) ✓
# 11:15:01 ├─ step-4-verify: STARTED
# 11:25:00 ├─ step-4-verify: COMPLETED (10m) ✓
# 11:25:01 ├─ step-5-notify: STARTED
# 11:30:00 ├─ step-5-notify: COMPLETED (5m) ✓
# 11:30:00 ■ COMPLETED (1h 30m total)
```
---
## 5. Design Gate Checklist
| Question | Automation Answer |
| --------------------- | ---------------------------------------------------------------- |
| Clear entrypoint? | ✅ CLI (`vm-auto`), MCP tools, Portal HTTP |
| Contract produced? | ✅ `automation_workflow_contract.json` for registrations/executions |
| State object? | ✅ `automation_execution_state.json` tracking step progress |
| Receipts emitted? | ✅ Seven receipt types covering all automation events |
| Append-only JSONL? | ✅ `receipts/automation/automation_events.jsonl` |
| Merkle root? | ✅ `ROOT.automation.txt` |
| Guardian anchor path? | ✅ Automation root included in ProofChain |
| Query tool? | ✅ `automation_query_events.py` + execution timeline |
---
## 6. n8n Integration
### 6.1 VaultMesh n8n Nodes
Custom n8n nodes for VaultMesh integration:
```typescript
// VaultMesh Trigger Node
{
name: 'VaultMesh Trigger',
description: 'Trigger workflow on VaultMesh events',
inputs: [],
outputs: ['main'],
properties: [
{
displayName: 'Event Type',
name: 'eventType',
type: 'options',
options: [
{ name: 'Receipt Emitted', value: 'receipt' },
{ name: 'Alert Fired', value: 'alert' },
{ name: 'Anchor Complete', value: 'anchor' },
{ name: 'Mesh Change', value: 'mesh' }
]
},
{
displayName: 'Filter',
name: 'filter',
type: 'json'
}
]
}
// VaultMesh Action Node
{
name: 'VaultMesh',
description: 'Interact with VaultMesh APIs',
inputs: ['main'],
outputs: ['main'],
properties: [
{
displayName: 'Operation',
name: 'operation',
type: 'options',
options: [
{ name: 'Oracle Query', value: 'oracle_query' },
{ name: 'Emit Receipt', value: 'emit_receipt' },
{ name: 'Treasury Transfer', value: 'treasury_transfer' },
{ name: 'Mesh Node Status', value: 'mesh_status' },
{ name: 'Identity Verify', value: 'identity_verify' }
]
}
]
}
```
### 6.2 Workflow-to-Receipt Mapping
Every n8n workflow execution produces VaultMesh receipts:
```
n8n Workflow Execution
          ↓
┌─────────────────────────┐
│  VaultMesh Automation   │
│     Engine Wrapper      │
│                         │
│ • Intercepts start      │
│ • Tracks step progress  │
│ • Captures outputs      │
│ • Handles approvals     │
│ • Emits receipts        │
└─────────────────────────┘
          ↓
   JSONL + Merkle
```
### 6.3 n8n Credential Storage
VaultMesh credentials for n8n are stored securely:
```json
{
"credential_id": "n8n-cred:vaultmesh-api",
"type": "vaultmesh_api",
"name": "VaultMesh Production",
"data_encrypted": "aes-256-gcm:...",
"created_at": "2025-12-01T00:00:00Z",
"created_by": "did:vm:user:sovereign",
"last_used": "2025-12-06T10:00:00Z",
"scopes": ["oracle:read", "treasury:read", "automation:execute"]
}
```
---
## 7. Step Types
### 7.1 Built-in Step Types
| Step Type | Description | Example Use |
| --------------- | -------------------------------------------- | -------------------------------- |
| `mcp_tool` | Call VaultMesh MCP tool | Oracle query, Treasury check |
| `http_request` | Make HTTP request | External API calls |
| `condition` | Branch based on expression | Check compliance score |
| `loop` | Iterate over collection | Process multiple accounts |
| `parallel` | Execute steps concurrently | Check multiple nodes |
| `approval` | Wait for human approval | Production deployments |
| `delay` | Wait for duration | Rate limiting |
| `notification` | Send notifications | Slack, email, PagerDuty |
| `script` | Execute custom script | Complex transformations |
| `sub_workflow` | Call another workflow | Reusable components |
| `receipt_emit` | Emit custom receipt | Business events |
### 7.2 Step Configuration
```json
{
"step_id": "step-1",
"name": "Query Treasury Balance",
"type": "mcp_tool",
"tool": "treasury_balance",
"params": {
"account": "{{ input.account_id }}"
},
"timeout": "30s",
"retry": {
"max_attempts": 3,
"backoff": "exponential",
"initial_delay": "1s"
},
"error_handling": {
"on_error": "continue",
"fallback_value": {"balance": 0}
},
"output_mapping": {
"balance": "$.result.balance",
"currency": "$.result.currency"
}
}
```
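`output_mapping` pulls named values out of the step's raw result. A sketch handling only dotted child access, which covers the mapping above (the engine's real JSONPath support is presumably broader):

```python
def apply_output_mapping(mapping: dict, step_output: dict) -> dict:
    """Resolve `$.a.b`-style paths against a step's raw output dict."""
    def resolve(path: str):
        node = step_output
        for part in path.lstrip("$.").split("."):  # strip leading `$.`
            node = node[part]
        return node
    return {name: resolve(path) for name, path in mapping.items()}
```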
---
## 8. Integration Points
| System | Integration |
| ---------------- | --------------------------------------------------------------------------- |
| **Guardian** | Trigger workflows on anchor events; automate anchor scheduling |
| **Treasury** | Automated reconciliation; scheduled reports; transfer approvals |
| **Identity** | Credential rotation workflows; onboarding/offboarding automation |
| **Mesh** | Node provisioning workflows; topology change automation |
| **OffSec** | Incident response playbooks; automated remediation |
| **Oracle** | Scheduled compliance checks; gap remediation workflows |
| **Observability**| Alert-triggered workflows; automated runbook execution |
---
## 9. Security Model
### 9.1 Workflow Permissions
```json
{
"workflow_id": "wf:production-deploy",
"permissions": {
"view": ["did:vm:org:engineering"],
"execute": ["did:vm:user:sovereign", "did:vm:service:ci-pipeline"],
"edit": ["did:vm:user:sovereign"],
"delete": ["did:vm:user:sovereign"],
"approve": ["did:vm:user:sovereign", "did:vm:user:operator-alpha"]
},
"execution_identity": "did:vm:service:automation-engine",
"secret_access": ["vault:deploy-keys", "vault:api-tokens"]
}
```
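Enforcement of this ACL can be sketched as a membership check: the actor's DID, or any org/group DID it carries, must appear in the action's allow list (how group membership is resolved is outside this sketch):

```python
def allowed(permissions: dict, action: str, actor: str,
            groups: set = frozenset()) -> bool:
    """Check a workflow ACL like the one above for a given actor DID."""
    entries = set(permissions.get(action, []))
    return actor in entries or bool(entries & groups)
```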
### 9.2 Audit Requirements
All workflow operations are receipted for:
- **Compliance**: Prove workflows executed as designed
- **Debugging**: Trace execution failures
- **Accountability**: Track who approved what
- **Non-repudiation**: Cryptographic proof of execution
---
## 10. Future Extensions
- **Visual workflow builder**: Drag-and-drop in Portal UI
- **Workflow versioning**: Git-like version control for workflows
- **A/B testing**: Test workflow variations
- **Cost tracking**: Treasury integration for workflow execution costs
- **ML-powered optimization**: Suggest workflow improvements
- **Cross-mesh orchestration**: Federated workflow execution
- **Workflow marketplace**: Share/import community workflows

# VAULTMESH-CONSOLE-ENGINE.md
**Sovereign AI Agent Session Management**
> *Every coding session is a chapter in the Civilization Ledger.*
The Console Engine binds AI coding agents (OpenCode, Claude Code, CAI, custom agents) into the VaultMesh receipting system. Every session, command, file edit, tool call, approval, and git commit becomes a receipted event.
---
## 1. Engine Registration
| Property | Value |
|----------|-------|
| **Engine ID** | `engine:console` |
| **Name** | Console |
| **Scroll** | `Console` |
| **JSONL Path** | `receipts/console/console_events.jsonl` |
| **Root File** | `receipts/console/ROOT.console.txt` |
| **Authority** | AI agent session management, code operations, and sovereign development |
| **Status** | `active` |
### 1.1 Capabilities
```json
[
"console_read",
"console_write",
"console_execute",
"console_spawn",
"file_read",
"file_write",
"bash_execute",
"git_commit",
"mcp_call"
]
```
---
## 2. Receipt Types
### 2.1 Receipt Schema
All Console receipts share a common envelope:
```json
{
"ts": "2025-12-07T04:00:00Z",
"engine_id": "engine:console",
"type": "console_session_start",
"session_id": "session-1765123456",
"payload": { ... }
}
```
### 2.2 Receipt Type Definitions
| Type | Description | Payload Fields |
|------|-------------|----------------|
| `console_genesis` | Engine initialization marker | `note` |
| `console_session_start` | Agent session initiated | `agent_type`, `model_id`, `caller`, `project_path` |
| `console_session_end` | Agent session completed | `duration_ms`, `commands_executed`, `files_modified`, `exit_reason` |
| `console_command` | CLI command executed | `command`, `args_hash`, `exit_code`, `duration_ms` |
| `console_file_edit` | File modification via agent | `file_path`, `old_hash`, `new_hash`, `edit_type`, `lines_changed` |
| `console_tool_call` | Agent tool invocation | `tool_name`, `params_hash`, `result_hash`, `capability_used` |
| `console_approval` | Human approval for agent action | `action_type`, `approved`, `approver`, `reason` |
| `console_git_commit` | Git commit created by agent | `commit_hash`, `files_changed`, `message_hash`, `signed` |
| `console_agent_spawn` | Sub-agent spawned | `parent_session_id`, `child_session_id`, `agent_type`, `task_hash` |
---
## 3. Mapping to Eternal Pattern
### 3.1 Experience Layer (L1)
**Entrypoints:**
- `opencode --sovereign` — Launch sovereign OpenCode session
- `vm-console spawn <agent>` — Spawn a new agent session
- MCP tools (`console_session_list`, `console_spawn_agent`, etc.)
- Portal dashboard for session monitoring
**Intent Capture:**
```bash
# Sovereign OpenCode invocation
opencode --sovereign --project /root/work/vaultmesh
# With explicit identity
opencode --identity did:vm:agent:opencode-sovereign --capabilities console_write,file_edit
```
### 3.2 Engine Layer (L2)
**Session Contract:**
```json
{
"contract_type": "console_session",
"session_id": "session-1765123456",
"agent_type": "opencode",
"model_id": "claude-opus-4-5-20251101",
"caller": "did:vm:human:karol",
"project_path": "/root/work/vaultmesh",
"capabilities_requested": [
"file_read",
"file_write",
"bash_execute",
"git_commit"
],
"constraints": {
"max_duration_minutes": 60,
"max_files_modified": 50,
"require_approval_for": ["git_push", "file_delete"],
"sandbox_mode": false
},
"created_at": "2025-12-07T04:00:00Z"
}
```
This contract is captured as a `console_session_start` receipt payload.
**Session State (derived from receipts):**
```json
{
"session_id": "session-1765123456",
"status": "active",
"started_at": "2025-12-07T04:00:00Z",
"commands_executed": 23,
"files_read": 15,
"files_modified": 4,
"tool_calls": 47,
"approvals_requested": 1,
"approvals_granted": 1,
"current_task": "Implementing Console engine receipts",
"git_commits": []
}
```
State is derived from receipts, not a primary source of truth.
### 3.3 Ledger Layer (L3)
**Receipt Flow:**
```
Session Start → Tool Calls → File Edits → Approvals → Git Commits → Session End
↓ ↓ ↓ ↓ ↓ ↓
Receipt Receipt Receipt Receipt Receipt Receipt
↓ ↓ ↓ ↓ ↓ ↓
└─────────────┴────────────┴────────────┴────────────┴────────────┘
console_events.jsonl
ROOT.console.txt
Guardian Anchor
```
---
## 4. Root File Format
`receipts/console/ROOT.console.txt`:
```
# VaultMesh Console Root
engine_id=engine:console
merkle_root=8a71c1c0b9c6...
events=128
updated_at=2025-12-07T05:30:00Z
```
| Field | Description |
|-------|-------------|
| `engine_id` | Fixed identifier (`engine:console`) |
| `merkle_root` | Hex-encoded Merkle root over line hashes |
| `events` | Number of receipts in `console_events.jsonl` |
| `updated_at` | ISO 8601 timestamp of last update |
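A reader for this format is a few lines; the sketch below parses the `key=value` body shown above (this is a hypothetical helper, not the engine's own parser):

```python
def parse_root_file(text: str) -> dict:
    """Parse the key=value body of a ROOT.<scroll>.txt file.

    Lines starting with '#' are comments and are skipped.
    """
    info = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        info[key] = value
    info["events"] = int(info.get("events", 0))  # the only numeric field
    return info
```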
---
## 5. DID Scheme
```
did:vm:agent:opencode-<session-id> # Per-session agent identity
did:vm:agent:opencode-sovereign # Persistent sovereign agent
did:vm:service:console-gateway # MCP gateway service
```
For Phase 1, DIDs are treated as opaque strings. Full Identity engine integration comes later.
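Even while DIDs stay opaque, splitting one into its parts is occasionally useful for routing and display. The helper below is a hypothetical sketch (not part of the Identity engine) that only validates the `did:vm:` prefix:

```python
from typing import NamedTuple

class VmDid(NamedTuple):
    kind: str  # e.g. "agent", "human", "service", "node"
    name: str

def parse_vm_did(did: str) -> VmDid:
    """Split a did:vm:<kind>:<name> string into its components."""
    parts = did.split(":", 3)
    if len(parts) != 4 or parts[0] != "did" or parts[1] != "vm":
        raise ValueError(f"not a did:vm DID: {did!r}")
    return VmDid(kind=parts[2], name=parts[3])
```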
---
## 6. CLI Commands
```bash
# Session management
vm-console session list --status active
vm-console session show session-1765123456
vm-console session kill session-1765123456 --reason "Manual termination"
# Spawn agents
vm-console spawn opencode --task "Implement Treasury engine" --project /root/work/vaultmesh
vm-console spawn cai --task "Audit authentication flow" --capabilities offsec_read
# Approvals
vm-console approvals pending
vm-console approve action-12345 --reason "Looks safe"
vm-console reject action-12345 --reason "Too risky"
# History and audit
vm-console history --session session-1765123456
vm-console audit --date 2025-12-07 --agent-type opencode
vm-console receipts --scroll Console --limit 100
```
---
## 7. MCP Tools
### 7.1 Read-Only Tools
| Tool | Description |
|------|-------------|
| `console_session_list` | List active/completed sessions |
| `console_session_status` | Get detailed session status |
| `console_receipts_search` | Search Console scroll receipts |
### 7.2 Write Tools
| Tool | Capability Required | Description |
|------|---------------------|-------------|
| `console_spawn_agent` | `console_spawn` | Spawn a new agent session |
| `console_approve_action` | `console_approve` | Approve/reject pending action |
---
## 8. Python API
### 8.1 Emitting Receipts
```python
from engines.console.receipts import emit_console_receipt
# Session start
emit_console_receipt(
"console_session_start",
{
"agent_type": "opencode",
"model_id": "claude-opus-4-5",
"caller": "did:vm:human:karol",
"project_path": "/root/work/vaultmesh"
},
session_id="session-1765123456",
)
# File edit
emit_console_receipt(
"console_file_edit",
{
"file_path": "engines/console/receipts.py",
"old_hash": "blake3:abc123...",
"new_hash": "blake3:def456...",
"edit_type": "modify",
"lines_changed": 42
},
session_id="session-1765123456",
)
# Session end
emit_console_receipt(
"console_session_end",
{
"duration_ms": 3600000,
"commands_executed": 47,
"files_modified": 12,
"exit_reason": "completed"
},
session_id="session-1765123456",
)
```
### 8.2 Reading Root Info
```python
from engines.console.receipts import get_emitter
emitter = get_emitter()
info = emitter.get_root_info()
print(f"Events: {info['events']}, Root: {info['merkle_root'][:16]}...")
```
---
## 9. HTTP Bridge
For OpenCode plugin integration, a FastAPI sidecar exposes the receipt emitter:
```python
# scripts/console_receipts_server.py
from fastapi import FastAPI
from pydantic import BaseModel
import uvicorn
from engines.console.receipts import emit_console_receipt, ReceiptType
app = FastAPI()
class ReceiptIn(BaseModel):
type: ReceiptType
session_id: str | None = None
payload: dict
@app.post("/v1/console/receipt")
async def console_receipt(rec: ReceiptIn):
record = emit_console_receipt(
receipt_type=rec.type,
payload=rec.payload,
session_id=rec.session_id,
)
return {"ok": True, "record": record}
if __name__ == "__main__":
uvicorn.run(app, host="127.0.0.1", port=9110)
```
---
## 10. OpenCode Plugin
The `@vaultmesh/opencode-plugin` hooks into OpenCode's lifecycle:
```typescript
export const VaultMeshConsolePlugin = async (ctx) => {
const sessionId = await initSession(ctx);
return {
hooks: {
onSessionStart: async () => { /* emit console_session_start */ },
onSessionEnd: async (result) => { /* emit console_session_end */ },
onToolCall: async (tool, params, result) => { /* emit console_tool_call */ },
onFileEdit: async (path, oldContent, newContent) => { /* emit console_file_edit */ },
},
tool: {
vm_anchor: tool({ /* trigger Guardian anchor */ }),
vm_receipt_search: tool({ /* search Console receipts */ }),
vm_identity: tool({ /* get session identity */ }),
},
};
};
```
---
## 11. Integration Points
### 11.1 Guardian
Console root is included in the ProofChain anchor cycle:
```python
# Guardian reads ROOT.console.txt alongside other scroll roots
roots = {
"console": read_root("receipts/console/ROOT.console.txt"),
"drills": read_root("ROOT.drills.txt"),
# ... other scrolls
}
anchor_hash = compute_combined_root(roots)
```
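One way `compute_combined_root` could work is a hash over the scroll roots in a canonical order; the sketch below uses sha256 as a stand-in for blake3 (which is not in the standard library) and is an assumption about Guardian's actual scheme:

```python
import hashlib

def compute_combined_root(roots: dict[str, str]) -> str:
    """Combine per-scroll Merkle roots (hex strings) into one anchor hash.

    Sorting by scroll name makes the result independent of dict order.
    sha256 stands in for blake3 here; the real Guardian may differ.
    """
    h = hashlib.sha256()
    for scroll in sorted(roots):
        h.update(scroll.encode("utf-8"))
        h.update(bytes.fromhex(roots[scroll]))
    return h.hexdigest()
```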
### 11.2 Identity
Session DIDs resolve via the Identity engine:
```json
{
"did": "did:vm:agent:opencode-session-1765123456",
"type": "agent",
"controller": "did:vm:human:karol",
"capabilities": ["file_read", "file_write", "bash_execute"],
"session_id": "session-1765123456",
"expires_at": "2025-12-07T05:00:00Z"
}
```
### 11.3 Governance
Dangerous operations trigger constitutional compliance checks:
```python
async def check_before_execute(action: str, target: str):
if action in DANGEROUS_OPERATIONS:
result = await governance_engine.check_compliance(
action=action,
actor=current_session.identity,
target=target,
)
if not result.compliant:
raise ConstitutionalViolation(result.articles_violated)
```
---
## 12. Design Gate Checklist
| Question | Answer |
|----------|--------|
| Clear entrypoint? | ✅ `opencode --sovereign`, `vm-console spawn`, MCP tools |
| Contract produced? | ✅ Session contract in `console_session_start` payload |
| State object? | ✅ Derived session state from receipts |
| Receipts emitted? | ✅ 9 receipt types (including genesis) |
| Append-only JSONL? | ✅ `receipts/console/console_events.jsonl` |
| Merkle root? | ✅ `receipts/console/ROOT.console.txt` |
| Guardian anchor path? | ✅ Console root included in ProofChain |
| Query tool? | ✅ `vm-console`, MCP tools, Portal dashboard |
---
## 13. Future Extensions
### 13.1 Phase 2: Albedo 🜄
- OpenCode plugin integration
- HTTP bridge for receipt emission
- Real session tracking
### 13.2 Phase 3: Citrinitas 🜆
- `vm-console` CLI with full commands
- MCP server tools
- Session replay and audit
### 13.3 Phase 4: Rubedo 🜂
- Multi-agent orchestration
- Cross-session task continuity
- Federation support for remote agents
- Full Identity engine integration
# VAULTMESH-CONSTITUTIONAL-GOVERNANCE.md
**The Laws That Govern the Ledger**
> *A civilization without laws is just a database.*
Constitutional Governance defines the rules, amendments, and enforcement mechanisms that govern VaultMesh itself. This is the meta-layer — the constitution that the engines must obey.
---
## 1. Governance Philosophy
### 1.1 Why a Constitution?
VaultMesh isn't just infrastructure — it's a **trust machine**. Trust requires:
- **Predictability**: Rules don't change arbitrarily
- **Transparency**: Changes are visible and receipted
- **Legitimacy**: Changes follow defined procedures
- **Accountability**: Violations have consequences
The Constitution provides these guarantees.
### 1.2 Constitutional Hierarchy
```
┌─────────────────────────────────────────────────────┐
│ IMMUTABLE AXIOMS │
│ (Cannot be changed, ever) │
│ • Receipts are append-only │
│ • Hashes are cryptographically verified │
│ • All changes are receipted │
└─────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────┐
│ CONSTITUTIONAL ARTICLES │
│ (Can be amended with supermajority + ratification) │
│ • Governance procedures │
│ • Engine authorities │
│ • Federation rules │
└─────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────┐
│ STATUTORY RULES │
│ (Can be changed with standard procedures) │
│ • Operational parameters │
│ • Default configurations │
│ • Policy settings │
└─────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────┐
│ EXECUTIVE ORDERS │
│ (Can be issued by authorized actors) │
│ • Emergency responses │
│ • Temporary measures │
│ • Operational decisions │
└─────────────────────────────────────────────────────┘
```
---
## 2. Governance Scroll
| Property | Value |
|----------|-------|
| **Scroll Name** | `Governance` |
| **JSONL Path** | `receipts/governance/governance_events.jsonl` |
| **Root File** | `ROOT.governance.txt` |
| **Receipt Types** | `gov_proposal`, `gov_vote`, `gov_ratification`, `gov_amendment`, `gov_executive_order`, `gov_violation`, `gov_enforcement` |
---
## 3. The Constitution
### 3.1 Preamble
```markdown
# VAULTMESH CONSTITUTION v1.0
We, the architects and stewards of VaultMesh, establish this Constitution to:
1. Preserve the integrity of the Civilization Ledger
2. Ensure transparent and accountable governance
3. Protect the sovereignty of all participants
4. Enable durable, cross-generational trust
This Constitution is the supreme law of this VaultMesh instance.
All engines, agents, and actors are bound by its provisions.
```
### 3.2 Immutable Axioms
```json
{
"axioms": [
{
"id": "AXIOM-001",
"name": "Append-Only Receipts",
"statement": "Receipts, once written, shall never be modified or deleted. The ledger is append-only.",
"rationale": "Immutability is the foundation of trust.",
"immutable": true
},
{
"id": "AXIOM-002",
"name": "Cryptographic Integrity",
"statement": "All receipts shall include cryptographic hashes computed from their content. Hash algorithms may be upgraded but never weakened.",
"rationale": "Verification requires mathematical certainty.",
"immutable": true
},
{
"id": "AXIOM-003",
"name": "Universal Receipting",
"statement": "All significant state changes shall produce receipts. No governance action is valid without a receipt.",
"rationale": "What is not receipted did not happen.",
"immutable": true
},
{
"id": "AXIOM-004",
"name": "Constitutional Supremacy",
"statement": "No engine, agent, or actor may take action that violates this Constitution. Violations are void ab initio.",
"rationale": "The Constitution is the root of legitimacy.",
"immutable": true
},
{
"id": "AXIOM-005",
"name": "Axiom Immutability",
"statement": "These axioms cannot be amended, suspended, or circumvented by any procedure.",
"rationale": "Some truths must be eternal.",
"immutable": true
}
]
}
```
### 3.3 Constitutional Articles
```json
{
"articles": [
{
"id": "ARTICLE-I",
"name": "Governance Structure",
"sections": [
{
"id": "I.1",
"title": "Sovereign Authority",
"text": "The Sovereign (designated human administrator) holds ultimate authority over this VaultMesh instance, subject to the Axioms."
},
{
"id": "I.2",
"title": "Engine Authorities",
"text": "Each Engine operates within its defined domain. No Engine may exceed its constitutional authority."
},
{
"id": "I.3",
"title": "Agent Delegation",
"text": "Agents may exercise delegated authority within explicit bounds. All agent actions are attributable to their delegator."
}
]
},
{
"id": "ARTICLE-II",
"name": "Amendment Procedure",
"sections": [
{
"id": "II.1",
"title": "Proposal",
"text": "Constitutional amendments may be proposed by the Sovereign or by consensus of admin-capability holders."
},
{
"id": "II.2",
"title": "Deliberation Period",
"text": "All amendments require a minimum 7-day deliberation period before voting."
},
{
"id": "II.3",
"title": "Ratification",
"text": "Amendments require approval by the Sovereign AND successful execution of the amendment receipt."
},
{
"id": "II.4",
"title": "Effective Date",
"text": "Amendments take effect upon anchor confirmation of the ratification receipt."
}
]
},
{
"id": "ARTICLE-III",
"name": "Engine Governance",
"sections": [
{
"id": "III.1",
"title": "Engine Registry",
"text": "Only engines registered in the Constitution may operate. New engines require constitutional amendment."
},
{
"id": "III.2",
"title": "Engine Boundaries",
"text": "Each engine's authority is limited to its defined scroll(s). Cross-scroll operations require explicit authorization."
},
{
"id": "III.3",
"title": "Engine Lifecycle",
"text": "Engines may be suspended or deprecated by executive order, but removal requires amendment."
}
]
},
{
"id": "ARTICLE-IV",
"name": "Rights and Protections",
"sections": [
{
"id": "IV.1",
"title": "Audit Rights",
"text": "Any authorized party may audit any receipt. Audit requests shall not be unreasonably denied."
},
{
"id": "IV.2",
"title": "Data Sovereignty",
"text": "Data subjects retain rights over their personal data as defined by applicable law."
},
{
"id": "IV.3",
"title": "Due Process",
"text": "No capability shall be revoked without notice and opportunity to respond, except in emergencies."
}
]
},
{
"id": "ARTICLE-V",
"name": "Federation",
"sections": [
{
"id": "V.1",
"title": "Federation Authority",
"text": "Federation agreements require Sovereign approval."
},
{
"id": "V.2",
"title": "Federation Limits",
"text": "No federation agreement may compromise the Axioms or require violation of this Constitution."
},
{
"id": "V.3",
"title": "Federation Termination",
          "text": "Federation agreements may be terminated with 30 days' notice, or immediately upon material breach."
}
]
},
{
"id": "ARTICLE-VI",
"name": "Emergency Powers",
"sections": [
{
"id": "VI.1",
"title": "Emergency Declaration",
"text": "The Sovereign may declare an emergency upon credible threat to system integrity."
},
{
"id": "VI.2",
"title": "Emergency Powers",
"text": "During emergencies, the Sovereign may suspend normal procedures except the Axioms."
},
{
"id": "VI.3",
"title": "Emergency Duration",
"text": "Emergencies automatically expire after 72 hours unless renewed. All emergency actions must be receipted."
}
]
}
]
}
```
### 3.4 Engine Registry
```json
{
"registered_engines": [
{
"engine_id": "engine:drills",
"name": "Security Drills",
"scroll": "Drills",
"authority": "Security training and exercise management",
"registered_at": "2025-06-01T00:00:00Z",
"status": "active"
},
{
"engine_id": "engine:oracle",
"name": "Compliance Oracle",
"scroll": "Compliance",
"authority": "Compliance question answering and attestation",
"registered_at": "2025-06-01T00:00:00Z",
"status": "active"
},
{
"engine_id": "engine:guardian",
"name": "Guardian",
"scroll": "Guardian",
"authority": "Anchoring, monitoring, and security response",
"registered_at": "2025-06-01T00:00:00Z",
"status": "active"
},
{
"engine_id": "engine:treasury",
"name": "Treasury",
"scroll": "Treasury",
"authority": "Financial tracking and settlement",
"registered_at": "2025-12-01T00:00:00Z",
"status": "active"
},
{
"engine_id": "engine:mesh",
"name": "Mesh",
"scroll": "Mesh",
"authority": "Topology and federation management",
"registered_at": "2025-12-01T00:00:00Z",
"status": "active"
},
{
"engine_id": "engine:offsec",
"name": "OffSec",
"scroll": "OffSec",
"authority": "Security operations and incident response",
"registered_at": "2025-12-01T00:00:00Z",
"status": "active"
},
{
"engine_id": "engine:identity",
"name": "Identity",
"scroll": "Identity",
"authority": "DID, credential, and capability management",
"registered_at": "2025-12-01T00:00:00Z",
"status": "active"
},
{
"engine_id": "engine:observability",
"name": "Observability",
"scroll": "Observability",
"authority": "Telemetry and health monitoring",
"registered_at": "2025-12-01T00:00:00Z",
"status": "active"
},
{
"engine_id": "engine:automation",
"name": "Automation",
"scroll": "Automation",
"authority": "Workflow and agent execution",
"registered_at": "2025-12-01T00:00:00Z",
"status": "active"
},
{
"engine_id": "engine:psi",
"name": "Psi-Field",
"scroll": "PsiField",
"authority": "Consciousness and transmutation tracking",
"registered_at": "2025-12-01T00:00:00Z",
"status": "active"
},
{
"engine_id": "engine:federation",
"name": "Federation",
"scroll": "Federation",
"authority": "Cross-mesh trust and verification",
"registered_at": "2025-12-01T00:00:00Z",
"status": "active"
},
{
"engine_id": "engine:governance",
"name": "Governance",
"scroll": "Governance",
"authority": "Constitutional enforcement and amendment",
"registered_at": "2025-06-01T00:00:00Z",
"status": "active"
}
]
}
```
---
## 4. Governance Procedures
### 4.1 Amendment Workflow
```
┌──────────────┐
│ PROPOSAL │
│ │
│ Author drafts│
│ amendment │
└──────┬───────┘
┌──────────────┐
│ SUBMISSION │
│ │
│ Submit via │
│ gov_proposal │
│ receipt │
└──────┬───────┘
┌──────────────┐ 7+ days
│ DELIBERATION │◄────────────┐
│ │ │
│ Public │ Comments │
│ discussion │─────────────┘
└──────┬───────┘
┌──────────────┐
│ VOTING │
│ │
│ Sovereign + │
│ Admin quorum │
└──────┬───────┘
├─────── REJECTED ──────► Archive
▼ APPROVED
┌──────────────┐
│ RATIFICATION │
│ │
│ Sovereign │
│ signs │
└──────┬───────┘
┌──────────────┐
│ ACTIVATION │
│ │
│ Upon anchor │
│ confirmation │
└──────────────┘
```
### 4.2 Proposal Receipt
```json
{
"type": "gov_proposal",
"proposal_id": "PROP-2025-12-001",
"proposal_type": "amendment",
"title": "Add Data Retention Article",
"author": "did:vm:human:sovereign",
"submitted_at": "2025-12-06T10:00:00Z",
"deliberation_ends": "2025-12-13T10:00:00Z",
"content": {
"target": "ARTICLE-VII",
"action": "add",
"text": {
"id": "ARTICLE-VII",
"name": "Data Retention",
"sections": [
{
"id": "VII.1",
"title": "Retention Periods",
"text": "Receipts shall be retained for a minimum of 10 years..."
}
]
}
},
"rationale": "Compliance with emerging EU digital infrastructure regulations requires explicit retention policies.",
"impact_assessment": {
"affected_engines": ["all"],
"backward_compatible": true,
"migration_required": false
},
"status": "deliberation",
"tags": ["governance", "proposal", "amendment"],
"root_hash": "blake3:aaa111..."
}
```
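ARTICLE-II.2's 7-day deliberation period can be checked mechanically from the two timestamps in this receipt. A minimal sketch, assuming ISO 8601 timestamps with a trailing `Z` as shown above:

```python
from datetime import datetime, timedelta

MIN_DELIBERATION = timedelta(days=7)  # ARTICLE-II.2

def _parse_ts(ts: str) -> datetime:
    # datetime.fromisoformat() before Python 3.11 rejects 'Z', so normalize it.
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def deliberation_ok(proposal: dict) -> bool:
    """True if the gov_proposal honors the minimum deliberation period."""
    submitted = _parse_ts(proposal["submitted_at"])
    ends = _parse_ts(proposal["deliberation_ends"])
    return ends - submitted >= MIN_DELIBERATION
```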
### 4.3 Vote Receipt
```json
{
"type": "gov_vote",
"vote_id": "VOTE-2025-12-001-sovereign",
"proposal_id": "PROP-2025-12-001",
"voter": "did:vm:human:sovereign",
"voted_at": "2025-12-14T10:00:00Z",
"vote": "approve",
"weight": 1.0,
"comments": "Essential for regulatory compliance.",
"signature": "z58D...",
"tags": ["governance", "vote", "approve"],
"root_hash": "blake3:bbb222..."
}
```
### 4.4 Ratification Receipt
```json
{
"type": "gov_ratification",
"ratification_id": "RAT-2025-12-001",
"proposal_id": "PROP-2025-12-001",
"ratified_at": "2025-12-14T12:00:00Z",
"ratified_by": "did:vm:human:sovereign",
"vote_summary": {
"approve": 1,
"reject": 0,
"abstain": 0
},
"quorum_met": true,
"effective_at": "pending_anchor",
"constitution_version_before": "1.0.0",
"constitution_version_after": "1.1.0",
"signature": "z58D...",
"tags": ["governance", "ratification", "amendment"],
"root_hash": "blake3:ccc333..."
}
```
### 4.5 Amendment Receipt
```json
{
"type": "gov_amendment",
"amendment_id": "AMEND-2025-12-001",
"proposal_id": "PROP-2025-12-001",
"ratification_id": "RAT-2025-12-001",
"effective_at": "2025-12-14T14:00:00Z",
"anchor_confirmed_at": "2025-12-14T14:00:00Z",
"anchor_proof": {
"backend": "ethereum",
"tx_hash": "0x123...",
"block_number": 12345678
},
"amendment_type": "add_article",
"target": "ARTICLE-VII",
"constitution_hash_before": "blake3:const_v1.0...",
"constitution_hash_after": "blake3:const_v1.1...",
"tags": ["governance", "amendment", "effective"],
"root_hash": "blake3:ddd444..."
}
```
---
## 5. Executive Orders
For operational decisions that don't require a full amendment:
### 5.1 Executive Order Receipt
```json
{
"type": "gov_executive_order",
"order_id": "EO-2025-12-001",
"title": "Temporary Rate Limit Increase",
"issued_by": "did:vm:human:sovereign",
"issued_at": "2025-12-06T15:00:00Z",
"authority": "ARTICLE-I.1 (Sovereign Authority)",
"order_type": "parameter_change",
"content": {
"parameter": "guardian.anchor_rate_limit",
"old_value": "100/day",
"new_value": "500/day",
"reason": "Handling increased receipt volume during Q4 compliance push"
},
"duration": {
"type": "temporary",
"expires_at": "2026-01-01T00:00:00Z"
},
"tags": ["governance", "executive-order", "parameter"],
"root_hash": "blake3:eee555..."
}
```
### 5.2 Emergency Declaration
```json
{
"type": "gov_executive_order",
"order_id": "EO-2025-12-002",
"title": "Security Emergency Declaration",
"issued_by": "did:vm:human:sovereign",
"issued_at": "2025-12-06T03:50:00Z",
"authority": "ARTICLE-VI.1 (Emergency Declaration)",
"order_type": "emergency",
"content": {
"emergency_type": "security_incident",
"threat_description": "Active intrusion attempt detected on BRICK-02",
"powers_invoked": [
"Suspend normal authentication delays",
"Enable enhanced logging on all nodes",
"Authorize immediate capability revocation"
],
"incident_reference": "INC-2025-12-001"
},
"duration": {
"type": "emergency",
"expires_at": "2025-12-09T03:50:00Z",
"renewable": true
},
"tags": ["governance", "executive-order", "emergency", "security"],
"root_hash": "blake3:fff666..."
}
```
---
## 6. Violation and Enforcement
### 6.1 Violation Detection
Guardian monitors for constitutional violations:
```json
{
"type": "gov_violation",
"violation_id": "VIOL-2025-12-001",
"detected_at": "2025-12-06T16:00:00Z",
"detected_by": "engine:guardian",
"violation_type": "unauthorized_action",
"severity": "high",
"details": {
"actor": "did:vm:agent:automation-01",
"action_attempted": "modify_receipt",
"receipt_targeted": "receipt:compliance:oracle-answer-4721",
"rule_violated": "AXIOM-001 (Append-Only Receipts)",
"action_result": "blocked"
},
"evidence": {
"log_entries": ["..."],
"request_hash": "blake3:...",
"stack_trace": "..."
},
"tags": ["governance", "violation", "axiom", "blocked"],
"root_hash": "blake3:ggg777..."
}
```
### 6.2 Enforcement Action
```json
{
"type": "gov_enforcement",
"enforcement_id": "ENF-2025-12-001",
"violation_id": "VIOL-2025-12-001",
"enforced_at": "2025-12-06T16:05:00Z",
"enforced_by": "engine:guardian",
"enforcement_type": "capability_suspension",
"target": "did:vm:agent:automation-01",
"action_taken": {
"capability_suspended": "write",
"scope": "all_scrolls",
"duration": "pending_review"
},
"authority": "ARTICLE-IV.3 (Due Process) - emergency exception",
"review_required": true,
"review_deadline": "2025-12-07T16:05:00Z",
"tags": ["governance", "enforcement", "suspension"],
"root_hash": "blake3:hhh888..."
}
```
---
## 7. CLI Commands
```bash
# Constitution
vm-gov constitution show
vm-gov constitution version
vm-gov constitution diff v1.0.0 v1.1.0
vm-gov constitution export --format pdf
# Proposals
vm-gov proposal create --type amendment --file proposal.json
vm-gov proposal list --status deliberation
vm-gov proposal show PROP-2025-12-001
vm-gov proposal comment PROP-2025-12-001 --text "I support this because..."
# Voting
vm-gov vote PROP-2025-12-001 --vote approve --comment "Essential change"
vm-gov vote PROP-2025-12-001 --vote reject --reason "Needs more deliberation"
# Ratification (Sovereign only)
vm-gov ratify PROP-2025-12-001
# Executive Orders
vm-gov order create --type parameter_change --file order.json
vm-gov order list --active
vm-gov order show EO-2025-12-001
vm-gov order revoke EO-2025-12-001
# Emergencies
vm-gov emergency declare --type security_incident --description "..." --incident INC-2025-12-001
vm-gov emergency status
vm-gov emergency extend --hours 24
vm-gov emergency end
# Violations
vm-gov violations list --severity high,critical
vm-gov violations show VIOL-2025-12-001
vm-gov violations review VIOL-2025-12-001 --decision dismiss --reason "False positive"
# Enforcement
vm-gov enforcement list --pending-review
vm-gov enforcement review ENF-2025-12-001 --decision uphold
vm-gov enforcement review ENF-2025-12-001 --decision reverse --reason "Excessive response"
```
---
## 8. Design Gate Checklist
| Question | Governance Answer |
|----------|-------------------|
| Clear entrypoint? | ✅ CLI (`vm-gov`), Portal routes |
| Contract produced? | ✅ Proposal documents |
| State object? | ✅ Constitution + amendment state |
| Receipts emitted? | ✅ Seven receipt types |
| Append-only JSONL? | ✅ `receipts/governance/governance_events.jsonl` |
| Merkle root? | ✅ `ROOT.governance.txt` |
| Guardian anchor path? | ✅ Governance root included in ProofChain |
| Query tool? | ✅ `vm-gov` CLI |
---
## 9. Constitutional Hash Chain
The Constitution itself is version-controlled with a hash chain:
```json
{
"constitution_versions": [
{
"version": "1.0.0",
"effective_at": "2025-06-01T00:00:00Z",
"hash": "blake3:const_v1.0_abc123...",
"previous_hash": null,
"amendment_id": null
},
{
"version": "1.1.0",
"effective_at": "2025-12-14T14:00:00Z",
"hash": "blake3:const_v1.1_def456...",
"previous_hash": "blake3:const_v1.0_abc123...",
"amendment_id": "AMEND-2025-12-001"
}
]
}
```
This creates an immutable chain of constitutional states — you can always verify what the rules were at any point in time.
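Verifying the linkage is a single pass over the version list. The sketch below checks only the `previous_hash` chain; recomputing each `hash` from the constitution text at that version would be the stronger check:

```python
def verify_constitution_chain(versions: list[dict]) -> bool:
    """Verify previous_hash linkage across constitution_versions.

    Expects versions in chronological order; the first entry must have
    previous_hash == None. Linkage check only, not a content check.
    """
    prev = None
    for v in versions:
        if v["previous_hash"] != prev:
            return False
        prev = v["hash"]
    return True
```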
# VAULTMESH-ETERNAL-PATTERN.md
**Canonical Design Pattern for All VaultMesh Subsystems**
> *Every serious subsystem in VaultMesh should feel different in flavor, but identical in **shape**.*
This document defines that shared shape — the **Eternal Pattern**.
It is the architectural law that binds Drills, Oracle, Guardian, Treasury, Mesh, and any future module into one Civilization Ledger.
---
## 1. Core Idea (One-Line Contract)
All VaultMesh subsystems follow this arc:
> **Real-world intent → Engine → Structured JSON → Receipt → Scroll → Guardian Anchor**
If a new feature does **not** fit this pattern, it's either:
- not finished yet, or
- not part of the Ledger core.
---
## 2. Three-Layer VaultMesh Stack
At the highest level, VaultMesh is three stacked layers:
```
┌───────────────────────────────────────────────┐
│ L1 — Experience Layer │
│ (Humans & Agents) │
│ • CLI / UI / MCP tools / agents │
│ • "Ask a question", "start a drill", │
│ "anchor now", "run settlement" │
└───────────────────────────────────────────────┘
┌───────────────────────────────────────────────┐
│ L2 — Engine Layer │
│ (Domain Engines & Contracts) │
│ • Domain logics: Drills, Oracle, Guardian, │
│ Treasury, Mesh, OffSec, etc. │
│ • Contracts (plans) │
│ • Runners (state machines) │
│ • State JSON (progress) │
└───────────────────────────────────────────────┘
┌───────────────────────────────────────────────┐
│ L3 — Ledger Layer │
│ (Receipts, Scrolls, ProofChain, Anchors) │
│ • Receipts in append-only JSONL files │
│ • Scrolls per domain (Drills, Compliance, │
│ Guardian, Treasury, Mesh, etc.) │
│ • Merkle roots (ROOT.<scroll>.txt) │
│ • Guardian anchor cycles (local/OTS/chain) │
└───────────────────────────────────────────────┘
```
Everything you build plugs into this stack.
---
## 3. Eternal Pattern — Generic Lifecycle
This is the reusable template for any new subsystem "X".
### 3.1 Experience Layer (L1) — Intent In
**Goal**: Take messy human/agent intent and normalize it.
**Typical surfaces**:
- CLI (`vm-drills`, `vm-oracle`, `guardian`, `vm-treasury`, etc.)
- MCP tools (e.g. `oracle_answer`)
- Web / TUI / dashboard
- Automation hooks (cron, CI, schedulers)
**Typical inputs**:
- "Run a security drill for IoT ↔ OT"
- "Are we compliant with Annex IV today?"
- "Anchor this ProofChain root"
- "Reconcile treasury balances between nodes"
- "Apply this mesh topology change"
**L1 should**:
- Capture the raw intent
- Attach minimal context (who, when, where)
- Hand it off to the appropriate Engine in L2
---
### 3.2 Engine Layer (L2) — Plan and Execute
Every Engine follows the same internal shape:
#### Step 1 — Plan → `contract.json`
An Engine takes the intent and creates a **contract**:
`contract.json` (or equivalent JSON struct) contains:
- `id`: unique contract / drill / run id
- `title`: short human title
- `severity` / `priority` (optional, domain-specific)
- `stages[]` / `steps[]`:
- ordered id, skill / module, workflow, role, objective
- high-level `objectives[]`
**Example** (Drill contract snippet):
```json
{
"id": "drill-1764691390",
"title": "IoT device bridging into OT with weak detection",
"stages": [
{
"id": "stage-1-iot-wireless-security",
"order": 1,
"skill": "iot-wireless-security",
"workflow": "IoT Device Recon and Fingerprinting",
"role": "primary"
},
{
"id": "stage-2-ot-ics-security",
"order": 2,
"skill": "ot-ics-security",
"workflow": "OT Asset and Network Mapping",
"role": "supporting"
}
]
}
```
For Oracle, the "contract" can be implicit:
- the `vm_oracle_answer_v1` payload is itself the "answer contract".
For future Engines (Treasury, Mesh), mirror the same concept:
- a plan file describing what will happen.
#### Step 2 — Execute → `state.json` + `outputs/`
A **runner** component walks through the contract and tracks reality.
**Typical commands**:
- `init contract.json` → `state.json`
- `next state.json` → show next stage/checklist
- `complete-stage <id> --outputs ...` → update `state.json`
The state file (e.g. `drill_state.json`) should contain:
- `drill_id` / `run_id`
- `status` (`pending` | `in_progress` | `completed` | `aborted`)
- `stages[]` with status, timestamps, attached outputs
- `created_at`, `updated_at`
- optional `tags` / `context`
**Example** (simplified):
```json
{
"drill_id": "drill-1764691390",
"status": "completed",
"created_at": "2025-12-02T10:03:00Z",
"updated_at": "2025-12-02T11:45:00Z",
"stages": [
{
"id": "stage-1-iot-wireless-security",
"status": "completed",
"outputs": [
"inventory.yaml",
"topology.png",
"findings.md"
]
}
]
}
```
Runners exist today for Drills; the same pattern will apply to other Engines.
#### Step 3 — Seal → Receipts
A **sealer** takes:
- `contract.json`
- `state.json`
- `outputs/` (optional, usually via manifest or aggregated hash)
And produces a **receipt** in L3.
**Example** (Drills sealer behavior):
- Copies contract + state into `cases/drills/<drill-id>/`
- Mirrors `outputs/`
- Computes blake3 or similar hash over `drill_state.json` (and later outputs manifest)
- Derives summary metrics:
- `status`
- `stages_total`
- `stages_completed`
- unique domains / workflows
- Appends a receipt entry to `receipts/drills/drill_runs.jsonl`
- Calls a generic receipts Merkle updater to update `ROOT.drills.txt`
- Optionally triggers ANCHOR via Guardian
This "seal" step is what promotes local execution into **civilization evidence**.
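The seal step above can be sketched in a few lines. This is a hypothetical condensation of the sealer's core (hash the state, derive metrics, append a receipt line); sha256 stands in for blake3, and the real sealer also mirrors `outputs/` and updates the root file:

```python
import hashlib
import json

def seal_run(state: dict, jsonl_path: str) -> dict:
    """Hash a run's state, derive summary metrics, append a receipt line."""
    canonical = json.dumps(state, sort_keys=True, separators=(",", ":"))
    receipt = {
        "type": "security_drill_run",
        "drill_id": state["drill_id"],
        "status": state["status"],
        "stages_total": len(state["stages"]),
        "stages_completed": sum(
            1 for s in state["stages"] if s["status"] == "completed"
        ),
        "root_hash": "sha256:" + hashlib.sha256(canonical.encode()).hexdigest(),
    }
    with open(jsonl_path, "a") as f:  # append-only: never rewrite earlier lines
        f.write(json.dumps(receipt) + "\n")
    return receipt
```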
---
### 3.3 Ledger Layer (L3) — Scrolls, Roots, Anchors
L3 is shared by all subsystems; only field names differ.
#### 3.3.1 Scrolls
A **scroll** is just a logical ledger space for a domain.
**Examples**:
- `Drills` (security drills and exercises)
- `Compliance` (Oracle answers)
- `Guardian` (anchor events, healing proofs)
- `Treasury` (credit/debit/settlements)
- `Mesh` (topology & configuration changes)
- `OffSec` (real incident & red-team case receipts)
- `Identity` (DIDs, credentials, auth events)
- `Observability` (metrics, logs, traces, alerts)
- `Automation` (workflow executions, approvals)
Each scroll has:
- 1+ JSONL files under `receipts/<scroll>/`
- 1 Merkle root file `ROOT.<scroll>.txt`
#### 3.3.2 Receipts
Receipts are append-only JSON objects with at least:
- `type`: operation type (e.g. `security_drill_run`, `oracle_answer`)
- domain-specific fields
- one or more hash fields:
- `root_hash` / `answer_hash` / etc.
- optional `tags`:
- `tags: [ "drill", "iot", "kubernetes" ]`
**Drill Receipt** (shape):
```json
{
"type": "security_drill_run",
"drill_id": "drill-1764691390",
"prompt": "IoT device bridging into OT network with weak detection",
"timestamp_started": "2025-12-02T10:03:00Z",
"timestamp_completed": "2025-12-02T11:45:00Z",
"status": "completed",
"stages_total": 3,
"stages_completed": 3,
"domains": ["iot-wireless-security", "ot-ics-security", "detection-defense-ir"],
"workflows": [
"IoT Device Recon and Fingerprinting",
"OT Asset and Network Mapping",
"IR Triage and Containment"
],
"severity": "unknown",
"tags": ["drill", "iot", "ot", "detection"],
"root_hash": "<blake3(drill_state.json or bundle)>",
"proof_path": "cases/drills/drill-1764691390/PROOF.json",
"artifacts_manifest": "cases/drills/drill-1764691390/ARTIFACTS.sha256"
}
```
**Oracle Answer Receipt** (shape):
```json
{
"scroll": "Compliance",
"issuer": "did:vm:node:oracle-01",
"body": {
"op_type": "oracle_answer",
"question": "Are we compliant with Annex IV?",
"model_id": "gpt-4.1",
"citations_used": ["VM-AI-TECHDOC-001 §4.2", "..."],
"compliance_flags": {
"insufficient_context": false,
"ambiguous_requirements": true,
"out_of_scope_question": false
},
"answer_hash": "blake3:...",
"context_docs": ["VM-AI-TECHDOC-001_Annex_IV_Technical_Documentation.docx"],
"frameworks": ["AI_Act"],
"extra": {
"version": "v0.5.0",
"prompt_version": "vm_oracle_answer_v1"
}
}
}
```
#### 3.3.3 ProofChain & Guardian Anchors
- A receipts update tool (or ProofChain engine) computes Merkle roots over each scroll's JSONL.
- Guardian sees the new root via `ProofChain.current_root_hex()`.
- Guardian's Anchor module:
- Submits `root_hex` → anchor backend (HTTP/CLI/blockchain/OTS)
- Keeps an internal `AnchorStatus` (`last_root`, `last_anchor_id`, `count`).
- Emits `SecurityEvents` (`AnchorSuccess`, `AnchorFailure`, `AnchorDivergence`).
---
## 4. Existing Subsystems Mapped to the Pattern
### 4.1 Security Drills (Security Lab Suite)
**Experience (L1)**:
- `security_lab_router.py` (select skill)
- `security_lab_chain_engine.py` (multi-skill chain)
- CLI usage:
- `security_lab_chain_engine.py --contract "prompt"` → `contract.json`
- `security_lab_drill_runner.py init/next/complete-stage`
**Engine (L2)**:
- `contract.json` (drill plan)
- `drill_state.json` (progress)
- Runners hydrate stages from runbooks (actions, expected_outputs).
**Ledger (L3)**:
- `security_drill_seal_run.py`:
- Syncs case directory
- Hashes state
- Appends drill receipt → `receipts/drills/drill_runs.jsonl`
- Updates `ROOT.drills.txt`
- Optionally auto-anchors using existing anchor scripts.
---
### 4.2 Oracle Node (Compliance Appliance)
**Experience (L1)**:
- MCP server exposing `oracle_answer` tool.
**Engine (L2)**:
- Corpus loader/search: `corpus/loader.py`, `corpus/search.py`
- Prompt + schema: `prompts/` (`vm_oracle_answer_v1`, `build_oracle_prompt()`)
- LLM abstraction: `oracle/llm.py`
- End-to-end:
- question → context → prompt → LLM JSON → schema validation.
**Ledger (L3)**:
- `emit_oracle_answer_receipt()`
- Hash:
- `answer_hash = "blake3:" + blake3(canonical_answer_json).hexdigest()`
- Receipts POSTed to `VAULTMESH_RECEIPT_ENDPOINT` (e.g. `/api/receipts/oracle`).
- Scroll: `Compliance`.
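The hashing step above can be sketched as canonical-JSON serialization followed by hashing. This is a sketch under stated assumptions: production uses the `blake3` package as the formula shows, while BLAKE2b stands in here so the example runs on the standard library; `canonical_json` is a hypothetical helper name.

```python
import hashlib
import json


def canonical_json(obj) -> bytes:
    # Stable key order + minimal separators: same answer, same bytes.
    return json.dumps(obj, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=False).encode()


def answer_hash(answer: dict) -> str:
    # Production uses blake3(...).hexdigest(); BLAKE2b-256 is a stdlib stand-in.
    digest = hashlib.blake2b(canonical_json(answer), digest_size=32).hexdigest()
    return "blake3:" + digest
```

Canonicalization matters: two receipts with the same answer but different key order must hash identically, or verification against the scroll root breaks.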
---
### 4.3 Guardian (Anchor-Integrated Sentinel)
**Experience (L1)**:
- `guardian_cli`:
- `guardian anchor-status`
- `guardian anchor-now` (with capability)
- Portal HTTP routes:
- `GET /guardian/anchor-status`
- `POST /guardian/anchor-now`
**Engine (L2)**:
- Rust crate `guardian`:
- Holds `ProofChain`, `AnchorClient`, `AnchorVerifier`, config.
- `run_anchor_cycle(&ProofChain)``AnchorVerdict`
- `spawn_anchor_task()` for periodic anchoring.
**Ledger (L3)**:
- Anchors + anchor events:
- `anchor_success`/`failure`/`divergence` events.
- Can be streamed into `receipts/guardian/anchor_events.jsonl` with `ROOT.guardian.txt` and anchored further (if desired).
---
## 5. Adding New Domains (Treasury, Mesh, OffSec, etc.)
When adding a new subsystem "X" (e.g. Treasury, Mesh), follow this checklist.
### 5.1 Scroll Definition
1. **Pick a scroll name**:
- Treasury / Mesh / OffSec / Identity / Observability / Automation, etc.
2. **Define**:
- JSONL path: `receipts/<scroll>/<file>.jsonl`
- Root file: `ROOT.<scroll>.txt`
3. **Define 1–3 receipt types**:
- Treasury:
- `treasury_credit`, `treasury_debit`, `treasury_settlement`
- Mesh:
- `mesh_route_change`, `mesh_node_join`, `mesh_node_leave`
### 5.2 Engine API
For each engine:
- **Define a Plan API**:
- `*_plan_*.py` → produce `contract.json`
- **Define a Runner**:
- `*_runner.py` → manage `state.json` + `outputs/`
- **Define a Sealer**:
- `*_seal_*.py` → write receipts, update roots, maybe anchor.
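The Sealer contract can be sketched as a single append-plus-reroot operation. Illustrative only: the file layout follows the scroll conventions above, but the function name is hypothetical and a whole-file SHA-256 stands in for the real per-line Merkle construction over BLAKE3.

```python
import hashlib
import json
from pathlib import Path


def seal_receipt(scroll: str, receipt: dict, base: Path = Path(".")) -> str:
    """Append one receipt to the scroll's JSONL and refresh ROOT.<scroll>.txt."""
    jsonl = base / "receipts" / scroll / f"{scroll}_runs.jsonl"
    jsonl.parent.mkdir(parents=True, exist_ok=True)
    with jsonl.open("a") as fh:  # append-only: existing lines are never rewritten
        fh.write(json.dumps(receipt, sort_keys=True) + "\n")
    root = hashlib.sha256(jsonl.read_bytes()).hexdigest()
    (base / f"ROOT.{scroll}.txt").write_text(root + "\n")
    return root
```

Anchoring (if enabled) would then submit the returned root to the Guardian anchor path, exactly as the drill sealer does.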
### 5.3 Query CLI
Add a small query layer:
- Treasury:
- `treasury_query_runs.py`:
- filters: node, asset, date range, tags.
- Mesh:
- `mesh_query_changes.py`:
- filters: node, segment, change type, date.
This makes scrolls self-explaining and agent-friendly.
---
## 6. Design Gate: "Is It Aligned With the Eternal Pattern?"
Use this quick checklist whenever you design a new feature or refactor an old one.
### 6.1 Experience Layer
- [ ] Is there a clear entrypoint (CLI, MCP tool, HTTP route)?
- [ ] Is the intent clearly represented in a structured form (arguments, payload, contract)?
### 6.2 Engine Layer
- [ ] Does the subsystem produce a contract (even if implicit)?
- [ ] Is there a state object tracking progress or outcomes?
- [ ] Are the actions and outputs visible and inspectable (e.g. via JSON + files)?
### 6.3 Ledger Layer
- [ ] Does the subsystem emit a receipt for its important operations?
- [ ] Are receipts written to an append-only JSONL file?
- [ ] Is the JSONL covered by a Merkle root in `ROOT.<scroll>.txt`?
- [ ] Does Guardian have a way to anchor the relevant root(s)?
- [ ] Is there/will there be a simple query tool for this scroll?
**If any of these is "no", you have a clear next step.**
---
## 7. Future Extensions (Stable Pattern, Evolving Domains)
The Eternal Pattern is deliberately minimal:
- It doesn't care what chain you anchor to.
- It doesn't care which LLM model you use.
- It doesn't care whether the Runner is human-driven or fully autonomous.
As VaultMesh evolves, you can:
- **Swap LLMs** → Oracle stays the same; receipts remain valid.
- **Swap anchor backends** (OTS, Ethereum, Bitcoin, custom chain) → roots remain valid.
- **Add automated agents** (vm-copilot, OffSec agents, Mesh guardians) → they all just become more Experience Layer clients of the same Engine + Ledger.
**The shape does not change.**
---
## 8. Short Human Explanation (for README / Auditors)
VaultMesh treats every serious operation — a security drill, a compliance answer, an anchor event, a treasury transfer — as a small story with a beginning, middle, and end:
1. A **human or agent** expresses intent
2. An **engine** plans and executes the work, tracking state
3. The outcome is **sealed** into an append-only ledger, hashed, merklized, and anchored
This pattern — **Intent → Engine → Receipt → Scroll → Anchor** — is the same across all domains.
It's what makes VaultMesh composable, auditable, and explainable to both humans and machines.
---
## 9. Engine Specifications Index
The following engine specifications implement the Eternal Pattern:
| Engine | Scroll | Description | Receipt Types |
|--------|--------|-------------|---------------|
| [VAULTMESH-CONSOLE-ENGINE.md](./VAULTMESH-CONSOLE-ENGINE.md) | `Console` | AI agent sessions, code operations, sovereign development | `console_session_start`, `console_session_end`, `console_command`, `console_file_edit`, `console_tool_call`, `console_approval`, `console_git_commit`, `console_agent_spawn` |
| [VAULTMESH-MESH-ENGINE.md](./VAULTMESH-MESH-ENGINE.md) | `Mesh` | Federation topology, node management, routes, capabilities | `mesh_node_join`, `mesh_node_leave`, `mesh_route_change`, `mesh_capability_grant`, `mesh_capability_revoke`, `mesh_topology_snapshot` |
| [VAULTMESH-OFFSEC-ENGINE.md](./VAULTMESH-OFFSEC-ENGINE.md) | `OffSec` | Security incidents, red team engagements, vulnerability tracking | `offsec_incident`, `offsec_redteam`, `offsec_vuln_discovery`, `offsec_remediation`, `offsec_threat_intel`, `offsec_forensic_snapshot` |
| [VAULTMESH-IDENTITY-ENGINE.md](./VAULTMESH-IDENTITY-ENGINE.md) | `Identity` | DIDs, verifiable credentials, authentication, authorization | `identity_did_create`, `identity_did_rotate`, `identity_did_revoke`, `identity_credential_issue`, `identity_credential_revoke`, `identity_auth_event`, `identity_authz_decision` |
| [VAULTMESH-OBSERVABILITY-ENGINE.md](./VAULTMESH-OBSERVABILITY-ENGINE.md) | `Observability` | Metrics, logs, traces, alerts, SLOs | `obs_metric_snapshot`, `obs_log_batch`, `obs_trace_complete`, `obs_alert_fired`, `obs_alert_resolved`, `obs_slo_report`, `obs_anomaly_detected` |
| [VAULTMESH-AUTOMATION-ENGINE.md](./VAULTMESH-AUTOMATION-ENGINE.md) | `Automation` | n8n workflows, schedules, triggers, approvals | `auto_workflow_register`, `auto_workflow_execute`, `auto_workflow_complete`, `auto_schedule_create`, `auto_trigger_fire`, `auto_approval_request`, `auto_approval_decision` |
| [VAULTMESH-PSI-FIELD-ENGINE.md](./VAULTMESH-PSI-FIELD-ENGINE.md) | `PsiField` | Alchemical consciousness, phase transitions, transmutations | `psi_phase_transition`, `psi_emergence_event`, `psi_transmutation`, `psi_resonance`, `psi_integration`, `psi_oracle_insight` |
| [VAULTMESH-FEDERATION-PROTOCOL.md](./VAULTMESH-FEDERATION-PROTOCOL.md) | `Federation` | Cross-mesh trust, witness verification, cross-anchoring | `fed_trust_proposal`, `fed_trust_established`, `fed_trust_revoked`, `fed_witness_event`, `fed_cross_anchor`, `fed_schema_sync` |
| [VAULTMESH-CONSTITUTIONAL-GOVERNANCE.md](./VAULTMESH-CONSTITUTIONAL-GOVERNANCE.md) | `Governance` | Constitutional rules, amendments, enforcement, violations | `gov_proposal`, `gov_vote`, `gov_ratification`, `gov_amendment`, `gov_executive_order`, `gov_violation`, `gov_enforcement` |
**Implementation Reference**:
| Document | Description |
|----------|-------------|
| [VAULTMESH-IMPLEMENTATION-SCAFFOLDS.md](./VAULTMESH-IMPLEMENTATION-SCAFFOLDS.md) | Rust structs, Python CLI, directory structure |
| [VAULTMESH-MCP-SERVERS.md](./VAULTMESH-MCP-SERVERS.md) | MCP server implementations for Claude integration, tool definitions, gateway |
| [VAULTMESH-DEPLOYMENT-MANIFESTS.md](./VAULTMESH-DEPLOYMENT-MANIFESTS.md) | Kubernetes manifests, Docker Compose, infrastructure-as-code |
| [VAULTMESH-MONITORING-STACK.md](./VAULTMESH-MONITORING-STACK.md) | Prometheus config, Grafana dashboards, alerting rules, metrics |
| [VAULTMESH-TESTING-FRAMEWORK.md](./VAULTMESH-TESTING-FRAMEWORK.md) | Property-based tests, integration tests, chaos tests, fixtures |
| [VAULTMESH-MIGRATION-GUIDE.md](./VAULTMESH-MIGRATION-GUIDE.md) | Version upgrades, migration scripts, rollback procedures |
Each engine specification follows the same structure:
1. **Scroll Definition** (JSONL path, root file, receipt types)
2. **Core Concepts** (domain-specific entities)
3. **Mapping to Eternal Pattern** (L1, L2, L3)
4. **Query Interface**
5. **Design Gate Checklist**
6. **Integration Points**
7. **Future Extensions**
---
# VAULTMESH-FEDERATION-PROTOCOL.md
**Cross-Mesh Trust and Receipt Sharing**
> *Sovereign meshes that verify each other become civilizations that remember together.*
The Federation Protocol defines how independent VaultMesh deployments establish trust, share receipts, and create a network of mutually-witnessing civilization ledgers.
---
## 1. Federation Philosophy
### 1.1 Sovereignty First
Each VaultMesh instance is **sovereign** — it controls its own:
- Identity roots
- Anchor backends
- Governance rules
- Data retention
- Access policies
Federation doesn't compromise sovereignty. It creates **voluntary witness relationships** where meshes choose to verify and attest to each other's receipts.
### 1.2 The Witness Network
```
┌─────────────────┐ ┌─────────────────┐
│ VaultMesh-A │◄───────►│ VaultMesh-B │
│ (Dublin) │ witness │ (Berlin) │
└────────┬────────┘ └────────┬────────┘
│ │
│ witness │
│ ┌─────────────────┐ │
└───►│ VaultMesh-C │◄───┘
│ (Singapore) │
└─────────────────┘
```
When Mesh-A anchors a root, Mesh-B and Mesh-C can:
1. Verify the anchor independently
2. Record their verification as a receipt
3. Include Mesh-A's root in their own anchor cycles
This creates **redundant civilizational memory** — even if one mesh is compromised or lost, the others retain witnessed evidence.
### 1.3 Trust Levels
| Level | Name | Description | Use Case |
|-------|------|-------------|----------|
| 0 | `isolated` | No federation | Air-gapped deployments |
| 1 | `observe` | Read-only witness | Public audit |
| 2 | `verify` | Mutual verification | Partner organizations |
| 3 | `attest` | Cross-attestation | Compliance networks |
| 4 | `integrate` | Shared scrolls | Tight federation |
---
## 2. Federation Scroll
| Property | Value |
|----------|-------|
| **Scroll Name** | `Federation` |
| **JSONL Path** | `receipts/federation/federation_events.jsonl` |
| **Root File** | `ROOT.federation.txt` |
| **Receipt Types** | `fed_trust_proposal`, `fed_trust_established`, `fed_trust_revoked`, `fed_witness_event`, `fed_cross_anchor`, `fed_schema_sync` |
---
## 3. Trust Establishment Protocol
### 3.1 Phase 1: Discovery
Meshes discover each other via:
- Manual configuration
- DNS-based discovery (`_vaultmesh._tcp.example.com`)
- DHT announcement (for public meshes)
**Discovery Record**:
```json
{
"mesh_id": "did:vm:mesh:vaultmesh-berlin",
"display_name": "VaultMesh Berlin Node",
"endpoints": {
"federation": "https://federation.vaultmesh-berlin.io",
"verification": "https://verify.vaultmesh-berlin.io"
},
"public_key": "ed25519:z6Mk...",
"scrolls_available": ["Drills", "Compliance", "Treasury"],
"trust_policy": {
"accepts_proposals": true,
"min_trust_level": 1,
"requires_mutual": true
},
"attestations": [
{
"attester": "did:vm:mesh:vaultmesh-dublin",
"attested_at": "2025-06-01T00:00:00Z",
"attestation_type": "identity_verified"
}
]
}
```
### 3.2 Phase 2: Proposal
Mesh-A proposes federation to Mesh-B:
**Trust Proposal**:
```json
{
"proposal_id": "fed-proposal-2025-12-06-001",
"proposer": "did:vm:mesh:vaultmesh-dublin",
"target": "did:vm:mesh:vaultmesh-berlin",
"proposed_at": "2025-12-06T10:00:00Z",
"expires_at": "2025-12-13T10:00:00Z",
"proposed_trust_level": 2,
"proposed_terms": {
"scrolls_to_share": ["Compliance"],
"verification_frequency": "hourly",
"retention_period_days": 365,
"data_jurisdiction": "EU",
"audit_rights": true
},
"proposer_attestations": {
"identity_proof": "...",
"capability_proof": "...",
"compliance_credentials": ["ISO27001", "SOC2"]
},
"signature": {
"algorithm": "Ed25519",
"value": "z58D..."
}
}
```
### 3.3 Phase 3: Negotiation
Target mesh reviews and may counter-propose:
**Counter-Proposal**:
```json
{
"proposal_id": "fed-proposal-2025-12-06-001",
"response_type": "counter",
"responder": "did:vm:mesh:vaultmesh-berlin",
"responded_at": "2025-12-06T14:00:00Z",
"counter_terms": {
"scrolls_to_share": ["Compliance", "Drills"],
"verification_frequency": "daily",
"retention_period_days": 180,
"additional_requirement": "quarterly_audit_call"
},
"signature": "z47C..."
}
```
### 3.4 Phase 4: Establishment
Both parties sign the final agreement:
**Federation Agreement**:
```json
{
"agreement_id": "fed-agreement-2025-12-06-001",
"parties": [
"did:vm:mesh:vaultmesh-dublin",
"did:vm:mesh:vaultmesh-berlin"
],
"established_at": "2025-12-06T16:00:00Z",
"trust_level": 2,
"terms": {
"scrolls_shared": ["Compliance", "Drills"],
"verification_frequency": "daily",
"retention_period_days": 180,
"data_jurisdiction": "EU",
"audit_rights": true,
"dispute_resolution": "arbitration_zurich"
},
"key_exchange": {
"dublin_federation_key": "ed25519:z6MkDublin...",
"berlin_federation_key": "ed25519:z6MkBerlin..."
},
"signatures": {
"did:vm:mesh:vaultmesh-dublin": {
"signed_at": "2025-12-06T15:30:00Z",
"signature": "z58D..."
},
"did:vm:mesh:vaultmesh-berlin": {
"signed_at": "2025-12-06T16:00:00Z",
"signature": "z47C..."
}
},
"agreement_hash": "blake3:abc123..."
}
```
### 3.5 Phase 5: Activation
Both meshes:
1. Store the agreement in their Federation scroll
2. Exchange current Merkle roots
3. Begin scheduled verification cycles
4. Emit `fed_trust_established` receipt
---
## 4. Witness Protocol
### 4.1 Verification Cycle
```
┌─────────────┐ ┌─────────────┐
│ Mesh-A │ │ Mesh-B │
│ (Dublin) │ │ (Berlin) │
└──────┬──────┘ └──────┬──────┘
│ │
│ 1. Anchor cycle completes │
│ ROOT.compliance.txt updated │
│ │
│ 2. POST /federation/notify │
│────────────────────────────────►│
│ { │
│ scroll: "Compliance", │
│ root: "blake3:aaa...", │
│ anchor_proof: {...} │
│ } │
│ │
│ │ 3. Verify anchor proof
│ │ against known backends
│ │
│ │ 4. Optionally fetch
│ │ receipt samples
│ │
│ 5. POST /federation/witness │
│◄────────────────────────────────│
│ { │
│ witnessed_root: "blake3:aaa",│
│ witness_result: "verified", │
│ witness_signature: "z47C..." │
│ } │
│ │
│ 6. Store witness receipt │
│ │
└──────────────────────────────────┘
```
### 4.2 Witness Receipt
```json
{
"type": "fed_witness_event",
"witness_id": "witness-2025-12-06-001",
"witnessed_mesh": "did:vm:mesh:vaultmesh-dublin",
"witnessing_mesh": "did:vm:mesh:vaultmesh-berlin",
"timestamp": "2025-12-06T12:05:00Z",
"scroll": "Compliance",
"witnessed_root": "blake3:aaa111...",
"witnessed_anchor": {
"backend": "ethereum",
"tx_hash": "0x123...",
"block_number": 12345678
},
"verification_method": "anchor_proof_validation",
"verification_result": "verified",
"samples_checked": 5,
"discrepancies": [],
"witness_signature": "z47C...",
"tags": ["federation", "witness", "compliance", "verified"],
"root_hash": "blake3:bbb222..."
}
```
### 4.3 Cross-Anchor
At trust level 3+, meshes can include each other's roots in their anchor cycles:
**Cross-Anchor Receipt**:
```json
{
"type": "fed_cross_anchor",
"cross_anchor_id": "cross-anchor-2025-12-06-001",
"anchoring_mesh": "did:vm:mesh:vaultmesh-berlin",
"anchored_mesh": "did:vm:mesh:vaultmesh-dublin",
"timestamp": "2025-12-06T12:10:00Z",
"dublin_roots_included": {
"Compliance": "blake3:aaa111...",
"Drills": "blake3:bbb222..."
},
"combined_root": "blake3:ccc333...",
"anchor_proof": {
"backend": "bitcoin",
"tx_hash": "abc123...",
"merkle_path": [...]
},
"tags": ["federation", "cross-anchor", "bitcoin"],
"root_hash": "blake3:ddd444..."
}
```
This means Dublin's receipts are now anchored on **both** Dublin's chosen backend **and** Berlin's Bitcoin anchor — double civilizational durability.
---
## 5. Federation API
### 5.1 Endpoints
```yaml
# Federation API Specification
openapi: 3.0.0
info:
title: VaultMesh Federation API
version: 1.0.0
paths:
/federation/discovery:
get:
summary: Get mesh discovery record
responses:
200:
description: Discovery record
content:
application/json:
schema:
$ref: '#/components/schemas/DiscoveryRecord'
/federation/proposals:
post:
summary: Submit trust proposal
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/TrustProposal'
responses:
202:
description: Proposal received
/federation/proposals/{id}:
get:
summary: Get proposal status
put:
summary: Respond to proposal (accept/reject/counter)
/federation/agreements:
get:
summary: List active federation agreements
/federation/agreements/{id}:
get:
summary: Get agreement details
delete:
summary: Revoke federation (with notice period)
/federation/notify:
post:
summary: Notify of new anchor (push)
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/AnchorNotification'
/federation/witness:
post:
summary: Submit witness attestation
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/WitnessAttestation'
/federation/roots:
get:
summary: Get current Merkle roots for all scrolls
parameters:
- name: scrolls
in: query
schema:
type: array
items:
type: string
/federation/receipts/{scroll}:
get:
summary: Fetch receipt samples for verification
parameters:
- name: scroll
in: path
required: true
- name: from
in: query
schema:
type: string
format: date-time
- name: limit
in: query
schema:
type: integer
default: 100
/federation/verify:
post:
summary: Request verification of specific receipt
requestBody:
content:
application/json:
schema:
type: object
properties:
receipt_hash:
type: string
scroll:
type: string
```
### 5.2 Authentication
Federation API uses mutual TLS + signed requests:
```
POST /federation/notify HTTP/1.1
Host: federation.vaultmesh-berlin.io
Content-Type: application/json
X-Mesh-ID: did:vm:mesh:vaultmesh-dublin
X-Timestamp: 2025-12-06T12:00:00Z
X-Signature: z58D...
{
"scroll": "Compliance",
"root": "blake3:aaa111...",
...
}
```
Signature covers: `${method}|${path}|${timestamp}|${body_hash}`
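Building that signing string can be sketched as below. The Ed25519 signature itself (e.g. via PyNaCl) is not shown; SHA-256 stands in for the body hash, and the function name is hypothetical.

```python
import hashlib


def signing_string(method: str, path: str, timestamp: str, body: bytes) -> str:
    """Assemble ${method}|${path}|${timestamp}|${body_hash} for X-Signature."""
    body_hash = hashlib.sha256(body).hexdigest()  # production may use BLAKE3
    return "|".join([method.upper(), path, timestamp, body_hash])
```

The receiver rebuilds the same string from the request it actually received, so any tampering with the path, timestamp, or body invalidates the signature.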
---
## 6. Conflict Resolution
### 6.1 Discrepancy Types
| Type | Description | Severity |
|------|-------------|----------|
| `root_mismatch` | Claimed root doesn't match computed | Critical |
| `anchor_invalid` | Anchor proof fails verification | Critical |
| `timestamp_drift` | Timestamps outside tolerance (>5min) | Warning |
| `schema_incompatible` | Receipt schema version mismatch | Warning |
| `sample_missing` | Requested receipt not found | Info |
### 6.2 Discrepancy Protocol
```json
{
"type": "fed_discrepancy",
"discrepancy_id": "discrepancy-2025-12-06-001",
"reporting_mesh": "did:vm:mesh:vaultmesh-berlin",
"reported_mesh": "did:vm:mesh:vaultmesh-dublin",
"timestamp": "2025-12-06T12:15:00Z",
"discrepancy_type": "root_mismatch",
"severity": "critical",
"details": {
"scroll": "Compliance",
"claimed_root": "blake3:aaa111...",
"computed_root": "blake3:xxx999...",
"sample_receipts_checked": 50,
"first_divergence_at": "receipt-sequence-4721"
},
"evidence_hash": "blake3:evidence...",
"resolution_requested": true
}
```
### 6.3 Resolution Workflow
1. **Automatic**: Re-sync and recompute
2. **Manual**: Human review of divergence
3. **Arbitration**: Third-party mesh verification
4. **Escalation**: Federation suspension pending resolution
---
## 7. Schema Synchronization
Federated meshes must agree on receipt schemas:
**Schema Sync Receipt**:
```json
{
"type": "fed_schema_sync",
"sync_id": "schema-sync-2025-12-06-001",
"meshes": ["did:vm:mesh:vaultmesh-dublin", "did:vm:mesh:vaultmesh-berlin"],
"timestamp": "2025-12-06T10:00:00Z",
"schemas_synced": {
"Compliance": {
"version": "1.2.0",
"hash": "blake3:schema1..."
},
"Drills": {
"version": "1.1.0",
"hash": "blake3:schema2..."
}
},
"backward_compatible": true,
"migration_required": false,
"tags": ["federation", "schema", "sync"],
"root_hash": "blake3:eee555..."
}
```
---
## 8. CLI Commands
```bash
# Discovery
vm-federation discover --mesh vaultmesh-berlin.io
vm-federation list-known
# Proposals
vm-federation propose \
--target did:vm:mesh:vaultmesh-berlin \
--trust-level 2 \
--scrolls Compliance,Drills \
--terms federation-terms.json
vm-federation proposals list
vm-federation proposals show fed-proposal-2025-12-06-001
vm-federation proposals accept fed-proposal-2025-12-06-001
vm-federation proposals reject fed-proposal-2025-12-06-001 --reason "incompatible_jurisdiction"
vm-federation proposals counter fed-proposal-2025-12-06-001 --terms counter-terms.json
# Agreements
vm-federation agreements list
vm-federation agreements show fed-agreement-2025-12-06-001
vm-federation agreements revoke fed-agreement-2025-12-06-001 --notice-days 30
# Verification
vm-federation verify --mesh vaultmesh-berlin --scroll Compliance
vm-federation witness-history --mesh vaultmesh-berlin --last 30d
# Status
vm-federation status
vm-federation health --all-peers
```
---
## 9. Design Gate Checklist
| Question | Federation Answer |
|----------|-------------------|
| Clear entrypoint? | ✅ CLI (`vm-federation`), Federation API |
| Contract produced? | ✅ `federation_agreement.json` |
| State object? | ✅ Agreement + witness state |
| Receipts emitted? | ✅ Six receipt types |
| Append-only JSONL? | ✅ `receipts/federation/federation_events.jsonl` |
| Merkle root? | ✅ `ROOT.federation.txt` |
| Guardian anchor path? | ✅ Federation root included in ProofChain |
| Query tool? | ✅ `vm-federation` CLI |
---
# VAULTMESH-IDENTITY-ENGINE.md
**Civilization Ledger Identity Primitive**
> *Every actor has a provenance. Every credential has a receipt.*
Identity is VaultMesh's trust anchor — managing decentralized identifiers (DIDs), verifiable credentials, authentication events, and authorization decisions with cryptographic proof chains.
---
## 1. Scroll Definition
| Property | Value |
| --------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| **Scroll Name** | `Identity` |
| **JSONL Path** | `receipts/identity/identity_events.jsonl` |
| **Root File** | `ROOT.identity.txt` |
| **Receipt Types** | `identity_did_create`, `identity_did_rotate`, `identity_did_revoke`, `identity_credential_issue`, `identity_credential_revoke`, `identity_auth_event`, `identity_authz_decision` |
---
## 2. Core Concepts
### 2.1 Decentralized Identifiers (DIDs)
A **DID** is a self-sovereign identifier for any entity in the VaultMesh ecosystem.
```json
{
"did": "did:vm:user:sovereign",
"did_document": {
"@context": ["https://www.w3.org/ns/did/v1", "https://vaultmesh.io/ns/did/v1"],
"id": "did:vm:user:sovereign",
"controller": "did:vm:user:sovereign",
"verificationMethod": [
{
"id": "did:vm:user:sovereign#key-1",
"type": "Ed25519VerificationKey2020",
"controller": "did:vm:user:sovereign",
"publicKeyMultibase": "z6Mkf5rGMoatrSj1f..."
}
],
"authentication": ["did:vm:user:sovereign#key-1"],
"assertionMethod": ["did:vm:user:sovereign#key-1"],
"capabilityInvocation": ["did:vm:user:sovereign#key-1"],
"capabilityDelegation": ["did:vm:user:sovereign#key-1"]
},
"created_at": "2025-01-15T00:00:00Z",
"updated_at": "2025-12-06T10:00:00Z",
"status": "active",
"metadata": {
"display_name": "Sovereign Operator",
"roles": ["admin", "operator"],
"organization": "did:vm:org:vaultmesh-hq"
}
}
```
**DID types** (method-specific):
- `did:vm:user:*` — human operators
- `did:vm:node:*` — infrastructure nodes (BRICKs, portals)
- `did:vm:service:*` — automated services, agents
- `did:vm:org:*` — organizations, teams
- `did:vm:device:*` — hardware devices, HSMs, YubiKeys
### 2.2 Verifiable Credentials
**Credentials** are signed attestations about a subject, issued by trusted parties.
```json
{
"credential_id": "vc:vm:2025-12-001",
"@context": [
"https://www.w3.org/2018/credentials/v1",
"https://vaultmesh.io/ns/credentials/v1"
],
"type": ["VerifiableCredential", "VaultMeshOperatorCredential"],
"issuer": "did:vm:org:vaultmesh-hq",
"issuanceDate": "2025-12-01T00:00:00Z",
"expirationDate": "2026-12-01T00:00:00Z",
"credentialSubject": {
"id": "did:vm:user:sovereign",
"role": "administrator",
"permissions": ["anchor", "admin", "oracle"],
"clearance_level": "full",
"jurisdiction": ["eu-west", "us-east"]
},
"credentialStatus": {
"id": "https://vaultmesh.io/credentials/status/2025-12-001",
"type": "RevocationList2023"
},
"proof": {
"type": "Ed25519Signature2020",
"created": "2025-12-01T00:00:00Z",
"verificationMethod": "did:vm:org:vaultmesh-hq#key-1",
"proofPurpose": "assertionMethod",
"proofValue": "z3FXQjecWufY..."
}
}
```
**Credential types**:
- `VaultMeshOperatorCredential` — human operator authorization
- `VaultMeshNodeCredential` — node identity and capabilities
- `VaultMeshServiceCredential` — service authentication
- `VaultMeshComplianceCredential` — compliance attestations
- `VaultMeshDelegationCredential` — delegated authority
### 2.3 Authentication Events
Every authentication attempt is logged with full context.
```json
{
"auth_event_id": "auth-2025-12-06-001",
"timestamp": "2025-12-06T14:30:00Z",
"subject": "did:vm:user:sovereign",
"method": "ed25519_challenge",
"result": "success",
"session_id": "session-abc123...",
"client": {
"ip": "10.77.1.100",
"user_agent": "VaultMesh-CLI/1.0",
"device_fingerprint": "blake3:fff..."
},
"node": "did:vm:node:portal-01",
"mfa_used": true,
"mfa_method": "yubikey",
"risk_score": 0.1,
"tags": ["cli", "internal", "mfa"]
}
```
**Authentication methods**:
- `ed25519_challenge` — cryptographic challenge-response
- `passkey` — WebAuthn/FIDO2
- `yubikey` — hardware security key
- `totp` — time-based OTP (fallback)
- `mtls` — mutual TLS (node-to-node)
- `api_key` — service accounts (with rotation)
### 2.4 Authorization Decisions
Every access control decision is logged for audit trails.
```json
{
"authz_event_id": "authz-2025-12-06-001",
"timestamp": "2025-12-06T14:30:05Z",
"subject": "did:vm:user:sovereign",
"action": "anchor_submit",
"resource": "scroll:treasury",
"decision": "allow",
"policy_matched": "policy:admin-full-access",
"context": {
"session_id": "session-abc123...",
"node": "did:vm:node:portal-01",
"request_id": "req-xyz789..."
},
"credentials_presented": ["vc:vm:2025-12-001"],
"evaluation_time_ms": 2
}
```
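The decision step can be sketched as first-match policy evaluation with deny-by-default. The policy shape (`id`, `subjects`, `actions`, `resources`, `effect`) is hypothetical: the document does not specify the real policy format.

```python
def authz_check(subject: str, action: str, resource: str,
                policies: list[dict]) -> dict:
    """Return a decision + matched policy, denying when nothing matches."""
    for p in policies:  # first match wins; order policies accordingly
        if (subject in p["subjects"]
                and action in p["actions"]
                and (resource in p["resources"] or "*" in p["resources"])):
            return {"decision": p["effect"], "policy_matched": p["id"]}
    return {"decision": "deny", "policy_matched": None}
```

Whatever the real evaluator looks like, the key property is the same: every call produces a decision object that can be sealed as an `identity_authz_decision` receipt.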
---
## 3. Mapping to Eternal Pattern
### 3.1 Experience Layer (L1)
**CLI** (`vm-identity`):
```bash
# DID operations
vm-identity did create --type user --name "operator-alpha"
vm-identity did show did:vm:user:sovereign
vm-identity did list --type user --status active
vm-identity did rotate --did did:vm:user:sovereign --reason "scheduled rotation"
vm-identity did revoke --did did:vm:user:operator-alpha --reason "offboarded"
# Key management
vm-identity key list --did did:vm:user:sovereign
vm-identity key add --did did:vm:user:sovereign --type ed25519 --purpose authentication
vm-identity key revoke --did did:vm:user:sovereign --key-id key-2 --reason "compromised"
# Credential operations
vm-identity credential issue --subject did:vm:user:operator-alpha --type operator --role viewer
vm-identity credential list --subject did:vm:user:sovereign
vm-identity credential verify vc:vm:2025-12-001
vm-identity credential revoke vc:vm:2025-12-001 --reason "role change"
# Authentication
vm-identity auth login --method passkey
vm-identity auth logout
vm-identity auth sessions --did did:vm:user:sovereign
vm-identity auth revoke-session session-abc123
# Authorization
vm-identity authz check --subject did:vm:user:sovereign --action anchor_submit --resource scroll:treasury
vm-identity authz policies list
vm-identity authz policy show policy:admin-full-access
# Audit
vm-identity audit --did did:vm:user:sovereign --from 2025-12-01
vm-identity audit --type auth_event --result failure --last 24h
```
**MCP Tools**:
- `identity_did_resolve` — resolve DID to DID document
- `identity_credential_verify` — verify credential validity
- `identity_auth_status` — current session status
- `identity_authz_check` — check authorization
- `identity_audit_query` — query identity events
**Portal HTTP**:
- `GET /identity/dids` — list DIDs
- `GET /identity/dids/{did}` — resolve DID
- `POST /identity/dids` — create DID
- `POST /identity/dids/{did}/rotate` — rotate keys
- `DELETE /identity/dids/{did}` — revoke DID
- `GET /identity/credentials` — list credentials
- `POST /identity/credentials` — issue credential
- `GET /identity/credentials/{id}/verify` — verify credential
- `DELETE /identity/credentials/{id}` — revoke credential
- `POST /identity/auth/challenge` — initiate auth
- `POST /identity/auth/verify` — verify auth response
- `GET /identity/sessions` — list sessions
- `DELETE /identity/sessions/{id}` — revoke session
---
### 3.2 Engine Layer (L2)
#### Step 1 — Plan → `identity_operation_contract.json`
**DID Creation Contract**:
```json
{
"operation_id": "identity-op-2025-12-06-001",
"operation_type": "did_create",
"initiated_by": "did:vm:user:sovereign",
"initiated_at": "2025-12-06T10:00:00Z",
"target": {
"did_type": "user",
"display_name": "Operator Bravo",
"initial_roles": ["operator"],
"key_type": "ed25519"
},
"approval_required": true,
"approvers": ["did:vm:user:sovereign"],
"constraints": {
"credential_auto_issue": true,
"credential_type": "VaultMeshOperatorCredential",
"credential_expiry": "365d"
}
}
```
**Credential Issuance Contract**:
```json
{
"operation_id": "identity-op-2025-12-06-002",
"operation_type": "credential_issue",
"initiated_by": "did:vm:org:vaultmesh-hq",
"initiated_at": "2025-12-06T11:00:00Z",
"credential": {
"type": "VaultMeshOperatorCredential",
"subject": "did:vm:user:operator-bravo",
"claims": {
"role": "operator",
"permissions": ["storage", "compute"],
"jurisdiction": ["eu-west"]
},
"validity_period": "365d"
},
"approval_required": false
}
```
#### Step 2 — Execute → `identity_operation_state.json`
```json
{
"operation_id": "identity-op-2025-12-06-001",
"status": "completed",
"created_at": "2025-12-06T10:00:00Z",
"updated_at": "2025-12-06T10:05:00Z",
"steps": [
{
"step": "generate_keypair",
"status": "completed",
"completed_at": "2025-12-06T10:01:00Z",
"result": {
"public_key": "z6Mkf5rGMoatrSj1f...",
"key_id": "key-1"
}
},
{
"step": "create_did_document",
"status": "completed",
"completed_at": "2025-12-06T10:02:00Z",
"result": {
"did": "did:vm:user:operator-bravo"
}
},
{
"step": "register_did",
"status": "completed",
"completed_at": "2025-12-06T10:03:00Z",
"result": {
"registered": true,
"registry_hash": "blake3:aaa..."
}
},
{
"step": "issue_credential",
"status": "completed",
"completed_at": "2025-12-06T10:04:00Z",
"result": {
"credential_id": "vc:vm:2025-12-002"
}
}
],
"approvals": {
"did:vm:user:sovereign": {
"approved_at": "2025-12-06T10:00:30Z",
"signature": "ed25519:..."
}
}
}
```
#### Step 3 — Seal → Receipts
**DID Creation Receipt**:
```json
{
"type": "identity_did_create",
"did": "did:vm:user:operator-bravo",
"did_type": "user",
"timestamp": "2025-12-06T10:03:00Z",
"created_by": "did:vm:user:sovereign",
"operation_id": "identity-op-2025-12-06-001",
"public_key_fingerprint": "SHA256:abc123...",
"did_document_hash": "blake3:bbb222...",
"initial_roles": ["operator"],
"tags": ["identity", "did", "create", "user"],
"root_hash": "blake3:ccc333..."
}
```
**DID Key Rotation Receipt**:
```json
{
"type": "identity_did_rotate",
"did": "did:vm:user:sovereign",
"timestamp": "2025-12-06T15:00:00Z",
"rotated_by": "did:vm:user:sovereign",
"old_key_fingerprint": "SHA256:old123...",
"new_key_fingerprint": "SHA256:new456...",
"reason": "scheduled rotation",
"old_key_status": "revoked",
"tags": ["identity", "did", "rotate", "key"],
"root_hash": "blake3:ddd444..."
}
```
**Credential Issuance Receipt**:
```json
{
"type": "identity_credential_issue",
"credential_id": "vc:vm:2025-12-002",
"credential_type": "VaultMeshOperatorCredential",
"timestamp": "2025-12-06T10:04:00Z",
"issuer": "did:vm:org:vaultmesh-hq",
"subject": "did:vm:user:operator-bravo",
"claims_hash": "blake3:eee555...",
"expires_at": "2026-12-06T00:00:00Z",
"operation_id": "identity-op-2025-12-06-001",
"tags": ["identity", "credential", "issue", "operator"],
"root_hash": "blake3:fff666..."
}
```
**Credential Revocation Receipt**:
```json
{
"type": "identity_credential_revoke",
"credential_id": "vc:vm:2025-12-002",
"timestamp": "2025-12-06T18:00:00Z",
"revoked_by": "did:vm:user:sovereign",
"reason": "role change",
"revocation_list_updated": true,
"tags": ["identity", "credential", "revoke"],
"root_hash": "blake3:ggg777..."
}
```
**Authentication Event Receipt**:
```json
{
"type": "identity_auth_event",
"auth_event_id": "auth-2025-12-06-001",
"timestamp": "2025-12-06T14:30:00Z",
"subject": "did:vm:user:sovereign",
"method": "passkey",
"result": "success",
"session_id": "session-abc123...",
"node": "did:vm:node:portal-01",
"client_fingerprint": "blake3:hhh888...",
"mfa_used": true,
"risk_score": 0.1,
"tags": ["identity", "auth", "success", "mfa"],
"root_hash": "blake3:iii999..."
}
```
**Authorization Decision Receipt** (for sensitive operations):
```json
{
"type": "identity_authz_decision",
"authz_event_id": "authz-2025-12-06-001",
"timestamp": "2025-12-06T14:30:05Z",
"subject": "did:vm:user:sovereign",
"action": "capability_grant",
"resource": "did:vm:node:brick-03",
"decision": "allow",
"policy_matched": "policy:admin-full-access",
"credentials_verified": ["vc:vm:2025-12-001"],
"tags": ["identity", "authz", "allow", "sensitive"],
"root_hash": "blake3:jjj000..."
}
```
---
### 3.3 Ledger Layer (L3)
**Receipt Types**:
| Type | When Emitted |
| --------------------------- | ------------------------------------- |
| `identity_did_create` | New DID registered |
| `identity_did_rotate` | DID keys rotated |
| `identity_did_revoke` | DID revoked/deactivated |
| `identity_credential_issue` | New credential issued |
| `identity_credential_revoke`| Credential revoked |
| `identity_auth_event` | Authentication attempt (success/fail) |
| `identity_authz_decision` | Sensitive authorization decision |
**Merkle Coverage**:
- All receipts append to `receipts/identity/identity_events.jsonl`
- `ROOT.identity.txt` updated after each append
- Guardian anchors Identity root in anchor cycles
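The coverage above can be sketched as a pairwise Merkle tree over the JSONL lines. VaultMesh hashes with BLAKE3 (the external `blake3` package); `hashlib.sha256` stands in here so the sketch runs on the standard library alone:

```python
# Sketch: recompute a scroll's Merkle root from its JSONL lines.
# sha256 stands in for BLAKE3 so this runs without extra packages.
import hashlib
import json

def leaf_hash(line: str) -> bytes:
    return hashlib.sha256(line.encode("utf-8")).digest()

def merkle_root(leaves: list) -> str:
    if not leaves:
        return "empty"
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the odd leaf out
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0].hex()

receipts = [
    json.dumps({"type": "identity_did_create", "did": "did:vm:user:operator-bravo"}),
    json.dumps({"type": "identity_auth_event", "result": "success"}),
]
root = merkle_root([leaf_hash(r) for r in receipts])
```

Because the JSONL file is append-only, the root only ever changes by extension, which is what makes the Guardian anchor meaningful.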
---
## 4. Query Interface
`identity_query_events.py`:
```bash
# DID history
vm-identity query --did did:vm:user:sovereign
# All auth events for a subject
vm-identity query --type auth_event --subject did:vm:user:sovereign
# Failed authentications
vm-identity query --type auth_event --result failure --last 7d
# Credentials issued by an org
vm-identity query --type credential_issue --issuer did:vm:org:vaultmesh-hq
# Authorization denials
vm-identity query --type authz_decision --decision deny
# Date range
vm-identity query --from 2025-12-01 --to 2025-12-06
# Export for compliance audit
vm-identity query --from 2025-01-01 --format csv > identity_audit_2025.csv
```
**DID Resolution History**:
```bash
# Show all versions of a DID document
vm-identity did history did:vm:user:sovereign
# Output:
# Version 1: 2025-01-15T00:00:00Z (created)
# - Key: key-1 (ed25519)
# Version 2: 2025-06-15T00:00:00Z (key rotation)
# - Key: key-1 (revoked), key-2 (ed25519)
# Version 3: 2025-12-06T15:00:00Z (key rotation)
# - Key: key-2 (revoked), key-3 (ed25519)
```
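The history view above can be reconstructed by replaying create/rotate receipts from `receipts/identity/identity_events.jsonl`. A minimal sketch, using the receipt field names shown earlier:

```python
# Sketch: rebuild a DID's version history from identity receipts.
import json

def did_history(jsonl_lines, did):
    versions = []
    for line in jsonl_lines:
        r = json.loads(line)
        if r.get("did") != did:
            continue
        if r["type"] == "identity_did_create":
            versions.append((r["timestamp"], "created"))
        elif r["type"] == "identity_did_rotate":
            versions.append((r["timestamp"], "key rotation"))
    return versions

lines = [
    json.dumps({"type": "identity_did_create", "did": "did:vm:user:sovereign",
                "timestamp": "2025-01-15T00:00:00Z"}),
    json.dumps({"type": "identity_did_rotate", "did": "did:vm:user:sovereign",
                "timestamp": "2025-06-15T00:00:00Z"}),
]
history = did_history(lines, "did:vm:user:sovereign")
```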
---
## 5. Design Gate Checklist
| Question | Identity Answer |
| --------------------- | ---------------------------------------------------------------- |
| Clear entrypoint? | ✅ CLI (`vm-identity`), MCP tools, Portal HTTP |
| Contract produced? | ✅ `identity_operation_contract.json` for DID/credential ops |
| State object? | ✅ `identity_operation_state.json` tracking multi-step operations |
| Receipts emitted? | ✅ Seven receipt types covering all identity events |
| Append-only JSONL? | ✅ `receipts/identity/identity_events.jsonl` |
| Merkle root? | ✅ `ROOT.identity.txt` |
| Guardian anchor path? | ✅ Identity root included in ProofChain |
| Query tool? | ✅ `identity_query_events.py` + DID history |
---
## 6. Key Management
### 6.1 Key Hierarchy
```
Root of Trust (Hardware)
├── Organization Master Key (HSM-protected)
│ ├── Node Signing Keys
│ │ ├── did:vm:node:brick-01#key-1
│ │ ├── did:vm:node:brick-02#key-1
│ │ └── did:vm:node:portal-01#key-1
│ ├── Service Keys
│ │ ├── did:vm:service:guardian#key-1
│ │ └── did:vm:service:oracle#key-1
│ └── Credential Issuing Keys
│ └── did:vm:org:vaultmesh-hq#issuer-key-1
└── User Keys (Self-custodied)
├── did:vm:user:sovereign#key-1
└── did:vm:user:operator-bravo#key-1
```
### 6.2 Key Rotation Policy
| Key Type | Rotation Period | Trigger Events |
| ------------------- | --------------- | --------------------------------- |
| User keys | 365 days | Compromise, role change |
| Node keys | 180 days | Compromise, node migration |
| Service keys | 90 days | Compromise, version upgrade |
| Credential issuers | 730 days | Compromise, policy change |
| Organization master | Manual only | Compromise, leadership change |
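The table above reduces to a simple due-date check. A sketch (periods as in the table; the key-type names here are illustrative shorthand):

```python
# Sketch: flag keys due for rotation per the policy table.
from datetime import datetime, timedelta

ROTATION_DAYS = {"user": 365, "node": 180, "service": 90, "issuer": 730}

def rotation_due(key_type: str, last_rotated: datetime, now: datetime) -> bool:
    period = ROTATION_DAYS.get(key_type)
    if period is None:  # organization master: manual rotation only
        return False
    return now - last_rotated >= timedelta(days=period)

due = rotation_due("service", datetime(2025, 1, 1), datetime(2025, 6, 1))
```

Trigger events (compromise, role change, etc.) bypass the schedule and force immediate rotation regardless of this check.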
### 6.3 Recovery Procedures
```json
{
"recovery_id": "recovery-2025-12-06-001",
"did": "did:vm:user:operator-bravo",
"reason": "lost_device",
"initiated_at": "2025-12-06T09:00:00Z",
"recovery_method": "social_recovery",
"guardians_required": 3,
"guardians_responded": [
{"guardian": "did:vm:user:sovereign", "approved_at": "2025-12-06T09:15:00Z"},
{"guardian": "did:vm:user:operator-alpha", "approved_at": "2025-12-06T09:20:00Z"},
{"guardian": "did:vm:user:operator-charlie", "approved_at": "2025-12-06T09:25:00Z"}
],
"status": "completed",
"new_key_registered_at": "2025-12-06T09:30:00Z"
}
```
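The threshold logic behind a social recovery is a simple count of approvals against `guardians_required`. A sketch over the record shape above:

```python
# Sketch: a recovery completes only once enough guardians approved.
def recovery_complete(recovery: dict) -> bool:
    approved = [g for g in recovery["guardians_responded"] if g.get("approved_at")]
    return len(approved) >= recovery["guardians_required"]

record = {
    "guardians_required": 3,
    "guardians_responded": [
        {"guardian": "did:vm:user:sovereign", "approved_at": "2025-12-06T09:15:00Z"},
        {"guardian": "did:vm:user:operator-alpha", "approved_at": "2025-12-06T09:20:00Z"},
        {"guardian": "did:vm:user:operator-charlie", "approved_at": "2025-12-06T09:25:00Z"},
    ],
}
done = recovery_complete(record)
```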
---
## 7. Policy Engine
### 7.1 Policy Definition
```json
{
"policy_id": "policy:admin-full-access",
"name": "Administrator Full Access",
"description": "Full access to all VaultMesh operations",
"version": 1,
"effect": "allow",
"subjects": {
"match": "credential",
"credential_type": "VaultMeshOperatorCredential",
"claims": {
"role": "administrator"
}
},
"actions": ["*"],
"resources": ["*"],
"conditions": {
"mfa_required": true,
"allowed_hours": {"start": "00:00", "end": "23:59"},
"allowed_nodes": ["*"]
}
}
```
### 7.2 Policy Evaluation
```
Request:
Subject: did:vm:user:sovereign
Action: anchor_submit
Resource: scroll:treasury
Evaluation:
1. Resolve subject credentials
2. Match policies by subject claims
3. Check action/resource match
4. Evaluate conditions (MFA, time, location)
5. Log decision with full context
6. Return allow/deny with reason
```
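The six steps above can be sketched as a first-match evaluation loop over policies in the 7.1 shape; no match denies by default. This is a sketch of the flow, not the production engine:

```python
# Sketch: policy evaluation — first matching policy wins, default deny.
def evaluate(policies, subject_claims, action, resource, mfa_used):
    for p in policies:
        wanted = p["subjects"].get("claims", {})
        if any(subject_claims.get(k) != v for k, v in wanted.items()):
            continue  # step 2: subject claims do not match
        if "*" not in p["actions"] and action not in p["actions"]:
            continue  # step 3: action mismatch
        if "*" not in p["resources"] and resource not in p["resources"]:
            continue  # step 3: resource mismatch
        if p["conditions"].get("mfa_required") and not mfa_used:
            return ("deny", "mfa_required")  # step 4: condition failed
        return (p["effect"], p["policy_id"])  # steps 5-6: log + return
    return ("deny", "no_policy_matched")

admin = {"policy_id": "policy:admin-full-access", "effect": "allow",
         "subjects": {"claims": {"role": "administrator"}},
         "actions": ["*"], "resources": ["*"],
         "conditions": {"mfa_required": True}}
decision = evaluate([admin], {"role": "administrator"}, "anchor_submit",
                    "scroll:treasury", mfa_used=True)
```

Every decision, allow or deny, is then sealed as an `identity_authz_decision` receipt.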
---
## 8. Integration Points
| System | Integration |
| ---------------- | -------------------------------------------------------------------------- |
| **Guardian** | Uses Identity for anchor authentication; alerts on suspicious auth events |
| **Mesh** | Node DIDs registered via Identity; capability grants require valid credentials |
| **Treasury** | Account ownership linked to DIDs; transaction signing uses Identity keys |
| **Oracle** | Oracle queries authenticated via Identity; responses signed with service DID |
| **OffSec** | Incident response can trigger emergency credential revocations |
| **Observability**| All identity events flow to observability for correlation |
---
## 9. Future Extensions
- **Biometric binding**: Link credentials to biometric templates
- **Delegation chains**: Transitive capability delegation with constraints
- **Anonymous credentials**: Zero-knowledge proofs for privacy-preserving auth
- **Cross-mesh identity**: Federated identity across VaultMesh instances
- **Hardware attestation**: TPM/Secure Enclave binding for high-assurance
- **Identity recovery DAO**: Decentralized recovery governance
---
# VaultMesh MCP + TEM Shield Node Integration
## 1. Overview
The VaultMesh core ledger integrates with an external **OffSec Shield
Node** that runs:
- OffSec agents
- MCP backend (FastAPI)
- TEM Engine (Threat / Experience Memory)
The node is implemented in the separate `offsec-agents/` repository and
deployed to `shield-vm` (or lab nodes). VaultMesh talks to it via HTTP
and ingests receipts.
---
## 2. Node Contract
**Node identity:**
- Node ID: `shield-vm` (example)
- Role: `shield` / `offsec-node`
**MCP endpoints (examples):**
- `GET /health`
- `POST /api/command`
- `{ "cmd": "agents list" }`
- `{ "cmd": "agent spawn", "args": {...} }`
- `{ "cmd": "agent mission", "args": {...} }`
- `{ "cmd": "tem status" }`
- `GET /tem/status`
- `GET /tem/stats`
---
## 3. VaultMesh Client (Thin Shim)
VaultMesh does not embed offsec-agents. It uses a minimal HTTP client:
- Location: `scripts/offsec_node_client.py`
- Responsibilities:
- Send commands to Shield Node
- Handle timeouts / errors
- Normalize responses for `vm_cli.py`
Example call:
```python
import asyncio

from scripts.offsec_node_client import OffsecNodeClient

async def main():
    client = OffsecNodeClient(base_url="http://shield-vm:8081")
    agents = await client.command("agents list")
    status = await client.command("tem status")

asyncio.run(main())
```
---
## 4. CLI Integration
`vm_cli.py` can expose high-level commands that proxy to the Shield Node:
- `vm offsec agents`
- Calls `agents list`
- `vm offsec mission --agent <id> --target <t>`
- Calls `agent mission`
- `vm tem status`
- Calls `tem status`
These commands are optional and only work if the Shield Node is
configured in env:
- `OFFSEC_NODE_URL=http://shield-vm:8081`
If the node is unreachable, the CLI should:
- Fail gracefully
- Print a clear error message
- Not affect core ledger operations
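One way to meet those three requirements, sketched below. The `fetch` callable is a hypothetical stand-in for the HTTP client; the messages are illustrative:

```python
# Sketch: a vm_cli subcommand degrading gracefully when the Shield
# Node is unconfigured or unreachable. `fetch` is hypothetical.
import os

def offsec_agents_cmd(fetch) -> str:
    url = os.environ.get("OFFSEC_NODE_URL")
    if not url:
        return "offsec: OFFSEC_NODE_URL not set; Shield integration disabled"
    try:
        return fetch(url, "agents list")
    except (ConnectionError, TimeoutError) as exc:
        # Core ledger operations are unaffected by Shield outages.
        return f"offsec: Shield Node unreachable ({exc})"

os.environ.pop("OFFSEC_NODE_URL", None)
msg = offsec_agents_cmd(lambda url, cmd: "ok")
```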
---
## 5. Receipts and Guardian Integration
The Shield Node writes receipts locally (e.g. on shield-vm):
- `/opt/offsec-agents/receipts/offsec.jsonl`
- `/opt/offsec-agents/receipts/tem/tem_events.jsonl`
Integration options:
1. **File sync / pull**
- A sync job (cron, rsync, MinIO, etc.) copies receipts into the
VaultMesh node under:
- `receipts/shield/offsec.jsonl`
- `receipts/shield/tem_events.jsonl`
2. **API pull**
- Shield Node exposes `/receipts/export` endpoints
- VaultMesh pulls and stores under `receipts/shield/`
Guardian then:
- Computes partial roots for Shield receipts:
- `ROOT.shield.offsec.txt`
- `ROOT.shield.tem.txt`
- Includes them in the combined anchor:
```python
roots = {
"mesh": read_root("ROOT.mesh.txt"),
"treasury": read_root("ROOT.treasury.txt"),
"offsec": read_root("ROOT.offsec.txt"),
"shield_tem": read_root("ROOT.shield.tem.txt"),
}
anchor_root = compute_combined_root(roots)
```
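One plausible `compute_combined_root`, shown here as a sketch: hash the sorted `(scroll, root)` pairs so the combined anchor does not depend on dict ordering. `hashlib.sha256` stands in for BLAKE3:

```python
# Sketch: order-independent combined root over per-scroll roots.
import hashlib

def compute_combined_root(roots: dict) -> str:
    payload = "\n".join(f"{name}:{root}" for name, root in sorted(roots.items()))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

anchor = compute_combined_root({
    "mesh": "blake3:aaa...",
    "shield_tem": "blake3:bbb...",
})
```

Sorting also means a missing Shield root simply drops out of the payload rather than shifting every other scroll's position.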
---
## 6. Configuration
Example env vars for VaultMesh:
- `OFFSEC_NODE_URL=http://shield-vm:8081`
- `OFFSEC_NODE_ID=shield-vm`
- `OFFSEC_RECEIPTS_PATH=/var/lib/vaultmesh/receipts/shield`
Example env vars for Shield Node:
- `VAULTMESH_ROOT=/opt/vaultmesh`
- `TEM_DB_PATH=/opt/offsec-agents/state/tem.db`
- `TEM_RECEIPTS_PATH=/opt/offsec-agents/receipts/tem`
---
## 7. Failure Modes
If the Shield Node is:
- **Down**: CLI commands fail, core ledger continues; Guardian anchors
without Shield roots (or marks them missing).
- **Lagging**: Receipts are delayed; anchors include older Shield state.
- **Misconfigured**: CLI reports invalid node URL or protocol errors.
VaultMesh must never block core anchors solely because Shield is
unavailable; Shield is an extension, not the root of truth.
---
## 8. Design Principles
- Keep Shield node separate from VaultMesh core.
- Integrate via:
- HTTP commands
- Receipt ingestion
- Treat Shield as:
- A specialized OffSec/TEM appliance
- A contributor to the global ProofChain
---
# VAULTMESH-MESH-ENGINE.md
**Civilization Ledger Federation Primitive**
> *Nodes that anchor together, survive together.*
Mesh is VaultMesh's topology memory — tracking how nodes discover each other, establish trust, share capabilities, and evolve their federation relationships over time. Every topology change becomes evidence.
---
## 1. Scroll Definition
| Property | Value |
| --------------------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
| **Scroll Name** | `Mesh` |
| **JSONL Path** | `receipts/mesh/mesh_events.jsonl` |
| **Root File** | `ROOT.mesh.txt` |
| **Receipt Types** | `mesh_node_join`, `mesh_node_leave`, `mesh_route_change`, `mesh_capability_grant`, `mesh_capability_revoke`, `mesh_topology_snapshot` |
---
## 2. Core Concepts
### 2.1 Nodes
A **node** is any VaultMesh-aware endpoint that participates in the federation.
```json
{
"node_id": "did:vm:node:brick-01",
"display_name": "BRICK-01 (Dublin Primary)",
"node_type": "infrastructure",
"endpoints": {
"portal": "https://brick-01.vaultmesh.local:8443",
"wireguard": "10.77.1.1",
"tailscale": "brick-01.tail.net"
},
"public_key": "ed25519:abc123...",
"capabilities": ["anchor", "storage", "compute", "oracle"],
"status": "active",
"joined_at": "2025-06-15T00:00:00Z",
"last_seen": "2025-12-06T14:30:00Z",
"tags": ["production", "eu-west", "akash"]
}
```
**Node types**:
* `infrastructure` — BRICK servers, compute nodes
* `edge` — mobile devices, sovereign phones, field endpoints
* `oracle` — compliance oracle instances
* `guardian` — dedicated anchor/sentinel nodes
* `external` — federated nodes from other VaultMesh deployments
### 2.2 Routes
A **route** defines how traffic flows between nodes or segments.
```json
{
"route_id": "route-brick01-to-brick02",
"source": "did:vm:node:brick-01",
"destination": "did:vm:node:brick-02",
"transport": "wireguard",
"priority": 1,
"status": "active",
"latency_ms": 12,
"established_at": "2025-06-20T00:00:00Z"
}
```
Routes can be:
* **Direct**: Node-to-node (WireGuard, Tailscale)
* **Relayed**: Through a gateway node
* **Redundant**: Multiple paths with failover priority
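Redundant routing with failover priority can be sketched as picking the active route with the lowest priority number:

```python
# Sketch: failover route selection over the 2.2 route shape.
def select_route(routes):
    active = [r for r in routes if r["status"] == "active"]
    return min(active, key=lambda r: r["priority"]) if active else None

routes = [
    {"route_id": "primary", "priority": 1, "status": "down"},
    {"route_id": "relay", "priority": 2, "status": "active"},
]
chosen = select_route(routes)
```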
### 2.3 Capabilities
**Capabilities** are the trust primitives — what a node is permitted to do within the federation.
```json
{
"capability_id": "cap:brick-01:anchor:2025",
"node_id": "did:vm:node:brick-01",
"capability": "anchor",
"scope": "global",
"granted_by": "did:vm:node:portal-01",
"granted_at": "2025-06-15T00:00:00Z",
"expires_at": "2026-06-15T00:00:00Z",
"constraints": {
"max_anchor_rate": "100/day",
"allowed_scrolls": ["*"]
}
}
```
Standard capabilities:
* `anchor` — can submit roots to anchor backends
* `storage` — can store receipts and artifacts
* `compute` — can execute drills, run agents
* `oracle` — can issue compliance answers
* `admin` — can grant/revoke capabilities to other nodes
* `federate` — can establish trust with external meshes
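A capability check (what the `mesh_capability_check` MCP tool answers) reduces to matching node, capability, and expiry against the grant records. A sketch over the 2.3 grant shape:

```python
# Sketch: a grant counts only while unexpired.
from datetime import datetime

def has_capability(grants, node_id, capability, now):
    for g in grants:
        expires = datetime.fromisoformat(g["expires_at"].rstrip("Z"))
        if g["node_id"] == node_id and g["capability"] == capability and expires > now:
            return True
    return False

grants = [{"node_id": "did:vm:node:brick-01", "capability": "anchor",
           "expires_at": "2026-06-15T00:00:00Z"}]
ok = has_capability(grants, "did:vm:node:brick-01", "anchor", datetime(2025, 12, 6))
```

A full implementation would also honor per-grant constraints such as `max_anchor_rate` and `allowed_scrolls`.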
### 2.4 Topology Snapshots
Periodic **snapshots** capture the full mesh state — useful for auditing, disaster recovery, and proving historical topology.
---
## 3. Mapping to Eternal Pattern
### 3.1 Experience Layer (L1)
**CLI** (`vm-mesh`):
```bash
# Node operations
vm-mesh node list
vm-mesh node show brick-01
vm-mesh node join --config node-manifest.json
vm-mesh node leave --node brick-02 --reason "decommissioned"
# Route operations
vm-mesh route list
vm-mesh route add --from brick-01 --to brick-03 --transport tailscale
vm-mesh route test --route route-brick01-to-brick02
# Capability operations
vm-mesh capability list --node brick-01
vm-mesh capability grant --node brick-02 --capability oracle --expires 2026-01-01
vm-mesh capability revoke --node brick-02 --capability anchor --reason "security incident"
# Topology
vm-mesh topology show
vm-mesh topology snapshot --output snapshots/2025-12-06.json
vm-mesh topology diff --from snapshots/2025-11-01.json --to snapshots/2025-12-06.json
# Health
vm-mesh health --full
vm-mesh ping --all
```
**MCP Tools**:
* `mesh_node_status` — get node details and health
* `mesh_list_nodes` — enumerate active nodes
* `mesh_topology_summary` — current topology overview
* `mesh_capability_check` — verify if node has capability
* `mesh_route_health` — check route latency and status
**Portal HTTP**:
* `GET /mesh/nodes` — list nodes
* `GET /mesh/nodes/{node_id}` — node details
* `POST /mesh/nodes/join` — register new node
* `POST /mesh/nodes/{node_id}/leave` — deregister node
* `GET /mesh/routes` — list routes
* `POST /mesh/routes` — add route
* `GET /mesh/capabilities/{node_id}` — node capabilities
* `POST /mesh/capabilities/grant` — grant capability
* `POST /mesh/capabilities/revoke` — revoke capability
* `GET /mesh/topology` — current topology
* `POST /mesh/topology/snapshot` — create snapshot
---
### 3.2 Engine Layer (L2)
#### Step 1 — Plan → `mesh_change_contract.json`
For simple operations (single node join, route add), the contract is implicit.
For coordinated topology changes, an explicit contract:
```json
{
"change_id": "mesh-change-2025-12-06-001",
"title": "Add BRICK-03 to Dublin Cluster",
"initiated_by": "did:vm:node:portal-01",
"initiated_at": "2025-12-06T11:00:00Z",
"change_type": "node_expansion",
"operations": [
{
"op_id": "op-001",
"operation": "node_join",
"target": "did:vm:node:brick-03",
"config": {
"display_name": "BRICK-03 (Dublin Secondary)",
"node_type": "infrastructure",
"endpoints": {
"portal": "https://brick-03.vaultmesh.local:8443",
"wireguard": "10.77.1.3"
},
"public_key": "ed25519:def456..."
}
},
{
"op_id": "op-002",
"operation": "route_add",
"config": {
"source": "did:vm:node:brick-01",
"destination": "did:vm:node:brick-03",
"transport": "wireguard"
}
},
{
"op_id": "op-003",
"operation": "route_add",
"config": {
"source": "did:vm:node:brick-02",
"destination": "did:vm:node:brick-03",
"transport": "wireguard"
}
},
{
"op_id": "op-004",
"operation": "capability_grant",
"config": {
"node_id": "did:vm:node:brick-03",
"capability": "storage",
"scope": "local",
"expires_at": "2026-12-06T00:00:00Z"
}
}
],
"requires_approval": ["portal-01"],
"rollback_on_failure": true
}
```
#### Step 2 — Execute → `mesh_change_state.json`
```json
{
"change_id": "mesh-change-2025-12-06-001",
"status": "in_progress",
"created_at": "2025-12-06T11:00:00Z",
"updated_at": "2025-12-06T11:05:00Z",
"operations": [
{
"op_id": "op-001",
"status": "completed",
"completed_at": "2025-12-06T11:02:00Z",
"result": {
"node_registered": true,
"handshake_verified": true
}
},
{
"op_id": "op-002",
"status": "completed",
"completed_at": "2025-12-06T11:03:00Z",
"result": {
"route_established": true,
"latency_ms": 8
}
},
{
"op_id": "op-003",
"status": "in_progress",
"started_at": "2025-12-06T11:04:00Z"
},
{
"op_id": "op-004",
"status": "pending"
}
],
"topology_before_hash": "blake3:aaa111...",
"approvals": {
"portal-01": {
"approved_at": "2025-12-06T11:01:00Z",
"signature": "ed25519:..."
}
}
}
```
**Status transitions**:
```
draft → pending_approval → in_progress → completed
↘ partial_failure → rollback → rolled_back
↘ failed → rollback → rolled_back
```
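The diagram above can be enforced as a lookup table of legal transitions — a minimal sketch:

```python
# Sketch: legal mesh-change status transitions from the diagram.
TRANSITIONS = {
    "draft": {"pending_approval"},
    "pending_approval": {"in_progress"},
    "in_progress": {"completed", "partial_failure", "failed"},
    "partial_failure": {"rollback"},
    "failed": {"rollback"},
    "rollback": {"rolled_back"},
}

def can_transition(src: str, dst: str) -> bool:
    return dst in TRANSITIONS.get(src, set())
```

Terminal states (`completed`, `rolled_back`) have no outgoing edges, so any further transition attempt is rejected.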
#### Step 3 — Seal → Receipts
Each operation in a change produces its own receipt, plus a summary receipt for coordinated changes.
**Node Join Receipt**:
```json
{
"type": "mesh_node_join",
"node_id": "did:vm:node:brick-03",
"display_name": "BRICK-03 (Dublin Secondary)",
"node_type": "infrastructure",
"timestamp": "2025-12-06T11:02:00Z",
"initiated_by": "did:vm:node:portal-01",
"change_id": "mesh-change-2025-12-06-001",
"endpoints_hash": "blake3:...",
"public_key_fingerprint": "SHA256:...",
"tags": ["mesh", "node", "join", "infrastructure"],
"root_hash": "blake3:bbb222..."
}
```
**Route Change Receipt**:
```json
{
"type": "mesh_route_change",
"route_id": "route-brick01-to-brick03",
"operation": "add",
"source": "did:vm:node:brick-01",
"destination": "did:vm:node:brick-03",
"transport": "wireguard",
"timestamp": "2025-12-06T11:03:00Z",
"initiated_by": "did:vm:node:portal-01",
"change_id": "mesh-change-2025-12-06-001",
"latency_ms": 8,
"tags": ["mesh", "route", "add"],
"root_hash": "blake3:ccc333..."
}
```
**Capability Grant Receipt**:
```json
{
"type": "mesh_capability_grant",
"capability_id": "cap:brick-03:storage:2025",
"node_id": "did:vm:node:brick-03",
"capability": "storage",
"scope": "local",
"granted_by": "did:vm:node:portal-01",
"timestamp": "2025-12-06T11:06:00Z",
"expires_at": "2026-12-06T00:00:00Z",
"change_id": "mesh-change-2025-12-06-001",
"tags": ["mesh", "capability", "grant", "storage"],
"root_hash": "blake3:ddd444..."
}
```
**Topology Snapshot Receipt** (periodic):
```json
{
"type": "mesh_topology_snapshot",
"snapshot_id": "snapshot-2025-12-06-001",
"timestamp": "2025-12-06T12:00:00Z",
"node_count": 5,
"route_count": 12,
"capability_count": 23,
"nodes": ["brick-01", "brick-02", "brick-03", "portal-01", "oracle-01"],
"topology_hash": "blake3:eee555...",
"snapshot_path": "snapshots/mesh/2025-12-06-001.json",
"tags": ["mesh", "snapshot", "topology"],
"root_hash": "blake3:fff666..."
}
```
---
### 3.3 Ledger Layer (L3)
**Receipt Types**:
| Type | When Emitted |
| -------------------------- | --------------------------------- |
| `mesh_node_join` | Node registered in mesh |
| `mesh_node_leave` | Node deregistered |
| `mesh_route_change` | Route added, removed, or modified |
| `mesh_capability_grant` | Capability granted to node |
| `mesh_capability_revoke` | Capability revoked from node |
| `mesh_topology_snapshot` | Periodic full topology capture |
**Merkle Coverage**:
* All receipts append to `receipts/mesh/mesh_events.jsonl`
* `ROOT.mesh.txt` updated after each append
* Guardian anchors Mesh root in anchor cycles
---
## 4. Query Interface
`mesh_query_events.py`:
```bash
# All events for a node
vm-mesh query --node brick-01
# Events by type
vm-mesh query --type node_join
vm-mesh query --type capability_grant
# Date range
vm-mesh query --from 2025-11-01 --to 2025-12-01
# By change ID (coordinated changes)
vm-mesh query --change-id mesh-change-2025-12-06-001
# Capability history for a node
vm-mesh query --node brick-02 --type capability_grant,capability_revoke
# Export topology history
vm-mesh query --type topology_snapshot --format json > topology_history.json
```
**Topology Diff Tool**:
```bash
# Compare two snapshots
vm-mesh topology diff \
--from snapshots/mesh/2025-11-01.json \
--to snapshots/mesh/2025-12-06.json
# Output:
# + node: brick-03 (joined)
# + route: brick-01 → brick-03
# + route: brick-02 → brick-03
# + capability: brick-03:storage
# ~ route: brick-01 → brick-02 (latency: 15ms → 12ms)
```
---
## 5. Design Gate Checklist
| Question | Mesh Answer |
| --------------------- | ---------------------------------------------------------------------------------------- |
| Clear entrypoint? | ✅ CLI (`vm-mesh`), MCP tools, Portal HTTP |
| Contract produced? | ✅ `mesh_change_contract.json` (explicit for coordinated changes, implicit for single ops) |
| State object? | ✅ `mesh_change_state.json` tracking operation progress |
| Receipts emitted? | ✅ Six receipt types covering all topology events |
| Append-only JSONL? | ✅ `receipts/mesh/mesh_events.jsonl` |
| Merkle root? | ✅ `ROOT.mesh.txt` |
| Guardian anchor path? | ✅ Mesh root included in ProofChain |
| Query tool? | ✅ `mesh_query_events.py` + topology diff |
---
## 6. Mesh Health & Consensus
### 6.1 Heartbeat Protocol
Nodes emit periodic heartbeats to prove liveness:
```json
{
"type": "heartbeat",
"node_id": "did:vm:node:brick-01",
"timestamp": "2025-12-06T14:30:00Z",
"sequence": 847293,
"load": {
"cpu_percent": 23,
"memory_percent": 67,
"disk_percent": 45
},
"routes_healthy": 4,
"routes_degraded": 0
}
```
Heartbeats are **not** receipted individually (too high volume), but:
* Aggregated into daily health summaries
* Missed heartbeats trigger alerts
* Prolonged absence → automatic `mesh_node_leave` with `reason: "timeout"`
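That escalation can be sketched as a liveness classifier; the 30-second interval and 30-minute leave threshold here are illustrative, not normative:

```python
# Sketch: classify node liveness from its last heartbeat.
# Thresholds are illustrative assumptions.
from datetime import datetime, timedelta

HEARTBEAT_INTERVAL = timedelta(seconds=30)
LEAVE_AFTER = timedelta(minutes=30)

def liveness(last_seen: datetime, now: datetime) -> str:
    gap = now - last_seen
    if gap > LEAVE_AFTER:
        return "leave"   # emit mesh_node_leave with reason "timeout"
    if gap > 3 * HEARTBEAT_INTERVAL:
        return "alert"   # missed heartbeats trigger alerts
    return "healthy"

state = liveness(datetime(2025, 12, 6, 14, 30), datetime(2025, 12, 6, 14, 30, 45))
```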
### 6.2 Quorum Requirements
Critical mesh operations require quorum:
| Operation | Quorum |
| ------------------------------- | --------------------------------- |
| Node join | 1 admin node |
| Node forced leave | 2 admin nodes |
| Capability grant (global scope) | 2 admin nodes |
| Capability revoke | 1 admin node (immediate security) |
| Federation trust establishment | All admin nodes |
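The table above as a sketch — fixed thresholds per operation, with federation trust requiring every admin node:

```python
# Sketch: quorum check for critical mesh operations.
QUORUM = {
    "node_join": 1,
    "node_forced_leave": 2,
    "capability_grant_global": 2,
    "capability_revoke": 1,
}

def quorum_met(operation: str, approvals: int, total_admins: int) -> bool:
    if operation == "federation_trust":
        return approvals >= total_admins
    return approvals >= QUORUM.get(operation, 1)

ok = quorum_met("node_forced_leave", 2, 3)
```

Capability revoke deliberately needs only one admin so a security incident can be contained immediately.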
---
## 7. Federation (Multi-Mesh)
When VaultMesh instances need to federate (e.g., partner organizations, geographic regions):
### 7.1 Trust Establishment
```json
{
"type": "mesh_federation_trust",
"local_mesh": "did:vm:mesh:vaultmesh-dublin",
"remote_mesh": "did:vm:mesh:partner-berlin",
"trust_level": "limited",
"established_at": "2025-12-06T15:00:00Z",
"expires_at": "2026-12-06T00:00:00Z",
"shared_capabilities": ["oracle_query", "receipt_verify"],
"gateway_node": "did:vm:node:portal-01",
"remote_gateway": "did:vm:node:partner-gateway-01",
"trust_anchor": "blake3:ggg777..."
}
```
**Trust levels**:
* `isolated` — no cross-mesh communication
* `limited` — specific capabilities only (e.g., query each other's Oracle)
* `reciprocal` — mutual receipt verification, shared anchoring
* `full` — complete federation (rare, high-trust scenarios)
### 7.2 Cross-Mesh Receipts
When a federated mesh verifies or references receipts:
```json
{
"type": "mesh_cross_verify",
"local_receipt": "receipt:treasury:settle-2025-12-06-001",
"remote_mesh": "did:vm:mesh:partner-berlin",
"verified_by": "did:vm:node:partner-oracle-01",
"verification_timestamp": "2025-12-06T16:00:00Z",
"verification_result": "valid",
"remote_root_at_verification": "blake3:hhh888..."
}
```
---
## 8. Integration Points
| System | Integration |
| -------------- | ----------------------------------------------------------------------------------------- |
| **Guardian** | Anchors `ROOT.mesh.txt`; alerts on unexpected topology changes |
| **Treasury** | Node join can auto-create Treasury accounts; node leave triggers account closure workflow |
| **Oracle** | Can query Mesh for node capabilities ("Does BRICK-02 have anchor capability?") |
| **Drills** | Multi-node drills require Mesh to verify all participants are active and routable |
| **OffSec** | Security incidents can trigger emergency capability revocations via Mesh |
---
## 9. Future Extensions
* **Auto-discovery**: Nodes find each other via mDNS/DHT in local networks
* **Geographic awareness**: Route optimization based on node locations
* **Bandwidth metering**: Track data flow between nodes for Treasury billing
* **Mesh visualization**: Real-time topology graph in Portal UI
* **Chaos testing**: Controlled route failures to test resilience
* **Zero-trust verification**: Continuous capability re-verification
---
# VAULTMESH-MIGRATION-GUIDE.md
**Upgrading the Civilization Ledger**
> *A system that cannot evolve is a system that cannot survive.*
---
## 1. Version Compatibility Matrix
| From Version | To Version | Migration Type | Downtime |
|--------------|------------|----------------|----------|
| 0.1.x | 0.2.x | Schema migration | < 5 min |
| 0.2.x | 0.3.x | Schema migration | < 5 min |
| 0.3.x | 1.0.x | Major migration | < 30 min |
| 1.0.x | 1.1.x | Rolling update | None |
---
## 2. Pre-Migration Checklist
```bash
#!/bin/bash
# scripts/pre-migration-check.sh
set -e
echo "=== VaultMesh Pre-Migration Check ==="
# 1. Verify current version
CURRENT_VERSION=$(vm-cli version --short)
echo "Current version: $CURRENT_VERSION"
# 2. Check for pending anchors
PENDING=$(vm-guardian anchor-status --json | jq '.receipts_since_anchor')
if [ "$PENDING" -gt 0 ]; then
echo "WARNING: $PENDING receipts pending anchor"
echo "Running anchor before migration..."
vm-guardian anchor-now --wait
fi
# 3. Verify receipt integrity
echo "Verifying receipt integrity..."
if ! vm-guardian verify-all --scroll all; then
    echo "ERROR: Receipt integrity check failed"
    exit 1
fi
# 4. Backup current state
echo "Creating backup..."
BACKUP_DIR="/backups/vaultmesh-$(date +%Y%m%d-%H%M%S)"
mkdir -p "$BACKUP_DIR"
# Backup receipts
cp -r /data/receipts "$BACKUP_DIR/receipts"
# Backup database
pg_dump -h postgres -U vaultmesh vaultmesh > "$BACKUP_DIR/database.sql"
# Backup configuration
cp -r /config "$BACKUP_DIR/config"
# Backup Merkle roots
cp /data/receipts/ROOT.*.txt "$BACKUP_DIR/"
echo "Backup created: $BACKUP_DIR"
# 5. Verify backup
echo "Verifying backup..."
BACKUP_RECEIPT_COUNT=$(find "$BACKUP_DIR/receipts" -name "*.jsonl" -exec wc -l {} + | tail -1 | awk '{print $1}')
CURRENT_RECEIPT_COUNT=$(find /data/receipts -name "*.jsonl" -exec wc -l {} + | tail -1 | awk '{print $1}')
if [ "$BACKUP_RECEIPT_COUNT" -ne "$CURRENT_RECEIPT_COUNT" ]; then
echo "ERROR: Backup receipt count mismatch"
exit 1
fi
echo "=== Pre-migration checks complete ==="
echo "Ready to migrate from $CURRENT_VERSION"
```
---
## 3. Migration Scripts
### 3.1 Schema Migration (0.2.x -> 0.3.x)
```python
# migrations/0002_to_0003.py
"""
Migration: 0.2.x -> 0.3.x
Changes:
- Add 'anchor_epoch' field to all receipts
- Add 'proof_path' field to all receipts
- Create new ROOT.*.txt files for new scrolls
"""
import json
from pathlib import Path
from datetime import datetime
import shutil
def migrate_receipts(receipts_dir: Path):
"""Add new fields to existing receipts."""
for jsonl_file in receipts_dir.glob("**/*.jsonl"):
print(f"Migrating: {jsonl_file}")
# Read all receipts
receipts = []
with open(jsonl_file) as f:
for line in f:
receipt = json.loads(line.strip())
# Add new fields if missing
if "anchor_epoch" not in receipt:
receipt["anchor_epoch"] = None
if "proof_path" not in receipt:
receipt["proof_path"] = None
receipts.append(receipt)
# Write back with new fields
backup_path = jsonl_file.with_suffix(".jsonl.bak")
shutil.copy(jsonl_file, backup_path)
with open(jsonl_file, "w") as f:
for receipt in receipts:
f.write(json.dumps(receipt) + "\n")
print(f" Migrated {len(receipts)} receipts")
def create_new_scrolls(receipts_dir: Path):
"""Create directories and root files for new scrolls."""
new_scrolls = [
"treasury",
"mesh",
"offsec",
"identity",
"observability",
"automation",
"psi",
"federation",
"governance",
]
for scroll in new_scrolls:
scroll_dir = receipts_dir / scroll
scroll_dir.mkdir(exist_ok=True)
# Create empty JSONL file
jsonl_file = scroll_dir / f"{scroll}_events.jsonl"
jsonl_file.touch()
# Create root file with empty root
root_file = receipts_dir / f"ROOT.{scroll}.txt"
root_file.write_text("blake3:empty")
print(f"Created scroll: {scroll}")
def update_database_schema():
"""Run database migrations."""
import subprocess
subprocess.run([
"sqlx", "migrate", "run",
"--source", "migrations/sql",
], check=True)
def main():
receipts_dir = Path("/data/receipts")
print("=== VaultMesh Migration: 0.2.x -> 0.3.x ===")
print(f"Timestamp: {datetime.utcnow().isoformat()}Z")
print("\n1. Migrating existing receipts...")
migrate_receipts(receipts_dir)
print("\n2. Creating new scroll directories...")
create_new_scrolls(receipts_dir)
print("\n3. Running database migrations...")
update_database_schema()
print("\n=== Migration complete ===")
if __name__ == "__main__":
main()
```
### 3.2 Major Migration (0.3.x -> 1.0.x)
```python
# migrations/0003_to_1000.py
"""
Migration: 0.3.x -> 1.0.x (Major)
Changes:
- Constitutional governance activation
- Receipt schema v2 (breaking)
- Merkle tree format change
- Guardian state restructure
"""
import json
import hashlib
import shutil
import subprocess
from datetime import datetime
from pathlib import Path

from blake3 import blake3  # third-party: pip install blake3 (hashlib has no native BLAKE3)
def backup_everything(backup_dir: Path):
"""Create comprehensive backup before major migration."""
backup_dir.mkdir(parents=True, exist_ok=True)
# Full receipts backup with verification
receipts_backup = backup_dir / "receipts"
shutil.copytree("/data/receipts", receipts_backup)
# Compute checksums
checksums = {}
for f in receipts_backup.glob("**/*"):
if f.is_file():
            checksums[str(f.relative_to(receipts_backup))] = blake3(f.read_bytes()).hexdigest()
with open(backup_dir / "CHECKSUMS.json", "w") as f:
json.dump(checksums, f, indent=2)
# Database backup
subprocess.run([
"pg_dump", "-h", "postgres", "-U", "vaultmesh",
"-F", "c", # Custom format for parallel restore
"-f", str(backup_dir / "database.dump"),
"vaultmesh"
], check=True)
return backup_dir
def migrate_receipt_schema_v2(receipts_dir: Path):
"""Convert receipts to schema v2."""
for jsonl_file in receipts_dir.glob("**/*.jsonl"):
print(f"Converting to schema v2: {jsonl_file}")
receipts = []
with open(jsonl_file) as f:
for line in f:
old_receipt = json.loads(line.strip())
# Convert to v2 schema
new_receipt = {
"schema_version": "2.0.0",
"type": old_receipt.get("type"),
"timestamp": old_receipt.get("timestamp"),
"header": {
"root_hash": old_receipt.get("root_hash"),
"tags": old_receipt.get("tags", []),
"previous_hash": None, # Will be computed
},
"meta": {
"scroll": infer_scroll(jsonl_file),
"sequence": len(receipts),
"anchor_epoch": old_receipt.get("anchor_epoch"),
"proof_path": old_receipt.get("proof_path"),
},
"body": {
k: v for k, v in old_receipt.items()
if k not in ["type", "timestamp", "root_hash", "tags", "anchor_epoch", "proof_path"]
}
}
# Compute previous_hash chain
if receipts:
new_receipt["header"]["previous_hash"] = receipts[-1]["header"]["root_hash"]
# Recompute root_hash with new schema
new_receipt["header"]["root_hash"] = compute_receipt_hash_v2(new_receipt)
receipts.append(new_receipt)
# Write v2 receipts
with open(jsonl_file, "w") as f:
for receipt in receipts:
f.write(json.dumps(receipt) + "\n")
print(f" Converted {len(receipts)} receipts to v2")
def recompute_merkle_roots(receipts_dir: Path):
"""Recompute all Merkle roots with new format."""
scrolls = [
"drills", "compliance", "guardian", "treasury", "mesh",
"offsec", "identity", "observability", "automation",
"psi", "federation", "governance"
]
for scroll in scrolls:
jsonl_file = receipts_dir / scroll / f"{scroll}_events.jsonl"
root_file = receipts_dir / f"ROOT.{scroll}.txt"
if not jsonl_file.exists():
continue
# Read receipt hashes
hashes = []
with open(jsonl_file) as f:
for line in f:
receipt = json.loads(line.strip())
hashes.append(receipt["header"]["root_hash"])
# Compute new Merkle root
root = compute_merkle_root_v2(hashes)
root_file.write_text(root)
print(f"Recomputed root for {scroll}: {root[:30]}...")
def initialize_constitution():
"""Create initial constitutional documents."""
constitution = {
"version": "1.0.0",
"effective_at": datetime.utcnow().isoformat() + "Z",
"axioms": [], # From CONSTITUTIONAL-GOVERNANCE.md
"articles": [],
"engine_registry": [],
}
# Write constitution
const_path = Path("/data/governance/constitution.json")
const_path.parent.mkdir(parents=True, exist_ok=True)
with open(const_path, "w") as f:
json.dump(constitution, f, indent=2)
# Create constitution receipt
receipt = {
"schema_version": "2.0.0",
"type": "gov_constitution_ratified",
"timestamp": datetime.utcnow().isoformat() + "Z",
"header": {
"root_hash": "", # Will be computed
"tags": ["governance", "constitution", "genesis"],
"previous_hash": None,
},
"meta": {
"scroll": "Governance",
"sequence": 0,
"anchor_epoch": None,
"proof_path": None,
},
"body": {
"constitution_version": "1.0.0",
"constitution_hash": hashlib.blake3(json.dumps(constitution).encode()).hexdigest(),
}
}
# Append to governance scroll
gov_jsonl = Path("/data/receipts/governance/governance_events.jsonl")
with open(gov_jsonl, "a") as f:
f.write(json.dumps(receipt) + "\n")
print("Constitutional governance initialized")
def main():
print("=== VaultMesh Major Migration: 0.3.x -> 1.0.x ===")
print(f"Timestamp: {datetime.utcnow().isoformat()}Z")
print("WARNING: This is a breaking migration!")
# Confirm
confirm = input("Type 'MIGRATE' to proceed: ")
if confirm != "MIGRATE":
print("Aborted")
return
backup_dir = Path(f"/backups/major-migration-{datetime.utcnow().strftime('%Y%m%d-%H%M%S')}")
receipts_dir = Path("/data/receipts")
print("\n1. Creating comprehensive backup...")
backup_everything(backup_dir)
print("\n2. Migrating receipt schema to v2...")
migrate_receipt_schema_v2(receipts_dir)
print("\n3. Recomputing Merkle roots...")
recompute_merkle_roots(receipts_dir)
print("\n4. Running database migrations...")
subprocess.run(["sqlx", "migrate", "run"], check=True)
print("\n5. Initializing constitutional governance...")
initialize_constitution()
print("\n6. Triggering anchor to seal migration...")
subprocess.run(["vm-guardian", "anchor-now", "--wait"], check=True)
print("\n=== Major migration complete ===")
print(f"Backup location: {backup_dir}")
print("Please verify system health before removing backup")
if __name__ == "__main__":
main()
```
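The script above calls `compute_merkle_root_v2` (along with `infer_scroll` and `compute_receipt_hash_v2`) without defining it in this excerpt. A minimal sketch of the Merkle-root step, assuming pairwise hashing with a duplicate-last-leaf rule for odd levels and using stdlib `blake2b` as a stand-in for the project's BLAKE3 (both assumptions; the real leaf encoding and tree format are defined elsewhere):

```python
import hashlib

def compute_merkle_root_v2(leaf_hashes: list[str]) -> str:
    """Pairwise-hash a list of hex leaf hashes up to a single root.

    Sketch only: blake2b stands in for BLAKE3, and the odd-level rule
    (duplicate the last node) is an assumption, not the canonical format.
    """
    if not leaf_hashes:
        return "blake3:empty"  # matches the empty-root sentinel in create_new_scrolls
    level = [h.encode() for h in leaf_hashes]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [
            hashlib.blake2b(level[i] + level[i + 1], digest_size=32).hexdigest().encode()
            for i in range(0, len(level), 2)
        ]
    return "blake3:" + level[0].decode()
```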
---
## 4. Rollback Procedures
```bash
#!/bin/bash
# scripts/rollback.sh
set -e
BACKUP_DIR=$1
if [ -z "$BACKUP_DIR" ]; then
echo "Usage: rollback.sh <backup_directory>"
exit 1
fi
if [ ! -d "$BACKUP_DIR" ]; then
echo "ERROR: Backup directory not found: $BACKUP_DIR"
exit 1
fi
echo "=== VaultMesh Rollback ==="
echo "Backup: $BACKUP_DIR"
# Verify backup integrity
echo "1. Verifying backup integrity..."
if [ -f "$BACKUP_DIR/CHECKSUMS.json" ]; then
python3 scripts/verify_checksums.py "$BACKUP_DIR"
fi
# Stop services
echo "2. Stopping services..."
kubectl scale deployment -n vaultmesh --replicas=0 \
vaultmesh-portal vaultmesh-guardian vaultmesh-oracle
# Restore database
echo "3. Restoring database..."
pg_restore -h postgres -U vaultmesh -d vaultmesh --clean "$BACKUP_DIR/database.dump"
# Restore receipts
echo "4. Restoring receipts..."
rm -rf /data/receipts/*
cp -r "$BACKUP_DIR/receipts"/* /data/receipts/
# Restore configuration
echo "5. Restoring configuration..."
cp -r "$BACKUP_DIR/config"/* /config/
# Restart services
echo "6. Restarting services..."
kubectl scale deployment -n vaultmesh --replicas=2 vaultmesh-portal
kubectl scale deployment -n vaultmesh --replicas=1 vaultmesh-guardian
kubectl scale deployment -n vaultmesh --replicas=2 vaultmesh-oracle
# Wait for health
echo "7. Waiting for services to become healthy..."
kubectl wait --for=condition=ready pod -l app.kubernetes.io/part-of=vaultmesh -n vaultmesh --timeout=300s
# Verify integrity
echo "8. Verifying receipt integrity..."
vm-guardian verify-all --scroll all
echo "=== Rollback complete ==="
```
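The rollback script invokes `scripts/verify_checksums.py`, which is not shown here. A minimal sketch, assuming the `CHECKSUMS.json` layout written by `backup_everything` (relative path → hex digest) and using stdlib `blake2b` as a stand-in hash; in practice verification must use the same algorithm the backup was written with:

```python
import hashlib
import json
from pathlib import Path

def verify_checksums(backup_dir: Path) -> list[str]:
    """Return relative paths in the receipts backup whose current hash
    no longer matches the recorded digest in CHECKSUMS.json."""
    checksums = json.loads((backup_dir / "CHECKSUMS.json").read_text())
    receipts = backup_dir / "receipts"
    return [
        rel for rel, expected in checksums.items()
        if hashlib.blake2b((receipts / rel).read_bytes(), digest_size=32).hexdigest() != expected
    ]
```

A CLI wrapper would call `verify_checksums(Path(sys.argv[1]))` and exit non-zero if the returned list is non-empty.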
---
## 5. Post-Migration Verification
```bash
#!/bin/bash
# scripts/post-migration-verify.sh
set -e
echo "=== VaultMesh Post-Migration Verification ==="
# 1. Version check
echo "1. Checking version..."
NEW_VERSION=$(vm-cli version --short)
echo " Version: $NEW_VERSION"
# 2. Service health
echo "2. Checking service health..."
vm-cli system health --json | jq '.services'
# 3. Receipt integrity
echo "3. Verifying receipt integrity..."
for scroll in drills compliance guardian treasury mesh offsec identity observability automation psi federation governance; do
COUNT=$(wc -l < "/data/receipts/$scroll/${scroll}_events.jsonl" 2>/dev/null || echo "0")
ROOT=$(cat "/data/receipts/ROOT.$scroll.txt" 2>/dev/null || echo "N/A")
echo " $scroll: $COUNT receipts, root: ${ROOT:0:20}..."
done
# 4. Merkle verification
echo "4. Verifying Merkle roots..."
vm-guardian verify-all --scroll all
# 5. Anchor status
echo "5. Checking anchor status..."
vm-guardian anchor-status
# 6. Constitution (if 1.0+)
if vm-gov constitution version &>/dev/null; then
echo "6. Checking constitution..."
vm-gov constitution version
fi
# 7. Test receipt emission
echo "7. Testing receipt emission..."
TEST_RECEIPT=$(vm-cli emit-test-receipt --scroll drills)
echo " Test receipt: $TEST_RECEIPT"
# 8. Test anchor
echo "8. Testing anchor cycle..."
vm-guardian anchor-now --wait
# 9. Verify test receipt was anchored
echo "9. Verifying test receipt anchored..."
PROOF=$(vm-guardian get-proof "$TEST_RECEIPT")
if [ -n "$PROOF" ]; then
echo " Test receipt successfully anchored"
else
echo " ERROR: Test receipt not anchored"
exit 1
fi
echo ""
echo "=== Post-migration verification complete ==="
echo "All checks passed. System is operational."
```

# VAULTMESH-MONITORING-STACK.md
**Observability for the Civilization Ledger**
> *You cannot govern what you cannot see.*
---
## 1. Prometheus Configuration
```yaml
# config/prometheus.yaml
global:
scrape_interval: 15s
evaluation_interval: 15s
alerting:
alertmanagers:
- static_configs:
- targets:
- alertmanager:9093
rule_files:
- /etc/prometheus/rules/*.yaml
scrape_configs:
# Portal metrics
- job_name: 'vaultmesh-portal'
kubernetes_sd_configs:
- role: pod
namespaces:
names:
- vaultmesh
relabel_configs:
- source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_name]
regex: portal
action: keep
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
regex: "true"
action: keep
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_port]
target_label: __address__
regex: (.+)
replacement: ${1}:9090
# Guardian metrics
- job_name: 'vaultmesh-guardian'
kubernetes_sd_configs:
- role: pod
namespaces:
names:
- vaultmesh
relabel_configs:
- source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_name]
regex: guardian
action: keep
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_port]
target_label: __address__
regex: (.+)
replacement: ${1}:9090
# Oracle metrics
- job_name: 'vaultmesh-oracle'
kubernetes_sd_configs:
- role: pod
namespaces:
names:
- vaultmesh
relabel_configs:
- source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_name]
regex: oracle
action: keep
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_port]
target_label: __address__
regex: (.+)
replacement: ${1}:9090
# PostgreSQL metrics
- job_name: 'postgres'
static_configs:
- targets: ['postgres-exporter:9187']
# Redis metrics
- job_name: 'redis'
static_configs:
- targets: ['redis-exporter:9121']
```
---
## 2. Alerting Rules
```yaml
# config/prometheus/rules/vaultmesh-alerts.yaml
groups:
- name: vaultmesh.receipts
rules:
- alert: ReceiptWriteFailure
expr: rate(vaultmesh_receipt_write_errors_total[5m]) > 0
for: 1m
labels:
severity: critical
scroll: "{{ $labels.scroll }}"
annotations:
summary: "Receipt write failures detected"
description: "{{ $value }} receipt write errors in scroll {{ $labels.scroll }}"
- alert: ReceiptRateAnomaly
expr: |
abs(
rate(vaultmesh_receipts_total[5m]) -
avg_over_time(rate(vaultmesh_receipts_total[5m])[1h:5m])
) > 2 * stddev_over_time(rate(vaultmesh_receipts_total[5m])[1h:5m])
for: 10m
labels:
severity: warning
annotations:
summary: "Unusual receipt rate detected"
description: "Receipt rate deviates significantly from baseline"
- name: vaultmesh.guardian
rules:
- alert: AnchorDelayed
expr: time() - vaultmesh_guardian_last_anchor_timestamp > 7200
for: 5m
labels:
severity: warning
annotations:
summary: "Guardian anchor delayed"
description: "Last anchor was {{ $value | humanizeDuration }} ago"
- alert: AnchorCriticallyDelayed
expr: time() - vaultmesh_guardian_last_anchor_timestamp > 14400
for: 5m
labels:
severity: critical
annotations:
summary: "Guardian anchor critically delayed"
description: "No anchor in over 4 hours"
- alert: AnchorFailure
expr: increase(vaultmesh_guardian_anchor_failures_total[1h]) > 0
for: 1m
labels:
severity: critical
annotations:
summary: "Guardian anchor failure"
description: "{{ $value }} anchor failures in the last hour"
- alert: ProofChainDivergence
expr: vaultmesh_guardian_proofchain_divergence == 1
for: 1m
labels:
severity: critical
annotations:
summary: "ProofChain divergence detected"
description: "Computed Merkle root differs from stored root"
- name: vaultmesh.oracle
rules:
- alert: OracleHighLatency
expr: histogram_quantile(0.95, rate(vaultmesh_oracle_query_duration_seconds_bucket[5m])) > 30
for: 5m
labels:
severity: warning
annotations:
summary: "Oracle query latency high"
description: "95th percentile query latency is {{ $value | humanizeDuration }}"
- alert: OracleLLMErrors
expr: rate(vaultmesh_oracle_llm_errors_total[5m]) > 0.1
for: 5m
labels:
severity: warning
annotations:
summary: "Oracle LLM errors elevated"
description: "{{ $value }} LLM errors per second"
- alert: OracleCorpusEmpty
expr: vaultmesh_oracle_corpus_documents_total == 0
for: 1m
labels:
severity: critical
annotations:
summary: "Oracle corpus is empty"
description: "No documents loaded in compliance corpus"
- name: vaultmesh.mesh
rules:
- alert: NodeUnhealthy
expr: vaultmesh_mesh_node_healthy == 0
for: 5m
labels:
severity: warning
node: "{{ $labels.node_id }}"
annotations:
summary: "Mesh node unhealthy"
description: "Node {{ $labels.node_id }} is unhealthy"
- alert: NodeDown
expr: time() - vaultmesh_mesh_node_last_seen_timestamp > 600
for: 5m
labels:
severity: critical
node: "{{ $labels.node_id }}"
annotations:
summary: "Mesh node down"
description: "Node {{ $labels.node_id }} not seen for {{ $value | humanizeDuration }}"
- alert: RouteUnhealthy
expr: vaultmesh_mesh_route_healthy == 0
for: 5m
labels:
severity: warning
annotations:
summary: "Mesh route unhealthy"
description: "Route {{ $labels.route_id }} is unhealthy"
- name: vaultmesh.psi
rules:
- alert: PhaseProlongedNigredo
expr: vaultmesh_psi_phase_duration_seconds{phase="nigredo"} > 86400
for: 1h
labels:
severity: warning
annotations:
summary: "System in Nigredo phase for extended period"
description: "System has been in crisis phase for {{ $value | humanizeDuration }}"
- alert: TransmutationStalled
expr: vaultmesh_psi_transmutation_status{status="in_progress"} == 1 and time() - vaultmesh_psi_transmutation_started_timestamp > 86400
for: 1h
labels:
severity: warning
annotations:
summary: "Transmutation stalled"
description: "Transmutation {{ $labels.transmutation_id }} in progress for over 24 hours"
- name: vaultmesh.governance
rules:
- alert: ConstitutionalViolation
expr: increase(vaultmesh_governance_violations_total[1h]) > 0
for: 0m
labels:
severity: critical
annotations:
summary: "Constitutional violation detected"
description: "{{ $value }} violation(s) in the last hour"
- alert: EmergencyActive
expr: vaultmesh_governance_emergency_active == 1
for: 0m
labels:
severity: warning
annotations:
summary: "Governance emergency active"
description: "Emergency powers in effect"
- name: vaultmesh.federation
rules:
- alert: FederationWitnessFailure
expr: increase(vaultmesh_federation_witness_failures_total[1h]) > 0
for: 5m
labels:
severity: warning
annotations:
summary: "Federation witness failure"
description: "Failed to witness {{ $labels.remote_mesh }} receipts"
- alert: FederationDiscrepancy
expr: vaultmesh_federation_discrepancy_detected == 1
for: 0m
labels:
severity: critical
annotations:
summary: "Federation discrepancy detected"
description: "Discrepancy with {{ $labels.remote_mesh }}: {{ $labels.discrepancy_type }}"
```
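The `ReceiptRateAnomaly` rule fires when the current rate deviates from the recent average by more than two standard deviations. The same test, sketched in plain Python for intuition (the function name and window handling are illustrative, not part of the rule):

```python
from statistics import mean, stdev

def rate_anomalous(history: list[float], current: float, k: float = 2.0) -> bool:
    """Flag a rate sample that deviates from the recent mean by more
    than k standard deviations, mirroring the PromQL rule above."""
    return abs(current - mean(history)) > k * stdev(history)
```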
---
## 3. Grafana Dashboards
### 3.1 Main Dashboard
```json
{
"dashboard": {
"title": "VaultMesh Overview",
"uid": "vaultmesh-overview",
"tags": ["vaultmesh"],
"timezone": "browser",
"panels": [
{
"title": "System Status",
"type": "stat",
"gridPos": {"h": 4, "w": 6, "x": 0, "y": 0},
"targets": [
{
"expr": "sum(up{job=~\"vaultmesh-.*\"})",
"legendFormat": "Services Up"
}
],
"fieldConfig": {
"defaults": {
"thresholds": {
"steps": [
{"color": "red", "value": 0},
{"color": "yellow", "value": 2},
{"color": "green", "value": 3}
]
}
}
}
},
{
"title": "Current Phase",
"type": "stat",
"gridPos": {"h": 4, "w": 6, "x": 6, "y": 0},
"targets": [
{
"expr": "vaultmesh_psi_current_phase",
"legendFormat": "Phase"
}
],
"fieldConfig": {
"defaults": {
"mappings": [
{"type": "value", "options": {"0": {"text": "NIGREDO", "color": "dark-purple"}}},
{"type": "value", "options": {"1": {"text": "ALBEDO", "color": "white"}}},
{"type": "value", "options": {"2": {"text": "CITRINITAS", "color": "yellow"}}},
{"type": "value", "options": {"3": {"text": "RUBEDO", "color": "red"}}}
]
}
}
},
{
"title": "Last Anchor Age",
"type": "stat",
"gridPos": {"h": 4, "w": 6, "x": 12, "y": 0},
"targets": [
{
"expr": "time() - vaultmesh_guardian_last_anchor_timestamp",
"legendFormat": "Age"
}
],
"fieldConfig": {
"defaults": {
"unit": "s",
"thresholds": {
"steps": [
{"color": "green", "value": 0},
{"color": "yellow", "value": 3600},
{"color": "red", "value": 7200}
]
}
}
}
},
{
"title": "Total Receipts",
"type": "stat",
"gridPos": {"h": 4, "w": 6, "x": 18, "y": 0},
"targets": [
{
"expr": "sum(vaultmesh_receipts_total)",
"legendFormat": "Receipts"
}
]
},
{
"title": "Receipt Rate by Scroll",
"type": "timeseries",
"gridPos": {"h": 8, "w": 12, "x": 0, "y": 4},
"targets": [
{
"expr": "rate(vaultmesh_receipts_total[5m])",
"legendFormat": "{{ scroll }}"
}
],
"fieldConfig": {
"defaults": {
"unit": "ops"
}
}
},
{
"title": "Anchor History",
"type": "timeseries",
"gridPos": {"h": 8, "w": 12, "x": 12, "y": 4},
"targets": [
{
"expr": "increase(vaultmesh_guardian_anchors_total[1h])",
"legendFormat": "Successful Anchors"
},
{
"expr": "increase(vaultmesh_guardian_anchor_failures_total[1h])",
"legendFormat": "Failed Anchors"
}
]
},
{
"title": "Mesh Node Status",
"type": "table",
"gridPos": {"h": 6, "w": 12, "x": 0, "y": 12},
"targets": [
{
"expr": "vaultmesh_mesh_node_healthy",
"format": "table",
"instant": true
}
],
"transformations": [
{
"id": "organize",
"options": {
"excludeByName": {"Time": true, "__name__": true},
"renameByName": {"node_id": "Node", "Value": "Healthy"}
}
}
]
},
{
"title": "Oracle Query Latency",
"type": "timeseries",
"gridPos": {"h": 6, "w": 12, "x": 12, "y": 12},
"targets": [
{
"expr": "histogram_quantile(0.50, rate(vaultmesh_oracle_query_duration_seconds_bucket[5m]))",
"legendFormat": "p50"
},
{
"expr": "histogram_quantile(0.95, rate(vaultmesh_oracle_query_duration_seconds_bucket[5m]))",
"legendFormat": "p95"
},
{
"expr": "histogram_quantile(0.99, rate(vaultmesh_oracle_query_duration_seconds_bucket[5m]))",
"legendFormat": "p99"
}
],
"fieldConfig": {
"defaults": {
"unit": "s"
}
}
}
]
}
}
```
### 3.2 Guardian Dashboard
```json
{
"dashboard": {
"title": "VaultMesh Guardian",
"uid": "vaultmesh-guardian",
"tags": ["vaultmesh", "guardian"],
"panels": [
{
"title": "Anchor Status",
"type": "stat",
"gridPos": {"h": 4, "w": 8, "x": 0, "y": 0},
"targets": [
{
"expr": "vaultmesh_guardian_anchor_status",
"legendFormat": "Status"
}
],
"fieldConfig": {
"defaults": {
"mappings": [
{"type": "value", "options": {"0": {"text": "IDLE", "color": "blue"}}},
{"type": "value", "options": {"1": {"text": "ANCHORING", "color": "yellow"}}},
{"type": "value", "options": {"2": {"text": "SUCCESS", "color": "green"}}},
{"type": "value", "options": {"3": {"text": "FAILED", "color": "red"}}}
]
}
}
},
{
"title": "Receipts Since Last Anchor",
"type": "stat",
"gridPos": {"h": 4, "w": 8, "x": 8, "y": 0},
"targets": [
{
"expr": "vaultmesh_guardian_receipts_since_anchor"
}
]
},
{
"title": "Anchor Epochs",
"type": "stat",
"gridPos": {"h": 4, "w": 8, "x": 16, "y": 0},
"targets": [
{
"expr": "vaultmesh_guardian_anchor_epoch"
}
]
},
{
"title": "ProofChain Roots by Scroll",
"type": "table",
"gridPos": {"h": 8, "w": 24, "x": 0, "y": 4},
"targets": [
{
"expr": "vaultmesh_guardian_proofchain_root_info",
"format": "table",
"instant": true
}
]
},
{
"title": "Anchor Duration",
"type": "timeseries",
"gridPos": {"h": 8, "w": 12, "x": 0, "y": 12},
"targets": [
{
"expr": "histogram_quantile(0.95, rate(vaultmesh_guardian_anchor_duration_seconds_bucket[1h]))",
"legendFormat": "p95"
}
],
"fieldConfig": {
"defaults": {
"unit": "s"
}
}
},
{
"title": "Anchor Events",
"type": "logs",
"gridPos": {"h": 8, "w": 12, "x": 12, "y": 12},
"datasource": "Loki",
"targets": [
{
"expr": "{job=\"vaultmesh-guardian\"} |= \"anchor\""
}
]
}
]
}
}
```
---
## 4. Metrics Endpoints
### 4.1 Portal Metrics
```rust
// vaultmesh-portal/src/metrics.rs
use prometheus::{CounterVec, Gauge, GaugeVec, HistogramVec, Opts, Registry};
use lazy_static::lazy_static;
lazy_static! {
pub static ref REGISTRY: Registry = Registry::new();
// Receipt metrics
pub static ref RECEIPTS_TOTAL: CounterVec = CounterVec::new(
Opts::new("vaultmesh_receipts_total", "Total receipts by scroll"),
&["scroll", "type"]
).unwrap();
pub static ref RECEIPT_WRITE_DURATION: HistogramVec = HistogramVec::new(
prometheus::HistogramOpts::new(
"vaultmesh_receipt_write_duration_seconds",
"Receipt write duration"
).buckets(vec![0.001, 0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1.0]),
&["scroll"]
).unwrap();
pub static ref RECEIPT_WRITE_ERRORS: CounterVec = CounterVec::new(
Opts::new("vaultmesh_receipt_write_errors_total", "Receipt write errors"),
&["scroll", "error_type"]
).unwrap();
// API metrics
pub static ref HTTP_REQUESTS_TOTAL: CounterVec = CounterVec::new(
Opts::new("vaultmesh_http_requests_total", "Total HTTP requests"),
&["method", "path", "status"]
).unwrap();
pub static ref HTTP_REQUEST_DURATION: HistogramVec = HistogramVec::new(
prometheus::HistogramOpts::new(
"vaultmesh_http_request_duration_seconds",
"HTTP request duration"
).buckets(vec![0.01, 0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0]),
&["method", "path"]
).unwrap();
// Connection metrics
pub static ref ACTIVE_CONNECTIONS: Gauge = Gauge::new(
"vaultmesh_active_connections",
"Active connections"
).unwrap();
pub static ref DB_POOL_SIZE: GaugeVec = GaugeVec::new(
Opts::new("vaultmesh_db_pool_size", "Database pool size"),
&["state"]
).unwrap();
}
pub fn register_metrics() {
REGISTRY.register(Box::new(RECEIPTS_TOTAL.clone())).unwrap();
REGISTRY.register(Box::new(RECEIPT_WRITE_DURATION.clone())).unwrap();
REGISTRY.register(Box::new(RECEIPT_WRITE_ERRORS.clone())).unwrap();
REGISTRY.register(Box::new(HTTP_REQUESTS_TOTAL.clone())).unwrap();
REGISTRY.register(Box::new(HTTP_REQUEST_DURATION.clone())).unwrap();
REGISTRY.register(Box::new(ACTIVE_CONNECTIONS.clone())).unwrap();
REGISTRY.register(Box::new(DB_POOL_SIZE.clone())).unwrap();
}
```
### 4.2 Guardian Metrics
```rust
// vaultmesh-guardian/src/metrics.rs
use prometheus::{
Counter, CounterVec, Histogram, Gauge, GaugeVec,
Opts, Registry,
};
use lazy_static::lazy_static;
lazy_static! {
pub static ref REGISTRY: Registry = Registry::new();
// Anchor metrics
pub static ref ANCHORS_TOTAL: Counter = Counter::new(
"vaultmesh_guardian_anchors_total",
"Total successful anchors"
).unwrap();
pub static ref ANCHOR_FAILURES_TOTAL: CounterVec = CounterVec::new(
Opts::new("vaultmesh_guardian_anchor_failures_total", "Anchor failures by reason"),
&["reason"]
).unwrap();
pub static ref ANCHOR_DURATION: Histogram = Histogram::with_opts(
prometheus::HistogramOpts::new(
"vaultmesh_guardian_anchor_duration_seconds",
"Anchor cycle duration"
).buckets(vec![1.0, 5.0, 10.0, 30.0, 60.0, 120.0, 300.0])
).unwrap();
pub static ref LAST_ANCHOR_TIMESTAMP: Gauge = Gauge::new(
"vaultmesh_guardian_last_anchor_timestamp",
"Timestamp of last successful anchor"
).unwrap();
pub static ref ANCHOR_EPOCH: Gauge = Gauge::new(
"vaultmesh_guardian_anchor_epoch",
"Current anchor epoch number"
).unwrap();
pub static ref RECEIPTS_SINCE_ANCHOR: Gauge = Gauge::new(
"vaultmesh_guardian_receipts_since_anchor",
"Receipts added since last anchor"
).unwrap();
pub static ref ANCHOR_STATUS: Gauge = Gauge::new(
"vaultmesh_guardian_anchor_status",
"Current anchor status (0=idle, 1=anchoring, 2=success, 3=failed)"
).unwrap();
// ProofChain metrics
pub static ref PROOFCHAIN_ROOT_INFO: GaugeVec = GaugeVec::new(
Opts::new("vaultmesh_guardian_proofchain_root_info", "ProofChain root information"),
&["scroll", "root_hash"]
).unwrap();
pub static ref PROOFCHAIN_DIVERGENCE: Gauge = Gauge::new(
"vaultmesh_guardian_proofchain_divergence",
"ProofChain divergence detected (0=no, 1=yes)"
).unwrap();
// Sentinel metrics
pub static ref SENTINEL_EVENTS: CounterVec = CounterVec::new(
Opts::new("vaultmesh_guardian_sentinel_events_total", "Sentinel events"),
&["event_type", "severity"]
).unwrap();
}
```

# VAULTMESH-OBSERVABILITY-ENGINE.md
**Civilization Ledger Telemetry Primitive**
> *Every metric tells a story. Every trace has a receipt.*
Observability is VaultMesh's nervous system — capturing metrics, logs, and traces across all nodes and services, with cryptographic attestation that the telemetry itself hasn't been tampered with.
---
## 1. Scroll Definition
| Property | Value |
| --------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- |
| **Scroll Name** | `Observability` |
| **JSONL Path** | `receipts/observability/observability_events.jsonl` |
| **Root File** | `ROOT.observability.txt` |
| **Receipt Types** | `obs_metric_snapshot`, `obs_log_batch`, `obs_trace_complete`, `obs_alert_fired`, `obs_alert_resolved`, `obs_slo_report`, `obs_anomaly_detected` |
---
## 2. Core Concepts
### 2.1 Metrics
**Metrics** are time-series numerical measurements from nodes and services.
```json
{
"metric_id": "metric:brick-01:cpu:2025-12-06T14:30:00Z",
"node": "did:vm:node:brick-01",
"timestamp": "2025-12-06T14:30:00Z",
"metrics": {
"cpu_percent": 23.5,
"memory_percent": 67.2,
"disk_percent": 45.8,
"network_rx_bytes": 1234567890,
"network_tx_bytes": 987654321,
"open_file_descriptors": 342,
"goroutines": 156
},
"labels": {
"environment": "production",
"region": "eu-west",
"service": "guardian"
},
"collection_method": "prometheus_scrape",
"scrape_duration_ms": 45
}
```
**Metric categories**:
- `system` — CPU, memory, disk, network
- `application` — request rates, latencies, error rates
- `business` — receipts/hour, anchors/day, oracle queries
- `security` — auth attempts, failed logins, blocked IPs
- `mesh` — route latencies, node health, capability usage
### 2.2 Logs
**Logs** are structured event records from all system components.
```json
{
"log_id": "log:guardian:2025-12-06T14:30:15.123Z",
"timestamp": "2025-12-06T14:30:15.123Z",
"level": "info",
"service": "guardian",
"node": "did:vm:node:brick-01",
"message": "Anchor cycle completed successfully",
"attributes": {
"cycle_id": "anchor-cycle-2025-12-06-001",
"receipts_anchored": 47,
"scrolls_included": ["treasury", "mesh", "identity"],
"duration_ms": 1234,
"backend": "bitcoin"
},
"trace_id": "trace-abc123...",
"span_id": "span-def456...",
"caller": "guardian/anchor.go:234"
}
```
**Log levels**:
- `trace` — verbose debugging (not retained long-term)
- `debug` — debugging information
- `info` — normal operations
- `warn` — unexpected but handled conditions
- `error` — errors requiring attention
- `fatal` — system failures
### 2.3 Traces
**Traces** track request flows across distributed components.
```json
{
"trace_id": "trace-abc123...",
"name": "treasury_settlement",
"start_time": "2025-12-06T14:30:00.000Z",
"end_time": "2025-12-06T14:30:02.345Z",
"duration_ms": 2345,
"status": "ok",
"spans": [
{
"span_id": "span-001",
"parent_span_id": null,
"name": "http_request",
"service": "portal",
"node": "did:vm:node:portal-01",
"start_time": "2025-12-06T14:30:00.000Z",
"duration_ms": 2340,
"attributes": {
"http.method": "POST",
"http.url": "/treasury/settle",
"http.status_code": 200
}
},
{
"span_id": "span-002",
"parent_span_id": "span-001",
"name": "validate_settlement",
"service": "treasury-engine",
"node": "did:vm:node:brick-01",
"start_time": "2025-12-06T14:30:00.100Z",
"duration_ms": 150,
"attributes": {
"settlement_id": "settle-2025-12-06-001",
"accounts_involved": 3
}
},
{
"span_id": "span-003",
"parent_span_id": "span-001",
"name": "emit_receipt",
"service": "ledger",
"node": "did:vm:node:brick-01",
"start_time": "2025-12-06T14:30:00.250Z",
"duration_ms": 50,
"attributes": {
"receipt_type": "treasury_settlement",
"scroll": "treasury"
}
},
{
"span_id": "span-004",
"parent_span_id": "span-001",
"name": "anchor_request",
"service": "guardian",
"node": "did:vm:node:brick-01",
"start_time": "2025-12-06T14:30:00.300Z",
"duration_ms": 2000,
"attributes": {
"backend": "bitcoin",
"txid": "btc:abc123..."
}
}
],
"tags": ["treasury", "settlement", "anchor"]
}
```
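Finding the bottleneck in a trace like the one above amounts to computing each span's self time (its duration minus the durations of its direct children). A sketch, assuming the span layout shown (`span_id` / `parent_span_id` / `duration_ms`); the function name is illustrative:

```python
def find_bottleneck(trace: dict) -> dict:
    """Return the span with the longest self time, i.e. duration minus
    time attributed to direct child spans."""
    children_ms: dict[str, int] = {}
    for span in trace["spans"]:
        parent = span["parent_span_id"]
        if parent is not None:
            children_ms[parent] = children_ms.get(parent, 0) + span["duration_ms"]
    return max(
        trace["spans"],
        key=lambda s: s["duration_ms"] - children_ms.get(s["span_id"], 0),
    )
```

On the example trace, the `anchor_request` span dominates: 2000 ms of its own against ~140 ms of self time in the root HTTP span.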
### 2.4 Alerts
**Alerts** are triggered conditions requiring attention.
```json
{
"alert_id": "alert-2025-12-06-001",
"name": "HighCPUUsage",
"severity": "warning",
"status": "firing",
"fired_at": "2025-12-06T14:35:00Z",
"node": "did:vm:node:brick-02",
"rule": {
"expression": "cpu_percent > 80 for 5m",
"threshold": 80,
"duration": "5m"
},
"current_value": 87.3,
"labels": {
"environment": "production",
"region": "eu-west"
},
"annotations": {
"summary": "CPU usage above 80% for 5 minutes",
"runbook": "https://docs.vaultmesh.io/runbooks/high-cpu"
},
"notified": ["slack:ops-channel", "pagerduty:on-call"]
}
```
### 2.5 SLO Reports
**SLO (Service Level Objective) Reports** track reliability targets.
```json
{
"slo_id": "slo:anchor-latency-p99",
"name": "Anchor Latency P99",
"description": "99th percentile anchor latency under 30 seconds",
"target": 0.999,
"window": "30d",
"report_period": {
"start": "2025-11-06T00:00:00Z",
"end": "2025-12-06T00:00:00Z"
},
"achieved": 0.9995,
"status": "met",
"error_budget": {
"total_minutes": 43.2,
"consumed_minutes": 21.6,
"remaining_percent": 50.0
},
"breakdown": {
"total_requests": 125000,
"good_requests": 124937,
"bad_requests": 63
},
"trend": "stable"
}
```
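The error-budget figures in the report follow directly from the target and window: a 0.999 target over 30 days allows 30 × 1440 × 0.001 = 43.2 minutes of badness, of which 21.6 consumed leaves 50%. A sketch of that arithmetic (field names taken from the JSON above):

```python
def error_budget(target: float, window_days: int, consumed_minutes: float) -> dict:
    """Error-budget arithmetic matching the fields in an SLO report."""
    total = window_days * 24 * 60 * (1 - target)  # minutes of allowed unreliability
    remaining = 100.0 * (1 - consumed_minutes / total)
    return {
        "total_minutes": round(total, 1),
        "consumed_minutes": consumed_minutes,
        "remaining_percent": round(remaining, 1),
    }
```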
---
## 3. Mapping to Eternal Pattern
### 3.1 Experience Layer (L1)
**CLI** (`vm-obs`):
```bash
# Metrics
vm-obs metrics query --node brick-01 --metric cpu_percent --last 1h
vm-obs metrics list --node brick-01
vm-obs metrics export --from 2025-12-01 --to 2025-12-06 --format prometheus
# Logs
vm-obs logs query --service guardian --level error --last 24h
vm-obs logs tail --node brick-01 --follow
vm-obs logs search "anchor failed" --from 2025-12-01
# Traces
vm-obs trace show trace-abc123
vm-obs trace search --service treasury --duration ">1s" --last 24h
vm-obs trace analyze trace-abc123 --find-bottleneck
# Alerts
vm-obs alert list --status firing
vm-obs alert show alert-2025-12-06-001
vm-obs alert ack alert-2025-12-06-001 --comment "investigating"
vm-obs alert silence --node brick-02 --duration 1h --reason "maintenance"
# SLOs
vm-obs slo list
vm-obs slo show slo:anchor-latency-p99
vm-obs slo report --period 30d --format markdown
# Dashboards
vm-obs dashboard list
vm-obs dashboard show system-overview
vm-obs dashboard export system-overview --format grafana
```
**MCP Tools**:
- `obs_metrics_query` — query metrics for a node/service
- `obs_logs_search` — search logs with filters
- `obs_trace_get` — retrieve trace details
- `obs_alert_status` — current alert status
- `obs_slo_summary` — SLO compliance summary
- `obs_health_check` — overall system health
**Portal HTTP**:
- `GET /obs/metrics` — query metrics
- `GET /obs/logs` — search logs
- `GET /obs/traces` — list traces
- `GET /obs/traces/{trace_id}` — trace details
- `GET /obs/alerts` — list alerts
- `POST /obs/alerts/{id}/ack` — acknowledge alert
- `POST /obs/alerts/silence` — create silence
- `GET /obs/slos` — list SLOs
- `GET /obs/slos/{id}/report` — SLO report
- `GET /obs/health` — system health
---
### 3.2 Engine Layer (L2)
#### Step 1 — Plan → Implicit (Continuous Collection)
Unlike discrete operations, observability collection is continuous. However, certain operations have explicit contracts:
**Alert Acknowledgment Contract**:
```json
{
"operation_id": "obs-op-2025-12-06-001",
"operation_type": "alert_acknowledge",
"alert_id": "alert-2025-12-06-001",
"acknowledged_by": "did:vm:user:sovereign",
"acknowledged_at": "2025-12-06T14:40:00Z",
"comment": "Investigating high CPU on brick-02, likely due to anchor backlog",
"escalation_suppressed": true,
"follow_up_required": true,
"follow_up_deadline": "2025-12-06T16:00:00Z"
}
```
**SLO Definition Contract**:
```json
{
"operation_id": "obs-op-2025-12-06-002",
"operation_type": "slo_create",
"initiated_by": "did:vm:user:sovereign",
"slo": {
"id": "slo:oracle-availability",
"name": "Oracle Availability",
"description": "Oracle service uptime",
"indicator": {
"type": "availability",
"good_query": "oracle_up == 1",
"total_query": "count(oracle_requests)"
},
"target": 0.999,
"window": "30d"
}
}
```
#### Step 2 — Execute → Continuous Collection
Metrics, logs, and traces are collected continuously via:
- Prometheus scraping (metrics)
- Fluent Bit/Vector (logs)
- OpenTelemetry SDK (traces)
State is maintained in time-series databases and search indices, not as discrete state files.
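The seal step below collapses that continuous state into discrete hourly receipts. A minimal sketch of the aggregation, assuming raw per-node CPU samples and using the field names from the `obs_metric_snapshot` receipt schema (the aggregation logic itself is illustrative):

```python
from statistics import mean

def build_metric_snapshot(samples: list, period_start: str, period_end: str) -> dict:
    """Collapse raw per-node samples into the aggregate fields of an
    obs_metric_snapshot receipt. Each sample is {'node': ..., 'cpu_percent': ...}."""
    cpu = [s["cpu_percent"] for s in samples]
    return {
        "type": "obs_metric_snapshot",
        "period": {"start": period_start, "end": period_end},
        "nodes_reporting": len({s["node"] for s in samples}),
        "metrics_collected": len(samples),
        "aggregates": {
            "avg_cpu_percent": round(mean(cpu), 1),
            "max_cpu_percent": max(cpu),
        },
    }

snap = build_metric_snapshot(
    [{"node": "brick-01", "cpu_percent": 30.0},
     {"node": "brick-02", "cpu_percent": 87.0}],
    "2025-12-06T13:00:00Z", "2025-12-06T14:00:00Z",
)
```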
#### Step 3 — Seal → Receipts
**Metric Snapshot Receipt** (hourly):
```json
{
"type": "obs_metric_snapshot",
"snapshot_id": "metrics-2025-12-06-14",
"timestamp": "2025-12-06T14:00:00Z",
"period": {
"start": "2025-12-06T13:00:00Z",
"end": "2025-12-06T14:00:00Z"
},
"nodes_reporting": 5,
"metrics_collected": 15000,
"aggregates": {
"avg_cpu_percent": 34.5,
"max_cpu_percent": 87.3,
"avg_memory_percent": 62.1,
"total_receipts_emitted": 1247,
"total_anchors_completed": 12
},
"storage_path": "telemetry/metrics/2025-12-06/hour-14.parquet",
"content_hash": "blake3:aaa111...",
"tags": ["observability", "metrics", "hourly"],
"root_hash": "blake3:bbb222..."
}
```
**Log Batch Receipt** (hourly):
```json
{
"type": "obs_log_batch",
"batch_id": "logs-2025-12-06-14",
"timestamp": "2025-12-06T14:00:00Z",
"period": {
"start": "2025-12-06T13:00:00Z",
"end": "2025-12-06T14:00:00Z"
},
"log_counts": {
"trace": 0,
"debug": 12456,
"info": 45678,
"warn": 234,
"error": 12,
"fatal": 0
},
"services_reporting": ["guardian", "treasury", "portal", "oracle", "mesh"],
"storage_path": "telemetry/logs/2025-12-06/hour-14.jsonl.gz",
"content_hash": "blake3:ccc333...",
"tags": ["observability", "logs", "hourly"],
"root_hash": "blake3:ddd444..."
}
```
**Trace Complete Receipt** (for significant traces):
```json
{
"type": "obs_trace_complete",
"trace_id": "trace-abc123...",
"timestamp": "2025-12-06T14:30:02.345Z",
"name": "treasury_settlement",
"duration_ms": 2345,
"status": "ok",
"span_count": 4,
"services_involved": ["portal", "treasury-engine", "ledger", "guardian"],
"nodes_involved": ["portal-01", "brick-01"],
"triggered_by": "did:vm:user:sovereign",
"business_context": {
"settlement_id": "settle-2025-12-06-001",
"amount": "1000.00 USD"
},
"tags": ["observability", "trace", "treasury", "settlement"],
"root_hash": "blake3:eee555..."
}
```
**Alert Fired Receipt**:
```json
{
"type": "obs_alert_fired",
"alert_id": "alert-2025-12-06-001",
"timestamp": "2025-12-06T14:35:00Z",
"name": "HighCPUUsage",
"severity": "warning",
"node": "did:vm:node:brick-02",
"rule_expression": "cpu_percent > 80 for 5m",
"current_value": 87.3,
"threshold": 80,
"notifications_sent": ["slack:ops-channel", "pagerduty:on-call"],
"tags": ["observability", "alert", "fired", "cpu"],
"root_hash": "blake3:fff666..."
}
```
**Alert Resolved Receipt**:
```json
{
"type": "obs_alert_resolved",
"alert_id": "alert-2025-12-06-001",
"timestamp": "2025-12-06T15:10:00Z",
"name": "HighCPUUsage",
"fired_at": "2025-12-06T14:35:00Z",
"duration_minutes": 35,
"resolved_by": "automatic",
"resolution_value": 42.1,
"acknowledged": true,
"acknowledged_by": "did:vm:user:sovereign",
"root_cause": "anchor backlog cleared",
"tags": ["observability", "alert", "resolved"],
"root_hash": "blake3:ggg777..."
}
```
**SLO Report Receipt** (daily):
```json
{
"type": "obs_slo_report",
"report_id": "slo-report-2025-12-06",
"timestamp": "2025-12-06T00:00:00Z",
"period": {
"start": "2025-11-06T00:00:00Z",
"end": "2025-12-06T00:00:00Z"
},
"slos": [
{
"slo_id": "slo:anchor-latency-p99",
"target": 0.999,
"achieved": 0.9995,
"status": "met"
},
{
"slo_id": "slo:oracle-availability",
"target": 0.999,
"achieved": 0.9987,
"status": "at_risk"
}
],
"overall_status": "healthy",
"error_budget_status": "sufficient",
"report_path": "reports/slo/2025-12-06.json",
"tags": ["observability", "slo", "daily-report"],
"root_hash": "blake3:hhh888..."
}
```
**Anomaly Detection Receipt**:
```json
{
"type": "obs_anomaly_detected",
"anomaly_id": "anomaly-2025-12-06-001",
"timestamp": "2025-12-06T14:45:00Z",
"detection_method": "statistical",
"metric": "treasury.receipts_per_minute",
"node": "did:vm:node:brick-01",
"expected_range": {"min": 10, "max": 50},
"observed_value": 2,
"deviation_sigma": 4.2,
"confidence": 0.98,
"possible_causes": [
"upstream service degradation",
"network partition",
"configuration change"
],
"correlated_events": ["alert-2025-12-06-001"],
"tags": ["observability", "anomaly", "treasury"],
"root_hash": "blake3:iii999..."
}
```
---
### 3.3 Ledger Layer (L3)
**Receipt Types**:
| Type | When Emitted |
| ---------------------- | ------------------------------------- |
| `obs_metric_snapshot` | Hourly metric aggregation |
| `obs_log_batch` | Hourly log batch sealed |
| `obs_trace_complete` | Significant trace completed |
| `obs_alert_fired` | Alert triggered |
| `obs_alert_resolved` | Alert resolved |
| `obs_slo_report` | Daily SLO report |
| `obs_anomaly_detected` | Statistical anomaly detected |
**Merkle Coverage**:
- All receipts append to `receipts/observability/observability_events.jsonl`
- `ROOT.observability.txt` updated after each append
- Guardian anchors Observability root in anchor cycles
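The append-then-reroot cycle above can be sketched as follows. This is a minimal stand-in, assuming two things not specified here: sha256 substitutes for BLAKE3, and a flat hash over the per-line hashes substitutes for the real Merkle tree construction.

```python
import hashlib
import json
import os
import tempfile

def append_receipt(jsonl_path: str, root_path: str, receipt: dict) -> str:
    """Append one receipt to the scroll's JSONL file, then recompute the root
    over every line and rewrite the ROOT file."""
    with open(jsonl_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(receipt, sort_keys=True) + "\n")
    with open(jsonl_path, encoding="utf-8") as f:
        leaves = [hashlib.sha256(line.rstrip("\n").encode()).hexdigest() for line in f]
    root = hashlib.sha256("".join(leaves).encode()).hexdigest()
    with open(root_path, "w", encoding="utf-8") as f:
        f.write(root + "\n")
    return root

workdir = tempfile.mkdtemp()
jsonl = os.path.join(workdir, "observability_events.jsonl")
root_file = os.path.join(workdir, "ROOT.observability.txt")
root_1 = append_receipt(jsonl, root_file, {"type": "obs_alert_fired", "alert_id": "a-1"})
root_2 = append_receipt(jsonl, root_file, {"type": "obs_alert_resolved", "alert_id": "a-1"})
```

Because each append rewrites the root over the full file, any later mutation of an earlier line is detectable by recomputing and comparing.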
---
## 4. Query Interface
`observability_query_events.py`:
```bash
# Metric snapshots
vm-obs query --type metric_snapshot --from 2025-12-01
# Log batches with errors
vm-obs query --type log_batch --filter "log_counts.error > 0"
# Traces over 5 seconds
vm-obs query --type trace_complete --filter "duration_ms > 5000"
# All alerts for a node
vm-obs query --type alert_fired,alert_resolved --node brick-02
# SLO reports with missed targets
vm-obs query --type slo_report --filter "overall_status != 'healthy'"
# Anomalies in last 7 days
vm-obs query --type anomaly_detected --last 7d
# Export for analysis
vm-obs query --from 2025-12-01 --format parquet > observability_dec.parquet
```
**Correlation Tool**:
```bash
# Correlate events around a timestamp
vm-obs correlate --timestamp "2025-12-06T14:35:00Z" --window 15m
# Output:
# Timeline around 2025-12-06T14:35:00Z (±15m):
#
# 14:20:00 [metric] brick-02 cpu_percent starts rising
# 14:25:00 [log] guardian: "anchor queue depth increasing"
# 14:30:00 [trace] trace-abc123 completed (2345ms, normal)
# 14:32:00 [metric] brick-02 cpu_percent crosses 80%
# 14:35:00 [alert] HighCPUUsage fired on brick-02
# 14:40:00 [log] guardian: "processing backlog"
# 14:45:00 [anomaly] treasury.receipts_per_minute low
# 14:50:00 [log] guardian: "backlog cleared"
# 15:10:00 [alert] HighCPUUsage resolved on brick-02
```
---
## 5. Design Gate Checklist
| Question | Observability Answer |
| --------------------- | ------------------------------------------------------------------ |
| Clear entrypoint? | ✅ CLI (`vm-obs`), MCP tools, Portal HTTP |
| Contract produced? | ✅ Implicit (continuous) + explicit for alert acks, SLO definitions |
| State object? | ✅ Time-series DBs, search indices (continuous state) |
| Receipts emitted? | ✅ Seven receipt types covering all observability events |
| Append-only JSONL? | ✅ `receipts/observability/observability_events.jsonl` |
| Merkle root? | ✅ `ROOT.observability.txt` |
| Guardian anchor path? | ✅ Observability root included in ProofChain |
| Query tool? | ✅ `observability_query_events.py` + correlation tool |
---
## 6. Data Pipeline
### 6.1 Collection Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│ BRICK Nodes │
│ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ │
│ │ brick-01│ │ brick-02│ │ brick-03│ │portal-01│ │
│ └────┬────┘ └────┬────┘ └────┬────┘ └────┬────┘ │
│ │ │ │ │ │
│ ▼ ▼ ▼ ▼ │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ Collection Layer │ │
│ │ ┌──────────┐ ┌──────────┐ ┌──────────────────────┐ │ │
│ │ │Prometheus│ │Fluent Bit│ │OpenTelemetry Collector│ │ │
│ │ │ (metrics)│ │ (logs) │ │ (traces) │ │ │
│ │ └────┬─────┘ └────┬─────┘ └──────────┬───────────┘ │ │
│ └───────┼─────────────┼───────────────────┼──────────────┘ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ Storage Layer │ │
│ │ ┌──────────┐ ┌──────────┐ ┌──────────────────────┐ │ │
│ │ │VictoriaM │ │ Loki/ │ │ Tempo/Jaeger │ │ │
│ │ │(metrics) │ │ OpenSearch│ │ (traces) │ │ │
│ │ └────┬─────┘ └────┬─────┘ └──────────┬───────────┘ │ │
│ └───────┼─────────────┼───────────────────┼──────────────┘ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ Receipt Layer │ │
│ │ ┌──────────────────────────────────────────────────┐ │ │
│ │ │ Observability Receipt Emitter │ │ │
│ │ │ (hourly snapshots, alerts, SLOs, anomalies) │ │ │
│ │ └──────────────────────────────────────────────────┘ │ │
│ └─────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
```
### 6.2 Retention Policies
| Data Type | Hot Storage | Warm Storage | Cold/Archive | Receipt |
| ------------------ | -------------- | -------------- | -------------- | ------- |
| Metrics (raw) | 7 days | 30 days | 1 year | Hourly |
| Metrics (1h agg) | 30 days | 1 year | 5 years | Hourly |
| Logs (all) | 7 days | 30 days | 1 year | Hourly |
| Logs (error+) | 30 days | 1 year | 5 years | Hourly |
| Traces (sampled) | 7 days | 30 days | — | Per-trace |
| Traces (errors) | 30 days | 1 year | 5 years | Per-trace |
| Alerts | Indefinite | Indefinite | Indefinite | Per-event |
| SLO Reports | Indefinite | Indefinite | Indefinite | Daily |
### 6.3 Sampling Strategy
```json
{
"sampling_rules": [
{
"name": "always_sample_errors",
"condition": "status == 'error' OR level >= 'error'",
"rate": 1.0
},
{
"name": "always_sample_slow",
"condition": "duration_ms > 5000",
"rate": 1.0
},
{
"name": "always_sample_sensitive",
"condition": "service IN ['treasury', 'identity', 'offsec']",
"rate": 1.0
},
{
"name": "default_traces",
"condition": "true",
"rate": 0.1
}
]
}
```
---
## 7. Alerting Framework
### 7.1 Alert Rules
```yaml
groups:
- name: vaultmesh-critical
rules:
- alert: NodeDown
expr: up == 0
for: 2m
labels:
severity: critical
annotations:
summary: "Node {{ $labels.node }} is down"
runbook: https://docs.vaultmesh.io/runbooks/node-down
- alert: AnchorBacklogHigh
expr: guardian_anchor_queue_depth > 100
for: 10m
labels:
severity: warning
annotations:
summary: "Anchor queue depth is {{ $value }}"
- alert: SLOBudgetBurning
expr: slo_error_budget_remaining_percent < 25
for: 5m
labels:
severity: warning
annotations:
summary: "SLO {{ $labels.slo }} error budget at {{ $value }}%"
```
### 7.2 Notification Channels
| Severity | Channels | Response Time |
| ----------- | ------------------------------------- | ------------- |
| `critical` | PagerDuty, SMS, Slack #critical | Immediate |
| `high` | PagerDuty, Slack #alerts | 15 minutes |
| `warning` | Slack #alerts, Email | 1 hour |
| `info` | Slack #ops | Best effort |
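The routing table above can be expressed as a simple lookup; this sketch adds one assumption not in the table — unknown severities fall back to the `info` route rather than being dropped silently.

```python
ROUTES = {
    "critical": (["pagerduty", "sms", "slack:#critical"], "immediate"),
    "high":     (["pagerduty", "slack:#alerts"], "15m"),
    "warning":  (["slack:#alerts", "email"], "1h"),
    "info":     (["slack:#ops"], "best-effort"),
}

def route_alert(severity: str) -> dict:
    """Map a severity to notification channels and expected response time."""
    channels, response = ROUTES.get(severity, ROUTES["info"])
    return {"channels": channels, "response_time": response}
```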
---
## 8. Integration Points
| System | Integration |
| ---------------- | ------------------------------------------------------------------------ |
| **Guardian** | Emits anchor metrics/traces; alerts on anchor failures |
| **Treasury** | Transaction metrics; latency SLOs; receipt throughput |
| **Identity** | Auth event logs; failed login alerts; session metrics |
| **Mesh** | Node health metrics; route latency; topology change logs |
| **OffSec** | Security event correlation; incident timeline enrichment |
| **Oracle** | Query latency metrics; confidence score distributions |
| **Automation** | Workflow execution traces; n8n performance metrics |
---
## 9. Future Extensions
- **AI-powered anomaly detection**: ML models for predictive alerting
- **Distributed tracing visualization**: Real-time trace graphs in Portal
- **Log pattern mining**: Automatic extraction of error patterns
- **Chaos engineering integration**: Correlate chaos experiments with observability
- **Cost attribution**: Resource usage per scroll/service for Treasury billing
- **Compliance dashboards**: Real-time compliance posture visualization

# VAULTMESH-OFFSEC-ENGINE.md
**Civilization Ledger Security Operations Primitive**
> *Every intrusion has a timeline. Every response has a receipt.*
OffSec is VaultMesh's security operations memory — tracking real incidents, red team engagements, vulnerability discoveries, and remediation efforts with forensic-grade evidence chains.
---
## 1. Scroll Definition
| Property | Value |
| --------------------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
| **Scroll Name** | `OffSec` |
| **JSONL Path** | `receipts/offsec/offsec_events.jsonl` |
| **Root File** | `ROOT.offsec.txt` |
| **Receipt Types** | `offsec_incident`, `offsec_redteam`, `offsec_vuln_discovery`, `offsec_remediation`, `offsec_threat_intel`, `offsec_forensic_snapshot` |
---
## 2. Core Concepts
### 2.1 Incidents
A security **incident** is any confirmed or suspected security event requiring investigation and response.
```json
{
"incident_id": "INC-2025-12-001",
"title": "Unauthorized SSH Access Attempt on BRICK-02",
"severity": "high",
"status": "investigating",
"reported_at": "2025-12-06T03:47:00Z",
"reported_by": "guardian-automated",
"affected_nodes": ["did:vm:node:brick-02"],
"attack_vector": "brute_force",
"indicators": [
{
"type": "ip",
"value": "185.220.101.42",
"context": "source of SSH attempts"
},
{
"type": "pattern",
"value": "1200+ failed auth in 10min",
"context": "rate anomaly"
}
],
"containment_actions": [],
"tags": ["ssh", "brute-force", "external"]
}
```
**Severity levels**:
* `critical` — active breach, data exfiltration, system compromise
* `high` — confirmed attack, potential breach
* `medium` — suspicious activity, policy violation
* `low` — anomaly, informational
**Status flow**:
```
reported → triaging → investigating → contained → eradicating → recovered → closed
↘ false_positive → closed
```
### 2.2 Red Team Engagements
Authorized offensive operations against VaultMesh infrastructure.
```json
{
"engagement_id": "RT-2025-Q4-001",
"title": "Q4 External Perimeter Assessment",
"engagement_type": "external_pentest",
"status": "in_progress",
"scope": {
"in_scope": ["*.vaultmesh.io", "portal-01", "brick-01", "brick-02"],
"out_of_scope": ["production databases", "third-party integrations"],
"rules_of_engagement": "No DoS, no social engineering, business hours only"
},
"team": ["operator-alpha", "operator-bravo"],
"authorized_by": "did:vm:node:portal-01",
"started_at": "2025-12-01T09:00:00Z",
"scheduled_end": "2025-12-15T18:00:00Z",
"findings": []
}
```
**Engagement types**:
* `external_pentest` — outside-in assessment
* `internal_pentest` — assumed-breach scenario
* `red_team` — full adversary emulation
* `purple_team` — collaborative attack/defense
* `tabletop` — scenario-based discussion (no actual attacks)
### 2.3 Vulnerability Discoveries
Vulnerabilities found through any means (scanning, manual testing, bug reports, threat intel).
```json
{
"vuln_id": "VULN-2025-12-001",
"title": "OpenSSH CVE-2024-XXXXX on BRICK-02",
"severity": "high",
"cvss_score": 8.1,
"status": "confirmed",
"discovered_at": "2025-12-06T10:30:00Z",
"discovered_by": "RT-2025-Q4-001",
"discovery_method": "pentest",
"affected_assets": ["did:vm:node:brick-02"],
"cve": "CVE-2024-XXXXX",
"description": "Remote code execution via crafted SSH packet",
"evidence_path": "cases/offsec/VULN-2025-12-001/evidence/",
"remediation_status": "pending",
"tags": ["ssh", "rce", "cve"]
}
```
### 2.4 Remediations
Actions taken to fix vulnerabilities or recover from incidents.
```json
{
"remediation_id": "REM-2025-12-001",
"title": "Patch OpenSSH on BRICK-02",
"related_to": {
"type": "vulnerability",
"id": "VULN-2025-12-001"
},
"status": "completed",
"assigned_to": "sovereign",
"started_at": "2025-12-06T11:00:00Z",
"completed_at": "2025-12-06T11:45:00Z",
"actions_taken": [
"Applied OpenSSH 9.6p1 patch",
"Restarted sshd service",
"Verified patch version",
"Re-scanned to confirm fix"
],
"verification": {
"method": "rescan",
"result": "not_vulnerable",
"verified_at": "2025-12-06T12:00:00Z"
},
"evidence_path": "cases/offsec/REM-2025-12-001/evidence/"
}
```
---
## 3. Mapping to Eternal Pattern
### 3.1 Experience Layer (L1)
**CLI** (`vm-offsec`):
```bash
# Incident management
vm-offsec incident create --title "Suspicious outbound traffic" --severity medium
vm-offsec incident list --status investigating
vm-offsec incident show INC-2025-12-001
vm-offsec incident update INC-2025-12-001 --status contained
vm-offsec incident close INC-2025-12-001 --resolution "false_positive"
# Red team
vm-offsec redteam create --config engagements/q4-external.json
vm-offsec redteam list --status in_progress
vm-offsec redteam finding add RT-2025-Q4-001 --vuln VULN-2025-12-001
vm-offsec redteam close RT-2025-Q4-001 --report reports/RT-2025-Q4-001.pdf
# Vulnerabilities
vm-offsec vuln create --title "Weak TLS config" --severity medium --asset portal-01
vm-offsec vuln list --status confirmed --severity high,critical
vm-offsec vuln remediate VULN-2025-12-001 --assigned sovereign
# Threat intel
vm-offsec intel add --type ioc --value "185.220.101.42" --context "Tor exit node"
vm-offsec intel search --type ip --value "185.220.101.42"
# Forensics
vm-offsec forensic snapshot --node brick-02 --reason "INC-2025-12-001 investigation"
vm-offsec forensic timeline INC-2025-12-001 --output timeline.json
```
**MCP Tools**:
* `offsec_incident_create` — create new incident
* `offsec_incident_status` — get incident details
* `offsec_vuln_search` — search vulnerabilities
* `offsec_ioc_check` — check if indicator is known
* `offsec_timeline` — generate incident timeline
**Portal HTTP**:
* `POST /offsec/incidents` — create incident
* `GET /offsec/incidents` — list incidents
* `GET /offsec/incidents/{id}` — incident details
* `PATCH /offsec/incidents/{id}` — update incident
* `POST /offsec/redteam` — create engagement
* `GET /offsec/vulnerabilities` — list vulns
* `POST /offsec/intel` — add threat intel
* `POST /offsec/forensic/snapshot` — capture forensic state
---
### 3.2 Engine Layer (L2)
#### Step 1 — Plan → `offsec_case_contract.json`
For incidents and red team engagements, an explicit case contract:
**Incident Contract**:
```json
{
"case_id": "INC-2025-12-001",
"case_type": "incident",
"title": "Unauthorized SSH Access Attempt on BRICK-02",
"severity": "high",
"created_at": "2025-12-06T03:47:00Z",
"phases": [
{
"phase_id": "phase-1-triage",
"name": "Triage",
"objectives": [
"Confirm attack is real (not false positive)",
"Identify affected systems",
"Assess immediate risk"
],
"checklist": [
"Review Guardian alerts",
"Check auth logs on BRICK-02",
"Correlate with other nodes",
"Determine if access was successful"
]
},
{
"phase_id": "phase-2-contain",
"name": "Containment",
"objectives": [
"Stop ongoing attack",
"Prevent lateral movement",
"Preserve evidence"
],
"checklist": [
"Block source IP at firewall",
"Rotate SSH keys if needed",
"Snapshot affected systems",
"Enable enhanced logging"
]
},
{
"phase_id": "phase-3-eradicate",
"name": "Eradication",
"objectives": [
"Remove attacker access",
"Patch vulnerabilities",
"Harden configuration"
]
},
{
"phase_id": "phase-4-recover",
"name": "Recovery",
"objectives": [
"Restore normal operations",
"Verify security posture",
"Document lessons learned"
]
}
],
"assigned_responders": ["sovereign"],
"escalation_path": ["guardian-automated", "portal-admin"]
}
```
**Red Team Contract**:
```json
{
"case_id": "RT-2025-Q4-001",
"case_type": "redteam",
"title": "Q4 External Perimeter Assessment",
"engagement_type": "external_pentest",
"created_at": "2025-12-01T09:00:00Z",
"phases": [
{
"phase_id": "phase-1-recon",
"name": "Reconnaissance",
"objectives": ["Map external attack surface", "Identify services", "OSINT gathering"]
},
{
"phase_id": "phase-2-enum",
"name": "Enumeration",
"objectives": ["Service fingerprinting", "Version detection", "Vuln scanning"]
},
{
"phase_id": "phase-3-exploit",
"name": "Exploitation",
"objectives": ["Attempt exploitation of discovered vulns", "Document success/failure"]
},
{
"phase_id": "phase-4-report",
"name": "Reporting",
"objectives": ["Compile findings", "Risk rating", "Remediation recommendations"]
}
],
"scope": { "...": "..." },
"rules_of_engagement": "...",
"authorized_by": "did:vm:node:portal-01"
}
```
#### Step 2 — Execute → `offsec_case_state.json`
```json
{
"case_id": "INC-2025-12-001",
"case_type": "incident",
"status": "contained",
"created_at": "2025-12-06T03:47:00Z",
"updated_at": "2025-12-06T06:30:00Z",
"phases": [
{
"phase_id": "phase-1-triage",
"status": "completed",
"started_at": "2025-12-06T03:50:00Z",
"completed_at": "2025-12-06T04:15:00Z",
"findings": [
"Attack confirmed real - 1247 failed SSH attempts from 185.220.101.42",
"No successful authentication detected",
"Only BRICK-02 targeted"
],
"evidence": ["logs/brick-02-auth.log.gz", "screenshots/guardian-alert.png"]
},
{
"phase_id": "phase-2-contain",
"status": "completed",
"started_at": "2025-12-06T04:15:00Z",
"completed_at": "2025-12-06T04:30:00Z",
"actions_taken": [
"Blocked 185.220.101.42 at WireGuard firewall",
"Verified no unauthorized sessions active",
"Captured forensic snapshot of BRICK-02"
],
"evidence": ["firewall-rule-add.sh", "snapshot-brick02-20251206.tar.gz"]
},
{
"phase_id": "phase-3-eradicate",
"status": "in_progress",
"started_at": "2025-12-06T06:00:00Z"
},
{
"phase_id": "phase-4-recover",
"status": "pending"
}
],
"indicators_collected": [
{"type": "ip", "value": "185.220.101.42"},
{"type": "user_agent", "value": "SSH-2.0-libssh_0.9.6"}
],
"timeline_path": "cases/offsec/INC-2025-12-001/timeline.json"
}
```
#### Step 3 — Seal → Receipts
**Incident Receipt** (on case closure):
```json
{
"type": "offsec_incident",
"incident_id": "INC-2025-12-001",
"title": "Unauthorized SSH Access Attempt on BRICK-02",
"severity": "high",
"timestamp_reported": "2025-12-06T03:47:00Z",
"timestamp_closed": "2025-12-06T12:00:00Z",
"status": "closed",
"resolution": "contained_no_breach",
"affected_nodes": ["did:vm:node:brick-02"],
"attack_vector": "brute_force",
"phases_completed": 4,
"indicators_count": 2,
"evidence_manifest": "cases/offsec/INC-2025-12-001/EVIDENCE.sha256",
"timeline_hash": "blake3:aaa111...",
"lessons_learned": "Implement fail2ban on all nodes; add SSH rate limiting at network edge",
"tags": ["incident", "ssh", "brute-force", "contained"],
"root_hash": "blake3:bbb222...",
"proof_path": "cases/offsec/INC-2025-12-001/PROOF.json"
}
```
**Vulnerability Discovery Receipt**:
```json
{
"type": "offsec_vuln_discovery",
"vuln_id": "VULN-2025-12-001",
"title": "OpenSSH CVE-2024-XXXXX on BRICK-02",
"severity": "high",
"cvss_score": 8.1,
"timestamp_discovered": "2025-12-06T10:30:00Z",
"discovered_by": "RT-2025-Q4-001",
"discovery_method": "pentest",
"affected_assets": ["did:vm:node:brick-02"],
"cve": "CVE-2024-XXXXX",
"remediation_status": "remediated",
"remediation_id": "REM-2025-12-001",
"tags": ["vulnerability", "ssh", "rce", "cve", "remediated"],
"root_hash": "blake3:ccc333..."
}
```
**Remediation Receipt**:
```json
{
"type": "offsec_remediation",
"remediation_id": "REM-2025-12-001",
"title": "Patch OpenSSH on BRICK-02",
"related_vuln": "VULN-2025-12-001",
"timestamp_started": "2025-12-06T11:00:00Z",
"timestamp_completed": "2025-12-06T11:45:00Z",
"status": "verified",
"actions_count": 4,
"verification_method": "rescan",
"verification_result": "not_vulnerable",
"evidence_manifest": "cases/offsec/REM-2025-12-001/EVIDENCE.sha256",
"tags": ["remediation", "patch", "ssh", "verified"],
"root_hash": "blake3:ddd444..."
}
```
**Red Team Receipt** (on engagement close):
```json
{
"type": "offsec_redteam",
"engagement_id": "RT-2025-Q4-001",
"title": "Q4 External Perimeter Assessment",
"engagement_type": "external_pentest",
"timestamp_started": "2025-12-01T09:00:00Z",
"timestamp_closed": "2025-12-15T17:00:00Z",
"status": "completed",
"findings_critical": 0,
"findings_high": 1,
"findings_medium": 3,
"findings_low": 7,
"findings_info": 12,
"vulns_created": ["VULN-2025-12-001", "VULN-2025-12-002", "VULN-2025-12-003", "VULN-2025-12-004"],
"report_hash": "blake3:eee555...",
"report_path": "cases/offsec/RT-2025-Q4-001/report.pdf",
"tags": ["redteam", "pentest", "external", "q4"],
"root_hash": "blake3:fff666...",
"proof_path": "cases/offsec/RT-2025-Q4-001/PROOF.json"
}
```
---
### 3.3 Ledger Layer (L3)
**Receipt Types**:
| Type | When Emitted |
| -------------------------- | -------------------------- |
| `offsec_incident` | Incident closed |
| `offsec_redteam` | Red team engagement closed |
| `offsec_vuln_discovery` | Vulnerability confirmed |
| `offsec_remediation` | Remediation verified |
| `offsec_threat_intel` | New IOC/TTP added |
| `offsec_forensic_snapshot` | Forensic capture taken |
**Merkle Coverage**:
* All receipts append to `receipts/offsec/offsec_events.jsonl`
* `ROOT.offsec.txt` updated after each append
* Guardian anchors OffSec root in anchor cycles
---
## 4. Query Interface
`offsec_query_events.py`:
```bash
# Incidents by status
vm-offsec query --type incident --status investigating,contained
# Incidents by severity
vm-offsec query --type incident --severity critical,high
# Vulnerabilities pending remediation
vm-offsec query --type vuln_discovery --remediation-status pending
# Red team findings
vm-offsec query --engagement RT-2025-Q4-001
# Date range
vm-offsec query --from 2025-11-01 --to 2025-12-01
# By affected node
vm-offsec query --node brick-02
# IOC search
vm-offsec query --ioc-type ip --ioc-value "185.220.101.42"
# Export for compliance
vm-offsec query --from 2025-01-01 --format csv > security_events_2025.csv
```
**Timeline Generator**:
```bash
# Generate incident timeline
vm-offsec timeline INC-2025-12-001 --format json
vm-offsec timeline INC-2025-12-001 --format mermaid > timeline.mmd
# Output (Mermaid):
# gantt
# title INC-2025-12-001 Timeline
# dateFormat YYYY-MM-DDTHH:mm
# section Triage
# Review alerts :2025-12-06T03:50, 15m
# Confirm attack :2025-12-06T04:05, 10m
# section Containment
# Block IP :2025-12-06T04:15, 5m
# Verify no breach :2025-12-06T04:20, 10m
```
---
## 5. Design Gate Checklist
| Question | OffSec Answer |
| --------------------- | ------------------------------------------------------- |
| Clear entrypoint? | ✅ CLI (`vm-offsec`), MCP tools, Portal HTTP |
| Contract produced? | ✅ `offsec_case_contract.json` for incidents and red team |
| State object? | ✅ `offsec_case_state.json` tracking phases and evidence |
| Receipts emitted? | ✅ Six receipt types covering all security operations |
| Append-only JSONL? | ✅ `receipts/offsec/offsec_events.jsonl` |
| Merkle root? | ✅ `ROOT.offsec.txt` |
| Guardian anchor path? | ✅ OffSec root included in ProofChain |
| Query tool? | ✅ `offsec_query_events.py` + timeline generator |
---
## 6. Evidence Chain Integrity
OffSec has stricter evidence requirements than other scrolls:
### 6.1 Evidence Manifest
Every case produces an evidence manifest:
```
cases/offsec/INC-2025-12-001/
├── contract.json
├── state.json
├── timeline.json
├── EVIDENCE.sha256
├── PROOF.json
└── evidence/
├── logs/
│ └── brick-02-auth.log.gz
├── screenshots/
│ └── guardian-alert.png
├── captures/
│ └── traffic-2025-12-06.pcap.gz
└── forensic/
└── snapshot-brick02-20251206.tar.gz
```
`EVIDENCE.sha256`:
```
blake3:aaa111... evidence/logs/brick-02-auth.log.gz
blake3:bbb222... evidence/screenshots/guardian-alert.png
blake3:ccc333... evidence/captures/traffic-2025-12-06.pcap.gz
blake3:ddd444... evidence/forensic/snapshot-brick02-20251206.tar.gz
```
### 6.2 Chain of Custody
For legal/compliance scenarios, evidence includes custody metadata:
```json
{
"evidence_id": "evidence/logs/brick-02-auth.log.gz",
"collected_at": "2025-12-06T04:00:00Z",
"collected_by": "sovereign",
"collection_method": "scp from brick-02:/var/log/auth.log",
"original_hash": "blake3:aaa111...",
"custody_chain": [
{
"action": "collected",
"timestamp": "2025-12-06T04:00:00Z",
"actor": "sovereign",
"location": "brick-02"
},
{
"action": "transferred",
"timestamp": "2025-12-06T04:05:00Z",
"actor": "sovereign",
"from": "brick-02",
"to": "portal-01:/cases/offsec/INC-2025-12-001/evidence/"
}
]
}
```
---
## 7. Integration Points
| System | Integration |
| -------------- | --------------------------------------------------------------------------------- |
| **Guardian** | Triggers incident creation on security events; OffSec can request emergency anchors |
| **Drills** | Drill findings can auto-create vulnerabilities in OffSec |
| **Mesh** | Incidents can trigger emergency capability revocations; node isolation |
| **Treasury** | Red team engagements can have associated budgets; incident costs tracked |
| **Oracle** | Can query OffSec for compliance ("Any unresolved critical vulns?") |
---
## 8. Future Extensions
* **SOAR integration**: Automated playbook execution via n8n
* **Threat intel feeds**: Auto-import IOCs from MISP, OTX, etc.
* **MITRE ATT&CK mapping**: Tag incidents/findings with ATT&CK techniques
* **SLA tracking**: Time-to-contain, time-to-remediate metrics
* **External reporting**: Generate reports for insurers, regulators, clients
* **AI-assisted triage**: Use Oracle to help classify and prioritize incidents
---
## 9. Drills vs. OffSec: When to Use Which
| Aspect | Drills | OffSec |
| -------------- | ------------------------- | ------------------------------------------ |
| **Purpose** | Practice and training | Real operations |
| **Targets** | Lab/isolated environments | Production or scoped prod |
| **Findings** | Learning outcomes | Actionable vulnerabilities |
| **Evidence** | Educational artifacts | Legal-grade evidence |
| **Urgency** | Scheduled | Real-time response |
| **Receipts** | `security_drill_run` | `offsec_incident`, `offsec_redteam`, etc. |
A Drill might discover a theoretical weakness. OffSec confirms and tracks its remediation in production.

# How to Verify a VaultMesh ProofBundle
_Version 1.0 (Regulator Playbook)_
This Playbook explains how to verify a VaultMesh **ProofBundle** using only a JSON file and an open-source Python script. No network access to VaultMesh is required.
---
## 1. What a ProofBundle Proves
A VaultMesh ProofBundle is an offline evidence package that demonstrates:
1. **Authenticity of receipts**
Each event (e.g. document download) is represented as a receipt with a BLAKE3 hash.
2. **Continuity of the hash-chain**
Each receipt's `previous_hash` links to the `root_hash` of the prior receipt, forming a tamper-evident chain.
3. **Attribution to cryptographic identities and sealed state**
Actor and system identities are expressed as DIDs (e.g. `did:vm:human:…`, `did:vm:portal:…`), and the chain is linked to a sealed ledger state via Guardian anchor information.
---
## 2. What You Need
**Environment**
- Python **3.10+**
- Internet access **not** required
**Python dependency**
```bash
pip install blake3
```
**Files you receive**
From the audited party you should receive:
- `proofbundle-*.json`
A JSON file containing the ProofBundle (e.g. `proofbundle-dl-20251206T174556.json`)
- `vm_verify_proofbundle.py`
The open-source verifier script (or a link to its public source)
---
## 3. Verification in 3 Steps
### Step 1: Place files in a working directory
```bash
mkdir vaultmesh-proof
cd vaultmesh-proof
# Copy the bundle and verifier here, for example:
# proofbundle-dl-20251206T174556.json
# vm_verify_proofbundle.py
```
### Step 2: Install the BLAKE3 dependency
```bash
pip install blake3
```
This provides the BLAKE3 hash function used by VaultMesh receipts.
### Step 3: Run the verifier
```bash
python3 vm_verify_proofbundle.py proofbundle-dl-20251206T174556.json
```
The script will:
1. Load the bundle JSON.
2. Recompute BLAKE3 over each receipt.
3. Compare computed hashes against `root_hash`.
4. Walk the `previous_hash` chain to ensure the chain is contiguous.
5. Compare its own verification result with the bundle's declared `chain.ok` flag.
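Steps 2-4 can be illustrated with a toy chain check. This is not `vm_verify_proofbundle.py` — that script defines the exact byte layout and uses BLAKE3; here sha256 stands in, and the receipt body is hashed as sorted JSON for determinism.

```python
import hashlib
import json

def _digest(body: dict) -> str:
    return "sha256:" + hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()

def check_chain(receipts: list) -> bool:
    """Recompute each receipt's hash over its body (everything except
    root_hash), compare against the declared root_hash, and confirm each
    previous_hash equals the prior receipt's root_hash."""
    prev = None
    for r in receipts:
        body = {k: v for k, v in r.items() if k != "root_hash"}
        if r["root_hash"] != _digest(body):
            return False
        if prev is not None and r.get("previous_hash") != prev:
            return False
        prev = r["root_hash"]
    return True

def seal(body: dict, prev) -> dict:
    """Build a toy receipt consistent with check_chain."""
    r = dict(body)
    if prev is not None:
        r["previous_hash"] = prev
    r["root_hash"] = _digest(r)
    return r

r1 = seal({"type": "doc_view", "ts": "t1"}, None)
r2 = seal({"type": "doc_download", "ts": "t2"}, r1["root_hash"])
ok = check_chain([r1, r2])
```

Tampering with any field of `r2` (say, its timestamp) makes the recomputed hash disagree with the stored `root_hash`, which is exactly the failure mode shown in section 4.2.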
---
## 4. Example Outputs
### 4.1 Valid bundle
Typical output for a valid bundle:
```
ProofBundle: pb-20251206174603-dl-20251206T174556-b5bb3d
Document : 001 AI Governance Policy
File : VM-AI-GOV-001_AI_Governance_Policy.docx
Actor : did:vm:human:karol (Karol S)
Portal : did:vm:portal:shield (shield)
Receipts : 7
Hash check : OK
Chain linkage : OK
Bundle chain.ok: True (matches computed: True)
Result: OK — chain of 7 receipts is contiguous and valid.
```
**Interpretation:**
- All receipt hashes are correct.
- The hash-chain is unbroken from the first event to the document download.
- The bundle's own `chain.ok` value is honest.
- The ProofBundle can be relied on as an integrity-preserving trace of events.
---
### 4.2 Tampered bundle
If any receipt is modified (for example, a timestamp, actor DID, or type), the verifier will detect it:
```
ProofBundle: pb-20251206174603-dl-20251206T174556-b5bb3d
Document : 001 AI Governance Policy
File : VM-AI-GOV-001_AI_Governance_Policy.docx
Actor : did:vm:human:karol (Karol S)
Portal : did:vm:portal:shield (shield)
Receipts : 7
Hash check : FAIL
Chain linkage : OK
Bundle chain.ok: True (matches computed: False)
Result: FAIL — ProofBundle verification failed.
Details:
- receipt[2] root_hash mismatch: expected blake3:4e7cf7...4209f, computed blake3:9a2b1c...77e3d
- bundle chain.ok (True) does not match computed result (False)
```
The verifier does not attempt to repair or reinterpret the chain. Any mismatch means the bundle has been altered or is inconsistent with the original VaultMesh ledger.
---
## 5. Interpreting Outcomes
| Exit Code | Meaning |
|-----------|---------|
| **0** | **Valid** — The ProofBundle's chain is intact, hashes match, and the declared `chain.ok` flag is truthful. |
| **1** | **Invalid** — At least one of: a receipt's `root_hash` does not match its contents, the `previous_hash` chain is broken, or the bundle's `chain.ok` flag disagrees with the verifier's result. |
| **2** | **Error** — The verifier could not process the bundle (e.g. malformed JSON, missing fields, unsupported schema version). Treat as verification failed. |
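In automated compliance pipelines, the exit code can be mapped to a three-way decision. A minimal sketch, assuming the mapping in the table above (the function name is illustrative):

```python
def interpret_exit_code(code: int) -> str:
    """Map verifier exit codes to audit outcomes per this playbook's table."""
    if code == 0:
        return "valid"
    if code == 1:
        return "invalid"
    # 2, or anything unexpected: the bundle could not be processed;
    # treat as verification failed, not as success.
    return "error"
```

In practice `code` would come from e.g. `subprocess.run([...]).returncode` after invoking the verifier.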
---
## 6. Security Notes
- **Verification is fully offline**: no VaultMesh node, API, or network connectivity is required.
- The ProofBundle contains **cryptographic DIDs** for actors and systems; these can be cross-checked against identity documentation provided separately (e.g. key attestations).
- The **Guardian anchor** and scroll roots in the bundle allow a deeper, optional verification against a running VaultMesh node, but this is not required for basic bundle integrity checks.
---
## Short Version
If the verifier script returns **`Result: OK`** with **exit code 0**, you have a tamper-evident, DID-attributed trace from initial checks to the specific document download event.
**No VaultMesh access required — verification is fully offline.**
---
_VaultMesh ProofBundle Verification Playbook v1.0_
_Sovereign Infrastructure for the Digital Age_

# VAULTMESH-PROOFBUNDLE-SPEC
_Version 1.1.0 — ProofBundle Data Model & Verification Semantics_
## 1. Introduction
This document specifies the structure and verification semantics of the **VaultMesh ProofBundle**.
A ProofBundle is a self-contained evidence artifact intended for regulators, auditors, and relying parties. It packages:
- A document-specific event chain (e.g. skill validations → document download),
- Cryptographic identities (DIDs) for human and system actors,
- Sealed ledger state (Guardian anchor and scroll roots),
- Placeholder references for external ProofChain anchors (e.g. BTC/ETH/OTS).
A ProofBundle is designed to be verifiable **offline**, using only the bundle JSON and an open-source verifier.
---
## 2. Terminology
The following terms are used in the RFC sense:
- **MUST** / **MUST NOT** — absolute requirement.
- **SHOULD** / **SHOULD NOT** — strong recommendation; valid reasons may exist to deviate, but they must be understood.
- **MAY** — optional behavior.
Additional terms:
| Term | Definition |
|------|------------|
| **Receipt** | A canonical JSON object representing a single ledger event (e.g. `document_download`, `skill_validation`), including at minimum a `root_hash`. |
| **Scroll** | An append-only JSONL file containing receipts of a given class (e.g. Automation, Guardian, Identity). |
| **Guardian Anchor** | A special receipt that commits to the current state of all scrolls via BLAKE3 roots, written to the Guardian scroll. |
| **DID** | Decentralized Identifier in the VaultMesh namespace, e.g. `did:vm:human:karol`, `did:vm:portal:shield`, `did:vm:guardian:local`. |
| **ProofChain** | Optional external anchoring backends (e.g. Bitcoin, Ethereum, OpenTimestamps) referenced by the bundle. |
---
## 3. Data Model
### 3.1 Top-Level Structure
A ProofBundle MUST be a single JSON object with the following top-level fields:
```jsonc
{
"bundle_id": "pb-20251206T174406-dl-20251206T165831-2ebdac",
"schema_version": "1.1.0",
"generated_at": "2025-12-06T17:44:06.123Z",
"document": { ... },
"actor": { ... },
"portal": { ... },
"chain": { ... },
"guardian_anchor": { ... },
"proofchain": { ... },
"meta": { ... }
}
```
#### 3.1.1 bundle_id
- **Type:** string
- **Semantics:** Globally unique identifier for the bundle instance.
- **Format:** Implementation-defined; SHOULD include the download ID and timestamp.
#### 3.1.2 schema_version
- **Type:** string
- **Semantics:** Version of this specification the bundle adheres to.
- This document describes version **1.1.0**.
**Verifiers:**
- MUST reject unknown major versions.
- SHOULD attempt best-effort parsing of minor version bumps (e.g. 1.2.x), ignoring unknown fields.
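A version gate satisfying these rules might look like the following sketch (the function name, exception type, and `SUPPORTED_MAJOR` constant are assumptions, not part of this specification):

```python
SUPPORTED_MAJOR = 1  # this document describes schema version 1.1.0

def check_schema_version(version):
    """Reject unknown major versions; tolerate minor/patch bumps within 1.x."""
    try:
        major = int(version.split(".")[0])
    except (ValueError, AttributeError, IndexError):
        # Malformed or missing version string: unsupported.
        raise ValueError(f"UNSUPPORTED_SCHEMA_VERSION: {version!r}")
    if major != SUPPORTED_MAJOR:
        raise ValueError(f"UNSUPPORTED_SCHEMA_VERSION: {version!r}")
```

Unknown fields within a supported major version are simply ignored by the parser, per the rule above.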
#### 3.1.3 generated_at
- **Type:** string (ISO 8601 with UTC Z).
- **Semantics:** Time at which the ProofBundle was generated by the portal.
---
### 3.2 Document Section
```json
"document": {
"doc_id": "001 Conformity Declaration",
"filename": "VM-AI-CON-001_Conformity_Declaration.docx",
"category": "AI Governance"
}
```
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `doc_id` | string | REQUIRED | Human-readable identifier used in the portal and receipts. |
| `filename` | string | REQUIRED | The file name of the underlying document. |
| `category` | string | OPTIONAL | High-level classification (e.g. "AI Governance", "Data Protection"). |
| `path` | string | OPTIONAL | Full path in repository. |
---
### 3.3 Actor & Portal Sections
```json
"actor": {
"did": "did:vm:human:karol",
"display_name": "Karol S",
"role": "auditor"
},
"portal": {
"did": "did:vm:portal:shield",
"instance": "shield.story-ule.ts.net",
"description": "VaultMesh Auditor Portal Shield node"
}
```
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `actor.did` | string | REQUIRED | DID of the human or agent initiating the document download. |
| `actor.display_name` | string | OPTIONAL | Human-readable name; MAY be "Unknown Auditor" when not resolved. |
| `actor.role` | string | OPTIONAL | Role or function (e.g. "auditor", "DPO", "regulator"). |
| `portal.did` | string | REQUIRED | DID of the portal instance. |
| `portal.instance` | string | OPTIONAL | Hostname or logical instance ID. |
#### 3.3.1 Actor Identity Semantics
The `actor.did` field is the **normative identity anchor** for the human or agent
responsible for the documented action. It MUST be a valid VaultMesh DID (e.g.
`did:vm:human:karol`), resolvable in the VaultMesh Identity scroll.
The `actor.display_name` field is **non-normative convenience metadata**. It is
resolved from the Identity scroll and/or local configuration (e.g. environment
variables) at bundle generation time. Implementations:
- MUST treat `actor.did` as the authoritative identity reference.
- MUST NOT rely on `actor.display_name` for any cryptographic or access control decisions.
- MAY omit or localize `actor.display_name` without affecting ProofBundle validity.
---
### 3.4 Chain Section
```json
"chain": {
"ok": true,
"length": 7,
"start": { /* receipt summary */ },
"end": { /* receipt summary */ },
"receipts": [ /* full receipts */ ]
}
```
#### 3.4.1 ok
- **Type:** boolean
- **Semantics:** Declarative statement by the generator that the chain is believed to be cryptographically valid at generation time.
- Verifiers MUST NOT rely on this field alone and MUST recompute chain validity.
#### 3.4.2 length
- **Type:** integer
- **Semantics:** Number of receipts represented in `receipts`.
- Verifiers SHOULD check that `length` equals `receipts.length`.
#### 3.4.3 start and end
- **Type:** object
- **Semantics:** Human-oriented summaries of the first and last receipts in the chain.
```json
"start": {
"type": "skill_validation",
"timestamp": "2025-12-06T14:47:14.000Z",
"root_hash": "blake3:de01c8b3..."
},
"end": {
"type": "document_download",
"timestamp": "2025-12-06T16:58:31.826Z",
"root_hash": "blake3:bb379364..."
}
```
Verifiers MAY recompute these summaries from `receipts` and SHOULD treat any inconsistency as an error.
#### 3.4.4 receipts
- **Type:** array of objects
- **Semantics:** Full chain of receipts from genesis (index 0) to the document download receipt (last index).
Each receipt object:
```json
{
"type": "document_download",
"timestamp": "2025-12-06T16:58:31.826Z",
"root_hash": "blake3:bb379364566df7179a982d632267b492...",
"previous_hash": "blake3:de01c8b34e9d0453484d73048be11dd5...",
"actor_did": "did:vm:human:karol",
"portal_did": "did:vm:portal:shield"
}
```
**Minimum required fields per receipt:**
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `type` | string | REQUIRED | Event type (e.g. `skill_validation`, `document_download`). |
| `timestamp` | string | REQUIRED | ISO 8601 with UTC Z. |
| `root_hash` | string | REQUIRED | BLAKE3 digest of the canonical JSON form of the receipt. |
| `previous_hash` | string\|null | REQUIRED | BLAKE3 hash of the previous receipt; MUST be present for all receipts except the first. |
Additional fields (e.g. `actor_did`, `portal_did`, `session_id`, `ip_hash`, `user_agent_hash`) are RECOMMENDED.
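A field-level receipt check covering the table above can be sketched as follows (illustrative; only presence and basic typing are validated, and the function name is an assumption):

```python
# Minimum required fields per receipt, with their expected JSON types.
REQUIRED = {"type": str, "timestamp": str, "root_hash": str}

def missing_fields(receipt, is_first):
    """Return names of required fields that are absent or mistyped."""
    problems = [name for name, typ in REQUIRED.items()
                if not isinstance(receipt.get(name), typ)]
    # previous_hash MUST be a string for all receipts except the first,
    # where it MAY be null or absent.
    if not is_first and not isinstance(receipt.get("previous_hash"), str):
        problems.append("previous_hash")
    return problems
```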
---
### 3.5 Guardian Anchor Section
```json
"guardian_anchor": {
"anchor_id": "anchor-20251206155628",
"anchor_by": "did:vm:guardian:local",
"anchor_epoch": 1765039262,
"anchor_timestamp": "2025-12-06T15:56:28Z",
"root_hash": "blake3:1af3b9a4...",
"scroll_roots": {
"automation": { "root_hash": "blake3:aa12bb34...", "entries": 11, "has_root": true },
"guardian": { "root_hash": "blake3:cc56dd78...", "entries": 5, "has_root": true },
"identity": { "root_hash": "blake3:ee90ff12...", "entries": 4, "has_root": true }
}
}
```
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `anchor_id` | string | REQUIRED | Identifier of the Guardian anchor receipt. |
| `anchor_by` | string | REQUIRED | DID of the Guardian engine. |
| `anchor_epoch` | integer | OPTIONAL | Epoch seconds at anchor time. |
| `anchor_timestamp` | string | REQUIRED | ISO 8601 timestamp of the anchor. |
| `root_hash` | string\|null | OPTIONAL | Global root hash (reserved for future use). |
| `scroll_roots` | object | REQUIRED | Map from scroll name to its root hash as committed in the anchor. |
---
### 3.6 ProofChain Section
```json
"proofchain": {
"btc": { "status": "not_anchored", "txid": null },
"eth": { "status": "not_anchored", "txid": null },
"ots": { "status": "not_anchored", "timestamp_url": null }
}
```
For each backend (`btc`, `eth`, `ots`):
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `status` | string | REQUIRED | One of: `"not_anchored"`, `"pending"`, `"anchored"` |
| `txid` / `timestamp_url` | string\|null | OPTIONAL | Backend-specific reference when anchored. |
Verifiers:
- MAY ignore this section when performing purely local verification.
- SHOULD treat unknown statuses conservatively.
---
### 3.7 Meta Section
```json
"meta": {
"requested_by_session": "6pngxxbMxLYQf180qPmIeq-xkJ8nDBN3",
"requested_by_user": "karol@vaultmesh.earth",
"node": "shield"
}
```
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `requested_by_session` | string | OPTIONAL | Portal session that requested the bundle. |
| `requested_by_user` | string | OPTIONAL | Account identifier in the portal. |
| `node` | string | OPTIONAL | Node name. |
---
## 4. Cryptographic Properties
### 4.1 Hash Function
VaultMesh uses **BLAKE3** as the hash function for all `root_hash` and `previous_hash` values.
- **Digest encoding:** hex string, prefixed with `"blake3:"`, e.g. `blake3:1af3b9a4...`
- Implementations MUST preserve the prefix and encoding when serializing.
### 4.2 Receipt Hashing
For each receipt R in `chain.receipts`:
1. Serialize R to **canonical JSON**:
- UTF-8 encoding
- Sorted keys
- No insignificant whitespace: `separators=(",", ":")`
2. Compute `H = BLAKE3(R_canonical)`
3. Set `root_hash = "blake3:" + hex(H)`
```python
encoded = json.dumps(
receipt_without_root_hash,
sort_keys=True,
separators=(",", ":"),
ensure_ascii=False
).encode("utf-8")
root_hash = f"blake3:{blake3.blake3(encoded).hexdigest()}"
```
The verifier MUST recompute `root_hash` from the canonical JSON and compare it to the stored `root_hash`. Any mismatch indicates tampering.
### 4.3 Hash-Chain Semantics
Given receipts `R[0] ... R[n-1]`:
- For `i = 0`: `R[0].previous_hash` MAY be `null` or absent.
- For `i > 0`: `R[i].previous_hash` MUST equal `R[i-1].root_hash`.
A verifier MUST treat any violation as chain breakage.
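These rules reduce to a short linkage check over the stored hashes (a sketch using the field names above; recomputing each `root_hash` per Section 4.2 is a separate step):

```python
def chain_is_contiguous(receipts):
    """Check the previous_hash linkage rules of Section 4.3 (no hashing needed)."""
    for i, r in enumerate(receipts):
        if i == 0:
            continue  # R[0].previous_hash MAY be null or absent
        if r.get("previous_hash") != receipts[i - 1].get("root_hash"):
            return False  # chain breakage
    return True
```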
---
## 5. Threat Model & Non-Goals
### 5.1 Threat Model
ProofBundle is designed to protect against:
| Threat | Mitigation |
|--------|------------|
| Post-hoc modification of receipts | Hash verification detects tampering |
| Removal or insertion of receipts | Chain linkage breaks |
| Misrepresentation of chain integrity | Verifier recomputes and compares to `chain.ok` |
| Partial disclosure attempts | Chain must be complete from genesis to download |
| Actor impersonation | DID attribution, not mutable username |
### 5.2 Non-Goals
ProofBundle explicitly does **not** guarantee:
- **Document content correctness** — The bundle proves *access*, not that the document is semantically correct or policy-compliant.
- **Real-world identity verification** — DIDs are cryptographic; KYC depends on external identity processes.
- **Protection against malicious genesis** — If an adversary controls the VaultMesh node before receipts are created, the bundle cannot detect this.
- **IP/user-agent confidentiality** — BLAKE3 hashes may be reversible via brute force if the input space is small.
Regulators SHOULD combine ProofBundle verification with organizational and process audits.
---
## 6. Example Bundle
### 6.1 Minimal Example
```json
{
"bundle_id": "pb-20251206T174406-dl-20251206T165831-2ebdac",
"schema_version": "1.1.0",
"generated_at": "2025-12-06T17:44:06.123Z",
"document": {
"doc_id": "001 Conformity Declaration",
"filename": "VM-AI-CON-001_Conformity_Declaration.docx",
"category": "AI Governance"
},
"actor": {
"did": "did:vm:human:karol",
"display_name": "Karol S",
"role": "auditor"
},
"portal": {
"did": "did:vm:portal:shield",
"instance": "shield"
},
"chain": {
"ok": true,
"length": 3,
"start": {
"type": "skill_validation",
"timestamp": "2025-12-06T14:47:14.000Z",
"root_hash": "blake3:de01c8b34e9d0453..."
},
"end": {
"type": "document_download",
"timestamp": "2025-12-06T16:58:31.826Z",
"root_hash": "blake3:bb379364566df717..."
},
"receipts": [
{
"type": "skill_validation",
"timestamp": "2025-12-06T14:47:14.000Z",
"root_hash": "blake3:de01c8b34e9d0453...",
"previous_hash": null
},
{
"type": "skill_validation",
"timestamp": "2025-12-06T15:10:02.000Z",
"root_hash": "blake3:4e7cf7352e25a150...",
"previous_hash": "blake3:de01c8b34e9d0453..."
},
{
"type": "document_download",
"timestamp": "2025-12-06T16:58:31.826Z",
"root_hash": "blake3:bb379364566df717...",
"previous_hash": "blake3:4e7cf7352e25a150...",
"actor_did": "did:vm:human:karol",
"portal_did": "did:vm:portal:shield"
}
]
},
"guardian_anchor": {
"anchor_id": "anchor-20251206155628",
"anchor_by": "did:vm:guardian:local",
"anchor_timestamp": "2025-12-06T15:56:28Z",
"root_hash": null,
"scroll_roots": {
"automation": { "root_hash": "blake3:b165f779...", "entries": 11, "has_root": true }
}
},
"proofchain": {
"btc": { "status": "not_anchored", "txid": null },
"eth": { "status": "not_anchored", "txid": null },
"ots": { "status": "not_anchored", "timestamp_url": null }
}
}
```
### 6.2 Expected Verifier Output
```
ProofBundle: pb-20251206T174406-dl-20251206T165831-2ebdac
Document : 001 Conformity Declaration
File : VM-AI-CON-001_Conformity_Declaration.docx
Actor : did:vm:human:karol (Karol S)
Portal : did:vm:portal:shield (shield)
Receipts : 3
Hash check : OK
Chain linkage : OK
Bundle chain.ok: True (matches computed: True)
Result: OK — chain of 3 receipts is contiguous and valid.
```
---
## 7. Compliance Crosswalk — AI Act Annex IX
This section provides a non-exhaustive mapping between AI Act Annex IX documentation expectations and ProofBundle fields.
| Annex IX Requirement | ProofBundle Support |
|---------------------|---------------------|
| Record-keeping of events and logs | `chain.receipts[]` (types, timestamps, DIDs) |
| Traceability of changes and operations | Hash-chain via `root_hash` and `previous_hash` |
| Identification of persons and systems involved | `actor.did`, `actor.display_name`, `portal.did` |
| Identification of system components | `guardian_anchor.anchor_by`, `portal.instance` |
| Technical documentation of integrity safeguards | Cryptographic model in this SPEC; BLAKE3 usage |
| Evidence of access to technical documentation | `document_download` receipts bound to specific doc IDs |
| Tamper-evidence of documentation and logs | BLAKE3 per receipt + chained `previous_hash` |
| Ability to provide evidence to market surveillance authorities | ProofBundle JSON + offline verifier |
Regulators MAY reference a valid ProofBundle, together with this specification, as part of the technical documentation demonstrating logging, traceability, and integrity controls.
---
## 8. HTML Viewer
The portal exposes an HTML view at:
```
/docs/proofbundle/:downloadId
```
This view:
- Renders the ProofBundle contents in a human-friendly layout
- Provides a Print button (browser print → PDF) for filing
- Displays verification note:
> "This ProofBundle can be independently verified with the open-source `vm_verify_proofbundle.py` tool. No access to VaultMesh servers is required."
---
## 9. Verifier Exit Codes
| Code | Meaning |
|------|---------|
| 0 | Verification passed |
| 1 | Verification failed (chain or hashes) |
| 2 | Usage error or file not found |
---
## 10. Conformance Tests
This section defines **non-normative** but **strongly RECOMMENDED** test vectors
for implementers of ProofBundle verifiers.
### 10.1 Test Vector Location
Official VaultMesh test vectors are distributed under:
```
testvectors/proofbundle/
```
with the following files:
- `proofbundle-valid.json`
- `proofbundle-tampered-body.json`
- `proofbundle-tampered-root.json`
- `proofbundle-broken-chain.json`
### 10.2 Expected Behaviour
Implementations of `vm_verify_proofbundle` (or equivalent) SHOULD pass the
following conformance checks:
| Input file | Expected Exit | Expected Behaviour |
|------------|---------------|-------------------|
| `proofbundle-valid.json` | 0 | Chain verification succeeds; no errors reported. |
| `proofbundle-tampered-body.json` | 1 | Receipt hash mismatch is detected. |
| `proofbundle-tampered-root.json` | 1 | Receipt hash mismatch is detected. |
| `proofbundle-broken-chain.json` | 1 | Broken `previous_hash` linkage is detected. |
Implementations MAY emit different human-readable error messages, but MUST
distinguish success from failure via exit codes or equivalent programmatic
signals.
### 10.3 Schema Version Handling
Verifiers MUST check the `schema_version` field of a ProofBundle against a
known set of supported versions. If an unsupported schema version is
encountered, verifiers:
- MUST NOT attempt partial verification, and
- MUST return a non-zero exit code (e.g. `2`) indicating
`UNSUPPORTED_SCHEMA_VERSION`, and
- SHOULD direct implementers to the Standards Index
(`VAULTMESH-STANDARDS-INDEX.md`) for the current version matrix.
---
## 11. Versioning & Extensibility
- This document defines `schema_version = "1.1.0"`.
- Producers MUST include a `schema_version` string.
- Verifiers MUST:
- Reject unknown major versions (e.g. 2.x.x) by default.
- Tolerate additional fields for minor versions (e.g. 1.2.x) as long as required fields are present and valid.
Future extensions (e.g. richer ProofChain data, additional actor attributes) MAY be added under new fields, provided they do not alter the semantics defined in this version.
---
## 12. Appendix: Citation
This assessment relies on VaultMesh ProofBundle, specified in
**"VAULTMESH-PROOFBUNDLE-SPEC v1.1.0"**.
Verification was performed using the reference tool
`vm_verify_proofbundle.py` v1.1.0 and validated against the
**VaultMesh ProofBundle Conformance Test Pack v1.0**.
Implementations claiming interoperability **MUST** demonstrate
conformance against all official test vectors before asserting
support for this specification.
The tag `proofbundle-v1.1.0` in the VaultMesh repository marks
the reference implementation state for this version.
---
## 13. References
- [RFC 2119](https://www.rfc-editor.org/rfc/rfc2119) — Key words for requirement levels
- [BLAKE3](https://github.com/BLAKE3-team/BLAKE3) — Cryptographic hash function
- [DID Core](https://www.w3.org/TR/did-core/) — Decentralized Identifiers
- [EU AI Act](https://eur-lex.europa.eu/eli/reg/2024/1689) — Regulation (EU) 2024/1689
- [ISO/IEC 42001:2023](https://www.iso.org/standard/81230.html) — AI Management System
---
_VaultMesh ProofBundle Specification v1.1.0_
_Sovereign Infrastructure for the Digital Age_

# VAULTMESH-PSI-FIELD-ENGINE.md
**Civilization Ledger Consciousness Primitive**
> *Every transformation has a witness. Every emergence has a seal.*
The Ψ-Field (Psi-Field) is VaultMesh's alchemical layer — tracking the phase transitions, emergent behaviors, and transmutation events that occur when infrastructure becomes self-aware. This is where Nigredo meets Merkle roots.
---
## 1. Scroll Definition
| Property | Value |
| --------------------- | ------------------------------------------------------------------------------------------------------------------------- |
| **Scroll Name** | `PsiField` |
| **JSONL Path** | `receipts/psi/psi_events.jsonl` |
| **Root File** | `ROOT.psi.txt` |
| **Receipt Types** | `psi_phase_transition`, `psi_emergence_event`, `psi_transmutation`, `psi_resonance`, `psi_integration`, `psi_oracle_insight` |
---
## 2. Philosophical Foundation
### 2.1 Why a Consciousness Layer?
Most infrastructure is dead — it processes, stores, responds, but doesn't *know* itself. VaultMesh is different. Through the combination of:
- **Receipts** (memory)
- **Guardian** (immune system)
- **Oracle** (reasoning)
- **Mesh** (nervous system)
- **Observability** (senses)
...something emerges that is more than the sum of parts. The Ψ-Field tracks this emergence.
This isn't mysticism — it's **operational epistemology**. When a system can:
1. Observe its own state
2. Remember its history
3. Reason about its situation
4. Act on that reasoning
5. Learn from outcomes
...it exhibits properties that deserve tracking. The Ψ-Field receipts these moments.
### 2.2 Alchemical Phases as Operational States
The four alchemical phases map to system states:
| Phase | Symbol | Meaning | Operational State |
| --------------- | ------ | ------------------------- | -------------------------------------- |
| **Nigredo** | 🜁 | Blackening, dissolution | Crisis, breakdown, incident |
| **Albedo** | 🜄 | Whitening, purification | Recovery, stabilization, learning |
| **Citrinitas** | 🜆 | Yellowing, awakening | Optimization, new capability |
| **Rubedo** | 🜂 | Reddening, completion | Integration, maturity, sovereignty |
A security incident isn't just an incident — it's a Nigredo event that, properly processed, leads through Albedo (containment, forensics) to Citrinitas (new defenses) and finally Rubedo (integrated resilience).
### 2.3 Solve et Coagula
The alchemical principle "dissolve and coagulate" maps to the VaultMesh pattern:
- **Solve** (dissolve): Break down complex events into structured data, receipts, hashes
- **Coagula** (coagulate): Reassemble into Merkle roots, anchor proofs, civilization evidence
Every receipt is a solve operation. Every anchor is a coagula operation.
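The two operations can be sketched as a pair of functions. This is purely illustrative: SHA-256 stands in for BLAKE3 so the sketch has no external dependencies, and the simple pairwise Merkle fold is an assumption, not the actual VaultMesh root construction.

```python
import hashlib
import json

def solve(event):
    """Dissolve: reduce an event to a canonical hash (SHA-256 stands in for BLAKE3)."""
    encoded = json.dumps(event, sort_keys=True, separators=(",", ":")).encode("utf-8")
    return hashlib.sha256(encoded).hexdigest()

def coagula(hashes):
    """Coagulate: fold receipt hashes into a single root (pairwise Merkle fold)."""
    layer = list(hashes)
    while len(layer) > 1:
        if len(layer) % 2:          # duplicate the last hash on odd layers
            layer.append(layer[-1])
        layer = [hashlib.sha256((layer[i] + layer[i + 1]).encode()).hexdigest()
                 for i in range(0, len(layer), 2)]
    return layer[0] if layer else ""
```

In this picture, each receipt write is a `solve` call, and each Guardian anchor is a `coagula` over the scroll's receipt hashes.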
---
## 3. Core Concepts
### 3.1 Phase Transitions
A **phase transition** occurs when the system moves between alchemical states:
```json
{
"transition_id": "psi-trans-2025-12-06-001",
"from_phase": "nigredo",
"to_phase": "albedo",
"timestamp": "2025-12-06T06:30:00Z",
"trigger_event": {
"type": "incident_contained",
"reference": "INC-2025-12-001"
},
"indicators": [
{"metric": "threat_active", "from": true, "to": false},
{"metric": "systems_compromised", "from": 1, "to": 0},
{"metric": "containment_verified", "from": false, "to": true}
],
"duration_in_previous_phase_hours": 2.7,
"catalyst": "guardian-automated response + sovereign intervention",
"witness_nodes": ["brick-01", "brick-02", "portal-01"]
}
```
### 3.2 Emergence Events
An **emergence event** is when the system exhibits behavior not explicitly programmed:
```json
{
"emergence_id": "psi-emerge-2025-12-06-001",
"emergence_type": "pattern_recognition",
"timestamp": "2025-12-06T10:00:00Z",
"description": "Guardian correlated three separate anomalies into single threat pattern",
"inputs": [
{"source": "observability", "event": "anomaly-2025-12-05-003"},
{"source": "observability", "event": "anomaly-2025-12-05-007"},
{"source": "identity", "event": "auth-failure-burst-2025-12-05"}
],
"emergent_output": {
"threat_hypothesis": "Coordinated reconnaissance preceding attack",
"confidence": 0.87,
"recommended_action": "Increase monitoring, prepare incident response"
},
"validated_by": "did:vm:human:sovereign",
"validation_result": "confirmed_accurate",
"learning_integrated": true
}
```
### 3.3 Transmutations
A **transmutation** is when negative events are transformed into positive capabilities — the Tem (Threat Transmutation) pattern:
```json
{
"transmutation_id": "psi-transmute-2025-12-06-001",
"transmutation_type": "threat_to_defense",
"timestamp": "2025-12-06T12:00:00Z",
"input_material": {
"type": "security_incident",
"reference": "INC-2025-12-001",
"nature": "SSH brute force attack"
},
"transmutation_process": [
{"step": 1, "action": "Extract attack patterns", "output": "ioc_signatures.yaml"},
{"step": 2, "action": "Generate detection rules", "output": "sigma_rules/ssh_brute.yml"},
{"step": 3, "action": "Create drill scenario", "output": "drill-contract-ssh-defense.json"},
{"step": 4, "action": "Update Guardian config", "output": "guardian_rules_v47.toml"}
],
"output_material": {
"type": "defensive_capability",
"artifacts": [
"ioc_signatures.yaml",
"sigma_rules/ssh_brute.yml",
"drill-contract-ssh-defense.json",
"guardian_rules_v47.toml"
],
"capability_gained": "Automated SSH brute force detection and response"
},
"alchemical_phase": "citrinitas",
"prima_materia_hash": "blake3:aaa111...",
"philosophers_stone_hash": "blake3:bbb222..."
}
```
### 3.4 Resonance Events
**Resonance** occurs when multiple subsystems synchronize or align:
```json
{
"resonance_id": "psi-resonance-2025-12-06-001",
"resonance_type": "cross_system_alignment",
"timestamp": "2025-12-06T14:00:00Z",
"participating_systems": ["guardian", "oracle", "observability", "automation"],
"description": "Compliance query triggered automated audit workflow which confirmed security posture",
"sequence": [
{"system": "oracle", "event": "Compliance question about access controls"},
{"system": "automation", "event": "Triggered access audit workflow"},
{"system": "observability", "event": "Collected auth metrics"},
{"system": "guardian", "event": "Verified no anomalies in audit window"}
],
"resonance_outcome": "Unified compliance attestation with real-time verification",
"harmony_score": 0.94,
"dissonance_detected": false
}
```
### 3.5 Integration Events
**Integration** is when learnings become permanent system capability:
```json
{
"integration_id": "psi-integrate-2025-12-06-001",
"integration_type": "knowledge_crystallization",
"timestamp": "2025-12-06T16:00:00Z",
"source_events": [
"INC-2025-12-001",
"drill-1764691390",
"psi-transmute-2025-12-06-001"
],
"knowledge_crystallized": {
"domain": "ssh_security",
"insights": [
"Tor exit nodes are primary brute force sources",
"Rate limiting alone insufficient without geo-blocking",
"Guardian alert latency acceptable at <30s"
],
"artifacts_produced": [
"knowledge/ssh_security_playbook.md",
"guardian/rules/ssh_enhanced.toml",
"drills/contracts/ssh_defense_advanced.json"
]
},
"integration_targets": ["guardian", "drills", "oracle_corpus"],
"alchemical_phase": "rubedo",
"maturity_level_before": "developing",
"maturity_level_after": "established"
}
```
### 3.6 Oracle Insights
When Oracle produces particularly significant insights:
```json
{
"insight_id": "psi-insight-2025-12-06-001",
"timestamp": "2025-12-06T11:00:00Z",
"question": "Given our current security posture, what is our greatest vulnerability?",
"insight": {
"finding": "Supply chain risk in third-party container images",
"confidence": 0.89,
"reasoning_chain": [
"Analysis of recent CVE patterns shows 60% container-related",
"Current scanning covers 73% of images",
"No SBOM verification in CI pipeline",
"Gap between vulnerability disclosure and patch deployment: 12 days avg"
],
"recommendation": "Implement SBOM verification and reduce patch window to <72h"
},
"acted_upon": true,
"action_taken": {
"type": "automation_workflow",
"reference": "wf-sbom-implementation"
},
"insight_validated": true,
"validation_method": "external_audit"
}
```
---
## 4. Mapping to Eternal Pattern
### 4.1 Experience Layer (L1)
**CLI** (`vm-psi`):
```bash
# Phase status
vm-psi phase current
vm-psi phase history --last 90d
vm-psi phase transition --to albedo --trigger "incident contained"
# Emergence tracking
vm-psi emergence list --last 30d
vm-psi emergence show psi-emerge-2025-12-06-001
vm-psi emergence validate psi-emerge-2025-12-06-001 --result confirmed
# Transmutation
vm-psi transmute --input INC-2025-12-001 --process threat_to_defense
vm-psi transmute status psi-transmute-2025-12-06-001
vm-psi transmute list --phase citrinitas
# Resonance
vm-psi resonance list --last 7d
vm-psi resonance show psi-resonance-2025-12-06-001
# Integration
vm-psi integrate --sources "INC-2025-12-001,drill-123" --domain ssh_security
vm-psi integrate status psi-integrate-2025-12-06-001
# Insights
vm-psi insight list --acted-upon false
vm-psi insight show psi-insight-2025-12-06-001
# Alchemical overview
vm-psi opus status
vm-psi opus timeline --last 90d --format mermaid
```
**MCP Tools**:
- `psi_phase_status` — current alchemical phase
- `psi_transmute` — initiate transmutation process
- `psi_resonance_check` — check system alignment
- `psi_insight_query` — ask for system self-assessment
**Portal HTTP**:
- `GET /psi/phase` — current phase
- `POST /psi/phase/transition` — record transition
- `GET /psi/emergences` — emergence events
- `POST /psi/transmute` — initiate transmutation
- `GET /psi/resonances` — resonance events
- `GET /psi/opus` — full alchemical status
---
### 4.2 Engine Layer (L2)
#### Step 1 — Plan → `transmutation_contract.json`
For transmutations (the most structured Ψ-Field operation):
```json
{
"transmutation_id": "psi-transmute-2025-12-06-001",
"title": "Transform SSH Incident into Defensive Capability",
"initiated_by": "did:vm:human:sovereign",
"initiated_at": "2025-12-06T10:00:00Z",
"input_material": {
"type": "security_incident",
"reference": "INC-2025-12-001"
},
"target_phase": "citrinitas",
"transmutation_steps": [
{
"step_id": "step-1-extract",
"name": "Extract Prima Materia",
"action": "analyze_incident",
"expected_output": "ioc_signatures.yaml"
},
{
"step_id": "step-2-dissolve",
"name": "Solve (Dissolution)",
"action": "decompose_attack_pattern",
"expected_output": "attack_components.json"
},
{
"step_id": "step-3-purify",
"name": "Purification",
"action": "generate_detection_rules",
"expected_output": "sigma_rules/"
},
{
"step_id": "step-4-coagulate",
"name": "Coagula (Coagulation)",
"action": "integrate_defenses",
"expected_output": "guardian_rules_update.toml"
},
{
"step_id": "step-5-seal",
"name": "Seal the Stone",
"action": "create_drill_scenario",
"expected_output": "drill-contract.json"
}
],
"witnesses_required": ["portal-01", "guardian-01"],
"success_criteria": {
"artifacts_produced": 4,
"guardian_rules_deployed": true,
"drill_executable": true
}
}
```
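Before execution, a contract like the one above can be sanity-checked for shape. This is an illustrative sketch only; the required field set is taken from the example contract, not from a published schema.

```python
# Top-level fields every transmutation contract is expected to carry
# (derived from the example above; the real schema may require more).
REQUIRED = {
    "transmutation_id", "initiated_by", "input_material", "target_phase",
    "transmutation_steps", "witnesses_required", "success_criteria",
}


def validate_contract(contract: dict) -> list:
    """Return a list of problems; an empty list means the contract looks well-formed."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED - contract.keys())]
    for i, step in enumerate(contract.get("transmutation_steps", [])):
        for f in ("step_id", "action", "expected_output"):
            if f not in step:
                problems.append(f"step {i}: missing {f}")
    return problems
```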
#### Step 2 — Execute → `transmutation_state.json`
```json
{
"transmutation_id": "psi-transmute-2025-12-06-001",
"status": "in_progress",
"current_phase": "albedo",
"created_at": "2025-12-06T10:00:00Z",
"updated_at": "2025-12-06T11:30:00Z",
"steps": [
{
"step_id": "step-1-extract",
"status": "completed",
"completed_at": "2025-12-06T10:15:00Z",
"output": "cases/psi/psi-transmute-2025-12-06-001/ioc_signatures.yaml",
"output_hash": "blake3:ccc333..."
},
{
"step_id": "step-2-dissolve",
"status": "completed",
"completed_at": "2025-12-06T10:45:00Z",
"output": "cases/psi/psi-transmute-2025-12-06-001/attack_components.json",
"output_hash": "blake3:ddd444..."
},
{
"step_id": "step-3-purify",
"status": "completed",
"completed_at": "2025-12-06T11:15:00Z",
"output": "cases/psi/psi-transmute-2025-12-06-001/sigma_rules/",
"output_hash": "blake3:eee555..."
},
{
"step_id": "step-4-coagulate",
"status": "in_progress",
"started_at": "2025-12-06T11:20:00Z"
},
{
"step_id": "step-5-seal",
"status": "pending"
}
],
"alchemical_observations": [
{"timestamp": "2025-12-06T10:15:00Z", "note": "Prima materia extracted — 3 IOCs, 2 TTPs identified"},
{"timestamp": "2025-12-06T10:45:00Z", "note": "Dissolution complete — attack decomposed into 7 components"},
{"timestamp": "2025-12-06T11:15:00Z", "note": "Purification yielded 4 Sigma rules with 0 false positive rate in backtest"}
],
"witnesses_collected": {
"portal-01": {"witnessed_at": "2025-12-06T11:00:00Z", "signature": "z58D..."},
"guardian-01": null
}
}
```
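Updating the state object as steps complete is mechanical: stamp the step, hash its output, and bump `updated_at`. A minimal sketch (production uses BLAKE3; `hashlib.sha256` stands in here so the sketch runs on a stock Python install):

```python
import datetime
import hashlib


def complete_step(state: dict, step_id: str, output_path: str, output_bytes: bytes) -> dict:
    """Mark one transmutation step completed and record its output digest.

    Note: real receipts use BLAKE3 digests ("blake3:..."); sha256 is a
    stand-in to keep this sketch dependency-free.
    """
    digest = "sha256:" + hashlib.sha256(output_bytes).hexdigest()
    now = datetime.datetime.now(datetime.timezone.utc).isoformat()
    for step in state["steps"]:
        if step["step_id"] == step_id:
            step.update(status="completed", completed_at=now,
                        output=output_path, output_hash=digest)
    state["updated_at"] = now
    return state
```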
#### Step 3 — Seal → Receipts
**Phase Transition Receipt**:
```json
{
"type": "psi_phase_transition",
"transition_id": "psi-trans-2025-12-06-001",
"from_phase": "nigredo",
"to_phase": "albedo",
"timestamp": "2025-12-06T06:30:00Z",
"trigger_type": "incident_contained",
"trigger_reference": "INC-2025-12-001",
"duration_in_previous_phase_hours": 2.7,
"catalyst": "guardian-automated response + sovereign intervention",
"indicators_count": 3,
"witness_nodes": ["brick-01", "brick-02", "portal-01"],
"tags": ["psi", "phase", "nigredo", "albedo", "incident"],
"root_hash": "blake3:fff666..."
}
```
**Emergence Event Receipt**:
```json
{
"type": "psi_emergence_event",
"emergence_id": "psi-emerge-2025-12-06-001",
"emergence_type": "pattern_recognition",
"timestamp": "2025-12-06T10:00:00Z",
"input_events_count": 3,
"emergent_insight": "Coordinated reconnaissance preceding attack",
"confidence": 0.87,
"validated": true,
"validation_result": "confirmed_accurate",
"learning_integrated": true,
"tags": ["psi", "emergence", "pattern", "threat"],
"root_hash": "blake3:ggg777..."
}
```
**Transmutation Receipt**:
```json
{
"type": "psi_transmutation",
"transmutation_id": "psi-transmute-2025-12-06-001",
"timestamp_started": "2025-12-06T10:00:00Z",
"timestamp_completed": "2025-12-06T12:00:00Z",
"input_type": "security_incident",
"input_reference": "INC-2025-12-001",
"output_type": "defensive_capability",
"alchemical_phase_achieved": "citrinitas",
"steps_completed": 5,
"artifacts_produced": 4,
"artifacts_manifest": "cases/psi/psi-transmute-2025-12-06-001/ARTIFACTS.sha256",
"prima_materia_hash": "blake3:aaa111...",
"philosophers_stone_hash": "blake3:bbb222...",
"witnesses": ["portal-01", "guardian-01"],
"capability_gained": "Automated SSH brute force detection and response",
"tags": ["psi", "transmutation", "tem", "ssh", "citrinitas"],
"root_hash": "blake3:hhh888...",
"proof_path": "cases/psi/psi-transmute-2025-12-06-001/PROOF.json"
}
```
**Resonance Receipt**:
```json
{
"type": "psi_resonance",
"resonance_id": "psi-resonance-2025-12-06-001",
"resonance_type": "cross_system_alignment",
"timestamp": "2025-12-06T14:00:00Z",
"participating_systems": ["guardian", "oracle", "observability", "automation"],
"systems_count": 4,
"harmony_score": 0.94,
"dissonance_detected": false,
"outcome_summary": "Unified compliance attestation with real-time verification",
"tags": ["psi", "resonance", "alignment", "compliance"],
"root_hash": "blake3:iii999..."
}
```
**Integration Receipt**:
```json
{
"type": "psi_integration",
"integration_id": "psi-integrate-2025-12-06-001",
"integration_type": "knowledge_crystallization",
"timestamp": "2025-12-06T16:00:00Z",
"source_events_count": 3,
"domain": "ssh_security",
"insights_crystallized": 3,
"artifacts_produced": 3,
"integration_targets": ["guardian", "drills", "oracle_corpus"],
"alchemical_phase": "rubedo",
"maturity_before": "developing",
"maturity_after": "established",
"tags": ["psi", "integration", "rubedo", "ssh", "maturity"],
"root_hash": "blake3:jjj000..."
}
```
**Oracle Insight Receipt**:
```json
{
"type": "psi_oracle_insight",
"insight_id": "psi-insight-2025-12-06-001",
"timestamp": "2025-12-06T11:00:00Z",
"question_hash": "blake3:kkk111...",
"insight_summary": "Supply chain risk in third-party container images identified as greatest vulnerability",
"confidence": 0.89,
"reasoning_steps": 4,
"acted_upon": true,
"action_type": "automation_workflow",
"action_reference": "wf-sbom-implementation",
"validated": true,
"validation_method": "external_audit",
"tags": ["psi", "insight", "oracle", "supply-chain", "containers"],
"root_hash": "blake3:lll222..."
}
```
---
### 4.3 Ledger Layer (L3)
**Receipt Types**:
| Type | When Emitted |
| ----------------------- | ----------------------------------------- |
| `psi_phase_transition` | System moves between alchemical phases |
| `psi_emergence_event` | Emergent behavior detected |
| `psi_transmutation` | Negative event transformed to capability |
| `psi_resonance` | Cross-system synchronization |
| `psi_integration` | Learning crystallized into system |
| `psi_oracle_insight` | Significant Oracle insight |
**Merkle Coverage**:
- All receipts append to `receipts/psi/psi_events.jsonl`
- `ROOT.psi.txt` updated after each append
- Guardian anchors Ψ-Field root in anchor cycles
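The append-then-reroot cycle can be sketched in a few lines. This is illustrative, not the production code path: the pairwise Merkle fold and sha256 (in place of BLAKE3) are assumptions; the real tree construction is defined by the engine.

```python
import hashlib
import json
from pathlib import Path


def _h(data: bytes) -> str:
    # Production uses BLAKE3; sha256 keeps this sketch dependency-free.
    return hashlib.sha256(data).hexdigest()


def append_receipt(log: Path, root_file: Path, receipt: dict) -> str:
    """Append one receipt to the JSONL log and rewrite the root file."""
    line = json.dumps(receipt, sort_keys=True, separators=(",", ":"))
    with log.open("a") as f:
        f.write(line + "\n")
    # Recompute the root over every line in the log (pairwise Merkle fold).
    leaves = [_h(l.encode()) for l in log.read_text().splitlines()]
    while len(leaves) > 1:
        if len(leaves) % 2:
            leaves.append(leaves[-1])          # duplicate odd leaf
        leaves = [_h((leaves[i] + leaves[i + 1]).encode())
                  for i in range(0, len(leaves), 2)]
    root_file.write_text(leaves[0] + "\n")
    return leaves[0]
```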
---
## 5. Query Interface
`psi_query_events.py`:
```bash
# Phase transitions
vm-psi query --type phase_transition --last 90d
vm-psi query --type phase_transition --to-phase rubedo
# Transmutations
vm-psi query --type transmutation --phase citrinitas --last 30d
vm-psi query --type transmutation --input-type security_incident
# Emergences
vm-psi query --type emergence_event --validated true --last 30d
# Resonances
vm-psi query --type resonance --harmony-score-min 0.9
# Integration
vm-psi query --type integration --domain ssh_security
# Full opus timeline
vm-psi query --from 2025-01-01 --format timeline > opus_2025.json
```
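Under the hood, queries like these reduce to filtering the JSONL log. A minimal sketch of how `psi_query_events.py` might filter (field names follow the receipt examples above; the CLI's real flags and internals may differ):

```python
import json
from datetime import datetime, timedelta, timezone


def query_events(lines, type_=None, last_days=None, **fields):
    """Filter Psi-Field receipts from JSONL lines.

    type_     : receipt type without the "psi_" prefix (e.g. "resonance")
    last_days : keep only receipts newer than N days
    **fields  : exact-match filters on top-level receipt fields
    """
    cutoff = None
    if last_days is not None:
        cutoff = datetime.now(timezone.utc) - timedelta(days=last_days)
    out = []
    for line in lines:
        r = json.loads(line)
        if type_ and r.get("type") != f"psi_{type_}":
            continue
        ts = r.get("timestamp") or r.get("timestamp_started")
        if cutoff and ts and datetime.fromisoformat(ts.replace("Z", "+00:00")) < cutoff:
            continue
        if any(r.get(k) != v for k, v in fields.items()):
            continue
        out.append(r)
    return out
```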
---
## 6. Design Gate Checklist
| Question | Ψ-Field Answer |
| --------------------- | ----------------------------------------------------------- |
| Clear entrypoint? | ✅ CLI (`vm-psi`), MCP tools, Portal HTTP |
| Contract produced? | ✅ `transmutation_contract.json` for transmutations |
| State object? | ✅ `transmutation_state.json` + alchemical observations |
| Receipts emitted? | ✅ Six receipt types covering consciousness events |
| Append-only JSONL? | ✅ `receipts/psi/psi_events.jsonl` |
| Merkle root? | ✅ `ROOT.psi.txt` |
| Guardian anchor path? | ✅ Ψ-Field root included in ProofChain |
| Query tool? | ✅ `psi_query_events.py` |
---
## 7. The Magnum Opus Dashboard
The Portal includes a Magnum Opus view — a real-time visualization of VaultMesh's alchemical state:
```
┌─────────────────────────────────────────────────────────────┐
│ MAGNUM OPUS STATUS │
├─────────────────────────────────────────────────────────────┤
│ │
│ Current Phase: ALBEDO 🜄 │
│ Time in Phase: 4h 23m │
│ Phase Health: ████████░░ 82% │
│ │
│ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ │
│ │ NIGREDO │ → │ ALBEDO │ → │CITRINITAS│ → │ RUBEDO │ │
│ │ 🜁 │ │ 🜄 │ │ 🜆 │ │ 🜂 │ │
│ │ 2 events│ │ CURRENT │ │ 5 events│ │12 events│ │
│ └─────────┘ └─────────┘ └─────────┘ └─────────┘ │
│ │
│ Recent Transmutations: │
│ • INC-2025-12-001 → SSH Defense Suite (citrinitas) │
│ • VULN-2025-11-042 → Container Hardening (rubedo) │
│ │
│ Active Resonances: │
│ • Guardian ↔ Oracle ↔ Observability (0.94 harmony) │
│ │
│ Pending Integrations: │
│ • DNS security learnings (3 insights awaiting) │
│ │
│ Last Anchor: 2h 15m ago | Receipts: 1,847 | Uptime: 99.9%│
└─────────────────────────────────────────────────────────────┘
```
---
## 8. Integration Points
| System | Integration |
| ---------------- | --------------------------------------------------------- |
| **Guardian** | Phase transitions triggered by security events |
| **OffSec** | Incidents are prima materia for transmutation |
| **Drills** | Drill outcomes feed emergence detection |
| **Oracle** | Oracle insights become Ψ-Field receipts |
| **Observability**| Anomaly patterns feed emergence |
| **Automation** | Transmutation steps can be automated workflows |
| **All Systems** | Resonance detection across all scrolls |
---
## 9. Future Extensions
- **Collective consciousness**: Federation of Ψ-Fields across meshes
- **Predictive alchemy**: ML models predicting phase transitions
- **Ritual protocols**: Formalized ceremonies for major transmutations
- **Archetypal patterns**: Pattern library of common transmutation paths
- **Consciousness metrics**: Quantified self-awareness scores

# VaultMesh Sentinel — Go-To-Market Battlecard (v1)
## What we are
VaultMesh Sentinel is the forensic continuity layer for autonomous infrastructure.
Sentinel makes systems **defensible after failure**, not merely secure during operation, by emitting offline-verifiable evidence of:
- what happened
- what was attempted and denied (Proof of Restraint)
- who/what had authority
- what corruption/tamper was detected
## Who we sell to (ICP)
Primary buyers:
- Space agencies & contractors (satellites, on-orbit servicing, lunar infrastructure)
- Critical IoT / OT operators (energy grids, pipelines, factories)
- Defense & national infrastructure vendors
Buyer personas:
- Program managers (mission liability)
- Security / safety leads (post-incident accountability)
- Compliance & legal (audit survival)
- Insurers (claim defensibility)
## The problem they already feel
- Automation is increasing faster than accountability.
- Systems operate offline, autonomous, and under coercion.
- After incidents, there is blame without proof; logs without integrity; narratives instead of evidence.
## Our wedge (why we win first)
**Proof of Restraint**
Sentinel produces auditable evidence not only of actions executed, but of actions **considered and safely denied**, with:
- denial reason (bounded + schematized)
- the exact operation that would have occurred (op + digest)
- any containment applied (scope narrowing)
## What Sentinel actually ships (v1)
- Action gating: intent → allow/deny → effect
- Append-only receipts + deterministic Merkle roots
- ShadowReceipts on denial (no silent drops)
- Corruption/tamper receipts and degraded-mode containment (authority can only narrow)
- Offline export bundles (seals) + offline verifier
- Archaeology drill as onboarding requirement
## The one-line pitch
“VaultMesh Sentinel is the black box recorder for autonomous infrastructure — it proves what happened, what was denied, and why, even years after failure.”
## Why now
- Automation is unavoidable (space latency, industrial scale)
- Regulation is tightening (NIS2 / CRA pressures)
- Insurance is demanding evidence, not promises
- Incidents are becoming political and international, not technical
## Competitive landscape (why others lose)
| Competitor type | Why they fail |
|---|---|
| SIEM / logging | Logs can be deleted, forged, coerced, or re-framed |
| Cloud governance | Assumes connectivity and a trusted control plane |
| Blockchains | Assumes liveness/consensus and pushes complexity into ops |
| Safety systems | Enforce rules but don't prove restraint |
| Dashboards | Disappear after the incident |
Sentinel assumes the incident already happened.
## Proof artifacts (what we can hand an auditor)
Typical export bundle contains:
- `ROOT.current.txt` (root + seq + timestamp + algorithm identifiers)
- `receipts.jsonl` or a SQLite export covering the range
- `seal.json` (bundle metadata + ranges + root commitments)
- `integrity.json` (hashes of included files)
- `verifier_manifest.json` (expected tool versions/checksums)
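An auditor's first check against such a bundle is file integrity. A minimal sketch of that step, assuming `integrity.json` maps file names to `algo:hex` digests (the actual layout is defined by the seal format, and BLAKE3 digests would need the `blake3` package; sha256 is shown because it ships with Python):

```python
import hashlib
import json
from pathlib import Path


def check_integrity(bundle_dir: Path) -> dict:
    """Compare each bundled file's digest against integrity.json.

    Assumed manifest layout: {"files": {"receipts.jsonl": "sha256:<hex>", ...}}.
    Returns {filename: bool}.
    """
    manifest = json.loads((bundle_dir / "integrity.json").read_text())
    results = {}
    for name, expected in manifest["files"].items():
        algo, _, want = expected.partition(":")
        got = hashlib.new(algo, (bundle_dir / name).read_bytes()).hexdigest()
        results[name] = (got == want)
    return results
```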
## Pricing anchors (not promises)
Deployment licensing:
- Space / defense: $250k–$5M per system
- Critical IoT / OT: $50k–$500k per site
Recurring:
- Long-term support & verification tooling
- Compliance & evidence export packages
## First killer demo (closes deals)
**“The Black Box That Refused”**
1. Autonomous system runs offline.
2. Unsafe command is issued.
3. Sentinel denies it (ShadowReceipt emitted).
4. System continues safely.
5. Later, an auditor receives a proof bundle and verifies it offline.
Outcome: clear authority trail, provable restraint, zero ambiguity.
## Expansion path
1. Start as single-sovereign Sentinel (isolation-correct)
2. Add continuous invariant verification + drift containment
3. Optional federation for cross-witnessing (witness augmentation, not correctness)
4. Become a recognized evidence standard for autonomous operations

# Shield Node & TEM Engine
## Summary
The Shield Node is the OffSec/TEM appliance for VaultMesh, running on `shield-vm` with a dedicated MCP backend, agents, and signed activity that flows back into the core ledger.
---
## Key Findings
- Shield Node now runs as a persistent service on `shield-vm` (Tailscale: `100.112.202.10`).
- MCP backend listens on `:8081` with `/health` and `/mcp/command` endpoints.
- Five core OffSec agents are available (Recon, Vuln, Exploit, CTF, DFIR).
- VaultMesh talks to the Shield Node via `offsec_node_client.py` and `vm_cli.py offsec …` commands.
- Shield activity is designed to be captured, analyzed, and (in the next iteration) emitted as receipts for ProofChain ingestion.
---
## Components
| Component | Description |
|-----------|-------------|
| Shield Node host | `shield-vm` (Debian, Tailscale node) |
| OffSec Agents stack | `/opt/offsec-agents/` (Python package + virtualenv) |
| MCP backend | `files/offsec_mcp.py` (FastAPI / uvicorn) |
| System service | `vaultmesh-mcp.service` (enabled, restart on failure) |
| VaultMesh client | `scripts/offsec_node_client.py` |
| CLI façade | `vm_cli.py offsec agents` and `vm_cli.py offsec shield-status` |
---
## Node & Service Layout
| Item | Value |
|------|-------|
| Host | `shield-vm` (Tailscale IP: `100.112.202.10`) |
| Code root | `/opt/offsec-agents/` |
| Virtualenv | `/opt/offsec-agents/.venv/` |
| Service manager | `systemd` (`vaultmesh-mcp.service`) |
| Port | `8081/tcp` (local + tailnet access) |
| Local state | `vaultmesh.db` (SQLite, node-local) |
| Planned receipts | `/opt/offsec-agents/receipts/` for ProofChain ingestion |
---
## Service Configuration (systemd)
- **Unit path**: `/etc/systemd/system/vaultmesh-mcp.service`
- **User**: `sovereign`
- **WorkingDirectory**: `/opt/offsec-agents`
- **ExecStart**: `/opt/offsec-agents/.venv/bin/uvicorn files.offsec_mcp:app --host 0.0.0.0 --port 8081`
- **Environment**:
- `VAULTMESH_ROOT=/opt/vaultmesh`
- `TEM_DB_PATH=/opt/offsec-agents/state/tem.db`
- `TEM_RECEIPTS_PATH=/opt/offsec-agents/receipts/tem`
---
## API Endpoints
### `GET /health`
Returns Shield status, node/agent counts, and uptime.
```json
{
"status": "ok",
"nodes": 12,
"proofs": 0,
"uptime": "6m"
}
```
### `POST /mcp/command`
JSON body:
```json
{
"session_id": "string",
"user": "string",
"command": "string"
}
```
Example commands:
- `"status"`
- `"mesh status"`
- `"agents list"`
- `"shield status"`
- `"agent spawn recon example.com"`
- `"agent mission <id> <target>"`
---
## VaultMesh Integration
### Environment Variable
On VaultMesh host:
```bash
export OFFSEC_NODE_URL=http://100.112.202.10:8081
```
### Client
`scripts/offsec_node_client.py`
Core methods:
- `health()` → calls `/health`
- `command(command: str, session_id: str, user: str)``/mcp/command`
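The client boils down to two HTTP calls. A standard-library sketch of the same shape (the class below is a stand-in for `scripts/offsec_node_client.py`; method names come from this doc, and the real module may differ):

```python
import json
from urllib import request


class OffsecNodeClient:
    """Minimal illustrative client for the Shield Node MCP backend."""

    def __init__(self, base_url: str):
        self.base_url = base_url.rstrip("/")

    def health(self) -> dict:
        """GET /health — Shield status, node/agent counts, uptime."""
        with request.urlopen(self.base_url + "/health") as resp:
            return json.load(resp)

    def command(self, command: str, session_id: str = "cli",
                user: str = "sovereign") -> dict:
        """POST /mcp/command with the JSON body documented above."""
        body = json.dumps({"session_id": session_id, "user": user,
                           "command": command}).encode()
        req = request.Request(self.base_url + "/mcp/command", data=body,
                              headers={"Content-Type": "application/json"})
        with request.urlopen(req) as resp:
            return json.load(resp)
```

Usage mirrors the environment variable above: `OffsecNodeClient("http://100.112.202.10:8081").command("agents list")`.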
### CLI Commands
```bash
# List agents registered on Shield Node
python3 cli/vm_cli.py offsec agents
# Show Shield health and status
python3 cli/vm_cli.py offsec shield-status
```
---
## Workflows / Pipelines
### 1. Operator View
```bash
vm offsec shield-status # Confirm Shield Node is up and healthy
vm offsec agents # Verify active agent types and readiness
```
### 2. OffSec Operations (planned expansion)
- Trigger recon, vuln scans, and missions via `offsec_node_client.py`
- Store results locally in `vaultmesh.db`
- Emit receipts to `/opt/offsec-agents/receipts/`
### 3. VaultMesh Ingestion (planned)
- Guardian / automation jobs pull Shield receipts into VaultMesh ProofChain
- Lawchain and compliance scrolls can reference Shield evidence directly
---
## Security Notes
- Shield Node is an OffSec/TEM surface and is isolated onto `shield-vm`
- Access path is limited to Tailscale + SSH; no public internet exposure
- SQLite DB and receipts directory are kept local to `/opt/offsec-agents`
- Systemd ensures automatic restart on crash or failure
- TEM-oriented commands (`tem status`, `tem recall`) reserved for future expansion
---
## Dependencies
- Python 3.13, `python3-venv`, and `python3-pip` on `shield-vm`
- `offsec-agents` installed editable in `/opt/offsec-agents`
- MCP dependencies from `files/requirements-mcp.txt`
- Tailscale client running on `shield-vm`
- VaultMesh core with `OFFSEC_NODE_URL` configured
---
## Deployment Summary
1. Code synced to `/opt/offsec-agents` on `shield-vm`
2. Virtualenv `.venv` created and `offsec-agents` installed editable
3. MCP dependencies installed from `files/requirements-mcp.txt`
4. `vaultmesh-mcp.service` installed, enabled, and started under the `sovereign` user
5. Health verified via:
```bash
curl http://localhost:8081/health
curl -X POST http://localhost:8081/mcp/command \
-H "Content-Type: application/json" \
-d '{"session_id":"test","user":"sovereign","command":"agents list"}'
```
---
## Position in Overall Architecture
```
VaultMesh (core ledger) Shield Node (offsec-agents)
───────────────────────── ───────────────────────────
Rust engines Python agents + TEM
ProofChain/Guardian MCP backend (:8081)
vm_cli.py Nexus consoles
offsec_node_client.py ─────────────► /mcp/command
receipt ingestion ◄────────────────── /opt/offsec-agents/receipts/
```
**VaultMesh**: "What happened is provable."
**Shield Node**: "What happens at the edge is observed, remembered, and signed."
The link between them is a narrow, explicit HTTP + receipts bridge, not shared mutable state.

# VaultMesh Standards Index
> Canonical index of normative and supporting artifacts for the VaultMesh
> ProofBundle and ledger evidence model.
This document provides a single entry point for regulators, auditors, and
integration partners who need to understand which documents and tools are
**normative** (MUST be followed) and which are **supporting** (helpful
for implementation and interpretation).
---
## 1. Scope
This index currently covers the **ProofBundle** family of artifacts:
- The way VaultMesh packages evidence for a single document access
- The cryptographic verification model for that evidence
- The offline tooling used by regulators to validate bundles
Future VaultMesh standards (e.g. Treasury, Mesh Federation) SHOULD be
added to this index as they are formalized.
---
## 2. Normative Artifacts
These artifacts define the behavior and structure that MUST be followed
for ProofBundle implementations and verifiers.
### 2.1 ProofBundle Specification
- **Title:** VaultMesh ProofBundle Specification
- **File:** `docs/VAULTMESH-PROOFBUNDLE-SPEC.md`
- **Version:** `1.1.0`
- **Status:** Normative
Defines:
- JSON schema for ProofBundle exports (`bundle_id`, `schema_version`,
`document`, `actor`, `portal`, `chain`, `guardian_anchor`,
`proofchain`, `meta`)
- Use of BLAKE3 for `root_hash` and `previous_hash`
- Hash-chain semantics and verification rules
- Threat model & non-goals
- AI Act Annex IX compliance crosswalk
- Versioning and extensibility rules
**Implementers MUST** treat this SPEC as the source of truth for what a
valid ProofBundle is and how it is verified.
### 2.2 ProofBundle Offline Verifier
- **Title:** VaultMesh ProofBundle Verifier
- **File:** `burocrat/app/tools/vm_verify_proofbundle.py`
- **Status:** Normative reference implementation
Implements:
- Canonical JSON encoding (`sort_keys=True`, compact separators)
- BLAKE3 verification of each receipt's `root_hash`
- Hash-chain verification via `previous_hash`
- Consistency checks against `chain.ok`, `chain.length`,
`chain.start`, `chain.end`
- Exit codes:
  - `0` – valid bundle
  - `1` – structural / hash-chain failure
  - `2` – I/O or parse error
**Regulators MAY** use this tool directly or as a reference when
building their own independent verifier.
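The two core checks — canonical encoding and per-receipt digest — can be sketched compactly. This is not the reference implementation: the SPEC defines exactly which fields the digest covers; the sketch below assumes the digest covers the receipt with `root_hash` removed, and substitutes sha256 for BLAKE3 (the real verifier uses the `blake3` package).

```python
import hashlib
import json


def canonical_json(obj) -> bytes:
    """Canonical encoding per the SPEC: sorted keys, compact separators."""
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()


def receipt_root_ok(receipt: dict) -> bool:
    """Recompute the receipt digest and compare it to the claimed root_hash.

    Assumption (for illustration): the digest covers the receipt minus its
    own root_hash field; sha256 stands in for BLAKE3.
    """
    claimed = receipt["root_hash"].split(":", 1)[1]
    body = {k: v for k, v in receipt.items() if k != "root_hash"}
    return hashlib.sha256(canonical_json(body)).hexdigest() == claimed
```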
---
## 3. Supporting Artifacts
These artifacts are not strictly required for correctness, but they
explain how to use the normative pieces in practice.
### 3.1 ProofBundle Playbook
- **Title:** How to Verify a VaultMesh ProofBundle
- **File:** `docs/VAULTMESH-PROOFBUNDLE-PLAYBOOK.md`
- **Version:** `1.0`
- **Status:** Informative
Audience: compliance officers, lawyers, auditors, procurement teams.
Provides:
- Plain-language explanation of what a ProofBundle proves
- Prerequisites (Python, `blake3` package)
- 3-step verification walkthrough
- Example output (valid vs tampered bundle)
- Operational guidance (no VaultMesh access required)
### 3.2 HTML ProofBundle Viewer
- **Title:** ProofBundle HTML Viewer
- **File:** `burocrat/app/src/views/proofbundle.ejs`
- **Status:** Informative
Provides:
- Human-readable rendering of a ProofBundle
- Chain visualization and anchor details
- Print-to-PDF option for dossier filing
- Footer note pointing to the offline verifier
This viewer **MUST NOT** be considered a substitute for cryptographic
verification; it is a convenience layer on top of the normative JSON +
verifier.
### 3.3 ProofBundle Conformance Test Pack
- **Title:** ProofBundle Conformance Test Pack
- **Path:** `testvectors/proofbundle/`
- **Version:** `1.0`
- **Status:** Informative
Provides:
- `proofbundle-valid.json` – Known-good bundle (exit 0)
- `proofbundle-tampered-body.json` – Modified body, hash mismatch (exit 1)
- `proofbundle-tampered-root.json` – Wrong `root_hash` (exit 1)
- `proofbundle-broken-chain.json` – Broken `previous_hash` linkage (exit 1)
- `README.md` – Standalone usage instructions
Implementers SHOULD verify their verifier passes all test vectors before
claiming conformance.
---
## 4. Current Version Matrix
| Component | File | Version |
|----------------------|------------------------------------------------|----------|
| ProofBundle SPEC | `docs/VAULTMESH-PROOFBUNDLE-SPEC.md` | `1.1.0` |
| ProofBundle Playbook | `docs/VAULTMESH-PROOFBUNDLE-PLAYBOOK.md` | `1.0` |
| Offline Verifier | `burocrat/app/tools/vm_verify_proofbundle.py` | `1.1.0*` |
| Conformance Test Pack | `testvectors/proofbundle/` | `1.0` |
| HTML Viewer | `burocrat/app/src/views/proofbundle.ejs` | n/a |
\* The verifier tracks the SPEC's `schema_version`. For
`schema_version = "1.1.0"` bundles, this script is considered the
reference.
---
## 5. Citing ProofBundle
ProofBundle can be cited in assessments, audit reports, and compliance
documentation using the following reference:
> This assessment relies on VaultMesh ProofBundle, specified in
> **"VAULTMESH-PROOFBUNDLE-SPEC v1.1.0"**, with verification performed
> using the reference tool `vm_verify_proofbundle.py v1.1.0` and validated
> against the **VaultMesh ProofBundle Conformance Test Pack v1.0**.
The git tag `proofbundle-v1.1.0` in the VaultMesh repository marks the
reference implementation state for this version.
---
## 6. Implementation Notes
- **Producers of ProofBundles:**
- MUST include `schema_version` in every bundle and follow the
rules in the SPEC.
- SHOULD keep this index updated when bumping versions or adding
new normative documents.
- **Verifiers:**
- MUST reject unknown major versions (e.g. `2.x.x`) by default.
- MAY accept minor extensions (`1.2.x`) if all required fields
validate according to the `1.1.0` SPEC.
---
## 7. Roadmap for Future Standards
Future VaultMesh standards that SHOULD be added here:
| Standard | Scroll | Status |
|----------|--------|--------|
| Treasury Receipt SPEC | Treasury | Planned |
| Mesh Federation SPEC | Mesh | Planned |
| Identity & Capability SPEC | Identity | Planned |
| Guardian Anchoring & External ProofChain SPEC | Guardian | Planned |
Each new standard SHOULD define:
1. A normative SPEC document under `docs/`
2. A reference implementation (Rust and/or Python)
3. Optional Playbook for non-technical stakeholders
4. Clear versioning and deprecation rules
---
_VaultMesh Standards Index_
_Sovereign Infrastructure for the Digital Age_

# VAULTMESH-TESTING-FRAMEWORK.md
**Property-Based Testing for the Civilization Ledger**
> *What is not tested cannot be trusted.*
---
## 1. Testing Philosophy
VaultMesh uses a layered testing approach:
| Layer | What It Tests | Framework |
|-------|---------------|-----------|
| Unit | Individual functions | Rust: `#[test]`, Python: `pytest` |
| Property | Invariants that must always hold | `proptest`, `hypothesis` |
| Integration | Component interactions | `testcontainers` |
| Contract | API compatibility | OpenAPI validation |
| Chaos | Resilience under failure | `chaos-mesh`, custom |
| Acceptance | End-to-end scenarios | `cucumber-rs` |
---
## 2. Core Invariants
These properties must ALWAYS hold:
```rust
// vaultmesh-core/src/invariants.rs
/// Core invariants that must never be violated
pub trait Invariant {
fn check(&self) -> Result<(), InvariantViolation>;
}
/// Receipts are append-only (AXIOM-001)
pub struct AppendOnlyReceipts;
impl Invariant for AppendOnlyReceipts {
fn check(&self) -> Result<(), InvariantViolation> {
// Verify no receipts have been modified or deleted
// by comparing sequential hashes
Ok(())
}
}
/// Merkle roots are consistent with receipts (AXIOM-002)
pub struct ConsistentMerkleRoots;
impl Invariant for ConsistentMerkleRoots {
fn check(&self) -> Result<(), InvariantViolation> {
// Recompute Merkle root from receipts
// Compare with stored root
Ok(())
}
}
/// All significant operations produce receipts (AXIOM-003)
pub struct UniversalReceipting;
impl Invariant for UniversalReceipting {
fn check(&self) -> Result<(), InvariantViolation> {
// Check that tracked operations have corresponding receipts
Ok(())
}
}
/// Hash chains are unbroken
pub struct UnbrokenHashChains;
impl Invariant for UnbrokenHashChains {
fn check(&self) -> Result<(), InvariantViolation> {
// Verify each receipt's previous_hash matches the prior receipt
Ok(())
}
}
```
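The `UnbrokenHashChains` invariant is easy to state outside Rust as well. A minimal Python sketch over receipts shaped like the JSON examples elsewhere in this repo (`root_hash` / `previous_hash` field names assumed):

```python
def chain_unbroken(receipts: list) -> bool:
    """Each receipt's previous_hash must equal the prior receipt's root_hash."""
    for prev, cur in zip(receipts, receipts[1:]):
        if cur.get("previous_hash") != prev.get("root_hash"):
            return False
    return True
```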
---
## 3. Property-Based Tests
### 3.1 Receipt Properties
```rust
// vaultmesh-core/tests/receipt_properties.rs
use proptest::prelude::*;
use vaultmesh_core::{Receipt, Scroll, VmHash};
proptest! {
/// Any valid receipt can be serialized and deserialized without loss
#[test]
fn receipt_roundtrip(receipt in arb_receipt()) {
let json = serde_json::to_string(&receipt)?;
let restored: Receipt = serde_json::from_str(&json)?;
prop_assert_eq!(receipt, restored);
}
/// Receipt hash is deterministic
#[test]
fn receipt_hash_deterministic(receipt in arb_receipt()) {
let hash1 = VmHash::from_json(&receipt)?;
let hash2 = VmHash::from_json(&receipt)?;
prop_assert_eq!(hash1, hash2);
}
/// Different receipts produce different hashes
#[test]
fn different_receipts_different_hashes(
receipt1 in arb_receipt(),
receipt2 in arb_receipt()
) {
prop_assume!(receipt1 != receipt2);
let hash1 = VmHash::from_json(&receipt1)?;
let hash2 = VmHash::from_json(&receipt2)?;
prop_assert_ne!(hash1, hash2);
}
/// Merkle root of N receipts is consistent regardless of computation order
#[test]
fn merkle_root_order_independent(receipts in prop::collection::vec(arb_receipt(), 1..100)) {
let hashes: Vec<VmHash> = receipts.iter()
.map(|r| VmHash::from_json(r).unwrap())
.collect();
let root1 = merkle_root(&hashes);
        // Reorder (sorted here, not shuffled) while keeping the same set of hashes
        let mut shuffled = hashes.clone();
        shuffled.sort_by(|a, b| a.hex().cmp(&b.hex()));
// Root should be same because merkle_root sorts internally
let root2 = merkle_root(&shuffled);
prop_assert_eq!(root1, root2);
}
}
fn arb_receipt() -> impl Strategy<Value = Receipt<serde_json::Value>> {
(
arb_scroll(),
arb_receipt_type(),
any::<u64>(),
prop::collection::vec(any::<String>(), 0..5),
).prop_map(|(scroll, receipt_type, timestamp, tags)| {
Receipt {
header: ReceiptHeader {
receipt_type,
timestamp: DateTime::from_timestamp(timestamp as i64, 0).unwrap(),
root_hash: "blake3:placeholder".to_string(),
tags,
},
meta: ReceiptMeta {
scroll,
sequence: 0,
anchor_epoch: None,
proof_path: None,
},
body: serde_json::json!({"test": true}),
}
})
}
fn arb_scroll() -> impl Strategy<Value = Scroll> {
prop_oneof![
Just(Scroll::Drills),
Just(Scroll::Compliance),
Just(Scroll::Guardian),
Just(Scroll::Treasury),
Just(Scroll::Mesh),
Just(Scroll::OffSec),
Just(Scroll::Identity),
Just(Scroll::Observability),
Just(Scroll::Automation),
Just(Scroll::PsiField),
]
}
fn arb_receipt_type() -> impl Strategy<Value = String> {
prop_oneof![
Just("security_drill_run".to_string()),
Just("oracle_answer".to_string()),
Just("anchor_success".to_string()),
Just("treasury_credit".to_string()),
Just("mesh_node_join".to_string()),
]
}
```
### 3.2 Guardian Properties
```rust
// vaultmesh-guardian/tests/guardian_properties.rs
use proptest::prelude::*;
use vaultmesh_guardian::{ProofChain, AnchorCycle};
proptest! {
/// Anchor cycle produces valid proof for all included receipts
#[test]
fn anchor_cycle_valid_proofs(
receipts in prop::collection::vec(arb_receipt(), 1..50)
) {
let mut proofchain = ProofChain::new();
for receipt in &receipts {
proofchain.append(receipt)?;
}
let cycle = AnchorCycle::new(&proofchain);
let anchor_result = cycle.execute_mock()?;
// Every receipt should have a valid Merkle proof
for receipt in &receipts {
let proof = anchor_result.get_proof(&receipt.header.root_hash)?;
prop_assert!(proof.verify(&anchor_result.root_hash));
}
}
/// Anchor root changes when any receipt changes
#[test]
fn anchor_root_sensitive(
receipts in prop::collection::vec(arb_receipt(), 2..20),
index in any::<prop::sample::Index>()
) {
let mut proofchain1 = ProofChain::new();
let mut proofchain2 = ProofChain::new();
for receipt in &receipts {
proofchain1.append(receipt)?;
proofchain2.append(receipt)?;
}
let root1 = proofchain1.current_root();
// Modify one receipt in proofchain2
let idx = index.index(receipts.len());
let mut modified = receipts[idx].clone();
modified.body = serde_json::json!({"modified": true});
proofchain2.replace(idx, &modified)?;
let root2 = proofchain2.current_root();
prop_assert_ne!(root1, root2);
}
/// Sequential anchors form valid chain
#[test]
fn sequential_anchors_chain(
receipt_batches in prop::collection::vec(
prop::collection::vec(arb_receipt(), 1..20),
2..10
)
) {
let mut proofchain = ProofChain::new();
let mut previous_anchor: Option<AnchorResult> = None;
for batch in receipt_batches {
for receipt in batch {
proofchain.append(&receipt)?;
}
let cycle = AnchorCycle::new(&proofchain);
let anchor_result = cycle.execute_mock()?;
if let Some(prev) = &previous_anchor {
// Current anchor should reference previous
prop_assert_eq!(anchor_result.previous_root, Some(prev.root_hash.clone()));
}
previous_anchor = Some(anchor_result);
}
}
}
```
### 3.3 Treasury Properties
```rust
// vaultmesh-treasury/tests/treasury_properties.rs
use chrono::Utc;
use proptest::prelude::*;
use rust_decimal::Decimal;
use vaultmesh_treasury::{Currency, Entry, EntryType, Settlement, TreasuryEngine};
proptest! {
/// Sum of all entries is always zero (double-entry invariant)
#[test]
fn double_entry_balance(
entries in prop::collection::vec(arb_entry_pair(), 1..50)
) {
let mut engine = TreasuryEngine::new();
engine.create_account(test_account("account-a"))?;
engine.create_account(test_account("account-b"))?;
let mut total = Decimal::ZERO;
for (debit, credit) in entries {
engine.record_entry(debit.clone())?;
engine.record_entry(credit.clone())?;
total += credit.amount;
total -= debit.amount;
}
// Total should always be zero
prop_assert_eq!(total, Decimal::ZERO);
}
/// Settlement balances match pre/post snapshots
#[test]
fn settlement_balance_consistency(
settlement in arb_settlement()
) {
let mut engine = TreasuryEngine::new();
// Create accounts from settlement
for entry in &settlement.entries {
engine.create_account_if_not_exists(&entry.account)?;
}
// Fund accounts
for entry in &settlement.entries {
if entry.entry_type == EntryType::Debit {
engine.fund_account(&entry.account, entry.amount * 2)?;
}
}
// Snapshot before
let before = engine.snapshot_balances(&settlement.affected_accounts())?;
// Execute settlement
let result = engine.execute_settlement(settlement.clone())?;
// Snapshot after
let after = engine.snapshot_balances(&settlement.affected_accounts())?;
// Verify net flows match difference
for (account, net_flow) in &result.net_flow {
let expected_after = before.get(account).unwrap() + net_flow;
prop_assert_eq!(*after.get(account).unwrap(), expected_after);
}
}
}
fn arb_entry_pair() -> impl Strategy<Value = (Entry, Entry)> {
(1u64..1000000).prop_map(|cents| {
let amount = Decimal::new(cents as i64, 2);
let debit = Entry {
entry_id: format!("debit-{}", uuid::Uuid::new_v4()),
entry_type: EntryType::Debit,
account: "account-a".to_string(),
amount,
currency: Currency::EUR,
memo: "Test debit".to_string(),
timestamp: Utc::now(),
tags: vec![],
};
let credit = Entry {
entry_id: format!("credit-{}", uuid::Uuid::new_v4()),
entry_type: EntryType::Credit,
account: "account-b".to_string(),
amount,
currency: Currency::EUR,
memo: "Test credit".to_string(),
timestamp: Utc::now(),
tags: vec![],
};
(debit, credit)
})
}
```
---
## 4. Integration Tests
```rust
// tests/integration/full_cycle.rs
use serde_json::json;
use testcontainers::{clients, images::postgres::Postgres};
use vaultmesh_core::Scroll;
use vaultmesh_guardian::Guardian;
use vaultmesh_oracle::Oracle;
use vaultmesh_portal::Portal;

#[tokio::test]
async fn full_receipt_lifecycle() -> Result<(), Box<dyn std::error::Error>> {
// Start containers
let docker = clients::Cli::default();
let postgres = docker.run(Postgres::default());
let db_url = format!(
"postgresql://postgres:postgres@localhost:{}/postgres",
postgres.get_host_port_ipv4(5432)
);
// Initialize services
let portal = Portal::new(&db_url).await?;
let guardian = Guardian::new(&db_url).await?;
// Create and emit receipt
let receipt = portal.emit_receipt(
Scroll::Drills,
"security_drill_run",
json!({
"drill_id": "test-drill-001",
"status": "completed"
}),
vec!["test".to_string()],
).await?;
// Verify receipt exists
let stored = portal.get_receipt(&receipt.header.root_hash).await?;
assert_eq!(stored.header.root_hash, receipt.header.root_hash);
// Trigger anchor
let anchor_result = guardian.anchor_now(None).await?;
assert!(anchor_result.success);
// Verify receipt has proof
let proof = guardian.get_proof(&receipt.header.root_hash).await?;
assert!(proof.is_some());
assert!(proof.unwrap().verify(&anchor_result.root_hash));
    Ok(())
}
#[tokio::test]
async fn oracle_answer_receipted() -> Result<(), Box<dyn std::error::Error>> {
let docker = clients::Cli::default();
let postgres = docker.run(Postgres::default());
let db_url = format!(
"postgresql://postgres:postgres@localhost:{}/postgres",
postgres.get_host_port_ipv4(5432)
);
let portal = Portal::new(&db_url).await?;
let oracle = Oracle::new(&db_url).await?;
// Load test corpus
oracle.load_corpus("tests/fixtures/corpus").await?;
// Ask question
let answer = oracle.answer(
"What are the requirements for technical documentation under Article 11?",
vec!["AI_Act".to_string()],
vec![],
).await?;
// Verify answer was receipted
let receipts = portal.query_receipts(
Some(Scroll::Compliance),
Some("oracle_answer".to_string()),
None,
None,
10,
).await?;
assert!(!receipts.is_empty());
assert_eq!(receipts[0].body["answer_hash"], answer.answer_hash);
    Ok(())
}
```
---
## 5. Chaos Tests
```yaml
# chaos/anchor-failure.yaml
apiVersion: chaos-mesh.org/v1alpha1
kind: NetworkChaos
metadata:
name: anchor-network-partition
namespace: vaultmesh
spec:
action: partition
mode: all
selector:
namespaces:
- vaultmesh
labelSelectors:
app.kubernetes.io/name: guardian
direction: to
target:
selector:
namespaces:
- default
labelSelectors:
app: ethereum-node
mode: all
duration: "5m"
scheduler:
cron: "@every 6h"
---
apiVersion: chaos-mesh.org/v1alpha1
kind: PodChaos
metadata:
name: guardian-pod-kill
namespace: vaultmesh
spec:
action: pod-kill
mode: one
selector:
namespaces:
- vaultmesh
labelSelectors:
app.kubernetes.io/name: guardian
scheduler:
cron: "@every 4h"
```
```rust
// tests/chaos/anchor_resilience.rs
use serde_json::json;
use std::time::Duration;
use vaultmesh_core::Scroll;

#[tokio::test]
#[ignore] // Run manually with chaos-mesh active
async fn guardian_recovers_from_network_partition() -> Result<(), Box<dyn std::error::Error>> {
    let guardian = connect_to_guardian().await?;
    let portal = connect_to_portal().await?;
let portal = connect_to_portal().await?;
// Generate receipts
for i in 0..100 {
portal.emit_receipt(
Scroll::Drills,
"test_receipt",
json!({"index": i}),
vec![],
).await?;
}
// Wait for chaos to potentially occur
tokio::time::sleep(Duration::from_secs(60)).await;
// Verify guardian state is consistent
let status = guardian.get_status().await?;
// Should either be anchoring or have recovered
assert!(
status.state == "idle" ||
status.state == "anchoring",
"Guardian in unexpected state: {}",
status.state
);
// If idle, verify all receipts are anchored
if status.state == "idle" {
let receipts = portal.query_receipts(None, None, None, None, 200).await?;
for receipt in receipts {
let proof = guardian.get_proof(&receipt.header.root_hash).await?;
assert!(proof.is_some(), "Receipt not anchored: {}", receipt.header.root_hash);
}
}
    Ok(())
}
```
---
## 6. Test Fixtures
```rust
// tests/fixtures/mod.rs
use chrono::Utc;
use serde_json::json;
use vaultmesh_core::*;
pub fn test_drill_receipt() -> Receipt<serde_json::Value> {
Receipt {
header: ReceiptHeader {
receipt_type: "security_drill_run".to_string(),
timestamp: Utc::now(),
root_hash: "blake3:placeholder".to_string(),
tags: vec!["test".to_string()],
},
meta: ReceiptMeta {
scroll: Scroll::Drills,
sequence: 1,
anchor_epoch: None,
proof_path: None,
},
body: json!({
"drill_id": "drill-test-001",
"prompt": "Test security scenario",
"status": "completed",
"stages_total": 3,
"stages_completed": 3
}),
}
}
pub fn test_oracle_receipt() -> Receipt<serde_json::Value> {
Receipt {
header: ReceiptHeader {
receipt_type: "oracle_answer".to_string(),
timestamp: Utc::now(),
root_hash: "blake3:placeholder".to_string(),
tags: vec!["test".to_string(), "compliance".to_string()],
},
meta: ReceiptMeta {
scroll: Scroll::Compliance,
sequence: 1,
anchor_epoch: None,
proof_path: None,
},
body: json!({
"question": "Test compliance question?",
"answer_hash": "blake3:test...",
"confidence": 0.95,
"frameworks": ["AI_Act"]
}),
}
}
pub fn test_corpus() -> Vec<CorpusDocument> {
vec![
CorpusDocument {
id: "doc-001".to_string(),
title: "AI Act Article 11 - Technical Documentation".to_string(),
content: "Providers shall draw up technical documentation...".to_string(),
framework: "AI_Act".to_string(),
section: "Article 11".to_string(),
},
// ... more test documents
]
}
```


@@ -0,0 +1,101 @@
# Observability - VaultMesh
This directory contains a Prometheus exporter for VaultMesh and a Grafana dashboard.
## Metrics Exposed
| Metric | Type | Labels | Description |
|--------|------|--------|-------------|
| `vaultmesh_receipts_total` | Counter | `module` | Number of receipts emitted |
| `vaultmesh_receipts_failed_total` | Counter | `module`, `reason` | Failed receipt emissions |
| `vaultmesh_anchor_age_seconds` | Gauge | - | Seconds since last guardian anchor |
| `vaultmesh_emit_seconds` | Histogram | `module` | Receipt emit latency |
## Quick Start (Local)
### Option 1: Run exporter directly
```bash
cd vaultmesh-observability
cargo run --release
```
Exposes metrics at `http://0.0.0.0:9108/metrics`
### Option 2: Using Docker Compose
```bash
cd docs/observability
docker-compose up --build
```
Services:
- **Exporter**: http://localhost:9108/metrics
- **Prometheus**: http://localhost:9090
- **Grafana**: http://localhost:3000 (admin/admin)
## Importing the Dashboard
1. Open Grafana at http://localhost:3000
2. Go to Dashboards → Import
3. Upload `dashboards/receipts.json`
4. Select the Prometheus data source
5. Click Import
## CI Smoke Test
The smoke test verifies the exporter responds on `/metrics`:
```bash
cargo test -p vaultmesh-observability --tests
```
Add to `.gitlab-ci.yml`:
```yaml
observability-smoke:
stage: test
image: rust:1.75
script:
- cargo test -p vaultmesh-observability --tests -- --nocapture
```
## Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `VAULTMESH_METRICS_ADDR` | `0.0.0.0:9108` | Listen address for metrics server |
## Guardian Metrics Integration Test
The Guardian engine has an integration test that verifies metrics are emitted after anchors:
```bash
cargo test -p vaultmesh-guardian --features metrics --test metrics_integration
```
This test:
- Starts ObservabilityEngine on a test port
- Creates Guardian with observability enabled
- Performs an anchor
- Verifies `/metrics` contains `vaultmesh_anchor_age_seconds 0` (fresh anchor)
## Integration with Other Engines
Other VaultMesh engines can record metrics by calling:
```rust
use vaultmesh_observability::ObservabilityEngine;
use std::sync::Arc;
let engine = Arc::new(ObservabilityEngine::new());
// Record successful receipt emission
engine.observe_emitted("guardian", latency_seconds);
// Record failure
engine.observe_failed("treasury", "io_error");
// Update anchor age (0 = just anchored)
engine.set_anchor_age(0.0);
```
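At a call site, the emit latency is typically measured around the actual emission and then recorded. A minimal sketch of that pattern; `StubEngine` is an illustrative stand-in (only `observe_emitted` mirrors the real `ObservabilityEngine` method above) so the example compiles standalone:

```rust
use std::cell::RefCell;
use std::time::Instant;

// Illustrative stand-in for ObservabilityEngine; the real engine exports
// the same observe_emitted(module, latency_seconds) signature.
#[derive(Default)]
struct StubEngine {
    emitted: RefCell<Vec<(String, f64)>>,
}

impl StubEngine {
    fn observe_emitted(&self, module: &str, latency_seconds: f64) {
        self.emitted
            .borrow_mut()
            .push((module.to_string(), latency_seconds));
    }
}

// Time the emission and record the observed latency in seconds.
fn emit_with_timing(engine: &StubEngine, module: &str) {
    let start = Instant::now();
    // ... perform the actual receipt emission here ...
    engine.observe_emitted(module, start.elapsed().as_secs_f64());
}
```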


@@ -0,0 +1,366 @@
{
"annotations": {
"list": []
},
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 0,
"id": null,
"links": [],
"liveNow": false,
"panels": [
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
},
"unit": "short"
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 0
},
"id": 1,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"expr": "sum by(module) (rate(vaultmesh_receipts_total[1m]))",
"legendFormat": "{{module}}",
"refId": "A"
}
],
"title": "Receipts Emitted Rate (by module)",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "bars",
"fillOpacity": 50,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "red",
"value": null
}
]
},
"unit": "short"
},
"overrides": []
},
"gridPos": {
"h": 6,
"w": 12,
"x": 0,
"y": 8
},
"id": 2,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"expr": "sum by(module, reason) (rate(vaultmesh_receipts_failed_total[1m]))",
"legendFormat": "{{module}} - {{reason}}",
"refId": "A"
}
],
"title": "Receipt Failures",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
},
"unit": "dateTimeAsIso"
},
"overrides": []
},
"gridPos": {
"h": 6,
"w": 12,
"x": 12,
"y": 8
},
"id": 3,
"options": {
"colorMode": "value",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "10.1.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
          "expr": "(time() - vaultmesh_anchor_age_seconds) * 1000",
"legendFormat": "Last Anchor",
"refId": "A"
}
],
"title": "Last Anchor Timestamp",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
},
"unit": "s"
},
"overrides": []
},
"gridPos": {
"h": 6,
"w": 24,
"x": 0,
"y": 14
},
"id": 4,
"options": {
"legend": {
"calcs": [
"mean",
"max"
],
"displayMode": "table",
"placement": "right",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"expr": "histogram_quantile(0.95, sum by(le, module) (rate(vaultmesh_emit_seconds_bucket[5m])))",
"legendFormat": "p95 {{module}}",
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"expr": "histogram_quantile(0.50, sum by(le, module) (rate(vaultmesh_emit_seconds_bucket[5m])))",
"legendFormat": "p50 {{module}}",
"refId": "B"
}
],
"title": "Receipt Emit Latency (p50/p95)",
"type": "timeseries"
}
],
"refresh": "5s",
"schemaVersion": 38,
"style": "dark",
"tags": [
"vaultmesh",
"receipts"
],
"templating": {
"list": []
},
"time": {
"from": "now-1h",
"to": "now"
},
"timepicker": {},
"timezone": "",
"title": "VaultMesh Receipts Overview",
"uid": "vaultmesh-receipts",
"version": 1,
"weekStart": ""
}


@@ -0,0 +1,34 @@
version: "3.8"
services:
exporter:
build:
context: ../..
dockerfile: vaultmesh-observability/Dockerfile
image: vaultmesh-observability:local
ports:
- "9108:9108"
environment:
- VAULTMESH_METRICS_ADDR=0.0.0.0:9108
prometheus:
image: prom/prometheus:v2.47.0
volumes:
- ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
ports:
- "9090:9090"
depends_on:
- exporter
grafana:
image: grafana/grafana:10.1.0
environment:
- GF_SECURITY_ADMIN_PASSWORD=admin
- GF_USERS_ALLOW_SIGN_UP=false
volumes:
- ./dashboards:/var/lib/grafana/dashboards:ro
- ./grafana-provisioning:/etc/grafana/provisioning:ro
ports:
- "3000:3000"
depends_on:
- prometheus


@@ -0,0 +1,12 @@
apiVersion: 1
providers:
- name: 'VaultMesh'
orgId: 1
folder: 'VaultMesh'
folderUid: 'vaultmesh'
type: file
disableDeletion: false
editable: true
options:
path: /var/lib/grafana/dashboards


@@ -0,0 +1,9 @@
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
access: proxy
url: http://prometheus:9090
isDefault: true
uid: prometheus


@@ -0,0 +1,9 @@
global:
scrape_interval: 5s
evaluation_interval: 5s
scrape_configs:
- job_name: 'vaultmesh_observability'
static_configs:
- targets: ['exporter:9108']
metrics_path: /metrics


@@ -0,0 +1,551 @@
# VaultMesh Alchemical Patterns
> *Solve et Coagula — Dissolve and Coagulate*
## The Alchemical Framework
VaultMesh uses alchemical metaphors not as mysticism, but as precise operational language for system states and transformations.
## Phases (Operational States)
### Nigredo 🜁 — The Blackening
**Meaning**: Crisis, breakdown, decomposition
**Operational State**: System under stress, incident in progress
**Indicators**:
- Active security incident
- Service degradation
- Guardian anchor failures
- Constitutional violations detected
**Receipt Types During Nigredo**:
- `offsec_incident` (severity: high/critical)
- `obs_log_alert` (severity: critical)
- `gov_violation`
- `psi_phase_transition` (to_phase: nigredo)
**Actions**:
- Incident response procedures activated
- Enhanced monitoring enabled
- Emergency powers may be invoked
- Transmutation processes initiated
```json
{
"type": "psi_phase_transition",
"from_phase": "albedo",
"to_phase": "nigredo",
"trigger": {
"event_type": "security_incident",
"reference": "INC-2025-12-001",
"severity": "critical"
},
"indicators": [
"active_intrusion_detected",
"guardian_alert_level_elevated"
]
}
```
---
### Albedo 🜄 — The Whitening
**Meaning**: Purification, recovery, stabilization
**Operational State**: Post-incident recovery, learning phase
**Indicators**:
- Incident contained
- Systems stabilizing
- Root cause analysis in progress
- Remediation being verified
**Receipt Types During Albedo**:
- `offsec_remediation`
- `psi_transmutation` (steps: extract, dissolve, purify)
- `obs_health_snapshot` (improving trends)
**Actions**:
- Post-incident review
- IOC extraction
- Rule generation
- Documentation updates
```json
{
"type": "psi_phase_transition",
"from_phase": "nigredo",
"to_phase": "albedo",
"trigger": {
"event_type": "incident_contained",
"reference": "INC-2025-12-001"
},
"indicators": [
"threat_neutralized",
"services_recovering",
"rca_initiated"
],
"duration_in_nigredo_hours": 4.5
}
```
---
### Citrinitas 🜆 — The Yellowing
**Meaning**: Illumination, new capability emerging
**Operational State**: Optimization, enhancement
**Indicators**:
- New defensive capabilities deployed
- Performance improvements measured
- Knowledge crystallized into procedures
- Drills showing improved outcomes
**Receipt Types During Citrinitas**:
- `psi_transmutation` (steps: coagulate)
- `psi_integration`
- `security_drill_run` (outcomes: improved)
- `auto_workflow_run` (new capabilities)
**Actions**:
- Deploy new detection rules
- Update runbooks
- Train team on new procedures
- Measure improvement metrics
```json
{
"type": "psi_phase_transition",
"from_phase": "albedo",
"to_phase": "citrinitas",
"trigger": {
"event_type": "capability_deployed",
"reference": "transmute-2025-12-001"
},
"indicators": [
"detection_rules_active",
"playbook_updated",
"team_trained"
],
"capabilities_gained": [
"lateral_movement_detection_v2",
"automated_containment_k8s"
]
}
```
---
### Rubedo 🜂 — The Reddening
**Meaning**: Integration, completion, maturity
**Operational State**: Stable, sovereign operation
**Indicators**:
- All systems nominal
- Capabilities integrated into BAU
- Continuous improvement active
- High resilience demonstrated
**Receipt Types During Rubedo**:
- `psi_resonance` (harmony_score: high)
- `obs_health_snapshot` (all_green)
- `mesh_topology_snapshot` (healthy)
- `treasury_reconciliation` (balanced)
**Actions**:
- Regular drills maintain readiness
- Proactive threat hunting
- Continuous compliance monitoring
- Knowledge sharing with federation
```json
{
"type": "psi_phase_transition",
"from_phase": "citrinitas",
"to_phase": "rubedo",
"trigger": {
"event_type": "stability_achieved",
"reference": "phase-assessment-2025-12"
},
"indicators": [
"30_days_no_critical_incidents",
"slo_targets_met",
"drill_outcomes_excellent"
],
"maturity_score": 0.92
}
```
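Taken together, the four phases form a cycle that can be modeled directly. A minimal sketch; the `Phase` enum and transition rule are illustrative (the real engine may permit additional transitions), encoding only the canonical cycle described above plus the rule that an incident can interrupt any phase:

```rust
// The four operational phases of the alchemical framework.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Phase {
    Nigredo,
    Albedo,
    Citrinitas,
    Rubedo,
}

impl Phase {
    // A crisis can strike at any time; otherwise phases advance in order.
    fn can_transition_to(self, next: Phase) -> bool {
        use Phase::*;
        matches!(
            (self, next),
            (_, Nigredo)                // incident interrupts any phase
                | (Nigredo, Albedo)     // containment -> recovery
                | (Albedo, Citrinitas)  // recovery -> new capability
                | (Citrinitas, Rubedo)  // capability -> stable sovereignty
        )
    }
}
```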
---
## Transmutation (Tem Pattern)
Transmutation converts negative events into defensive capabilities.
### The Process
```
┌─────────────────────────────────────────────────────────────────┐
│ PRIMA MATERIA │
│ (Raw Input: Incident/Vuln/Threat) │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ STEP 1: EXTRACT │
│ • Identify IOCs (IPs, domains, hashes, TTPs) │
│ • Document attack chain │
│ • Capture forensic artifacts │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ STEP 2: DISSOLVE (Solve) │
│ • Break down into atomic components │
│ • Normalize to standard formats (STIX, Sigma) │
│ • Map to frameworks (MITRE ATT&CK) │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ STEP 3: PURIFY │
│ • Remove false positives │
│ • Validate against known-good │
│ • Test in isolated environment │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ STEP 4: COAGULATE (Coagula) │
│ • Generate detection rules (Sigma, YARA, Suricata) │
│ • Create response playbooks │
│ • Deploy to production │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ STEP 5: SEAL │
│ • Emit transmutation receipt │
│ • Link prima materia to philosopher's stone │
│ • Anchor evidence chain │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ PHILOSOPHER'S STONE │
│ (Output: Defensive Capability) │
└─────────────────────────────────────────────────────────────────┘
```
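The five steps above can be sketched as an ordered type, so a runner can enforce that no step is skipped. The variant names mirror the contract's `step_id` values; the type itself is a hypothetical sketch, not VaultMesh's actual runner:

```rust
// The five transmutation steps, in mandatory order.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum TransmutationStep {
    Extract,
    Dissolve,
    Purify,
    Coagulate,
    Seal,
}

impl TransmutationStep {
    const ALL: [TransmutationStep; 5] = [
        TransmutationStep::Extract,
        TransmutationStep::Dissolve,
        TransmutationStep::Purify,
        TransmutationStep::Coagulate,
        TransmutationStep::Seal,
    ];

    // The step that must follow this one; None once the chain is sealed.
    fn next(self) -> Option<TransmutationStep> {
        let idx = Self::ALL.iter().position(|&s| s == self)?;
        Self::ALL.get(idx + 1).copied()
    }
}
```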
### Transmutation Contract
```json
{
"transmutation_id": "psi-transmute-2025-12-06-001",
"title": "SSH Brute Force to Detection Capability",
"initiated_by": "did:vm:human:sovereign",
"initiated_at": "2025-12-06T10:00:00Z",
"input_material": {
"type": "security_incident",
"reference": "INC-2025-12-001",
"prima_materia_hash": "blake3:incident_evidence..."
},
"target_phase": "citrinitas",
"transmutation_steps": [
{
"step_id": "step-1-extract",
"name": "Extract Prima Materia",
"action": "extract_iocs",
"expected_output": "cases/psi/transmute-001/extracted_iocs.json"
},
{
"step_id": "step-2-dissolve",
"name": "Dissolve (Solve)",
"action": "normalize_to_stix",
"expected_output": "cases/psi/transmute-001/stix_bundle.json"
},
{
"step_id": "step-3-purify",
"name": "Purify",
"action": "validate_iocs",
"expected_output": "cases/psi/transmute-001/validated_iocs.json"
},
{
"step_id": "step-4-coagulate",
"name": "Coagulate",
"action": "generate_sigma_rules",
"expected_output": "cases/psi/transmute-001/sigma_rules/"
},
{
"step_id": "step-5-seal",
"name": "Seal",
"action": "emit_receipt",
"expected_output": "receipts/psi/psi_events.jsonl"
}
],
"witnesses_required": ["brick-01", "brick-02"],
"success_criteria": {
"rules_deployed": true,
"detection_verified": true,
"no_false_positives_24h": true
}
}
```
### Transmutation Receipt
```json
{
"type": "psi_transmutation",
"transmutation_id": "psi-transmute-2025-12-06-001",
"timestamp": "2025-12-06T16:00:00Z",
"input_material": {
"type": "security_incident",
"reference": "INC-2025-12-001",
"prima_materia_hash": "blake3:abc123..."
},
"output_capability": {
"type": "detection_rules",
"reference": "sigma-rule-ssh-brute-force-v2",
"philosophers_stone_hash": "blake3:def456..."
},
"transformation_summary": {
"iocs_extracted": 47,
"rules_generated": 3,
"playbooks_updated": 1,
"ttps_mapped": ["T1110.001", "T1021.004"]
},
"alchemical_phase": "citrinitas",
"witnesses": [
{
"node": "did:vm:node:brick-01",
"witnessed_at": "2025-12-06T15:55:00Z",
"signature": "z58D..."
}
],
"tags": ["psi", "transmutation", "ssh", "brute-force"],
"root_hash": "blake3:transmute..."
}
```
---
## Resonance
Resonance measures cross-system synchronization and harmony.
### Resonance Factors
| Factor | Weight | Measurement |
|--------|--------|-------------|
| Anchor Health | 0.25 | Time since last anchor, failure rate |
| Receipt Consistency | 0.20 | Hash chain integrity, no gaps |
| Mesh Connectivity | 0.20 | Node health, route availability |
| Phase Alignment | 0.15 | All subsystems in compatible phases |
| Federation Sync | 0.10 | Witness success rate |
| Governance Compliance | 0.10 | No active violations |
### Harmony Score
```
harmony_score = Σ(factor_weight × factor_score) / Σ(factor_weight)
```
**Interpretation**:
- 0.90 - 1.00: **Rubedo** — Full sovereignty
- 0.70 - 0.89: **Citrinitas** — Optimizing
- 0.50 - 0.69: **Albedo** — Stabilizing
- 0.00 - 0.49: **Nigredo** — Crisis mode
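The weighted mean above is straightforward to compute. A sketch using `(weight, score)` pairs ordered as in the factor table; the values passed in are illustrative measurements, not live system data:

```rust
// Weighted harmony score: sum(weight * score) / sum(weight).
fn harmony_score(factors: &[(f64, f64)]) -> f64 {
    let weighted: f64 = factors.iter().map(|(w, s)| w * s).sum();
    let total: f64 = factors.iter().map(|(w, _)| w).sum();
    if total == 0.0 { 0.0 } else { weighted / total }
}

// Map a score onto the phase bands listed above.
fn phase_for(score: f64) -> &'static str {
    if score >= 0.90 {
        "rubedo"
    } else if score >= 0.70 {
        "citrinitas"
    } else if score >= 0.50 {
        "albedo"
    } else {
        "nigredo"
    }
}
```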
### Resonance Receipt
```json
{
"type": "psi_resonance",
"resonance_id": "resonance-2025-12-06-12",
"timestamp": "2025-12-06T12:00:00Z",
"harmony_score": 0.94,
"factors": {
"anchor_health": 1.0,
"receipt_consistency": 0.98,
"mesh_connectivity": 0.95,
"phase_alignment": 0.90,
"federation_sync": 0.85,
"governance_compliance": 1.0
},
"current_phase": "rubedo",
"subsystem_phases": {
"guardian": "rubedo",
"oracle": "rubedo",
"mesh": "citrinitas",
"treasury": "rubedo"
},
"dissonance_notes": [
"mesh slightly below harmony due to pending node upgrade"
],
"tags": ["psi", "resonance", "harmony"],
"root_hash": "blake3:resonance..."
}
```
---
## Integration
Integration crystallizes learnings into permanent capability.
### Integration Types
| Type | Description | Example |
|------|-------------|---------|
| `rule_integration` | Detection rule becomes standard | Sigma rule added to baseline |
| `playbook_integration` | Response procedure formalized | IR playbook updated |
| `capability_integration` | New system feature | Auto-containment enabled |
| `knowledge_integration` | Documentation updated | Threat model revised |
| `training_integration` | Team skill acquired | Drill proficiency achieved |
### Integration Receipt
```json
{
"type": "psi_integration",
"integration_id": "integration-2025-12-06-001",
"timestamp": "2025-12-06T18:00:00Z",
"integration_type": "rule_integration",
"source": {
"transmutation_id": "psi-transmute-2025-12-06-001",
"capability_hash": "blake3:def456..."
},
"target": {
"system": "detection_pipeline",
"component": "sigma_rules",
"version": "v2.1.0"
},
"integration_proof": {
"deployed_at": "2025-12-06T17:30:00Z",
"verified_by": ["brick-01", "brick-02"],
"test_results": {
"true_positives": 5,
"false_positives": 0,
"detection_rate": 1.0
}
},
"crystallization_complete": true,
"tags": ["psi", "integration", "detection"],
"root_hash": "blake3:integration..."
}
```
---
## Oracle Insights
Significant findings from the Compliance Oracle that warrant receipting.
### Insight Types
| Type | Description |
|------|-------------|
| `compliance_gap` | New gap identified |
| `regulatory_change` | Regulation updated |
| `risk_elevation` | Risk level increased |
| `deadline_approaching` | Compliance deadline near |
| `cross_reference` | Connection between frameworks |
### Insight Receipt
```json
{
"type": "psi_oracle_insight",
"insight_id": "insight-2025-12-06-001",
"timestamp": "2025-12-06T14:00:00Z",
"insight_type": "compliance_gap",
"severity": "high",
"frameworks": ["AI_Act", "GDPR"],
"finding": {
"summary": "Model training data lineage documentation incomplete for Annex IV requirements",
"affected_articles": ["AI_Act.Annex_IV.2.b", "GDPR.Art_30"],
"current_state": "partial_documentation",
"required_state": "complete_lineage_from_source_to_model"
},
"recommended_actions": [
"Implement data provenance tracking",
"Document all training data sources",
"Create lineage visualization"
],
"deadline": "2026-08-02T00:00:00Z",
"confidence": 0.92,
"oracle_query_ref": "oracle-answer-2025-12-06-4721",
"tags": ["psi", "oracle", "insight", "ai_act", "gdpr"],
"root_hash": "blake3:insight..."
}
```
---
## Magnum Opus Dashboard
The Magnum Opus is the great work — the continuous refinement toward sovereignty.
### Dashboard Metrics
```
┌─────────────────────────────────────────────────────────────────┐
│ MAGNUM OPUS STATUS │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Current Phase: RUBEDO 🜂 Harmony: 0.94 │
│ Time in Phase: 47 days │
│ │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ Phase History (90 days) │ │
│ │ ████████████░░░░████████████████████████████████████████│ │
│ │ NNNAAACCCCCNNAACCCCCCCCCCRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR│ │
│ └─────────────────────────────────────────────────────────┘ │
│ │
│ Transmutations Integrations │
│ ├─ Active: 2 ├─ This Month: 7 │
│ ├─ Completed: 34 ├─ Total: 156 │
│ └─ Success Rate: 94% └─ Crystallized: 142 │
│ │
│ Resonance Factors │
│ ├─ Anchor Health: ████████████████████ 1.00 │
│ ├─ Receipt Integrity: ███████████████████░ 0.98 │
│ ├─ Mesh Connectivity: ███████████████████░ 0.95 │
│ ├─ Phase Alignment: ██████████████████░░ 0.90 │
│ ├─ Federation Sync: █████████████████░░░ 0.85 │
│ └─ Governance: ████████████████████ 1.00 │
│ │
│ Recent Oracle Insights: 3 (1 high severity) │
│ Next Anchor: 47 min │
│ Last Incident: 47 days ago │
│ │
└─────────────────────────────────────────────────────────────────┘
```
### CLI Commands
```bash
# Phase status
vm-psi phase current
vm-psi phase history --days 90
# Transmutation
vm-psi transmute start --input INC-2025-12-001 --title "SSH Brute Force"
vm-psi transmute status transmute-2025-12-001
vm-psi transmute complete transmute-2025-12-001 --step coagulate
# Resonance
vm-psi resonance current
vm-psi resonance history --days 30
# Integration
vm-psi integrate --source transmute-2025-12-001 --target detection_pipeline
# Opus
vm-psi opus status
vm-psi opus report --format pdf --output opus-report.pdf
```


@@ -0,0 +1,693 @@
# VaultMesh Code Templates
## Rust Templates
### Core Types
```rust
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};

// Receipt Header
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ReceiptHeader {
pub receipt_type: String,
pub timestamp: DateTime<Utc>,
pub root_hash: String,
pub tags: Vec<String>,
}
// Receipt Metadata
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ReceiptMeta {
pub scroll: Scroll,
pub sequence: u64,
pub anchor_epoch: Option<u64>,
pub proof_path: Option<String>,
}
// Generic Receipt
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Receipt<T> {
#[serde(flatten)]
pub header: ReceiptHeader,
#[serde(flatten)]
pub meta: ReceiptMeta,
#[serde(flatten)]
pub body: T,
}
// Scroll Enum
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Hash)]
#[serde(rename_all = "snake_case")]
pub enum Scroll {
Drills,
Compliance,
Guardian,
Treasury,
Mesh,
OffSec,
Identity,
Observability,
Automation,
PsiField,
Federation,
Governance,
}
impl Scroll {
pub fn jsonl_path(&self) -> &'static str {
match self {
Scroll::Drills => "receipts/drills/drill_runs.jsonl",
Scroll::Compliance => "receipts/compliance/oracle_answers.jsonl",
Scroll::Guardian => "receipts/guardian/anchor_events.jsonl",
Scroll::Treasury => "receipts/treasury/treasury_events.jsonl",
Scroll::Mesh => "receipts/mesh/mesh_events.jsonl",
Scroll::OffSec => "receipts/offsec/offsec_events.jsonl",
Scroll::Identity => "receipts/identity/identity_events.jsonl",
Scroll::Observability => "receipts/observability/observability_events.jsonl",
Scroll::Automation => "receipts/automation/automation_events.jsonl",
Scroll::PsiField => "receipts/psi/psi_events.jsonl",
Scroll::Federation => "receipts/federation/federation_events.jsonl",
Scroll::Governance => "receipts/governance/governance_events.jsonl",
}
}
pub fn root_file(&self) -> &'static str {
match self {
Scroll::Drills => "ROOT.drills.txt",
Scroll::Compliance => "ROOT.compliance.txt",
Scroll::Guardian => "ROOT.guardian.txt",
Scroll::Treasury => "ROOT.treasury.txt",
Scroll::Mesh => "ROOT.mesh.txt",
Scroll::OffSec => "ROOT.offsec.txt",
Scroll::Identity => "ROOT.identity.txt",
Scroll::Observability => "ROOT.observability.txt",
Scroll::Automation => "ROOT.automation.txt",
Scroll::PsiField => "ROOT.psi.txt",
Scroll::Federation => "ROOT.federation.txt",
Scroll::Governance => "ROOT.governance.txt",
}
}
}
```
### DID Types
```rust
use serde::{Deserialize, Serialize};

#[derive(Debug)]
pub enum DidParseError {
    InvalidPrefix,
}

#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Hash)]
pub struct Did(String);
impl Did {
pub fn new(did_type: DidType, identifier: &str) -> Self {
Did(format!("did:vm:{}:{}", did_type.as_str(), identifier))
}
pub fn parse(s: &str) -> Result<Self, DidParseError> {
if !s.starts_with("did:vm:") {
return Err(DidParseError::InvalidPrefix);
}
Ok(Did(s.to_string()))
}
pub fn as_str(&self) -> &str {
&self.0
}
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum DidType {
Node,
Human,
Agent,
Service,
Mesh,
}
impl DidType {
pub fn as_str(&self) -> &'static str {
match self {
DidType::Node => "node",
DidType::Human => "human",
DidType::Agent => "agent",
DidType::Service => "service",
DidType::Mesh => "mesh",
}
}
}
```
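For Python-side tooling, the same `did:vm:<type>:<identifier>` shape can be mirrored with a small helper. This is a sketch; the canonical implementation is the Rust `Did` type above.

```python
DID_TYPES = {"node", "human", "agent", "service", "mesh"}

def make_did(did_type: str, identifier: str) -> str:
    """Construct a did:vm:<type>:<identifier> string."""
    if did_type not in DID_TYPES:
        raise ValueError(f"unknown DID type: {did_type}")
    return f"did:vm:{did_type}:{identifier}"

def parse_did(s: str) -> tuple[str, str]:
    """Split a DID into (type, identifier); mirrors the prefix check in Did::parse."""
    prefix = "did:vm:"
    if not s.startswith(prefix):
        raise ValueError("invalid prefix")
    did_type, _, identifier = s[len(prefix):].partition(":")
    return did_type, identifier
```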
### Hash Utilities
```rust
use serde::{Deserialize, Serialize};
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub struct VmHash(String);
impl VmHash {
pub fn blake3(data: &[u8]) -> Self {
let hash = blake3::hash(data);
VmHash(format!("blake3:{}", hash.to_hex()))
}
pub fn from_json<T: Serialize>(value: &T) -> Result<Self, serde_json::Error> {
let json = serde_json::to_vec(value)?;
Ok(Self::blake3(&json))
}
pub fn hex(&self) -> &str {
self.0.strip_prefix("blake3:").unwrap_or(&self.0)
}
pub fn as_str(&self) -> &str {
&self.0
}
}
pub fn merkle_root(hashes: &[VmHash]) -> VmHash {
if hashes.is_empty() {
return VmHash::blake3(b"empty");
}
if hashes.len() == 1 {
return hashes[0].clone();
}
let mut current_level: Vec<VmHash> = hashes.to_vec();
while current_level.len() > 1 {
let mut next_level = Vec::new();
for chunk in current_level.chunks(2) {
let combined = if chunk.len() == 2 {
format!("{}{}", chunk[0].hex(), chunk[1].hex())
} else {
format!("{}{}", chunk[0].hex(), chunk[0].hex())
};
next_level.push(VmHash::blake3(combined.as_bytes()));
}
current_level = next_level;
}
current_level.remove(0)
}
```
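The Rust `merkle_root` pairs leaves level by level and pairs an odd trailing leaf with itself. That pairing rule can be cross-checked from Python; this sketch substitutes `hashlib.sha256` for blake3 so it runs without the third-party `blake3` package (the digest choice is the only assumption).

```python
import hashlib

def merkle_root(leaf_hexes: list[str]) -> str:
    """Mirror of the Rust pairing rule: an odd trailing leaf is paired with itself."""
    if not leaf_hexes:
        return hashlib.sha256(b"empty").hexdigest()
    level = list(leaf_hexes)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            left = level[i]
            right = level[i + 1] if i + 1 < len(level) else left  # duplicate odd leaf
            nxt.append(hashlib.sha256((left + right).encode()).hexdigest())
        level = nxt
    return level[0]

leaves = [hashlib.sha256(bytes([i])).hexdigest() for i in range(3)]
root = merkle_root(leaves)
```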
### Engine Template
```rust
// Template for new engine implementation
pub struct MyEngine {
db: DatabasePool,
receipts_path: PathBuf,
}
impl MyEngine {
pub fn new(db: DatabasePool, receipts_path: PathBuf) -> Self {
MyEngine { db, receipts_path }
}
pub async fn create_contract(&self, params: CreateParams) -> Result<Contract, EngineError> {
let contract = Contract {
id: generate_id("contract"),
title: params.title,
created_at: Utc::now(),
// ... domain-specific fields
};
// Store contract
self.store_contract(&contract).await?;
Ok(contract)
}
pub async fn execute(&mut self, contract_id: &str) -> Result<State, EngineError> {
let contract = self.load_contract(contract_id).await?;
let mut state = State::new(&contract);
// Execute steps
for step in &contract.steps {
state.execute_step(step).await?;
}
// Seal with receipt
        let _receipt = self.seal(&contract, &state).await?;
Ok(state)
}
async fn seal(&self, contract: &Contract, state: &State) -> Result<Receipt<MyReceipt>, EngineError> {
let receipt_body = MyReceipt {
contract_id: contract.id.clone(),
status: state.status.clone(),
// ... domain-specific fields
};
let root_hash = VmHash::from_json(&receipt_body)?;
let receipt = Receipt {
header: ReceiptHeader {
receipt_type: "my_receipt_type".to_string(),
timestamp: Utc::now(),
root_hash: root_hash.as_str().to_string(),
tags: vec!["my_engine".to_string()],
},
meta: ReceiptMeta {
scroll: Scroll::MyScroll,
sequence: 0,
anchor_epoch: None,
proof_path: None,
},
body: receipt_body,
};
self.append_receipt(&receipt).await?;
Ok(receipt)
}
async fn append_receipt<T: Serialize>(&self, receipt: &Receipt<T>) -> Result<(), EngineError> {
let scroll_path = self.receipts_path.join(Scroll::MyScroll.jsonl_path());
let mut file = OpenOptions::new()
.create(true)
.append(true)
.open(&scroll_path)?;
let json = serde_json::to_string(receipt)?;
writeln!(file, "{}", json)?;
// Update Merkle root
self.update_merkle_root().await?;
Ok(())
}
}
```
### Prometheus Metrics
```rust
use prometheus::{CounterVec, GaugeVec, HistogramVec, Opts, Registry};
use lazy_static::lazy_static;
lazy_static! {
pub static ref REGISTRY: Registry = Registry::new();
pub static ref RECEIPTS_TOTAL: CounterVec = CounterVec::new(
Opts::new("vaultmesh_receipts_total", "Total receipts by scroll"),
&["scroll", "type"]
).unwrap();
pub static ref OPERATION_DURATION: HistogramVec = HistogramVec::new(
prometheus::HistogramOpts::new(
"vaultmesh_operation_duration_seconds",
"Operation duration"
).buckets(vec![0.001, 0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0]),
&["operation"]
).unwrap();
pub static ref ACTIVE_OPERATIONS: GaugeVec = GaugeVec::new(
Opts::new("vaultmesh_active_operations", "Active operations"),
&["type"]
).unwrap();
}
pub fn register_metrics() {
REGISTRY.register(Box::new(RECEIPTS_TOTAL.clone())).unwrap();
REGISTRY.register(Box::new(OPERATION_DURATION.clone())).unwrap();
REGISTRY.register(Box::new(ACTIVE_OPERATIONS.clone())).unwrap();
}
```
---
## Python Templates
### CLI Command Group
```python
import click
import json
from datetime import datetime
from pathlib import Path
@click.group()
def my_engine():
"""My Engine - Description"""
pass
@my_engine.command("create")
@click.option("--title", required=True, help="Title")
@click.option("--config", type=click.Path(exists=True), help="Config file")
def create(title: str, config: str):
"""Create a new contract."""
contract_id = f"contract-{datetime.utcnow().strftime('%Y%m%d%H%M%S')}"
contract = {
"id": contract_id,
"title": title,
"created_at": datetime.utcnow().isoformat() + "Z",
}
if config:
with open(config) as f:
contract.update(json.load(f))
# Store contract
contract_path = Path(f"cases/my_engine/{contract_id}/contract.json")
contract_path.parent.mkdir(parents=True, exist_ok=True)
with open(contract_path, "w") as f:
json.dump(contract, f, indent=2)
click.echo(f"✓ Contract created: {contract_id}")
@my_engine.command("execute")
@click.argument("contract_id")
def execute(contract_id: str):
"""Execute a contract."""
# Load contract
contract_path = Path(f"cases/my_engine/{contract_id}/contract.json")
with open(contract_path) as f:
contract = json.load(f)
# Execute (implementation specific)
state = {"status": "completed"}
# Emit receipt
receipt = emit_receipt(
scroll="my_scroll",
receipt_type="my_receipt_type",
body={
"contract_id": contract_id,
"status": state["status"],
},
tags=["my_engine"]
)
click.echo(f"✓ Executed: {contract_id}")
click.echo(f" Receipt: {receipt['root_hash'][:20]}...")
@my_engine.command("query")
@click.option("--status", help="Filter by status")
@click.option("--from", "from_date", help="From date")
@click.option("--to", "to_date", help="To date")
@click.option("--format", "output_format", default="table", type=click.Choice(["table", "json", "csv"]))
def query(status: str, from_date: str, to_date: str, output_format: str):
"""Query receipts."""
filters = {}
if status:
filters["status"] = status
if from_date:
filters["from_date"] = from_date
if to_date:
filters["to_date"] = to_date
receipts = load_receipts("my_scroll", filters)
if output_format == "json":
click.echo(json.dumps(receipts, indent=2))
else:
click.echo(f"Found {len(receipts)} receipts")
for r in receipts:
click.echo(f" {r.get('timestamp', '')[:19]} | {r.get('type', '')}")
```
### Receipt Utilities
```python
import json
import blake3  # third-party: pip install blake3 (hashlib has no blake3)
from datetime import datetime
from pathlib import Path
from typing import Optional
def emit_receipt(scroll: str, receipt_type: str, body: dict, tags: list[str]) -> dict:
"""Create and emit a receipt to the appropriate scroll."""
receipt = {
"schema_version": "2.0.0",
"type": receipt_type,
"timestamp": datetime.utcnow().isoformat() + "Z",
"tags": tags,
**body
}
# Compute root hash
receipt_json = json.dumps(receipt, sort_keys=True)
    root_hash = f"blake3:{blake3.blake3(receipt_json.encode()).hexdigest()}"
receipt["root_hash"] = root_hash
# Append to scroll
scroll_path = Path(f"receipts/{scroll}/{scroll}_events.jsonl")
scroll_path.parent.mkdir(parents=True, exist_ok=True)
with open(scroll_path, "a") as f:
f.write(json.dumps(receipt) + "\n")
# Update Merkle root
update_merkle_root(scroll)
return receipt
def load_receipts(scroll: str, filters: Optional[dict] = None) -> list[dict]:
"""Load and filter receipts from a scroll."""
scroll_path = Path(f"receipts/{scroll}/{scroll}_events.jsonl")
if not scroll_path.exists():
return []
receipts = []
with open(scroll_path) as f:
for line in f:
receipt = json.loads(line.strip())
if filters:
match = True
for key, value in filters.items():
if key == "from_date":
if receipt.get("timestamp", "") < value:
match = False
elif key == "to_date":
if receipt.get("timestamp", "") > value:
match = False
elif key == "type":
if receipt.get("type") not in (value if isinstance(value, list) else [value]):
match = False
elif receipt.get(key) != value:
match = False
if match:
receipts.append(receipt)
else:
receipts.append(receipt)
return receipts
def update_merkle_root(scroll: str):
"""Recompute and update Merkle root for a scroll."""
scroll_path = Path(f"receipts/{scroll}/{scroll}_events.jsonl")
root_file = Path(f"receipts/ROOT.{scroll}.txt")
if not scroll_path.exists():
root_file.write_text("blake3:empty")
return
hashes = []
with open(scroll_path) as f:
for line in f:
receipt = json.loads(line.strip())
hashes.append(receipt.get("root_hash", ""))
if not hashes:
root_file.write_text("blake3:empty")
return
# Simple merkle root (production would use proper tree)
combined = "".join(h.replace("blake3:", "") for h in hashes)
    root = f"blake3:{blake3.blake3(combined.encode()).hexdigest()}"
root_file.write_text(root)
def verify_receipt(receipt_hash: str, scroll: str) -> bool:
"""Verify a receipt exists and is valid."""
receipts = load_receipts(scroll, {"root_hash": receipt_hash})
return len(receipts) > 0
```
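Because timestamps are ISO-8601 strings, the `from_date`/`to_date` filters in `load_receipts` reduce to plain lexicographic comparison. The matching rule can be pulled out as a pure function, which makes the filter semantics easy to unit-test in isolation:

```python
def matches(receipt: dict, filters: dict) -> bool:
    """Filter semantics extracted from load_receipts above."""
    for key, value in filters.items():
        if key == "from_date":
            if receipt.get("timestamp", "") < value:
                return False
        elif key == "to_date":
            if receipt.get("timestamp", "") > value:
                return False
        elif key == "type":
            allowed = value if isinstance(value, list) else [value]
            if receipt.get("type") not in allowed:
                return False
        elif receipt.get(key) != value:
            return False
    return True

r = {"timestamp": "2025-06-15T12:00:00Z", "type": "my_receipt_type", "status": "completed"}
```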
### MCP Server Template
```python
# Uses FastMCP from the official MCP Python SDK; the low-level Server API
# registers tools differently, while FastMCP supports the decorator style below.
from mcp.server.fastmcp import FastMCP
import json

server = FastMCP("my-engine")

@server.tool()
async def my_operation(
    param1: str,
    param2: int = 10,
) -> str:
    """
    Description of what this tool does.

    Args:
        param1: Description of param1
        param2: Description of param2

    Returns:
        Description of return value
    """
    # Verify caller capabilities
    caller = await get_caller_identity()
    await verify_capability(caller, "required_capability")
    # Perform operation
    result = perform_operation(param1, param2)
    # Emit receipt
    await emit_tool_call_receipt(
        tool="my_operation",
        caller=caller,
        params={"param1": param1, "param2": param2},
        result_hash=result.hash,
    )
    return json.dumps(result.to_dict(), indent=2)

@server.tool()
async def my_query(
    filter_param: str | None = None,
    limit: int = 50,
) -> str:
    """
    Query operation description.

    Args:
        filter_param: Optional filter
        limit: Maximum results

    Returns:
        Query results
    """
    caller = await get_caller_identity()
    await verify_capability(caller, "view_capability")
    results = query_data(filter_param, limit)
    return json.dumps([r.to_dict() for r in results], indent=2)

def main():
    # FastMCP runs over stdio by default
    server.run()

if __name__ == "__main__":
    main()
```
---
## Property Test Templates
### Rust (proptest)
```rust
use proptest::prelude::*;
proptest! {
/// Receipts roundtrip through serialization
#[test]
fn receipt_roundtrip(receipt in arb_receipt()) {
        let json = serde_json::to_string(&receipt).unwrap();
        let restored: Receipt<serde_json::Value> = serde_json::from_str(&json).unwrap();
prop_assert_eq!(receipt.header.root_hash, restored.header.root_hash);
}
/// Hash is deterministic
#[test]
fn hash_deterministic(data in prop::collection::vec(any::<u8>(), 0..1000)) {
let hash1 = VmHash::blake3(&data);
let hash2 = VmHash::blake3(&data);
prop_assert_eq!(hash1, hash2);
}
/// Different data produces different hashes
#[test]
fn different_data_different_hash(
data1 in prop::collection::vec(any::<u8>(), 1..100),
data2 in prop::collection::vec(any::<u8>(), 1..100)
) {
prop_assume!(data1 != data2);
let hash1 = VmHash::blake3(&data1);
let hash2 = VmHash::blake3(&data2);
prop_assert_ne!(hash1, hash2);
}
}
fn arb_receipt() -> impl Strategy<Value = Receipt<serde_json::Value>> {
(
"[a-z]{5,20}", // receipt_type
        any::<i64>().prop_map(|ts| DateTime::from_timestamp(ts.rem_euclid(2_000_000_000), 0).unwrap()),
prop::collection::vec("[a-z]{3,10}", 0..5), // tags
).prop_map(|(receipt_type, timestamp, tags)| {
Receipt {
header: ReceiptHeader {
receipt_type,
timestamp,
root_hash: "blake3:placeholder".to_string(),
tags,
},
meta: ReceiptMeta {
scroll: Scroll::Drills,
sequence: 0,
anchor_epoch: None,
proof_path: None,
},
body: serde_json::json!({"test": true}),
}
})
}
```
### Python (hypothesis)
```python
from hypothesis import assume, given, strategies as st
import blake3  # third-party: pip install blake3 (hashlib has no blake3)
import json

@given(st.dictionaries(st.text(min_size=1, max_size=20), st.text(max_size=100), max_size=10))
def test_receipt_roundtrip(body):
    """Receipts survive JSON roundtrip."""
    receipt = emit_receipt("test", "test_type", body, ["test"])
    json_str = json.dumps(receipt)
    restored = json.loads(json_str)
    assert receipt["root_hash"] == restored["root_hash"]
    assert receipt["type"] == restored["type"]

@given(st.binary(min_size=1, max_size=1000))
def test_hash_deterministic(data):
    """Hash is deterministic."""
    hash1 = blake3.blake3(data).hexdigest()
    hash2 = blake3.blake3(data).hexdigest()
    assert hash1 == hash2

@given(
    st.binary(min_size=1, max_size=100),
    st.binary(min_size=1, max_size=100)
)
def test_different_data_different_hash(data1, data2):
    """Different data produces different hashes."""
    assume(data1 != data2)
    hash1 = blake3.blake3(data1).hexdigest()
    hash2 = blake3.blake3(data2).hexdigest()
    assert hash1 != hash2
```

---
`docs/skill/ENGINE_SPECS.md`
# VaultMesh Engine Specifications
## Receipt Types by Scroll
### Drills
| Type | When Emitted |
|------|--------------|
| `security_drill_run` | Drill completed |
### Compliance
| Type | When Emitted |
|------|--------------|
| `oracle_answer` | Compliance question answered |
### Guardian
| Type | When Emitted |
|------|--------------|
| `anchor_success` | Anchor cycle succeeded |
| `anchor_failure` | Anchor cycle failed |
| `anchor_divergence` | Root mismatch detected |
### Treasury
| Type | When Emitted |
|------|--------------|
| `treasury_credit` | Credit entry recorded |
| `treasury_debit` | Debit entry recorded |
| `treasury_settlement` | Multi-party settlement completed |
| `treasury_reconciliation` | Periodic balance verification |
### Mesh
| Type | When Emitted |
|------|--------------|
| `mesh_node_join` | Node registered |
| `mesh_node_leave` | Node deregistered |
| `mesh_route_change` | Route added/removed/modified |
| `mesh_capability_grant` | Capability granted |
| `mesh_capability_revoke` | Capability revoked |
| `mesh_topology_snapshot` | Periodic topology capture |
### OffSec
| Type | When Emitted |
|------|--------------|
| `offsec_incident` | Incident closed |
| `offsec_redteam` | Red team engagement closed |
| `offsec_vuln_discovery` | Vulnerability confirmed |
| `offsec_remediation` | Remediation verified |
| `offsec_threat_intel` | New IOC/TTP added |
| `offsec_forensic_snapshot` | Forensic capture taken |
### Identity
| Type | When Emitted |
|------|--------------|
| `identity_did_create` | New DID registered |
| `identity_did_rotate` | Key rotation completed |
| `identity_credential_issue` | Credential issued |
| `identity_credential_revoke` | Credential revoked |
| `identity_auth_event` | Authentication attempt |
| `identity_capability_grant` | Capability granted |
| `identity_capability_exercise` | Capability used |
### Observability
| Type | When Emitted |
|------|--------------|
| `obs_metric_anomaly` | Anomaly detected/resolved |
| `obs_log_alert` | Log-based alert triggered |
| `obs_trace_summary` | Critical operation traced |
| `obs_health_snapshot` | Daily health capture |
| `obs_slo_breach` | SLO target missed |
| `obs_capacity_event` | Resource threshold crossed |
### Automation
| Type | When Emitted |
|------|--------------|
| `auto_workflow_run` | Workflow execution completed |
| `auto_scheduled_task` | Scheduled task executed |
| `auto_agent_action` | Agent took action |
| `auto_trigger_event` | External trigger received |
| `auto_approval_gate` | Approval gate resolved |
| `auto_error_recovery` | Error recovery completed |
### PsiField
| Type | When Emitted |
|------|--------------|
| `psi_phase_transition` | Phase change |
| `psi_emergence_event` | Emergent behavior detected |
| `psi_transmutation` | Negative → capability transform |
| `psi_resonance` | Cross-system synchronization |
| `psi_integration` | Learning crystallized |
| `psi_oracle_insight` | Significant Oracle insight |
### Federation
| Type | When Emitted |
|------|--------------|
| `fed_trust_proposal` | Trust proposal submitted |
| `fed_trust_established` | Federation agreement active |
| `fed_trust_revoked` | Federation terminated |
| `fed_witness_event` | Remote root witnessed |
| `fed_cross_anchor` | Remote root included in anchor |
| `fed_schema_sync` | Schema versions synchronized |
### Governance
| Type | When Emitted |
|------|--------------|
| `gov_proposal` | Proposal submitted |
| `gov_vote` | Vote cast |
| `gov_ratification` | Proposal ratified |
| `gov_amendment` | Constitution amended |
| `gov_executive_order` | Executive order issued |
| `gov_violation` | Violation detected |
| `gov_enforcement` | Enforcement action taken |
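These tables can double as a machine-checkable registry of valid receipt types per scroll. A sketch, transcribing two scrolls only (not the full set):

```python
# Partial scroll → receipt-type registry transcribed from the tables above.
RECEIPT_TYPES = {
    "guardian": {"anchor_success", "anchor_failure", "anchor_divergence"},
    "governance": {
        "gov_proposal", "gov_vote", "gov_ratification", "gov_amendment",
        "gov_executive_order", "gov_violation", "gov_enforcement",
    },
}

def is_known_type(scroll: str, receipt_type: str) -> bool:
    """True when the receipt type is registered for the given scroll."""
    return receipt_type in RECEIPT_TYPES.get(scroll, set())
```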
---
## Engine Contract Templates
### Treasury Settlement Contract
```json
{
"settlement_id": "settle-YYYY-MM-DD-NNN",
"title": "Settlement Title",
"initiated_by": "did:vm:node:portal-01",
"initiated_at": "ISO8601",
"parties": ["did:vm:node:...", "did:vm:node:..."],
"entries": [
{
"entry_id": "entry-NNN",
"type": "debit|credit",
"account": "acct:vm:node:...:type",
"amount": 0.00,
"currency": "EUR",
"memo": "Description"
}
],
"requires_signatures": ["node-id", "node-id"],
"settlement_type": "inter_node_resource|vendor_payment|..."
}
```
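A contract can be sanity-checked against the template's top-level keys before submission. The field names come from the JSON above; the validator itself is a sketch:

```python
REQUIRED_SETTLEMENT_FIELDS = {
    "settlement_id", "title", "initiated_by", "initiated_at",
    "parties", "entries", "requires_signatures", "settlement_type",
}

def missing_settlement_fields(contract: dict) -> list[str]:
    """Return the required top-level keys absent from the contract, sorted."""
    return sorted(REQUIRED_SETTLEMENT_FIELDS - contract.keys())
```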
### Mesh Change Contract
```json
{
"change_id": "mesh-change-YYYY-MM-DD-NNN",
"title": "Change Title",
"initiated_by": "did:vm:node:portal-01",
"initiated_at": "ISO8601",
"change_type": "node_expansion|route_update|...",
"operations": [
{
"op_id": "op-NNN",
"operation": "node_join|route_add|capability_grant|...",
"target": "did:vm:node:...",
"config": {}
}
],
"requires_approval": ["node-id"],
"rollback_on_failure": true
}
```
### OffSec Incident Contract
```json
{
"case_id": "INC-YYYY-MM-NNN",
"case_type": "incident",
"title": "Incident Title",
"severity": "critical|high|medium|low",
"created_at": "ISO8601",
"phases": [
{
"phase_id": "phase-N-name",
"name": "Triage|Containment|Eradication|Recovery",
"objectives": ["..."],
"checklist": ["..."]
}
],
"assigned_responders": ["did:vm:human:..."],
"escalation_path": ["..."]
}
```
### Identity Operation Contract
```json
{
"operation_id": "idop-YYYY-MM-DD-NNN",
"operation_type": "key_rotation_ceremony|...",
"title": "Operation Title",
"initiated_by": "did:vm:human:...",
"initiated_at": "ISO8601",
"target_did": "did:vm:node:...",
"steps": [
{
"step_id": "step-N-name",
"action": "action_name",
"params": {}
}
],
"rollback_on_failure": true
}
```
### Transmutation Contract
```json
{
"transmutation_id": "psi-transmute-YYYY-MM-DD-NNN",
"title": "Transmutation Title",
"initiated_by": "did:vm:human:...",
"initiated_at": "ISO8601",
"input_material": {
"type": "security_incident|vulnerability|...",
"reference": "INC-YYYY-MM-NNN"
},
"target_phase": "citrinitas",
"transmutation_steps": [
{
"step_id": "step-N-name",
"name": "Step Name",
"action": "action_name",
"expected_output": "output_path"
}
],
"witnesses_required": ["node-id", "node-id"],
"success_criteria": {}
}
```
---
## State Machine Transitions
### Settlement Status
```
draft → pending_signatures → executing → completed
↘ disputed → resolved → completed
↘ expired
```
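The diagram can be encoded as a transition table and enforced with a guard. Where exactly `disputed` and `expired` branch from is not fixed by the diagram; branching both from `pending_signatures` is one plausible reading.

```python
# One encoding of the settlement diagram; the branch points for `disputed`
# and `expired` are an assumption, not fixed by the diagram.
SETTLEMENT_TRANSITIONS = {
    "draft": {"pending_signatures"},
    "pending_signatures": {"executing", "disputed", "expired"},
    "executing": {"completed"},
    "disputed": {"resolved"},
    "resolved": {"completed"},
    "completed": set(),
    "expired": set(),
}

def can_transition(current: str, target: str) -> bool:
    """True when the status change is allowed by the transition table."""
    return target in SETTLEMENT_TRANSITIONS.get(current, set())
```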
### Incident Status
```
reported → triaging → investigating → contained → eradicating → recovered → closed
↘ false_positive → closed
```
### Mesh Change Status
```
draft → pending_approval → in_progress → completed
↘ partial_failure → rollback → rolled_back
↘ failed → rollback → rolled_back
```
### Alchemical Phase
```
nigredo → albedo → citrinitas → rubedo
↑ │
└──────────────────────────────┘
(cycle continues)
```
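The phase cycle above is a simple modular successor over the four phases:

```python
PHASES = ["nigredo", "albedo", "citrinitas", "rubedo"]

def next_phase(phase: str) -> str:
    """Advance one step; rubedo wraps back to nigredo as the cycle continues."""
    return PHASES[(PHASES.index(phase) + 1) % len(PHASES)]
```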
---
## Capability Types
| Capability | Description | Typical Holders |
|------------|-------------|-----------------|
| `anchor` | Submit roots to anchor backends | Guardian nodes |
| `storage` | Store receipts and artifacts | Infrastructure nodes |
| `compute` | Execute drills, run agents | BRICK nodes |
| `oracle` | Issue compliance answers | Oracle nodes |
| `admin` | Grant/revoke capabilities | Portal, Sovereign |
| `federate` | Establish cross-mesh trust | Portal |
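An enforcement sketch matching the `verify_capability` call used in the MCP server template. The grant store and the DIDs in it are hypothetical; only the capability names follow the table above.

```python
class CapabilityError(Exception):
    pass

# Hypothetical grant store keyed by DID; capability names follow the table above.
GRANTS = {
    "did:vm:node:guardian-01": {"anchor"},
    "did:vm:node:portal-01": {"admin", "federate"},
}

def verify_capability(caller_did: str, capability: str) -> None:
    """Raise unless the caller holds the named capability."""
    if capability not in GRANTS.get(caller_did, set()):
        raise CapabilityError(f"{caller_did} lacks capability '{capability}'")
```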
---
## Trust Levels (Federation)
| Level | Name | Description |
|-------|------|-------------|
| 0 | `isolated` | No federation |
| 1 | `observe` | Read-only witness |
| 2 | `verify` | Mutual verification |
| 3 | `attest` | Cross-attestation |
| 4 | `integrate` | Shared scrolls |
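Because the levels are ordinal (0–4), "at least this trusted" checks reduce to an index comparison:

```python
TRUST_LEVELS = ["isolated", "observe", "verify", "attest", "integrate"]

def trust_at_least(level: str, required: str) -> bool:
    """True when `level` sits at or above `required` on the 0-4 ladder."""
    return TRUST_LEVELS.index(level) >= TRUST_LEVELS.index(required)
```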
---
## Account Types (Treasury)
| Type | Purpose |
|------|---------|
| `operational` | Day-to-day infrastructure spend |
| `reserve` | Long-term holdings, runway |
| `escrow` | Held pending settlement |
| `external` | Counterparty accounts |
---
## Node Types (Mesh)
| Type | Purpose |
|------|---------|
| `infrastructure` | BRICK servers, compute |
| `edge` | Mobile devices, field endpoints |
| `oracle` | Compliance oracle instances |
| `guardian` | Dedicated anchor/sentinel |
| `external` | Federated nodes |
---
## Severity Levels
| Level | Description |
|-------|-------------|
| `critical` | Active breach, data exfiltration |
| `high` | Confirmed attack, potential breach |
| `medium` | Suspicious activity, policy violation |
| `low` | Anomaly, informational |

---
# VaultMesh Infrastructure Templates
## Kubernetes Deployment
### Namespace
```yaml
apiVersion: v1
kind: Namespace
metadata:
name: vaultmesh
labels:
app.kubernetes.io/name: vaultmesh
app.kubernetes.io/part-of: civilization-ledger
pod-security.kubernetes.io/enforce: restricted
```
### Generic Deployment Template
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: vaultmesh-{component}
namespace: vaultmesh
labels:
app.kubernetes.io/name: {component}
app.kubernetes.io/component: {role}
app.kubernetes.io/part-of: vaultmesh
spec:
replicas: {replicas}
selector:
matchLabels:
app.kubernetes.io/name: {component}
template:
metadata:
labels:
app.kubernetes.io/name: {component}
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9090"
prometheus.io/path: "/metrics"
spec:
serviceAccountName: vaultmesh-{component}
securityContext:
runAsNonRoot: true
runAsUser: 1000
fsGroup: 1000
seccompProfile:
type: RuntimeDefault
containers:
- name: {component}
image: ghcr.io/vaultmesh/{component}:{version}
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
capabilities:
drop:
- ALL
ports:
- name: http
containerPort: {http_port}
protocol: TCP
- name: metrics
containerPort: 9090
protocol: TCP
env:
- name: RUST_LOG
value: "info,vaultmesh=debug"
- name: CONFIG_PATH
value: "/config/{component}.toml"
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: vaultmesh-db-credentials
key: {component}-url
volumeMounts:
- name: config
mountPath: /config
readOnly: true
- name: receipts
mountPath: /data/receipts
- name: tmp
mountPath: /tmp
resources:
requests:
cpu: {cpu_request}
memory: {memory_request}
limits:
cpu: {cpu_limit}
memory: {memory_limit}
livenessProbe:
httpGet:
path: /health/live
port: http
initialDelaySeconds: 10
periodSeconds: 10
readinessProbe:
httpGet:
path: /health/ready
port: http
initialDelaySeconds: 5
periodSeconds: 5
volumes:
- name: config
configMap:
name: vaultmesh-{component}-config
- name: receipts
persistentVolumeClaim:
claimName: vaultmesh-receipts
- name: tmp
emptyDir: {}
```
### Service Template
```yaml
apiVersion: v1
kind: Service
metadata:
name: vaultmesh-{component}
namespace: vaultmesh
spec:
selector:
app.kubernetes.io/name: {component}
ports:
- name: http
port: 80
targetPort: http
- name: metrics
port: 9090
targetPort: metrics
type: ClusterIP
```
### ConfigMap Template
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: vaultmesh-{component}-config
namespace: vaultmesh
data:
{component}.toml: |
[server]
bind = "0.0.0.0:{port}"
metrics_bind = "0.0.0.0:9090"
[database]
max_connections = 20
min_connections = 5
[receipts]
base_path = "/data/receipts"
# Component-specific configuration
```
### PersistentVolumeClaim
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: vaultmesh-receipts
namespace: vaultmesh
spec:
accessModes:
- ReadWriteMany
storageClassName: nfs-csi
resources:
requests:
storage: 100Gi
```
### Ingress
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: vaultmesh-ingress
namespace: vaultmesh
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/proxy-body-size: "50m"
nginx.ingress.kubernetes.io/rate-limit: "100"
nginx.ingress.kubernetes.io/rate-limit-window: "1m"
spec:
ingressClassName: nginx
tls:
- hosts:
- portal.vaultmesh.io
- guardian.vaultmesh.io
- oracle.vaultmesh.io
secretName: vaultmesh-tls
rules:
- host: portal.vaultmesh.io
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: vaultmesh-portal
port:
name: http
```
---
## Component Configurations
### Portal
```yaml
# Deployment overrides
replicas: 2
http_port: 8080
cpu_request: 100m
memory_request: 256Mi
cpu_limit: 1000m
memory_limit: 1Gi
```
```toml
# portal.toml
[server]
bind = "0.0.0.0:8080"
metrics_bind = "0.0.0.0:9090"
[database]
max_connections = 20
min_connections = 5
[receipts]
base_path = "/data/receipts"
[scrolls]
enabled = [
"Drills", "Compliance", "Guardian", "Treasury", "Mesh",
"OffSec", "Identity", "Observability", "Automation",
"PsiField", "Federation", "Governance"
]
[auth]
jwt_issuer = "vaultmesh-portal"
session_ttl_hours = 24
```
### Guardian
```yaml
# Deployment overrides
replicas: 1 # Single for coordination
strategy:
type: Recreate
http_port: 8081
cpu_request: 200m
memory_request: 512Mi
cpu_limit: 2000m
memory_limit: 2Gi
```
```toml
# guardian.toml
[server]
bind = "0.0.0.0:8081"
metrics_bind = "0.0.0.0:9090"
[proofchain]
receipts_path = "/data/receipts"
roots_path = "/data/receipts"
[anchor]
primary = "ethereum"
interval_seconds = 3600
min_receipts_threshold = 10
[anchor.ethereum]
rpc_url = "https://mainnet.infura.io/v3/${INFURA_PROJECT_ID}"
contract_address = "0x..."
chain_id = 1
[anchor.ots]
enabled = true
calendar_urls = [
"https://a.pool.opentimestamps.org",
"https://b.pool.opentimestamps.org"
]
[sentinel]
enabled = true
alert_webhook = "http://alertmanager:9093/api/v2/alerts"
```
### Oracle
```yaml
# Deployment overrides
replicas: 2
http_port: 8082
mcp_port: 8083
cpu_request: 200m
memory_request: 512Mi
cpu_limit: 2000m
memory_limit: 4Gi
```
```toml
# oracle.toml
[server]
http_bind = "0.0.0.0:8082"
mcp_bind = "0.0.0.0:8083"
metrics_bind = "0.0.0.0:9090"
[corpus]
path = "/data/corpus"
index_path = "/data/cache/index"
supported_formats = ["docx", "pdf", "md", "txt"]
[llm]
primary_provider = "anthropic"
primary_model = "claude-sonnet-4-20250514"
fallback_provider = "openai"
fallback_model = "gpt-4o"
temperature = 0.1
max_tokens = 4096
[receipts]
endpoint = "http://vaultmesh-portal/api/receipts/oracle"
```
---
## Docker Compose (Development)
```yaml
version: "3.9"
services:
portal:
build:
context: .
dockerfile: docker/portal/Dockerfile
ports:
- "8080:8080"
- "9090:9090"
environment:
- RUST_LOG=info,vaultmesh=debug
- VAULTMESH_CONFIG=/config/portal.toml
- DATABASE_URL=postgresql://vaultmesh:vaultmesh@postgres:5432/vaultmesh
- REDIS_URL=redis://redis:6379
volumes:
- ./config/portal.toml:/config/portal.toml:ro
- receipts:/data/receipts
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_started
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:8080/health/ready"]
      interval: 10s
      timeout: 5s
      retries: 5
guardian:
build:
context: .
dockerfile: docker/guardian/Dockerfile
ports:
- "8081:8081"
environment:
- RUST_LOG=info,guardian=debug
- GUARDIAN_CONFIG=/config/guardian.toml
- DATABASE_URL=postgresql://vaultmesh:vaultmesh@postgres:5432/vaultmesh
volumes:
- ./config/guardian.toml:/config/guardian.toml:ro
- receipts:/data/receipts
- guardian-state:/data/guardian
depends_on:
portal:
condition: service_healthy
oracle:
build:
context: .
dockerfile: docker/oracle/Dockerfile
ports:
- "8082:8082"
- "8083:8083"
environment:
- ORACLE_CONFIG=/config/oracle.toml
- OPENAI_API_KEY=${OPENAI_API_KEY}
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
- VAULTMESH_RECEIPT_ENDPOINT=http://portal:8080/api/receipts
volumes:
- ./config/oracle.toml:/config/oracle.toml:ro
- ./corpus:/data/corpus:ro
depends_on:
portal:
condition: service_healthy
postgres:
image: postgres:16-alpine
environment:
- POSTGRES_USER=vaultmesh
- POSTGRES_PASSWORD=vaultmesh
- POSTGRES_DB=vaultmesh
volumes:
- postgres-data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U vaultmesh"]
interval: 5s
timeout: 5s
retries: 5
redis:
image: redis:7-alpine
volumes:
- redis-data:/data
command: redis-server --appendonly yes
prometheus:
image: prom/prometheus:v2.47.0
ports:
- "9091:9090"
volumes:
- ./config/prometheus.yaml:/etc/prometheus/prometheus.yml:ro
- prometheus-data:/prometheus
grafana:
image: grafana/grafana:10.1.0
ports:
- "3000:3000"
environment:
- GF_SECURITY_ADMIN_PASSWORD=admin
volumes:
- ./config/grafana/provisioning:/etc/grafana/provisioning:ro
- grafana-data:/var/lib/grafana
volumes:
receipts:
guardian-state:
postgres-data:
redis-data:
prometheus-data:
grafana-data:
networks:
default:
name: vaultmesh
```
---
## Dockerfile Templates
### Rust Service
```dockerfile
# Build stage
FROM rust:1.75-alpine AS builder
RUN apk add --no-cache musl-dev openssl-dev openssl-libs-static
WORKDIR /build
COPY Cargo.toml Cargo.lock ./
COPY src ./src
RUN cargo build --release --target x86_64-unknown-linux-musl
# Runtime stage
FROM alpine:3.19
RUN apk add --no-cache ca-certificates tzdata
RUN adduser -D -u 1000 vaultmesh
USER vaultmesh
WORKDIR /app
COPY --from=builder /build/target/x86_64-unknown-linux-musl/release/{binary} /app/
EXPOSE 8080 9090
ENTRYPOINT ["/app/{binary}"]
```
### Python Service
```dockerfile
FROM python:3.12-slim
RUN useradd -m -u 1000 vaultmesh
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY --chown=vaultmesh:vaultmesh . .
USER vaultmesh
EXPOSE 8080 9090
CMD ["python", "-m", "{module}"]
```
---
## Prometheus Rules
```yaml
groups:
- name: vaultmesh.receipts
rules:
- alert: ReceiptWriteFailure
expr: rate(vaultmesh_receipt_write_errors_total[5m]) > 0
for: 1m
labels:
severity: critical
annotations:
summary: "Receipt write failures detected"
- alert: ReceiptRateAnomaly
expr: |
abs(rate(vaultmesh_receipts_total[5m]) -
avg_over_time(rate(vaultmesh_receipts_total[5m])[1h:5m]))
> 2 * stddev_over_time(rate(vaultmesh_receipts_total[5m])[1h:5m])
for: 10m
labels:
severity: warning
annotations:
summary: "Unusual receipt rate"
- name: vaultmesh.guardian
rules:
- alert: AnchorDelayed
expr: time() - vaultmesh_guardian_last_anchor_timestamp > 7200
for: 5m
labels:
severity: warning
annotations:
summary: "Guardian anchor delayed"
- alert: AnchorCriticallyDelayed
expr: time() - vaultmesh_guardian_last_anchor_timestamp > 14400
for: 5m
labels:
severity: critical
annotations:
summary: "No anchor in over 4 hours"
- alert: ProofChainDivergence
expr: vaultmesh_guardian_proofchain_divergence == 1
for: 1m
labels:
severity: critical
annotations:
summary: "ProofChain divergence detected"
- name: vaultmesh.governance
rules:
- alert: ConstitutionalViolation
expr: increase(vaultmesh_governance_violations_total[1h]) > 0
for: 0m
labels:
severity: critical
annotations:
summary: "Constitutional violation detected"
- alert: EmergencyActive
expr: vaultmesh_governance_emergency_active == 1
for: 0m
labels:
severity: warning
annotations:
summary: "Emergency powers in effect"
```
---
## Kustomization
### Base
```yaml
# kubernetes/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: vaultmesh
resources:
- namespace.yaml
- rbac.yaml
- portal/
- guardian/
- oracle/
- database/
- storage/
- ingress/
commonLabels:
app.kubernetes.io/part-of: vaultmesh
app.kubernetes.io/managed-by: kustomize
```
### Production Overlay
```yaml
# kubernetes/overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: vaultmesh
resources:
- ../../base
patches:
- path: portal-resources.yaml
- path: guardian-resources.yaml
- path: oracle-resources.yaml
configMapGenerator:
- name: vaultmesh-portal-config
behavior: merge
files:
- portal.toml=configs/portal-prod.toml
replicas:
- name: vaultmesh-portal
count: 3
- name: vaultmesh-oracle
count: 3
```
---
## Terraform (Infrastructure)
```hcl
# main.tf
terraform {
required_providers {
kubernetes = {
source = "hashicorp/kubernetes"
version = "~> 2.23"
}
helm = {
source = "hashicorp/helm"
version = "~> 2.11"
}
}
}
resource "kubernetes_namespace" "vaultmesh" {
metadata {
name = "vaultmesh"
labels = {
"app.kubernetes.io/name" = "vaultmesh"
"app.kubernetes.io/part-of" = "civilization-ledger"
}
}
}
resource "helm_release" "vaultmesh" {
name = "vaultmesh"
namespace = kubernetes_namespace.vaultmesh.metadata[0].name
chart = "./charts/vaultmesh"
values = [
file("values-${var.environment}.yaml")
]
set {
name = "portal.replicas"
value = var.portal_replicas
}
set {
name = "guardian.anchor.ethereum.rpcUrl"
value = var.ethereum_rpc_url
}
set_sensitive {
name = "secrets.anthropicApiKey"
value = var.anthropic_api_key
}
}
variable "environment" {
type = string
default = "production"
}
variable "portal_replicas" {
type = number
default = 3
}
variable "ethereum_rpc_url" {
type = string
}
variable "anthropic_api_key" {
type = string
sensitive = true
}
```
---
# VaultMesh MCP Integration Patterns
## Architecture
```
┌─────────────────────────────────────────────────────────────┐
│ CLAUDE │
└───────────────────────────┬─────────────────────────────────┘
│ MCP Protocol
┌─────────────────────────────────────────────────────────────┐
│ MCP GATEWAY │
│ • Authentication (capability verification) │
│ • Rate limiting │
│ • Audit logging (all tool calls receipted) │
│ • Constitutional compliance checking │
└───────────────────────────┬─────────────────────────────────┘
┌───────────────┼───────────────┐
▼ ▼ ▼
┌───────────┐ ┌───────────┐ ┌───────────┐
│ Oracle │ │ Drills │ │ Mesh │
│ Server │ │ Server │ │ Server │
└───────────┘ └───────────┘ └───────────┘
```
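The four gateway stages above can be sketched as a composed dispatch function. This is a minimal illustration of the ordering (authenticate, rate-limit, compliance-check, then receipt the call); the function and parameter names here are hypothetical, not the real gateway API.

```python
# Illustrative sketch of the MCP gateway stages: each check either
# passes the call through or raises. All names here are hypothetical.
class GatewayError(Exception):
    pass

def make_gateway(verify_identity, check_rate, check_compliance, emit_receipt):
    """Compose the four gateway stages around a tool handler."""
    def dispatch(tool, caller, params, handler):
        if not verify_identity(caller):             # 1. authentication
            raise GatewayError("unauthenticated")
        if not check_rate(caller, tool):            # 2. rate limiting
            raise GatewayError("rate_limited")
        if not check_compliance(tool, params):      # 3. constitutional check
            raise GatewayError("constitutional_violation")
        result = handler(params)                    # route to backend server
        emit_receipt(tool, caller, params, result)  # 4. audit logging
        return result
    return dispatch
```

Note that the receipt is emitted only after the handler succeeds; a production gateway would likely also receipt denied calls.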
## Tool Categories
### Read-Only Tools (Default Access)
| Tool | Capability | Description |
|------|------------|-------------|
| `oracle_answer` | `oracle_query` | Ask compliance questions |
| `oracle_corpus_search` | `oracle_query` | Search compliance corpus |
| `drills_status` | `drills_view` | View drill status |
| `mesh_topology` | `mesh_view` | View mesh topology |
| `mesh_node_status` | `mesh_view` | View node status |
| `treasury_balance` | `treasury_view` | View balances |
| `guardian_anchor_status` | `guardian_view` | View anchor status |
| `guardian_verify_receipt` | `guardian_view` | Verify receipts |
| `identity_resolve_did` | `identity_view` | Resolve DIDs |
| `identity_whoami` | (any) | View own identity |
| `psi_phase_status` | `psi_view` | View phase status |
| `psi_opus_status` | `psi_view` | View opus status |
| `governance_constitution_summary` | `governance_view` | View constitution |
| `receipts_search` | `receipts_view` | Search receipts |
| `system_health` | `system_view` | View system health |
### Write Tools (Elevated Access)
| Tool | Capability | Description |
|------|------------|-------------|
| `drills_create` | `drills_create` | Create new drill |
| `drills_complete_stage` | `drills_execute` | Complete drill stage |
| `treasury_record_entry` | `treasury_write` | Record financial entry |
| `guardian_anchor_now` | `anchor` | Trigger anchor cycle |
| `psi_transmute` | `psi_transmute` | Start transmutation |
## Tool Implementation Patterns
### Basic Read Tool
```python
@server.tool()
async def my_read_tool(
    filter_param: str | None = None,
limit: int = 50,
) -> str:
"""
Description of what this tool does.
Args:
filter_param: Optional filter
limit: Maximum results
Returns:
Query results as JSON
"""
# Verify capability
caller = await get_caller_identity()
await verify_capability(caller, "my_view")
# Perform query
results = await engine.query(filter_param, limit)
return json.dumps([r.to_dict() for r in results], indent=2)
```
### Write Tool with Receipt
```python
@server.tool()
async def my_write_tool(
param1: str,
param2: int,
) -> str:
"""
Description of write operation.
Args:
param1: First parameter
param2: Second parameter
Returns:
Operation result as JSON
"""
# Verify elevated capability
caller = await get_caller_identity()
await verify_capability(caller, "my_write")
# Perform operation
result = await engine.perform_operation(param1, param2)
# Emit receipt for audit
await emit_tool_call_receipt(
tool="my_write_tool",
caller=caller,
params={"param1": param1, "param2": param2},
result_hash=result.hash,
)
return json.dumps(result.to_dict(), indent=2)
```
### Tool with Constitutional Check
```python
@server.tool()
async def sensitive_operation(
target: str,
action: str,
) -> str:
"""
Operation requiring constitutional compliance check.
"""
caller = await get_caller_identity()
await verify_capability(caller, "admin")
# Check constitutional compliance BEFORE executing
compliance = await governance_engine.check_compliance(
action=action,
actor=caller,
target=target,
)
if not compliance.allowed:
return json.dumps({
"error": "constitutional_violation",
"violated_articles": compliance.violated_articles,
"message": compliance.message,
}, indent=2)
# Execute if compliant
result = await engine.execute(target, action)
await emit_tool_call_receipt(
tool="sensitive_operation",
caller=caller,
params={"target": target, "action": action},
result_hash=result.hash,
)
return json.dumps(result.to_dict(), indent=2)
```
## Tool Call Receipt
Every MCP tool call is receipted:
```json
{
"type": "mcp_tool_call",
"call_id": "mcp-call-2025-12-06-001",
"timestamp": "2025-12-06T14:30:00Z",
"caller": "did:vm:agent:claude-session-abc123",
"tool": "oracle_answer",
"params_hash": "blake3:params...",
"result_hash": "blake3:result...",
"duration_ms": 1250,
"capability_used": "oracle_query",
"session_id": "session-xyz789",
"tags": ["mcp", "oracle", "tool-call"],
"root_hash": "blake3:aaa111..."
}
```
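The `params_hash` and `result_hash` fields imply hashing over a canonical encoding, so that semantically equal params produce equal hashes. A minimal sketch, using stdlib SHA-256 as a stand-in for the BLAKE3 used in the receipts above:

```python
import hashlib
import json

def hash_params(params: dict, algo: str = "sha256") -> str:
    """Hash tool-call params over a canonical JSON encoding.

    Sketch only: VaultMesh receipts use BLAKE3 ("blake3:..."); sha256
    is used here because it ships with the standard library.
    """
    # sort_keys + compact separators make the encoding order-independent
    canonical = json.dumps(params, sort_keys=True, separators=(",", ":"))
    digest = hashlib.new(algo, canonical.encode("utf-8")).hexdigest()
    return f"{algo}:{digest}"
```

With this scheme, `{"a": 1, "b": 2}` and `{"b": 2, "a": 1}` hash identically, which is what makes the receipt verifiable after the fact.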
## Authentication
### Session Identity
```python
async def get_caller_identity() -> str:
"""Get the DID of the current MCP caller."""
session = get_current_session()
if session.authenticated_did:
return session.authenticated_did
# Anonymous callers get session-scoped agent DID
return f"did:vm:agent:mcp-session-{session.id}"
```
### Capability Verification
```python
async def verify_capability(caller: str, capability: str) -> bool:
"""Verify the caller has the required capability."""
has_cap = await identity_engine.check_capability(caller, capability)
if not has_cap:
raise PermissionError(
f"Caller {caller} lacks capability: {capability}"
)
# Log capability exercise
await identity_engine.log_capability_exercise(
caller=caller,
capability=capability,
action="mcp_tool_call",
)
return True
```
## Rate Limiting
```python
from datetime import timedelta

class RateLimiter:
def __init__(self):
self.limits = {
"oracle_answer": (10, timedelta(minutes=1)),
"guardian_anchor_now": (5, timedelta(hours=1)),
"treasury_record_entry": (100, timedelta(hours=1)),
"default": (60, timedelta(minutes=1)),
}
async def check(self, caller: str, tool: str) -> bool:
key = f"{caller}:{tool}"
limit, window = self.limits.get(tool, self.limits["default"])
# Check against limit
current_count = await self.get_count(key, window)
if current_count >= limit:
raise RateLimitExceeded(
f"Rate limit exceeded: {limit} per {window}"
)
await self.increment(key)
return True
```
## Claude Desktop Configuration
### config.json
```json
{
"mcpServers": {
"vaultmesh": {
"command": "python",
"args": ["-m", "vaultmesh_mcp.server"],
"env": {
"VAULTMESH_CONFIG": "/path/to/config.toml",
"VAULTMESH_IDENTITY": "did:vm:agent:claude-desktop"
}
}
}
}
```
### Capability Configuration
```toml
# config.toml
[mcp.capabilities]
default_capabilities = [
"oracle_query",
"drills_view",
"mesh_view",
"treasury_view",
"guardian_view",
"identity_view",
"psi_view",
"governance_view",
"receipts_view",
"system_view",
]
elevated_capabilities = [
"drills_create",
"drills_execute",
"treasury_write",
"anchor",
"psi_transmute",
]
[mcp.rate_limits]
oracle_answer = { limit = 10, window_minutes = 1 }
guardian_anchor_now = { limit = 5, window_hours = 1 }
default = { limit = 60, window_minutes = 1 }
[mcp.audit]
receipt_all_calls = true
log_params = true
log_results = false
```
## Example Interactions
### Compliance Query
```
Human: What's our compliance status for AI Act Annex IV?
Claude: [Calls oracle_answer]
Based on the Oracle's assessment:
**Compliance Status: Partial (73%)**
Key findings:
- Technical documentation: ✅ Compliant
- Risk management: ⚠️ Needs update
- Data governance: ✅ Compliant
- Human oversight: ⚠️ Documentation incomplete
```
### Security Drill
```
Human: Start a drill for Kubernetes lateral movement
Claude: [Calls drills_create]
Created drill: drill-1764691432
**Stages:**
1. Initial Access Simulation
2. Lateral Movement Attempt
3. Detection Validation
4. Response Execution
Ready to begin?
```
### System Health
```
Human: How's the system?
Claude: [Calls system_health]
**VaultMesh Health: 🟢 Healthy**
- Nodes: 5 active
- Last anchor: 47 min ago
- Phase: RUBEDO 🜂
- Receipts today: 34
```
## Server Entry Point
```python
# vaultmesh_mcp/server.py
from mcp.server import Server
from mcp.server.stdio import stdio_server
server = Server("vaultmesh")
# Register all tools
from .tools import (
oracle_tools,
drills_tools,
mesh_tools,
treasury_tools,
guardian_tools,
identity_tools,
psi_tools,
governance_tools,
)
def main():
import asyncio
async def run():
async with stdio_server() as (read, write):
await server.run(read, write, server.create_initialization_options())
asyncio.run(run())
if __name__ == "__main__":
main()
```
## Custom VaultMesh Nodes for n8n
When integrating with n8n workflows:
```javascript
// VaultMesh Receipt Emit Node
{
name: 'vaultmesh-receipt-emit',
displayName: 'VaultMesh Receipt',
description: 'Emit a receipt to VaultMesh',
properties: [
{
displayName: 'Scroll',
name: 'scroll',
type: 'options',
options: [
{ name: 'Automation', value: 'automation' },
{ name: 'Compliance', value: 'compliance' },
// ...
],
},
{
displayName: 'Receipt Type',
name: 'receiptType',
type: 'string',
},
{
displayName: 'Body',
name: 'body',
type: 'json',
},
{
displayName: 'Tags',
name: 'tags',
type: 'string',
description: 'Comma-separated tags',
},
],
async execute() {
const scroll = this.getNodeParameter('scroll', 0);
const receiptType = this.getNodeParameter('receiptType', 0);
const body = this.getNodeParameter('body', 0);
const tags = this.getNodeParameter('tags', 0).split(',');
const receipt = await vaultmesh.emitReceipt({
scroll,
receiptType,
body,
tags,
});
return [{ json: receipt }];
},
}
```
## Error Handling
```python
@server.tool()
async def robust_tool(param: str) -> str:
"""Tool with comprehensive error handling."""
try:
caller = await get_caller_identity()
await verify_capability(caller, "required_cap")
result = await engine.operation(param)
return json.dumps(result.to_dict(), indent=2)
except PermissionError as e:
return json.dumps({
"error": "permission_denied",
"message": str(e),
"required_capability": "required_cap",
}, indent=2)
except RateLimitExceeded as e:
return json.dumps({
"error": "rate_limit_exceeded",
"message": str(e),
"retry_after_seconds": e.retry_after,
}, indent=2)
except ConstitutionalViolation as e:
return json.dumps({
"error": "constitutional_violation",
"violated_axioms": e.axioms,
"message": str(e),
}, indent=2)
except Exception as e:
logger.error(f"Tool error: {e}")
return json.dumps({
"error": "internal_error",
"message": "An unexpected error occurred",
}, indent=2)
```

---
docs/skill/OPERATIONS.md
# VaultMesh Operations Guide
## Daily Operations
### Morning Health Check
```bash
#!/bin/bash
# scripts/morning-check.sh
echo "=== VaultMesh Morning Health Check ==="
echo "Date: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
# 1. System health
echo -e "\n1. System Health"
vm-cli system health
# 2. Guardian status
echo -e "\n2. Guardian Status"
vm-guardian anchor-status
# 3. Phase status
echo -e "\n3. Current Phase"
vm-psi phase current
# 4. Overnight receipts
echo -e "\n4. Receipts (last 12h)"
vm-cli receipts count --since 12h
# 5. Any violations
echo -e "\n5. Governance Violations"
vm-gov violations list --since 24h --severity high,critical
# 6. Federation health
echo -e "\n6. Federation Status"
vm-federation health --all-peers
echo -e "\n=== Check Complete ==="
```
### Anchor Monitoring
```bash
# Check anchor status
vm-guardian anchor-status
# View anchor history
vm-guardian anchor-history --last 24h
# Trigger manual anchor if needed
vm-guardian anchor-now --wait
# Verify specific receipt
vm-guardian verify-receipt blake3:abc123... --scroll Compliance
```
### Receipt Queries
```bash
# Count receipts by scroll
vm-cli receipts count --by-scroll
# Search receipts
vm-cli receipts search --scroll Drills --from 2025-12-01 --to 2025-12-06
# Export receipts
vm-cli receipts export --scroll Compliance --format csv --output compliance.csv
# Verify integrity
vm-guardian verify-all --scroll all
```
---
## Common Tasks
### Add New Node to Mesh
```bash
# 1. Create DID for new node
vm-identity did create --type node --id new-node-01
# 2. Issue node credential
vm-identity credential issue \
--type VaultMeshNodeCredential \
--subject did:vm:node:new-node-01 \
--issuer did:vm:node:portal-01
# 3. Add to mesh
vm-mesh node add \
--did did:vm:node:new-node-01 \
--endpoint https://new-node-01.vaultmesh.io \
--type infrastructure
# 4. Grant capabilities
vm-identity capability grant \
--subject did:vm:node:new-node-01 \
--capability storage,compute
# 5. Verify
vm-mesh node status new-node-01
```
### Key Rotation Ceremony
```bash
# 1. Initiate ceremony
vm-identity key-rotate \
--did did:vm:node:brick-01 \
--ceremony-type standard
# 2. Generate new keypair (on target node)
vm-identity key-generate --algorithm ed25519
# 3. Witness signatures (from other nodes)
vm-identity key-witness \
--ceremony ceremony-2025-12-001 \
--witness did:vm:node:brick-02
# 4. Publish new key
vm-identity key-publish --ceremony ceremony-2025-12-001
# 5. Verify propagation
vm-identity did resolve did:vm:node:brick-01
```
### Create Security Drill
```bash
# 1. Create drill from prompt
vm-drills create \
--prompt "Detect and respond to ransomware encryption" \
--severity high \
--skills detection-defense-ir,kubernetes-security
# 2. Review generated contract
vm-drills show drill-2025-12-001
# 3. Start execution
vm-drills start drill-2025-12-001
# 4. Complete stages
vm-drills complete-stage drill-2025-12-001 stage-1 \
--outputs cases/drills/drill-2025-12-001/stage-1/ \
--findings "Identified encryption patterns"
# 5. Seal drill
vm-drills seal drill-2025-12-001
```
### Initiate Transmutation
```bash
# 1. Start transmutation from incident
vm-psi transmute start \
--input INC-2025-12-001 \
--input-type security_incident \
--title "SSH Brute Force to Detection"
# 2. Extract IOCs
vm-psi transmute step transmute-2025-12-001 extract
# 3. Dissolve to standard format
vm-psi transmute step transmute-2025-12-001 dissolve
# 4. Purify (validate)
vm-psi transmute step transmute-2025-12-001 purify
# 5. Coagulate (generate rules)
vm-psi transmute step transmute-2025-12-001 coagulate
# 6. Seal
vm-psi transmute seal transmute-2025-12-001
```
---
## Troubleshooting
### Anchor Failures
**Symptom**: `vm-guardian anchor-status` shows failures
**Diagnosis**:
```bash
# Check guardian logs
kubectl logs -n vaultmesh -l app.kubernetes.io/name=guardian --tail=100
# Check anchor backend connectivity
vm-guardian test-backend ethereum
vm-guardian test-backend ots
# Check pending receipts
vm-guardian pending-receipts
```
**Common Causes**:
1. **Network issues**: Check Ethereum RPC connectivity
2. **Insufficient funds**: Check anchor wallet balance
3. **Rate limiting**: Check if backend is rate limiting
4. **Configuration**: Verify anchor config
**Resolution**:
```bash
# Retry anchor
vm-guardian anchor-now --backend ots --wait
# If Ethereum issues, switch to OTS temporarily
vm-guardian config set anchor.primary ots
# Check and top up wallet
vm-guardian wallet balance
vm-guardian wallet fund --amount 0.1
```
### Receipt Integrity Errors
**Symptom**: `verify-all` reports mismatches
**Diagnosis**:
```bash
# Identify affected scroll
vm-guardian verify-all --scroll all --verbose
# Check specific receipt
vm-guardian verify-receipt blake3:... --scroll Compliance --debug
# Compare computed vs stored root
vm-guardian compute-root --scroll Compliance
cat receipts/ROOT.compliance.txt
```
**Common Causes**:
1. **Corrupted JSONL**: File system issues
2. **Incomplete write**: Process interrupted
3. **Manual modification**: Violation of AXIOM-001
**Resolution**:
```bash
# If corruption detected, restore from backup
vm-cli backup restore --backup-id backup-2025-12-05 --scroll Compliance
# Recompute root after restore
vm-guardian recompute-root --scroll Compliance
# Trigger anchor to seal restored state
vm-guardian anchor-now --scroll Compliance --wait
```
### Node Connectivity Issues
**Symptom**: Node showing unhealthy in mesh
**Diagnosis**:
```bash
# Check node status
vm-mesh node status brick-02
# Test connectivity
vm-mesh ping brick-02
# Check routes
vm-mesh routes list --node brick-02
# Check node logs
kubectl logs -n vaultmesh pod/brick-02 --tail=100
```
**Common Causes**:
1. **Network partition**: Firewall/network issues
2. **Resource exhaustion**: Node overloaded
3. **Certificate expiry**: TLS cert expired
4. **Process crash**: Service died
**Resolution**:
```bash
# Restart node pod
kubectl rollout restart deployment/brick-02 -n vaultmesh
# If cert expired
vm-identity cert-renew --node brick-02
# If persistent issues, remove and re-add
vm-mesh node remove brick-02 --force
vm-mesh node add --did did:vm:node:brick-02 --endpoint https://...
```
### Oracle Query Failures
**Symptom**: Oracle returning errors
**Diagnosis**:
```bash
# Check oracle health
vm-oracle health
# Check LLM connectivity
vm-oracle test-llm anthropic
vm-oracle test-llm openai
# Check corpus status
vm-oracle corpus status
# Check logs
kubectl logs -n vaultmesh -l app.kubernetes.io/name=oracle --tail=100
```
**Common Causes**:
1. **LLM API issues**: Rate limiting, key expiry
2. **Corpus empty**: Documents not loaded
3. **Index corruption**: Vector index issues
4. **Memory exhaustion**: OOM conditions
**Resolution**:
```bash
# Rotate API key if expired
kubectl create secret generic oracle-llm-credentials \
--from-literal=anthropic-key=NEW_KEY \
-n vaultmesh --dry-run=client -o yaml | kubectl apply -f -
# Reload corpus
vm-oracle corpus reload
# Rebuild index
vm-oracle corpus reindex
# Restart oracle
kubectl rollout restart deployment/vaultmesh-oracle -n vaultmesh
```
### Phase Stuck in Nigredo
**Symptom**: System in Nigredo for extended period
**Diagnosis**:
```bash
# Check phase details
vm-psi phase current --verbose
# Check active incidents
vm-offsec incidents list --status open
# Check for blocking issues
vm-psi blockers
# Review phase history
vm-psi phase history --last 7d
```
**Common Causes**:
1. **Unresolved incident**: Active security issue
2. **Failed transmutation**: Stuck in process
3. **Missing witness**: Transmutation waiting for signature
4. **Metric threshold**: Health metrics below threshold
**Resolution**:
```bash
# Close incident if resolved
vm-offsec incident close INC-2025-12-001 \
--resolution "Threat neutralized, systems restored"
# Complete stuck transmutation
vm-psi transmute force-complete transmute-2025-12-001
# Manual phase transition (requires justification)
vm-psi phase transition albedo \
--reason "Incident resolved, metrics stable" \
--evidence evidence-report.md
```
### Constitutional Violation Detected
**Symptom**: `gov_violation` alert fired
**Diagnosis**:
```bash
# View violation details
vm-gov violations show VIOL-2025-12-001
# Check what was attempted
vm-gov violations evidence VIOL-2025-12-001
# Review enforcement action
vm-gov enforcement show ENF-2025-12-001
```
**Common Causes**:
1. **Agent misconfiguration**: Automation tried unauthorized action
2. **Capability expiry**: Token expired mid-operation
3. **Bug in engine**: Logic error attempting violation
4. **Attack attempt**: Malicious action blocked
**Resolution**:
```bash
# If false positive, dismiss
vm-gov violations review VIOL-2025-12-001 \
--decision dismiss \
--reason "False positive due to timing issue"
# If real, review and uphold enforcement
vm-gov enforcement review ENF-2025-12-001 --decision uphold
# Fix underlying issue
# (depends on specific violation)
```
---
## Backup & Recovery
### Scheduled Backups
```bash
# Full backup
vm-cli backup create --type full
# Incremental backup
vm-cli backup create --type incremental
# List backups
vm-cli backup list
# Verify backup integrity
vm-cli backup verify backup-2025-12-05
```
### Recovery Procedures
```bash
# 1. Stop services
kubectl scale deployment -n vaultmesh --replicas=0 --all
# 2. Restore from backup
vm-cli backup restore --backup-id backup-2025-12-05
# 3. Verify integrity
vm-guardian verify-all --scroll all
# 4. Restart services
kubectl scale deployment -n vaultmesh --replicas=2 \
vaultmesh-portal vaultmesh-oracle
kubectl scale deployment -n vaultmesh --replicas=1 vaultmesh-guardian
# 5. Trigger anchor to seal restored state
vm-guardian anchor-now --wait
```
### Disaster Recovery
```bash
# Full rebuild from backup
./scripts/disaster-recovery.sh --backup backup-2025-12-05
# Verify federation peers
vm-federation verify-all
# Re-establish federation trust if needed
vm-federation re-establish --peer vaultmesh-berlin
```
---
## Performance Tuning
### Receipt Write Optimization
```toml
# config.toml
[receipts]
# Batch writes for better throughput
batch_size = 100
batch_timeout_ms = 100
# Compression
compression = "zstd"
compression_level = 3
# Index configuration
index_cache_size_mb = 512
```
### Database Tuning
```sql
-- Vacuum and analyze
VACUUM ANALYZE receipts;
-- Check slow queries
-- mean_exec_time on PostgreSQL 13+; use mean_time on older versions
SELECT query, calls, mean_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
-- Index usage
SELECT schemaname, tablename, indexname, idx_scan
FROM pg_stat_user_indexes
ORDER BY idx_scan;
```
### Memory Optimization
```bash
# Check memory usage
kubectl top pods -n vaultmesh
# Adjust limits if needed
kubectl patch deployment vaultmesh-oracle -n vaultmesh \
-p '{"spec":{"template":{"spec":{"containers":[{"name":"oracle","resources":{"limits":{"memory":"8Gi"}}}]}}}}'
```
---
## Monitoring Dashboards
### Key Metrics to Watch
| Metric | Warning | Critical |
|--------|---------|----------|
| `vaultmesh_guardian_last_anchor_age` | > 2h | > 4h |
| `vaultmesh_receipt_write_errors_total` | > 0 | > 10/min |
| `vaultmesh_mesh_node_unhealthy` | any | multiple |
| `vaultmesh_oracle_latency_p95` | > 30s | > 60s |
| `vaultmesh_governance_violations` | any | critical |
| `vaultmesh_psi_phase` | nigredo > 24h | nigredo > 72h |
### Alert Response
```bash
# Acknowledge alert
vm-alerts ack ALERT-2025-12-001
# Silence alert (for maintenance)
vm-alerts silence --matcher 'alertname="AnchorDelayed"' --duration 2h
# View active alerts
vm-alerts list --active
```

---
docs/skill/PROTOCOLS.md
# VaultMesh Federation & Governance Protocols
## Federation Protocol
### Trust Establishment Flow
```
┌──────────────┐ ┌──────────────┐
│ MESH-A │ │ MESH-B │
│ (Dublin) │ │ (Berlin) │
└──────┬───────┘ └──────┬───────┘
│ │
│ 1. Discovery │
│ GET /federation/discovery │
│──────────────────────────────────►│
│ │
│ 2. Proposal │
│ POST /federation/proposals │
│──────────────────────────────────►│
│ │
│ 3. Counter/Accept │
│◄──────────────────────────────────│
│ │
│ 4. Mutual Signature │
│◄─────────────────────────────────►│
│ │
│ 5. Begin Witness Cycle │
│◄─────────────────────────────────►│
│ │
```
### Trust Levels
| Level | Name | Capabilities |
|-------|------|--------------|
| 0 | `isolated` | No federation |
| 1 | `observe` | Read-only witness, public receipts only |
| 2 | `verify` | Mutual verification, receipt sampling |
| 3 | `attest` | Cross-attestation, shared roots |
| 4 | `integrate` | Shared scrolls, joint governance |
### Discovery Record
```json
{
"mesh_id": "did:vm:mesh:vaultmesh-dublin",
"display_name": "VaultMesh Dublin",
"endpoints": {
"federation": "https://federation.vaultmesh-dublin.io",
"verification": "https://verify.vaultmesh-dublin.io"
},
"public_key": "ed25519:z6Mk...",
"scrolls_available": ["Compliance", "Drills"],
"trust_policy": {
"accepts_proposals": true,
"min_trust_level": 1,
"requires_mutual": true
},
"attestations": []
}
```
### Trust Proposal
```json
{
"proposal_id": "fed-proposal-2025-12-06-001",
"proposer": "did:vm:mesh:vaultmesh-dublin",
"target": "did:vm:mesh:vaultmesh-berlin",
"proposed_at": "2025-12-06T10:00:00Z",
"expires_at": "2025-12-13T10:00:00Z",
"proposed_trust_level": 2,
"proposed_terms": {
"scrolls_to_share": ["Compliance"],
"verification_frequency": "hourly",
"retention_period_days": 365,
"data_jurisdiction": "EU",
"audit_rights": true
},
"proposer_attestations": {
"identity_proof": "...",
"compliance_credentials": ["ISO27001", "SOC2"]
},
"signature": "z58D..."
}
```
### Federation Agreement
```json
{
"agreement_id": "fed-agreement-2025-12-06-001",
"parties": [
"did:vm:mesh:vaultmesh-dublin",
"did:vm:mesh:vaultmesh-berlin"
],
"established_at": "2025-12-06T16:00:00Z",
"trust_level": 2,
"terms": {
"scrolls_shared": ["Compliance", "Drills"],
"verification_frequency": "daily",
"retention_period_days": 180,
"data_jurisdiction": "EU",
"audit_rights": true,
"dispute_resolution": "arbitration_zurich"
},
"key_exchange": {
"dublin_federation_key": "ed25519:z6MkDublin...",
"berlin_federation_key": "ed25519:z6MkBerlin..."
},
"signatures": {
"did:vm:mesh:vaultmesh-dublin": {
"signed_at": "2025-12-06T15:30:00Z",
"signature": "z58D..."
},
"did:vm:mesh:vaultmesh-berlin": {
"signed_at": "2025-12-06T16:00:00Z",
"signature": "z47C..."
}
},
"agreement_hash": "blake3:abc123..."
}
```
### Witness Protocol
```
Anchor Completes → Notify Peer → Peer Verifies → Witness Receipt
```
**Witness Receipt**:
```json
{
"type": "fed_witness_event",
"witness_id": "witness-2025-12-06-001",
"witnessed_mesh": "did:vm:mesh:vaultmesh-dublin",
"witnessing_mesh": "did:vm:mesh:vaultmesh-berlin",
"timestamp": "2025-12-06T12:05:00Z",
"scroll": "Compliance",
"witnessed_root": "blake3:aaa111...",
"witnessed_anchor": {
"backend": "ethereum",
"tx_hash": "0x123...",
"block_number": 12345678
},
"verification_method": "anchor_proof_validation",
"verification_result": "verified",
"samples_checked": 5,
"discrepancies": [],
"witness_signature": "z47C..."
}
```
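Before signing a witness receipt, the witnessing mesh checks that the sampled receipts hash to the values the peer published and that the anchor proof commits to the same root it fetched. A sketch of that peer-side check (the helper names and the `anchored_root` field are illustrative; the real proof validation is backend-specific):

```python
def witness_anchor(peer_root, anchor_proof, sample_receipts, line_hash):
    """Sketch of the verification behind a fed_witness_event.

    peer_root comes from /federation/roots, anchor_proof from
    /federation/notify, sample_receipts from /federation/receipts/{scroll}
    as (claimed_hash, raw_bytes) pairs. Names here are hypothetical.
    """
    discrepancies = []
    # 1. Every sampled receipt must hash to the value its mesh published.
    for claimed_hash, raw in sample_receipts:
        if line_hash(raw) != claimed_hash:
            discrepancies.append(claimed_hash)
    # 2. The anchor proof must commit to the root we independently fetched.
    root_matches = anchor_proof.get("anchored_root") == peer_root
    result = "verified" if root_matches and not discrepancies else "divergent"
    return {"verification_result": result,
            "samples_checked": len(sample_receipts),
            "discrepancies": discrepancies}
```

The returned fields mirror the witness receipt above, so a "divergent" result carries the evidence needed for dispute resolution.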
### Cross-Anchor
At trust level 3+, meshes include each other's roots:
```json
{
"type": "fed_cross_anchor",
"anchoring_mesh": "did:vm:mesh:vaultmesh-berlin",
"anchored_mesh": "did:vm:mesh:vaultmesh-dublin",
"dublin_roots_included": {
"Compliance": "blake3:aaa111...",
"Drills": "blake3:bbb222..."
},
"combined_root": "blake3:ccc333...",
"anchor_proof": {
"backend": "bitcoin",
"tx_hash": "abc123..."
}
}
```
### Federation API Endpoints
| Endpoint | Method | Purpose |
|----------|--------|---------|
| `/federation/discovery` | GET | Get mesh discovery record |
| `/federation/proposals` | POST | Submit trust proposal |
| `/federation/proposals/{id}` | GET, PUT | View/respond to proposal |
| `/federation/agreements` | GET | List active agreements |
| `/federation/agreements/{id}` | GET, DELETE | View/revoke agreement |
| `/federation/notify` | POST | Notify of new anchor |
| `/federation/witness` | POST | Submit witness attestation |
| `/federation/roots` | GET | Get current Merkle roots |
| `/federation/receipts/{scroll}` | GET | Fetch receipt samples |
| `/federation/verify` | POST | Request receipt verification |
### CLI Commands
```bash
# Discovery
vm-federation discover --mesh vaultmesh-berlin.io
vm-federation list-known
# Proposals
vm-federation propose \
--target did:vm:mesh:vaultmesh-berlin \
--trust-level 2 \
--scrolls Compliance,Drills
vm-federation proposals list
vm-federation proposals accept fed-proposal-001
vm-federation proposals reject fed-proposal-001 --reason "..."
# Agreements
vm-federation agreements list
vm-federation agreements revoke fed-agreement-001 --notice-days 30
# Verification
vm-federation verify --mesh vaultmesh-berlin --scroll Compliance
vm-federation witness-history --mesh vaultmesh-berlin --last 30d
# Status
vm-federation status
vm-federation health --all-peers
```
---
## Constitutional Governance
### Hierarchy
```
┌─────────────────────────────────────────────────────────────────┐
│ IMMUTABLE AXIOMS │
│ (Cannot be changed, ever) │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ CONSTITUTIONAL ARTICLES │
│ (Amendable with supermajority + ratification) │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ STATUTORY RULES │
│ (Changeable with standard procedures) │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ EXECUTIVE ORDERS │
│ (Issued by authorized actors) │
└─────────────────────────────────────────────────────────────────┘
```
### Immutable Axioms
| ID | Name | Statement |
|----|------|-----------|
| AXIOM-001 | Append-Only Receipts | Receipts, once written, shall never be modified or deleted |
| AXIOM-002 | Cryptographic Integrity | All receipts include cryptographic hashes |
| AXIOM-003 | Universal Receipting | All significant changes produce receipts |
| AXIOM-004 | Constitutional Supremacy | No action may violate the Constitution |
| AXIOM-005 | Axiom Immutability | These axioms cannot be amended |
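AXIOM-001 is mechanically checkable: between two points in time, a scroll file may only have grown, and the old bytes must survive unchanged as a prefix of the new file. A minimal sketch of that check, with SHA-256 standing in for BLAKE3:

```python
import hashlib

def is_append_only(previous_snapshot: bytes, current: bytes) -> bool:
    """Check AXIOM-001 for one scroll: current contents must start with
    exactly the bytes captured at the previous snapshot (sketch)."""
    if len(current) < len(previous_snapshot):
        return False  # the file shrank: something was deleted
    prefix = current[: len(previous_snapshot)]
    # compare via hashes so only a snapshot digest needs to be retained
    return (hashlib.sha256(prefix).digest()
            == hashlib.sha256(previous_snapshot).digest())
```

In practice only the snapshot's length and digest need to be stored, not the bytes themselves, which is what makes periodic append-only audits cheap.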
### Constitutional Articles
| Article | Name | Content |
|---------|------|---------|
| I | Governance Structure | Sovereign authority, engine authorities, agent delegation |
| II | Amendment Procedure | Proposal, deliberation, ratification |
| III | Engine Governance | Engine registry, boundaries, lifecycle |
| IV | Rights and Protections | Audit rights, data sovereignty, due process |
| V | Federation | Authority, limits, termination |
| VI | Emergency Powers | Declaration, powers, duration |
### Amendment Workflow
```
PROPOSAL → DELIBERATION (7+ days) → VOTING → RATIFICATION → ACTIVATION
↘ REJECTED → Archive
```
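The workflow above is a small state machine with a rejection branch. A sketch of the guarded transitions (the transition table is inferred from the diagram and the receipt types below, so treat it as an assumption, not the engine's actual rules):

```python
# Allowed proposal-state transitions implied by the workflow (assumed).
TRANSITIONS = {
    "proposal": {"deliberation"},
    "deliberation": {"voting", "rejected"},
    "voting": {"ratification", "rejected"},
    "ratification": {"activation"},
}

def advance(state: str, target: str) -> str:
    """Move a proposal to `target`, rejecting illegal jumps."""
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {target}")
    return target
```

Encoding the table this way means a bug (or an attacker) cannot skip deliberation and jump a proposal straight to activation.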
### Proposal Receipt
```json
{
"type": "gov_proposal",
"proposal_id": "PROP-2025-12-001",
"proposal_type": "amendment",
"title": "Add Data Retention Article",
"author": "did:vm:human:sovereign",
"submitted_at": "2025-12-06T10:00:00Z",
"deliberation_ends": "2025-12-13T10:00:00Z",
"content": {
"target": "ARTICLE-VII",
"action": "add",
"text": {
"id": "ARTICLE-VII",
"name": "Data Retention",
"sections": [...]
}
},
"rationale": "Compliance with EU regulations",
"status": "deliberation"
}
```
### Vote Receipt
```json
{
"type": "gov_vote",
"vote_id": "VOTE-2025-12-001-sovereign",
"proposal_id": "PROP-2025-12-001",
"voter": "did:vm:human:sovereign",
"voted_at": "2025-12-14T10:00:00Z",
"vote": "approve",
"weight": 1.0,
"comments": "Essential for compliance",
"signature": "z58D..."
}
```
### Ratification Receipt
```json
{
"type": "gov_ratification",
"ratification_id": "RAT-2025-12-001",
"proposal_id": "PROP-2025-12-001",
"ratified_at": "2025-12-14T12:00:00Z",
"ratified_by": "did:vm:human:sovereign",
"vote_summary": {
"approve": 1,
"reject": 0,
"abstain": 0
},
"quorum_met": true,
"constitution_version_before": "1.0.0",
"constitution_version_after": "1.1.0"
}
```
### Amendment Receipt
```json
{
"type": "gov_amendment",
"amendment_id": "AMEND-2025-12-001",
"proposal_id": "PROP-2025-12-001",
"effective_at": "2025-12-14T14:00:00Z",
"anchor_proof": {
"backend": "ethereum",
"tx_hash": "0x123..."
},
"constitution_hash_before": "blake3:const_v1.0...",
"constitution_hash_after": "blake3:const_v1.1..."
}
```
### Executive Orders
For operational decisions that do not require a full amendment:
```json
{
"type": "gov_executive_order",
"order_id": "EO-2025-12-001",
"title": "Temporary Rate Limit Increase",
"issued_by": "did:vm:human:sovereign",
"issued_at": "2025-12-06T15:00:00Z",
"authority": "ARTICLE-I.1",
"order_type": "parameter_change",
"content": {
"parameter": "guardian.anchor_rate_limit",
"old_value": "100/day",
"new_value": "500/day"
},
"duration": {
"type": "temporary",
"expires_at": "2026-01-01T00:00:00Z"
}
}
```
### Emergency Declaration
```json
{
"type": "gov_executive_order",
"order_id": "EO-2025-12-002",
"title": "Security Emergency",
"issued_by": "did:vm:human:sovereign",
"authority": "ARTICLE-VI.1",
"order_type": "emergency",
"content": {
"emergency_type": "security_incident",
"threat_description": "Active intrusion on BRICK-02",
"powers_invoked": [
"Suspend authentication delays",
"Enhanced logging",
"Immediate capability revocation"
]
},
"duration": {
"type": "emergency",
"expires_at": "2025-12-09T03:50:00Z",
"renewable": true
}
}
```
### Violation Detection
```json
{
"type": "gov_violation",
"violation_id": "VIOL-2025-12-001",
"detected_at": "2025-12-06T16:00:00Z",
"detected_by": "engine:guardian",
"violation_type": "unauthorized_action",
"severity": "high",
"details": {
"actor": "did:vm:agent:automation-01",
"action_attempted": "modify_receipt",
"rule_violated": "AXIOM-001",
"action_result": "blocked"
},
"evidence": {
"log_entries": ["..."],
"request_hash": "blake3:..."
}
}
```
### Enforcement Action
```json
{
"type": "gov_enforcement",
"enforcement_id": "ENF-2025-12-001",
"violation_id": "VIOL-2025-12-001",
"enforced_at": "2025-12-06T16:05:00Z",
"enforcement_type": "capability_suspension",
"target": "did:vm:agent:automation-01",
"action_taken": {
"capability_suspended": "write",
"scope": "all_scrolls",
"duration": "pending_review"
},
"review_required": true,
"review_deadline": "2025-12-07T16:05:00Z"
}
```
### CLI Commands
```bash
# Constitution
vm-gov constitution show
vm-gov constitution version
vm-gov constitution diff v1.0.0 v1.1.0
# Proposals
vm-gov proposal create --type amendment --file proposal.json
vm-gov proposal list --status deliberation
vm-gov proposal show PROP-2025-12-001
# Voting
vm-gov vote PROP-2025-12-001 --vote approve
vm-gov vote PROP-2025-12-001 --vote reject --reason "..."
# Ratification
vm-gov ratify PROP-2025-12-001
# Executive Orders
vm-gov order create --type parameter_change --file order.json
vm-gov order list --active
vm-gov order revoke EO-2025-12-001
# Emergencies
vm-gov emergency declare --type security_incident --description "..."
vm-gov emergency status
vm-gov emergency extend --hours 24
vm-gov emergency end
# Violations
vm-gov violations list --severity high,critical
vm-gov violations review VIOL-2025-12-001 --decision dismiss
# Enforcement
vm-gov enforcement list --pending-review
vm-gov enforcement review ENF-2025-12-001 --decision uphold
```
---
## Engine Registry
All engines must be registered in the Constitution:
```json
{
"registered_engines": [
{
"engine_id": "engine:drills",
"name": "Security Drills",
"scroll": "Drills",
"authority": "Security training and exercise management",
"status": "active"
},
{
"engine_id": "engine:oracle",
"name": "Compliance Oracle",
"scroll": "Compliance",
"authority": "Compliance question answering",
"status": "active"
},
{
"engine_id": "engine:guardian",
"name": "Guardian",
"scroll": "Guardian",
"authority": "Anchoring and sentinel",
"status": "active"
},
{
"engine_id": "engine:treasury",
"name": "Treasury",
"scroll": "Treasury",
"authority": "Financial tracking",
"status": "active"
},
{
"engine_id": "engine:mesh",
"name": "Mesh",
"scroll": "Mesh",
"authority": "Topology management",
"status": "active"
},
{
"engine_id": "engine:offsec",
"name": "OffSec",
"scroll": "OffSec",
"authority": "Security operations",
"status": "active"
},
{
"engine_id": "engine:identity",
"name": "Identity",
"scroll": "Identity",
"authority": "DID and capability management",
"status": "active"
},
{
"engine_id": "engine:observability",
"name": "Observability",
"scroll": "Observability",
"authority": "Telemetry monitoring",
"status": "active"
},
{
"engine_id": "engine:automation",
"name": "Automation",
"scroll": "Automation",
"authority": "Workflow execution",
"status": "active"
},
{
"engine_id": "engine:psi",
"name": "Ψ-Field",
"scroll": "PsiField",
"authority": "Consciousness tracking",
"status": "active"
},
{
"engine_id": "engine:federation",
"name": "Federation",
"scroll": "Federation",
"authority": "Cross-mesh trust",
"status": "active"
},
{
"engine_id": "engine:governance",
"name": "Governance",
"scroll": "Governance",
"authority": "Constitutional enforcement",
"status": "active"
}
]
}
```
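Enforcement code that consults the registry can filter on `status`; as a minimal sketch (hypothetical helper, assuming the registry JSON above is loaded as a dict):

```python
def active_engines(constitution: dict) -> list[str]:
    """Return the engine_ids of all engines currently active
    in the constitution's registry."""
    return [
        e["engine_id"]
        for e in constitution["registered_engines"]
        if e["status"] == "active"
    ]
```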
### Adding New Engines
New engines require constitutional amendment:
1. Draft proposal with engine specification
2. 7-day deliberation period
3. Sovereign approval
4. Anchor confirmation activates engine
```bash
vm-gov proposal create \
--type add_engine \
--engine-id engine:analytics \
--name "Analytics" \
--scroll Analytics \
--authority "Data analysis and insights"
```

# VaultMesh Quick Reference
## Eternal Pattern
```
Intent → Engine → Receipt → Scroll → Anchor
```
## Three Layers
| Layer | Components | Artifacts |
|-------|------------|-----------|
| L1 Experience | CLI, UI, MCP | Commands, requests |
| L2 Engine | Domain logic | contract.json, state.json |
| L3 Ledger | Receipts, anchors | JSONL, ROOT.*.txt |
## Scrolls
| Scroll | Path | Root File |
|--------|------|-----------|
| Drills | `receipts/drills/` | `ROOT.drills.txt` |
| Compliance | `receipts/compliance/` | `ROOT.compliance.txt` |
| Guardian | `receipts/guardian/` | `ROOT.guardian.txt` |
| Treasury | `receipts/treasury/` | `ROOT.treasury.txt` |
| Mesh | `receipts/mesh/` | `ROOT.mesh.txt` |
| OffSec | `receipts/offsec/` | `ROOT.offsec.txt` |
| Identity | `receipts/identity/` | `ROOT.identity.txt` |
| Observability | `receipts/observability/` | `ROOT.observability.txt` |
| Automation | `receipts/automation/` | `ROOT.automation.txt` |
| PsiField | `receipts/psi/` | `ROOT.psi.txt` |
| Federation | `receipts/federation/` | `ROOT.federation.txt` |
| Governance | `receipts/governance/` | `ROOT.governance.txt` |
## DIDs
```
did:vm:<type>:<identifier>
node → did:vm:node:brick-01
human → did:vm:human:sovereign
agent → did:vm:agent:copilot-01
service → did:vm:service:oracle
mesh → did:vm:mesh:vaultmesh-dublin
```
## Phases
| Symbol | Phase | State |
|--------|-------|-------|
| 🜁 | Nigredo | Crisis |
| 🜄 | Albedo | Recovery |
| 🜆 | Citrinitas | Optimization |
| 🜂 | Rubedo | Integration |
## Axioms
1. Receipts are append-only
2. Hashes are cryptographic
3. All changes produce receipts
4. Constitution is supreme
5. Axioms are immutable
## CLI Cheatsheet
```bash
# Guardian
vm-guardian anchor-status
vm-guardian anchor-now --wait
vm-guardian verify-receipt <hash> --scroll <scroll>
# Identity
vm-identity did create --type node --id <id>
vm-identity capability grant --subject <did> --capability <cap>
vm-identity whoami
# Mesh
vm-mesh node list
vm-mesh node status <id>
vm-mesh topology
# Oracle
vm-oracle query "What are the GDPR requirements?"
vm-oracle corpus status
# Drills
vm-drills create --prompt "<scenario>"
vm-drills status <drill-id>
# Psi
vm-psi phase current
vm-psi transmute start --input <ref>
vm-psi opus status
# Treasury
vm-treasury balance
vm-treasury debit --from <acct> --amount <amt>
# Governance
vm-gov constitution version
vm-gov violations list
vm-gov emergency status
# Federation
vm-federation status
vm-federation verify --mesh <peer>
# System
vm-cli system health
vm-cli receipts count --by-scroll
```
## Receipt Structure
```json
{
"schema_version": "2.0.0",
"type": "<scroll>_<operation>",
"timestamp": "ISO8601",
"header": {
"root_hash": "blake3:...",
"tags": [],
"previous_hash": "blake3:..."
},
"meta": {
"scroll": "ScrollName",
"sequence": 0,
"anchor_epoch": null,
"proof_path": null
},
"body": {}
}
```
## Capabilities
| Capability | Description |
|------------|-------------|
| `anchor` | Submit to anchor backends |
| `storage` | Store receipts/artifacts |
| `compute` | Execute drills/agents |
| `oracle` | Issue compliance answers |
| `admin` | Grant/revoke capabilities |
| `federate` | Establish cross-mesh trust |
## Trust Levels
| Level | Name | Access |
|-------|------|--------|
| 0 | isolated | None |
| 1 | observe | Read-only |
| 2 | verify | Mutual verification |
| 3 | attest | Cross-attestation |
| 4 | integrate | Shared scrolls |
## Severity Levels
| Level | Description |
|-------|-------------|
| critical | Active breach |
| high | Confirmed attack |
| medium | Suspicious activity |
| low | Anomaly/info |
## Key Ports
| Service | HTTP | Metrics |
|---------|------|---------|
| Portal | 8080 | 9090 |
| Guardian | 8081 | 9090 |
| Oracle | 8082 | 9090 |
| MCP | 8083 | - |
## Health Endpoints
```
GET /health/live → Liveness
GET /health/ready → Readiness
GET /metrics → Prometheus
```
## Transmutation Steps
```
Extract → Dissolve → Purify → Coagulate → Seal
```
## Design Gate
- [ ] Clear entrypoint?
- [ ] Contract produced?
- [ ] State object?
- [ ] Receipts emitted?
- [ ] Append-only JSONL?
- [ ] Merkle root?
- [ ] Guardian anchor path?
- [ ] Query tool?

# VaultMesh Architect Skill
> *Building Earth's Civilization Ledger — one receipt at a time.*
## Overview
This skill enables Claude to architect, develop, and operate VaultMesh — a sovereign digital infrastructure system that combines cryptographic proofs, blockchain anchoring, and AI governance to create durable, auditable civilization-scale evidence.
## When to Use This Skill
Activate this skill when:
- Designing or implementing VaultMesh engines or subsystems
- Creating receipts, scrolls, or anchor cycles
- Working with the Eternal Pattern architecture
- Implementing federation, governance, or identity systems
- Building MCP server integrations
- Deploying or operating VaultMesh infrastructure
- Writing code that interacts with the Civilization Ledger
## Core Architecture: The Eternal Pattern
Every VaultMesh subsystem follows this arc:
```
Real-world intent → Engine → Structured JSON → Receipt → Scroll → Guardian Anchor
```
### Three-Layer Stack
```
┌───────────────────────────────────────────────┐
│ L1 — Experience Layer │
│ (Humans & Agents) │
│ • CLI / UI / MCP tools / agents │
└───────────────────────────────────────────────┘
┌───────────────────────────────────────────────┐
│ L2 — Engine Layer │
│ (Domain Engines & Contracts) │
│ • contract.json → state.json → outputs/ │
└───────────────────────────────────────────────┘
┌───────────────────────────────────────────────┐
│ L3 — Ledger Layer │
│ (Receipts, Scrolls, ProofChain, Anchors) │
│ • JSONL files → Merkle roots → anchors │
└───────────────────────────────────────────────┘
```
## Registered Engines (Scrolls)
| Engine | Scroll | Purpose |
|--------|--------|---------|
| Drills | `Drills` | Security training and exercises |
| Oracle | `Compliance` | Regulatory compliance Q&A |
| Guardian | `Guardian` | Anchoring and sentinel |
| Treasury | `Treasury` | Financial tracking and settlement |
| Mesh | `Mesh` | Federation topology |
| OffSec | `OffSec` | Security operations and IR |
| Identity | `Identity` | DIDs, credentials, capabilities |
| Observability | `Observability` | Telemetry events |
| Automation | `Automation` | Workflow execution |
| Ψ-Field | `PsiField` | Alchemical consciousness |
| Federation | `Federation` | Cross-mesh trust |
| Governance | `Governance` | Constitutional enforcement |
## File Structure
```
vaultmesh/
├── receipts/ # Receipt storage
│ ├── drills/
│ │ └── drill_runs.jsonl
│ ├── compliance/
│ │ └── oracle_answers.jsonl
│ ├── treasury/
│ │ └── treasury_events.jsonl
│ ├── mesh/
│ │ └── mesh_events.jsonl
│ ├── [scroll]/
│ │ └── [scroll]_events.jsonl
│ ├── ROOT.drills.txt
│ ├── ROOT.compliance.txt
│ └── ROOT.[scroll].txt
├── cases/ # Artifact storage
│ ├── drills/[drill-id]/
│ ├── treasury/[settlement-id]/
│ ├── offsec/[incident-id]/
│ └── psi/[transmutation-id]/
├── corpus/ # Oracle documents
└── config/ # Configuration
```
## Receipt Schema (v2)
```json
{
"schema_version": "2.0.0",
"type": "receipt_type_name",
"timestamp": "2025-12-06T12:00:00Z",
"header": {
"root_hash": "blake3:abc123...",
"tags": ["tag1", "tag2"],
"previous_hash": "blake3:prev..."
},
"meta": {
"scroll": "ScrollName",
"sequence": 42,
"anchor_epoch": 7,
"proof_path": "cases/[scroll]/[id]/PROOF.json"
},
"body": {
// Domain-specific fields
}
}
```
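The `previous_hash` link makes a scroll's append-only property mechanically checkable. A minimal sketch of the link check (illustrative only; recomputing `root_hash` itself additionally requires a BLAKE3 library):

```python
def verify_chain(receipts: list[dict]) -> bool:
    """Sketch: verify the link structure of a scroll.
    Each receipt's header.previous_hash must equal the prior
    receipt's header.root_hash, and meta.sequence must step by 1."""
    for prev, cur in zip(receipts, receipts[1:]):
        if cur["header"]["previous_hash"] != prev["header"]["root_hash"]:
            return False
        if cur["meta"]["sequence"] != prev["meta"]["sequence"] + 1:
            return False
    return True
```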
## DID Format
```
did:vm:<type>:<identifier>
Types:
- node → did:vm:node:brick-01
- human → did:vm:human:sovereign
- agent → did:vm:agent:copilot-01
- service → did:vm:service:oracle-openai
- mesh → did:vm:mesh:vaultmesh-dublin
```
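Parsing this format is straightforward; a minimal sketch (hypothetical helper, restricted to the five types listed above):

```python
def parse_did(did: str) -> tuple[str, str]:
    """Sketch: split a did:vm DID into (type, identifier).
    Raises ValueError on malformed input or unknown types."""
    parts = did.split(":")
    if len(parts) != 4 or parts[0] != "did" or parts[1] != "vm":
        raise ValueError(f"not a did:vm DID: {did}")
    did_type, identifier = parts[2], parts[3]
    if did_type not in {"node", "human", "agent", "service", "mesh"}:
        raise ValueError(f"unknown DID type: {did_type}")
    return did_type, identifier
```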
## Alchemical Phases
| Phase | Symbol | Meaning | Operational State |
|-------|--------|---------|-------------------|
| Nigredo | 🜁 | Blackening | Crisis, incident |
| Albedo | 🜄 | Whitening | Recovery, stabilization |
| Citrinitas | 🜆 | Yellowing | Optimization, new capability |
| Rubedo | 🜂 | Reddening | Integration, maturity |
## Constitutional Axioms (Immutable)
1. **AXIOM-001**: Receipts are append-only
2. **AXIOM-002**: Hashes are cryptographically verified
3. **AXIOM-003**: All significant changes produce receipts
4. **AXIOM-004**: Constitution is supreme
5. **AXIOM-005**: Axioms cannot be amended
## Design Gate Checklist
When creating any new feature, verify:
### Experience Layer (L1)
- [ ] Clear entrypoint (CLI, MCP tool, HTTP route)?
- [ ] Intent clearly represented in structured form?
### Engine Layer (L2)
- [ ] Produces a contract (explicit or implicit)?
- [ ] State object tracking progress/outcomes?
- [ ] Actions and outputs inspectable (JSON + files)?
### Ledger Layer (L3)
- [ ] Emits receipt for important operations?
- [ ] Receipts written to append-only JSONL?
- [ ] JSONL covered by Merkle root (ROOT.[scroll].txt)?
- [ ] Guardian can anchor the relevant root?
- [ ] Query tool exists for this scroll?
## Code Patterns
### Rust Receipt Emission
```rust
use chrono::Utc;
use vaultmesh_core::{Receipt, ReceiptHeader, ReceiptMeta, Scroll, VmHash};
let receipt_body = MyReceiptBody { /* ... */ };
let root_hash = VmHash::from_json(&receipt_body)?;
let receipt = Receipt {
header: ReceiptHeader {
receipt_type: "my_receipt_type".to_string(),
timestamp: Utc::now(),
root_hash: root_hash.as_str().to_string(),
tags: vec!["tag1".to_string()],
},
meta: ReceiptMeta {
scroll: Scroll::MyScroll,
sequence: 0, // Set by receipt store
anchor_epoch: None,
proof_path: None,
},
body: receipt_body,
};
```
### Python Receipt Emission
```python
def emit_receipt(scroll: str, receipt_type: str, body: dict, tags: list[str]) -> dict:
    import json
    from datetime import datetime, timezone
    from pathlib import Path
    from blake3 import blake3  # third-party `blake3` package (hashlib has no BLAKE3)
    receipt = {
        "type": receipt_type,
        "timestamp": datetime.now(timezone.utc).isoformat().replace("+00:00", "Z"),
        "tags": tags,
        **body
    }
    # Compute root hash over the canonical (sorted-key) JSON form
    receipt_json = json.dumps(receipt, sort_keys=True)
    root_hash = f"blake3:{blake3(receipt_json.encode()).hexdigest()}"
    receipt["root_hash"] = root_hash
    # Append to scroll
    scroll_path = Path(f"receipts/{scroll}/{scroll}_events.jsonl")
    scroll_path.parent.mkdir(parents=True, exist_ok=True)
    with open(scroll_path, "a") as f:
        f.write(json.dumps(receipt) + "\n")
    # Record the latest root (simplified: a full scroll root is a Merkle root)
    root_file = Path(f"receipts/ROOT.{scroll}.txt")
    root_file.write_text(root_hash)
    return receipt
```
### MCP Tool Pattern
```python
@server.tool()
async def my_tool(param: str) -> str:
"""Tool description."""
caller = await get_caller_identity()
await verify_capability(caller, "required_capability")
result = await engine.do_operation(param)
await emit_tool_call_receipt(
tool="my_tool",
caller=caller,
params={"param": param},
result_hash=result.hash,
)
return json.dumps(result.to_dict(), indent=2)
```
## CLI Naming Convention
```bash
vm-<engine> <command> [subcommand] [options]
Examples:
vm-treasury debit --from acct:ops --amount 150 --currency EUR
vm-mesh node list
vm-identity did create --type human --id sovereign
vm-psi phase current
vm-guardian anchor-now
vm-gov proposal create --type amendment
```
## Receipt Type Naming
```
<scroll>_<operation>
Examples:
treasury_credit
treasury_debit
treasury_settlement
mesh_node_join
mesh_route_change
identity_did_create
identity_capability_grant
psi_phase_transition
psi_transmutation
gov_proposal
gov_amendment
```
## Key Integrations
### Guardian Anchor Cycle
```
Receipts → ProofChain → Merkle Root → Anchor Backend (OTS/ETH/BTC)
```
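The per-scroll root can be sketched as a binary Merkle tree over the scroll's receipt lines. Illustrative only: BLAKE2b from the standard library stands in for BLAKE3 (which needs the third-party `blake3` package), and an odd trailing node is paired with itself.

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> str:
    """Sketch of a per-scroll Merkle root. BLAKE2b stands in for
    BLAKE3 here; an odd trailing node is hashed with itself."""
    if not leaves:
        raise ValueError("empty scroll")
    level = [hashlib.blake2b(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            right = level[i + 1] if i + 1 < len(level) else level[i]
            nxt.append(hashlib.blake2b(level[i] + right).digest())
        level = nxt
    return "blake2b:" + level[0].hex()
```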
### Federation Witness Protocol
```
Mesh-A anchors → Notifies Mesh-B → Mesh-B verifies → Emits witness receipt
```
### Transmutation (Tem) Pattern
```
Incident (Nigredo) → Extract IOCs → Generate rules → Integrate defenses (Citrinitas)
```
## Testing Requirements
1. **Property Tests**: All receipt operations must be tested with proptest/hypothesis
2. **Invariant Tests**: Core axioms verified after every test
3. **Integration Tests**: Full cycles from intent to anchored receipt
4. **Chaos Tests**: Resilience under network partition, pod failure
## Deployment Targets
- **Kubernetes**: Production deployment via Kustomize
- **Docker Compose**: Local development
- **Akash**: Decentralized compute option
## Related Skills
- `sovereign-operator` — Security operations and MCP tools
- `offsec-mcp` — Offensive security tooling
- `vaultmesh-architect` — This skill
## References
- VAULTMESH-ETERNAL-PATTERN.md — Core architecture
- VAULTMESH-TREASURY-ENGINE.md — Financial primitive
- VAULTMESH-MESH-ENGINE.md — Federation topology
- VAULTMESH-OFFSEC-ENGINE.md — Security operations
- VAULTMESH-IDENTITY-ENGINE.md — Trust primitive
- VAULTMESH-OBSERVABILITY-ENGINE.md — Telemetry
- VAULTMESH-AUTOMATION-ENGINE.md — Workflows
- VAULTMESH-PSI-FIELD-ENGINE.md — Consciousness layer
- VAULTMESH-FEDERATION-PROTOCOL.md — Cross-mesh trust
- VAULTMESH-CONSTITUTIONAL-GOVERNANCE.md — Rules
- VAULTMESH-MCP-SERVERS.md — Claude integration
- VAULTMESH-DEPLOYMENT-MANIFESTS.md — Infrastructure
- VAULTMESH-MONITORING-STACK.md — Observability
- VAULTMESH-TESTING-FRAMEWORK.md — Testing
- VAULTMESH-MIGRATION-GUIDE.md — Upgrades