Initialize repository snapshot

Vault Sovereign
2025-12-27 00:10:32 +00:00
commit 110d644e10
281 changed files with 40331 additions and 0 deletions


@@ -0,0 +1,551 @@
# VaultMesh Alchemical Patterns
> *Solve et Coagula — Dissolve and Coagulate*
## The Alchemical Framework
VaultMesh uses alchemical metaphors not as mysticism, but as precise operational language for system states and transformations.
## Phases (Operational States)
### Nigredo 🜁 — The Blackening
**Meaning**: Crisis, breakdown, decomposition
**Operational State**: System under stress, incident in progress
**Indicators**:
- Active security incident
- Service degradation
- Guardian anchor failures
- Constitutional violations detected
**Receipt Types During Nigredo**:
- `offsec_incident` (severity: high/critical)
- `obs_log_alert` (severity: critical)
- `gov_violation`
- `psi_phase_transition` (to_phase: nigredo)
**Actions**:
- Incident response procedures activated
- Enhanced monitoring enabled
- Emergency powers may be invoked
- Transmutation processes initiated
```json
{
"type": "psi_phase_transition",
"from_phase": "albedo",
"to_phase": "nigredo",
"trigger": {
"event_type": "security_incident",
"reference": "INC-2025-12-001",
"severity": "critical"
},
"indicators": [
"active_intrusion_detected",
"guardian_alert_level_elevated"
]
}
```
---
### Albedo 🜄 — The Whitening
**Meaning**: Purification, recovery, stabilization
**Operational State**: Post-incident recovery, learning phase
**Indicators**:
- Incident contained
- Systems stabilizing
- Root cause analysis in progress
- Remediation being verified
**Receipt Types During Albedo**:
- `offsec_remediation`
- `psi_transmutation` (steps: extract, dissolve, purify)
- `obs_health_snapshot` (improving trends)
**Actions**:
- Post-incident review
- IOC extraction
- Rule generation
- Documentation updates
```json
{
"type": "psi_phase_transition",
"from_phase": "nigredo",
"to_phase": "albedo",
"trigger": {
"event_type": "incident_contained",
"reference": "INC-2025-12-001"
},
"indicators": [
"threat_neutralized",
"services_recovering",
"rca_initiated"
],
"duration_in_nigredo_hours": 4.5
}
```
---
### Citrinitas 🜆 — The Yellowing
**Meaning**: Illumination, new capability emerging
**Operational State**: Optimization, enhancement
**Indicators**:
- New defensive capabilities deployed
- Performance improvements measured
- Knowledge crystallized into procedures
- Drills showing improved outcomes
**Receipt Types During Citrinitas**:
- `psi_transmutation` (steps: coagulate)
- `psi_integration`
- `security_drill_run` (outcomes: improved)
- `auto_workflow_run` (new capabilities)
**Actions**:
- Deploy new detection rules
- Update runbooks
- Train team on new procedures
- Measure improvement metrics
```json
{
"type": "psi_phase_transition",
"from_phase": "albedo",
"to_phase": "citrinitas",
"trigger": {
"event_type": "capability_deployed",
"reference": "transmute-2025-12-001"
},
"indicators": [
"detection_rules_active",
"playbook_updated",
"team_trained"
],
"capabilities_gained": [
"lateral_movement_detection_v2",
"automated_containment_k8s"
]
}
```
---
### Rubedo 🜂 — The Reddening
**Meaning**: Integration, completion, maturity
**Operational State**: Stable, sovereign operation
**Indicators**:
- All systems nominal
- Capabilities integrated into BAU
- Continuous improvement active
- High resilience demonstrated
**Receipt Types During Rubedo**:
- `psi_resonance` (harmony_score: high)
- `obs_health_snapshot` (all_green)
- `mesh_topology_snapshot` (healthy)
- `treasury_reconciliation` (balanced)
**Actions**:
- Regular drills maintain readiness
- Proactive threat hunting
- Continuous compliance monitoring
- Knowledge sharing with federation
```json
{
"type": "psi_phase_transition",
"from_phase": "citrinitas",
"to_phase": "rubedo",
"trigger": {
"event_type": "stability_achieved",
"reference": "phase-assessment-2025-12"
},
"indicators": [
"30_days_no_critical_incidents",
"slo_targets_met",
"drill_outcomes_excellent"
],
"maturity_score": 0.92
}
```
---
## Transmutation (Tem Pattern)
Transmutation converts negative events into defensive capabilities.
### The Process
```
┌─────────────────────────────────────────────────────────────────┐
│ PRIMA MATERIA │
│ (Raw Input: Incident/Vuln/Threat) │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ STEP 1: EXTRACT │
│ • Identify IOCs (IPs, domains, hashes, TTPs) │
│ • Document attack chain │
│ • Capture forensic artifacts │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ STEP 2: DISSOLVE (Solve) │
│ • Break down into atomic components │
│ • Normalize to standard formats (STIX, Sigma) │
│ • Map to frameworks (MITRE ATT&CK) │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ STEP 3: PURIFY │
│ • Remove false positives │
│ • Validate against known-good │
│ • Test in isolated environment │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ STEP 4: COAGULATE (Coagula) │
│ • Generate detection rules (Sigma, YARA, Suricata) │
│ • Create response playbooks │
│ • Deploy to production │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ STEP 5: SEAL │
│ • Emit transmutation receipt │
│ • Link prima materia to philosopher's stone │
│ • Anchor evidence chain │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ PHILOSOPHER'S STONE │
│ (Output: Defensive Capability) │
└─────────────────────────────────────────────────────────────────┘
```
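As a rough sketch of how an engine might drive these steps from a transmutation contract (handler and helper names here are illustrative, not an existing VaultMesh API), each step's output is hashed so the final receipt can link prima materia to philosopher's stone:

```python
from blake3 import blake3  # third-party blake3 package

def run_transmutation(contract: dict, handlers: dict, prima_materia: bytes) -> dict:
    """Execute contract steps in order; return hashes linking input to output."""
    prima_hash = f"blake3:{blake3(prima_materia).hexdigest()}"
    artifact = prima_materia
    for step in contract["transmutation_steps"]:
        # handlers maps action names (extract_iocs, normalize_to_stix, ...) to
        # callables that take the previous artifact and return the next one as bytes.
        artifact = handlers[step["action"]](artifact)
    return {
        "prima_materia_hash": prima_hash,
        "philosophers_stone_hash": f"blake3:{blake3(artifact).hexdigest()}",
        "steps_completed": [s["step_id"] for s in contract["transmutation_steps"]],
    }
```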
### Transmutation Contract
```json
{
"transmutation_id": "psi-transmute-2025-12-06-001",
"title": "SSH Brute Force to Detection Capability",
"initiated_by": "did:vm:human:sovereign",
"initiated_at": "2025-12-06T10:00:00Z",
"input_material": {
"type": "security_incident",
"reference": "INC-2025-12-001",
"prima_materia_hash": "blake3:incident_evidence..."
},
"target_phase": "citrinitas",
"transmutation_steps": [
{
"step_id": "step-1-extract",
"name": "Extract Prima Materia",
"action": "extract_iocs",
"expected_output": "cases/psi/transmute-001/extracted_iocs.json"
},
{
"step_id": "step-2-dissolve",
"name": "Dissolve (Solve)",
"action": "normalize_to_stix",
"expected_output": "cases/psi/transmute-001/stix_bundle.json"
},
{
"step_id": "step-3-purify",
"name": "Purify",
"action": "validate_iocs",
"expected_output": "cases/psi/transmute-001/validated_iocs.json"
},
{
"step_id": "step-4-coagulate",
"name": "Coagulate",
"action": "generate_sigma_rules",
"expected_output": "cases/psi/transmute-001/sigma_rules/"
},
{
"step_id": "step-5-seal",
"name": "Seal",
"action": "emit_receipt",
"expected_output": "receipts/psi/psi_events.jsonl"
}
],
"witnesses_required": ["brick-01", "brick-02"],
"success_criteria": {
"rules_deployed": true,
"detection_verified": true,
"no_false_positives_24h": true
}
}
```
### Transmutation Receipt
```json
{
"type": "psi_transmutation",
"transmutation_id": "psi-transmute-2025-12-06-001",
"timestamp": "2025-12-06T16:00:00Z",
"input_material": {
"type": "security_incident",
"reference": "INC-2025-12-001",
"prima_materia_hash": "blake3:abc123..."
},
"output_capability": {
"type": "detection_rules",
"reference": "sigma-rule-ssh-brute-force-v2",
"philosophers_stone_hash": "blake3:def456..."
},
"transformation_summary": {
"iocs_extracted": 47,
"rules_generated": 3,
"playbooks_updated": 1,
"ttps_mapped": ["T1110.001", "T1021.004"]
},
"alchemical_phase": "citrinitas",
"witnesses": [
{
"node": "did:vm:node:brick-01",
"witnessed_at": "2025-12-06T15:55:00Z",
"signature": "z58D..."
}
],
"tags": ["psi", "transmutation", "ssh", "brute-force"],
"root_hash": "blake3:transmute..."
}
```
---
## Resonance
Resonance measures cross-system synchronization and harmony.
### Resonance Factors
| Factor | Weight | Measurement |
|--------|--------|-------------|
| Anchor Health | 0.25 | Time since last anchor, failure rate |
| Receipt Consistency | 0.20 | Hash chain integrity, no gaps |
| Mesh Connectivity | 0.20 | Node health, route availability |
| Phase Alignment | 0.15 | All subsystems in compatible phases |
| Federation Sync | 0.10 | Witness success rate |
| Governance Compliance | 0.10 | No active violations |
### Harmony Score
```
harmony_score = Σ(factor_weight × factor_score) / Σ(factor_weight)
```
**Interpretation**:
- 0.90 - 1.00: **Rubedo** — Full sovereignty
- 0.70 - 0.89: **Citrinitas** — Optimizing
- 0.50 - 0.69: **Albedo** — Stabilizing
- 0.00 - 0.49: **Nigredo** — Crisis mode
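A minimal sketch of this computation in Python, using the weights from the factor table and the bands above (function names are illustrative):

```python
# Factor weights from the table above.
WEIGHTS = {
    "anchor_health": 0.25,
    "receipt_consistency": 0.20,
    "mesh_connectivity": 0.20,
    "phase_alignment": 0.15,
    "federation_sync": 0.10,
    "governance_compliance": 0.10,
}

def harmony_score(factors: dict[str, float]) -> float:
    """Weighted average of factor scores, each in 0.0..1.0."""
    total_weight = sum(WEIGHTS.values())
    return sum(WEIGHTS[name] * factors.get(name, 0.0) for name in WEIGHTS) / total_weight

def phase_for(score: float) -> str:
    """Map a harmony score onto an alchemical phase per the bands above."""
    if score >= 0.90:
        return "rubedo"
    if score >= 0.70:
        return "citrinitas"
    if score >= 0.50:
        return "albedo"
    return "nigredo"
```

Feeding in the factor values from the example receipt below lands the score in the rubedo band.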
### Resonance Receipt
```json
{
"type": "psi_resonance",
"resonance_id": "resonance-2025-12-06-12",
"timestamp": "2025-12-06T12:00:00Z",
"harmony_score": 0.94,
"factors": {
"anchor_health": 1.0,
"receipt_consistency": 0.98,
"mesh_connectivity": 0.95,
"phase_alignment": 0.90,
"federation_sync": 0.85,
"governance_compliance": 1.0
},
"current_phase": "rubedo",
"subsystem_phases": {
"guardian": "rubedo",
"oracle": "rubedo",
"mesh": "citrinitas",
"treasury": "rubedo"
},
"dissonance_notes": [
"mesh slightly below harmony due to pending node upgrade"
],
"tags": ["psi", "resonance", "harmony"],
"root_hash": "blake3:resonance..."
}
```
---
## Integration
Integration crystallizes learnings into permanent capability.
### Integration Types
| Type | Description | Example |
|------|-------------|---------|
| `rule_integration` | Detection rule becomes standard | Sigma rule added to baseline |
| `playbook_integration` | Response procedure formalized | IR playbook updated |
| `capability_integration` | New system feature | Auto-containment enabled |
| `knowledge_integration` | Documentation updated | Threat model revised |
| `training_integration` | Team skill acquired | Drill proficiency achieved |
### Integration Receipt
```json
{
"type": "psi_integration",
"integration_id": "integration-2025-12-06-001",
"timestamp": "2025-12-06T18:00:00Z",
"integration_type": "rule_integration",
"source": {
"transmutation_id": "psi-transmute-2025-12-06-001",
"capability_hash": "blake3:def456..."
},
"target": {
"system": "detection_pipeline",
"component": "sigma_rules",
"version": "v2.1.0"
},
"integration_proof": {
"deployed_at": "2025-12-06T17:30:00Z",
"verified_by": ["brick-01", "brick-02"],
"test_results": {
"true_positives": 5,
"false_positives": 0,
"detection_rate": 1.0
}
},
"crystallization_complete": true,
"tags": ["psi", "integration", "detection"],
"root_hash": "blake3:integration..."
}
```
---
## Oracle Insights
Oracle insights are significant findings from the Compliance Oracle that warrant receipting.
### Insight Types
| Type | Description |
|------|-------------|
| `compliance_gap` | New gap identified |
| `regulatory_change` | Regulation updated |
| `risk_elevation` | Risk level increased |
| `deadline_approaching` | Compliance deadline near |
| `cross_reference` | Connection between frameworks |
### Insight Receipt
```json
{
"type": "psi_oracle_insight",
"insight_id": "insight-2025-12-06-001",
"timestamp": "2025-12-06T14:00:00Z",
"insight_type": "compliance_gap",
"severity": "high",
"frameworks": ["AI_Act", "GDPR"],
"finding": {
"summary": "Model training data lineage documentation incomplete for Annex IV requirements",
"affected_articles": ["AI_Act.Annex_IV.2.b", "GDPR.Art_30"],
"current_state": "partial_documentation",
"required_state": "complete_lineage_from_source_to_model"
},
"recommended_actions": [
"Implement data provenance tracking",
"Document all training data sources",
"Create lineage visualization"
],
"deadline": "2026-08-02T00:00:00Z",
"confidence": 0.92,
"oracle_query_ref": "oracle-answer-2025-12-06-4721",
"tags": ["psi", "oracle", "insight", "ai_act", "gdpr"],
"root_hash": "blake3:insight..."
}
```
---
## Magnum Opus Dashboard
The Magnum Opus is the great work — the continuous refinement toward sovereignty.
### Dashboard Metrics
```
┌─────────────────────────────────────────────────────────────────┐
│ MAGNUM OPUS STATUS │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Current Phase: RUBEDO 🜂 Harmony: 0.94 │
│ Time in Phase: 47 days │
│ │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ Phase History (90 days) │ │
│ │ ████████████░░░░████████████████████████████████████████│ │
│ │ NNNAAACCCCCNNAACCCCCCCCCCRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR│ │
│ └─────────────────────────────────────────────────────────┘ │
│ │
│ Transmutations Integrations │
│ ├─ Active: 2 ├─ This Month: 7 │
│ ├─ Completed: 34 ├─ Total: 156 │
│ └─ Success Rate: 94% └─ Crystallized: 142 │
│ │
│ Resonance Factors │
│ ├─ Anchor Health: ████████████████████ 1.00 │
│ ├─ Receipt Integrity: ███████████████████░ 0.98 │
│ ├─ Mesh Connectivity: ███████████████████░ 0.95 │
│ ├─ Phase Alignment: ██████████████████░░ 0.90 │
│ ├─ Federation Sync: █████████████████░░░ 0.85 │
│ └─ Governance: ████████████████████ 1.00 │
│ │
│ Recent Oracle Insights: 3 (1 high severity) │
│ Next Anchor: 47 min │
│ Last Incident: 47 days ago │
│ │
└─────────────────────────────────────────────────────────────────┘
```
### CLI Commands
```bash
# Phase status
vm-psi phase current
vm-psi phase history --days 90
# Transmutation
vm-psi transmute start --input INC-2025-12-001 --title "SSH Brute Force"
vm-psi transmute status transmute-2025-12-001
vm-psi transmute complete transmute-2025-12-001 --step coagulate
# Resonance
vm-psi resonance current
vm-psi resonance history --days 30
# Integration
vm-psi integrate --source transmute-2025-12-001 --target detection_pipeline
# Opus
vm-psi opus status
vm-psi opus report --format pdf --output opus-report.pdf
```


@@ -0,0 +1,693 @@
# VaultMesh Code Templates
## Rust Templates
### Core Types
```rust
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};

// Receipt Header
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ReceiptHeader {
pub receipt_type: String,
pub timestamp: DateTime<Utc>,
pub root_hash: String,
pub tags: Vec<String>,
}
// Receipt Metadata
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ReceiptMeta {
pub scroll: Scroll,
pub sequence: u64,
pub anchor_epoch: Option<u64>,
pub proof_path: Option<String>,
}
// Generic Receipt
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Receipt<T> {
#[serde(flatten)]
pub header: ReceiptHeader,
#[serde(flatten)]
pub meta: ReceiptMeta,
#[serde(flatten)]
pub body: T,
}
// Scroll Enum
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Hash)]
#[serde(rename_all = "snake_case")]
pub enum Scroll {
Drills,
Compliance,
Guardian,
Treasury,
Mesh,
OffSec,
Identity,
Observability,
Automation,
PsiField,
Federation,
Governance,
}
impl Scroll {
pub fn jsonl_path(&self) -> &'static str {
match self {
Scroll::Drills => "receipts/drills/drill_runs.jsonl",
Scroll::Compliance => "receipts/compliance/oracle_answers.jsonl",
Scroll::Guardian => "receipts/guardian/anchor_events.jsonl",
Scroll::Treasury => "receipts/treasury/treasury_events.jsonl",
Scroll::Mesh => "receipts/mesh/mesh_events.jsonl",
Scroll::OffSec => "receipts/offsec/offsec_events.jsonl",
Scroll::Identity => "receipts/identity/identity_events.jsonl",
Scroll::Observability => "receipts/observability/observability_events.jsonl",
Scroll::Automation => "receipts/automation/automation_events.jsonl",
Scroll::PsiField => "receipts/psi/psi_events.jsonl",
Scroll::Federation => "receipts/federation/federation_events.jsonl",
Scroll::Governance => "receipts/governance/governance_events.jsonl",
}
}
pub fn root_file(&self) -> &'static str {
match self {
Scroll::Drills => "ROOT.drills.txt",
Scroll::Compliance => "ROOT.compliance.txt",
Scroll::Guardian => "ROOT.guardian.txt",
Scroll::Treasury => "ROOT.treasury.txt",
Scroll::Mesh => "ROOT.mesh.txt",
Scroll::OffSec => "ROOT.offsec.txt",
Scroll::Identity => "ROOT.identity.txt",
Scroll::Observability => "ROOT.observability.txt",
Scroll::Automation => "ROOT.automation.txt",
Scroll::PsiField => "ROOT.psi.txt",
Scroll::Federation => "ROOT.federation.txt",
Scroll::Governance => "ROOT.governance.txt",
}
}
}
```
### DID Types
```rust
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Hash)]
pub struct Did(String);
impl Did {
pub fn new(did_type: DidType, identifier: &str) -> Self {
Did(format!("did:vm:{}:{}", did_type.as_str(), identifier))
}
pub fn parse(s: &str) -> Result<Self, DidParseError> {
if !s.starts_with("did:vm:") {
return Err(DidParseError::InvalidPrefix);
}
Ok(Did(s.to_string()))
}
pub fn as_str(&self) -> &str {
&self.0
}
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum DidType {
Node,
Human,
Agent,
Service,
Mesh,
}
impl DidType {
pub fn as_str(&self) -> &'static str {
match self {
DidType::Node => "node",
DidType::Human => "human",
DidType::Agent => "agent",
DidType::Service => "service",
DidType::Mesh => "mesh",
}
}
}
```
### Hash Utilities
```rust
use serde::{Deserialize, Serialize};
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub struct VmHash(String);
impl VmHash {
pub fn blake3(data: &[u8]) -> Self {
let hash = blake3::hash(data);
VmHash(format!("blake3:{}", hash.to_hex()))
}
pub fn from_json<T: Serialize>(value: &T) -> Result<Self, serde_json::Error> {
let json = serde_json::to_vec(value)?;
Ok(Self::blake3(&json))
}
pub fn hex(&self) -> &str {
self.0.strip_prefix("blake3:").unwrap_or(&self.0)
}
pub fn as_str(&self) -> &str {
&self.0
}
}
pub fn merkle_root(hashes: &[VmHash]) -> VmHash {
if hashes.is_empty() {
return VmHash::blake3(b"empty");
}
if hashes.len() == 1 {
return hashes[0].clone();
}
let mut current_level: Vec<VmHash> = hashes.to_vec();
while current_level.len() > 1 {
let mut next_level = Vec::new();
for chunk in current_level.chunks(2) {
let combined = if chunk.len() == 2 {
format!("{}{}", chunk[0].hex(), chunk[1].hex())
} else {
format!("{}{}", chunk[0].hex(), chunk[0].hex())
};
next_level.push(VmHash::blake3(combined.as_bytes()));
}
current_level = next_level;
}
current_level.remove(0)
}
```
### Engine Template
```rust
// Template for new engine implementation
pub struct MyEngine {
db: DatabasePool,
receipts_path: PathBuf,
}
impl MyEngine {
pub fn new(db: DatabasePool, receipts_path: PathBuf) -> Self {
MyEngine { db, receipts_path }
}
pub async fn create_contract(&self, params: CreateParams) -> Result<Contract, EngineError> {
let contract = Contract {
id: generate_id("contract"),
title: params.title,
created_at: Utc::now(),
// ... domain-specific fields
};
// Store contract
self.store_contract(&contract).await?;
Ok(contract)
}
pub async fn execute(&mut self, contract_id: &str) -> Result<State, EngineError> {
let contract = self.load_contract(contract_id).await?;
let mut state = State::new(&contract);
// Execute steps
for step in &contract.steps {
state.execute_step(step).await?;
}
// Seal with receipt
let receipt = self.seal(&contract, &state).await?;
Ok(state)
}
async fn seal(&self, contract: &Contract, state: &State) -> Result<Receipt<MyReceipt>, EngineError> {
let receipt_body = MyReceipt {
contract_id: contract.id.clone(),
status: state.status.clone(),
// ... domain-specific fields
};
let root_hash = VmHash::from_json(&receipt_body)?;
let receipt = Receipt {
header: ReceiptHeader {
receipt_type: "my_receipt_type".to_string(),
timestamp: Utc::now(),
root_hash: root_hash.as_str().to_string(),
tags: vec!["my_engine".to_string()],
},
meta: ReceiptMeta {
scroll: Scroll::MyScroll,
sequence: 0,
anchor_epoch: None,
proof_path: None,
},
body: receipt_body,
};
self.append_receipt(&receipt).await?;
Ok(receipt)
}
async fn append_receipt<T: Serialize>(&self, receipt: &Receipt<T>) -> Result<(), EngineError> {
let scroll_path = self.receipts_path.join(Scroll::MyScroll.jsonl_path());
let mut file = OpenOptions::new()
.create(true)
.append(true)
.open(&scroll_path)?;
let json = serde_json::to_string(receipt)?;
writeln!(file, "{}", json)?;
// Update Merkle root
self.update_merkle_root().await?;
Ok(())
}
}
```
### Prometheus Metrics
```rust
use prometheus::{Counter, CounterVec, Histogram, HistogramVec, Gauge, GaugeVec, Opts, Registry};
use lazy_static::lazy_static;
lazy_static! {
pub static ref REGISTRY: Registry = Registry::new();
pub static ref RECEIPTS_TOTAL: CounterVec = CounterVec::new(
Opts::new("vaultmesh_receipts_total", "Total receipts by scroll"),
&["scroll", "type"]
).unwrap();
pub static ref OPERATION_DURATION: HistogramVec = HistogramVec::new(
prometheus::HistogramOpts::new(
"vaultmesh_operation_duration_seconds",
"Operation duration"
).buckets(vec![0.001, 0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0]),
&["operation"]
).unwrap();
pub static ref ACTIVE_OPERATIONS: GaugeVec = GaugeVec::new(
Opts::new("vaultmesh_active_operations", "Active operations"),
&["type"]
).unwrap();
}
pub fn register_metrics() {
REGISTRY.register(Box::new(RECEIPTS_TOTAL.clone())).unwrap();
REGISTRY.register(Box::new(OPERATION_DURATION.clone())).unwrap();
REGISTRY.register(Box::new(ACTIVE_OPERATIONS.clone())).unwrap();
}
```
---
## Python Templates
### CLI Command Group
```python
import click
import json
from datetime import datetime
from pathlib import Path
@click.group()
def my_engine():
"""My Engine - Description"""
pass
@my_engine.command("create")
@click.option("--title", required=True, help="Title")
@click.option("--config", type=click.Path(exists=True), help="Config file")
def create(title: str, config: str):
"""Create a new contract."""
contract_id = f"contract-{datetime.utcnow().strftime('%Y%m%d%H%M%S')}"
contract = {
"id": contract_id,
"title": title,
"created_at": datetime.utcnow().isoformat() + "Z",
}
if config:
with open(config) as f:
contract.update(json.load(f))
# Store contract
contract_path = Path(f"cases/my_engine/{contract_id}/contract.json")
contract_path.parent.mkdir(parents=True, exist_ok=True)
with open(contract_path, "w") as f:
json.dump(contract, f, indent=2)
click.echo(f"✓ Contract created: {contract_id}")
@my_engine.command("execute")
@click.argument("contract_id")
def execute(contract_id: str):
"""Execute a contract."""
# Load contract
contract_path = Path(f"cases/my_engine/{contract_id}/contract.json")
with open(contract_path) as f:
contract = json.load(f)
# Execute (implementation specific)
state = {"status": "completed"}
# Emit receipt
receipt = emit_receipt(
scroll="my_scroll",
receipt_type="my_receipt_type",
body={
"contract_id": contract_id,
"status": state["status"],
},
tags=["my_engine"]
)
click.echo(f"✓ Executed: {contract_id}")
click.echo(f" Receipt: {receipt['root_hash'][:20]}...")
@my_engine.command("query")
@click.option("--status", help="Filter by status")
@click.option("--from", "from_date", help="From date")
@click.option("--to", "to_date", help="To date")
@click.option("--format", "output_format", default="table", type=click.Choice(["table", "json", "csv"]))
def query(status: str, from_date: str, to_date: str, output_format: str):
"""Query receipts."""
filters = {}
if status:
filters["status"] = status
if from_date:
filters["from_date"] = from_date
if to_date:
filters["to_date"] = to_date
receipts = load_receipts("my_scroll", filters)
if output_format == "json":
click.echo(json.dumps(receipts, indent=2))
else:
click.echo(f"Found {len(receipts)} receipts")
for r in receipts:
click.echo(f" {r.get('timestamp', '')[:19]} | {r.get('type', '')}")
```
### Receipt Utilities
```python
import json
from blake3 import blake3  # third-party 'blake3' package; hashlib has no blake3
from datetime import datetime
from pathlib import Path
from typing import Optional
def emit_receipt(scroll: str, receipt_type: str, body: dict, tags: list[str]) -> dict:
"""Create and emit a receipt to the appropriate scroll."""
receipt = {
"schema_version": "2.0.0",
"type": receipt_type,
"timestamp": datetime.utcnow().isoformat() + "Z",
"tags": tags,
**body
}
# Compute root hash
receipt_json = json.dumps(receipt, sort_keys=True)
root_hash = f"blake3:{hashlib.blake3(receipt_json.encode()).hexdigest()}"
receipt["root_hash"] = root_hash
# Append to scroll
scroll_path = Path(f"receipts/{scroll}/{scroll}_events.jsonl")
scroll_path.parent.mkdir(parents=True, exist_ok=True)
with open(scroll_path, "a") as f:
f.write(json.dumps(receipt) + "\n")
# Update Merkle root
update_merkle_root(scroll)
return receipt
def load_receipts(scroll: str, filters: Optional[dict] = None) -> list[dict]:
"""Load and filter receipts from a scroll."""
scroll_path = Path(f"receipts/{scroll}/{scroll}_events.jsonl")
if not scroll_path.exists():
return []
receipts = []
with open(scroll_path) as f:
for line in f:
receipt = json.loads(line.strip())
if filters:
match = True
for key, value in filters.items():
if key == "from_date":
if receipt.get("timestamp", "") < value:
match = False
elif key == "to_date":
if receipt.get("timestamp", "") > value:
match = False
elif key == "type":
if receipt.get("type") not in (value if isinstance(value, list) else [value]):
match = False
elif receipt.get(key) != value:
match = False
if match:
receipts.append(receipt)
else:
receipts.append(receipt)
return receipts
def update_merkle_root(scroll: str):
"""Recompute and update Merkle root for a scroll."""
scroll_path = Path(f"receipts/{scroll}/{scroll}_events.jsonl")
root_file = Path(f"receipts/ROOT.{scroll}.txt")
if not scroll_path.exists():
root_file.write_text("blake3:empty")
return
hashes = []
with open(scroll_path) as f:
for line in f:
receipt = json.loads(line.strip())
hashes.append(receipt.get("root_hash", ""))
if not hashes:
root_file.write_text("blake3:empty")
return
# Simple merkle root (production would use proper tree)
combined = "".join(h.replace("blake3:", "") for h in hashes)
root = f"blake3:{hashlib.blake3(combined.encode()).hexdigest()}"
root_file.write_text(root)
def verify_receipt(receipt_hash: str, scroll: str) -> bool:
"""Verify a receipt exists and is valid."""
receipts = load_receipts(scroll, {"root_hash": receipt_hash})
return len(receipts) > 0
```
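For instance, a query for this month's critical observability alerts could look like the following (values illustrative):

```python
alerts = load_receipts("observability", {
    "type": ["obs_log_alert", "obs_slo_breach"],  # matched against receipt["type"]
    "from_date": "2025-12-01T00:00:00Z",          # inclusive lower bound on "timestamp"
    "severity": "critical",                        # other keys are exact-match filters
})
print(f"{len(alerts)} matching receipts")
```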
### MCP Server Template
```python
from mcp.server import Server
from mcp.types import Tool, TextContent
import json
server = Server("my-engine")
@server.tool()
async def my_operation(
param1: str,
param2: int = 10,
) -> str:
"""
Description of what this tool does.
Args:
param1: Description of param1
param2: Description of param2
Returns:
Description of return value
"""
# Verify caller capabilities
caller = await get_caller_identity()
await verify_capability(caller, "required_capability")
# Perform operation
result = perform_operation(param1, param2)
# Emit receipt
await emit_tool_call_receipt(
tool="my_operation",
caller=caller,
params={"param1": param1, "param2": param2},
result_hash=result.hash,
)
return json.dumps(result.to_dict(), indent=2)
@server.tool()
async def my_query(
filter_param: str = None,
limit: int = 50,
) -> str:
"""
Query operation description.
Args:
filter_param: Optional filter
limit: Maximum results
Returns:
Query results
"""
caller = await get_caller_identity()
await verify_capability(caller, "view_capability")
results = query_data(filter_param, limit)
return json.dumps([r.to_dict() for r in results], indent=2)
def main():
import asyncio
from mcp.server.stdio import stdio_server
async def run():
async with stdio_server() as (read_stream, write_stream):
await server.run(
read_stream,
write_stream,
server.create_initialization_options(),
)
asyncio.run(run())
if __name__ == "__main__":
main()
```
---
## Property Test Templates
### Rust (proptest)
```rust
use proptest::prelude::*;
proptest! {
/// Receipts roundtrip through serialization
#[test]
fn receipt_roundtrip(receipt in arb_receipt()) {
let json = serde_json::to_string(&receipt)?;
let restored: Receipt<serde_json::Value> = serde_json::from_str(&json)?;
prop_assert_eq!(receipt.header.root_hash, restored.header.root_hash);
}
/// Hash is deterministic
#[test]
fn hash_deterministic(data in prop::collection::vec(any::<u8>(), 0..1000)) {
let hash1 = VmHash::blake3(&data);
let hash2 = VmHash::blake3(&data);
prop_assert_eq!(hash1, hash2);
}
/// Different data produces different hashes
#[test]
fn different_data_different_hash(
data1 in prop::collection::vec(any::<u8>(), 1..100),
data2 in prop::collection::vec(any::<u8>(), 1..100)
) {
prop_assume!(data1 != data2);
let hash1 = VmHash::blake3(&data1);
let hash2 = VmHash::blake3(&data2);
prop_assert_ne!(hash1, hash2);
}
}
fn arb_receipt() -> impl Strategy<Value = Receipt<serde_json::Value>> {
(
"[a-z]{5,20}", // receipt_type
any::<i64>().prop_map(|ts| DateTime::from_timestamp(ts.abs() % 2000000000, 0).unwrap()),
prop::collection::vec("[a-z]{3,10}", 0..5), // tags
).prop_map(|(receipt_type, timestamp, tags)| {
Receipt {
header: ReceiptHeader {
receipt_type,
timestamp,
root_hash: "blake3:placeholder".to_string(),
tags,
},
meta: ReceiptMeta {
scroll: Scroll::Drills,
sequence: 0,
anchor_epoch: None,
proof_path: None,
},
body: serde_json::json!({"test": true}),
}
})
}
```
### Python (hypothesis)
```python
from hypothesis import given, assume, strategies as st
from blake3 import blake3  # third-party blake3 package; hashlib has no blake3
import json
@given(st.dictionaries(st.text(min_size=1, max_size=20), st.text(max_size=100), max_size=10))
def test_receipt_roundtrip(body):
"""Receipts survive JSON roundtrip."""
receipt = emit_receipt("test", "test_type", body, ["test"])
json_str = json.dumps(receipt)
restored = json.loads(json_str)
assert receipt["root_hash"] == restored["root_hash"]
assert receipt["type"] == restored["type"]
@given(st.binary(min_size=1, max_size=1000))
def test_hash_deterministic(data):
"""Hash is deterministic."""
    hash1 = blake3(data).hexdigest()
    hash2 = blake3(data).hexdigest()
assert hash1 == hash2
@given(
st.binary(min_size=1, max_size=100),
st.binary(min_size=1, max_size=100)
)
def test_different_data_different_hash(data1, data2):
"""Different data produces different hashes."""
    assume(data1 != data2)
    hash1 = blake3(data1).hexdigest()
    hash2 = blake3(data2).hexdigest()
assert hash1 != hash2
```

docs/skill/ENGINE_SPECS.md

@@ -0,0 +1,315 @@
# VaultMesh Engine Specifications
## Receipt Types by Scroll
### Drills
| Type | When Emitted |
|------|--------------|
| `security_drill_run` | Drill completed |
### Compliance
| Type | When Emitted |
|------|--------------|
| `oracle_answer` | Compliance question answered |
### Guardian
| Type | When Emitted |
|------|--------------|
| `anchor_success` | Anchor cycle succeeded |
| `anchor_failure` | Anchor cycle failed |
| `anchor_divergence` | Root mismatch detected |
### Treasury
| Type | When Emitted |
|------|--------------|
| `treasury_credit` | Credit entry recorded |
| `treasury_debit` | Debit entry recorded |
| `treasury_settlement` | Multi-party settlement completed |
| `treasury_reconciliation` | Periodic balance verification |
### Mesh
| Type | When Emitted |
|------|--------------|
| `mesh_node_join` | Node registered |
| `mesh_node_leave` | Node deregistered |
| `mesh_route_change` | Route added/removed/modified |
| `mesh_capability_grant` | Capability granted |
| `mesh_capability_revoke` | Capability revoked |
| `mesh_topology_snapshot` | Periodic topology capture |
### OffSec
| Type | When Emitted |
|------|--------------|
| `offsec_incident` | Incident closed |
| `offsec_redteam` | Red team engagement closed |
| `offsec_vuln_discovery` | Vulnerability confirmed |
| `offsec_remediation` | Remediation verified |
| `offsec_threat_intel` | New IOC/TTP added |
| `offsec_forensic_snapshot` | Forensic capture taken |
### Identity
| Type | When Emitted |
|------|--------------|
| `identity_did_create` | New DID registered |
| `identity_did_rotate` | Key rotation completed |
| `identity_credential_issue` | Credential issued |
| `identity_credential_revoke` | Credential revoked |
| `identity_auth_event` | Authentication attempt |
| `identity_capability_grant` | Capability granted |
| `identity_capability_exercise` | Capability used |
### Observability
| Type | When Emitted |
|------|--------------|
| `obs_metric_anomaly` | Anomaly detected/resolved |
| `obs_log_alert` | Log-based alert triggered |
| `obs_trace_summary` | Critical operation traced |
| `obs_health_snapshot` | Daily health capture |
| `obs_slo_breach` | SLO target missed |
| `obs_capacity_event` | Resource threshold crossed |
### Automation
| Type | When Emitted |
|------|--------------|
| `auto_workflow_run` | Workflow execution completed |
| `auto_scheduled_task` | Scheduled task executed |
| `auto_agent_action` | Agent took action |
| `auto_trigger_event` | External trigger received |
| `auto_approval_gate` | Approval gate resolved |
| `auto_error_recovery` | Error recovery completed |
### PsiField
| Type | When Emitted |
|------|--------------|
| `psi_phase_transition` | Phase change |
| `psi_emergence_event` | Emergent behavior detected |
| `psi_transmutation` | Negative → capability transform |
| `psi_resonance` | Cross-system synchronization |
| `psi_integration` | Learning crystallized |
| `psi_oracle_insight` | Significant Oracle insight |
### Federation
| Type | When Emitted |
|------|--------------|
| `fed_trust_proposal` | Trust proposal submitted |
| `fed_trust_established` | Federation agreement active |
| `fed_trust_revoked` | Federation terminated |
| `fed_witness_event` | Remote root witnessed |
| `fed_cross_anchor` | Remote root included in anchor |
| `fed_schema_sync` | Schema versions synchronized |
### Governance
| Type | When Emitted |
|------|--------------|
| `gov_proposal` | Proposal submitted |
| `gov_vote` | Vote cast |
| `gov_ratification` | Proposal ratified |
| `gov_amendment` | Constitution amended |
| `gov_executive_order` | Executive order issued |
| `gov_violation` | Violation detected |
| `gov_enforcement` | Enforcement action taken |
---
## Engine Contract Templates
### Treasury Settlement Contract
```json
{
"settlement_id": "settle-YYYY-MM-DD-NNN",
"title": "Settlement Title",
"initiated_by": "did:vm:node:portal-01",
"initiated_at": "ISO8601",
"parties": ["did:vm:node:...", "did:vm:node:..."],
"entries": [
{
"entry_id": "entry-NNN",
"type": "debit|credit",
"account": "acct:vm:node:...:type",
"amount": 0.00,
"currency": "EUR",
"memo": "Description"
}
],
"requires_signatures": ["node-id", "node-id"],
"settlement_type": "inter_node_resource|vendor_payment|..."
}
```
### Mesh Change Contract
```json
{
"change_id": "mesh-change-YYYY-MM-DD-NNN",
"title": "Change Title",
"initiated_by": "did:vm:node:portal-01",
"initiated_at": "ISO8601",
"change_type": "node_expansion|route_update|...",
"operations": [
{
"op_id": "op-NNN",
"operation": "node_join|route_add|capability_grant|...",
"target": "did:vm:node:...",
"config": {}
}
],
"requires_approval": ["node-id"],
"rollback_on_failure": true
}
```
### OffSec Incident Contract
```json
{
"case_id": "INC-YYYY-MM-NNN",
"case_type": "incident",
"title": "Incident Title",
"severity": "critical|high|medium|low",
"created_at": "ISO8601",
"phases": [
{
"phase_id": "phase-N-name",
"name": "Triage|Containment|Eradication|Recovery",
"objectives": ["..."],
"checklist": ["..."]
}
],
"assigned_responders": ["did:vm:human:..."],
"escalation_path": ["..."]
}
```
### Identity Operation Contract
```json
{
"operation_id": "idop-YYYY-MM-DD-NNN",
"operation_type": "key_rotation_ceremony|...",
"title": "Operation Title",
"initiated_by": "did:vm:human:...",
"initiated_at": "ISO8601",
"target_did": "did:vm:node:...",
"steps": [
{
"step_id": "step-N-name",
"action": "action_name",
"params": {}
}
],
"rollback_on_failure": true
}
```
### Transmutation Contract
```json
{
"transmutation_id": "psi-transmute-YYYY-MM-DD-NNN",
"title": "Transmutation Title",
"initiated_by": "did:vm:human:...",
"initiated_at": "ISO8601",
"input_material": {
"type": "security_incident|vulnerability|...",
"reference": "INC-YYYY-MM-NNN"
},
"target_phase": "citrinitas",
"transmutation_steps": [
{
"step_id": "step-N-name",
"name": "Step Name",
"action": "action_name",
"expected_output": "output_path"
}
],
"witnesses_required": ["node-id", "node-id"],
"success_criteria": {}
}
```
---
## State Machine Transitions
### Settlement Status
```
draft → pending_signatures → executing → completed
↘ disputed → resolved → completed
↘ expired
```
### Incident Status
```
reported → triaging → investigating → contained → eradicating → recovered → closed
↘ false_positive → closed
```
### Mesh Change Status
```
draft → pending_approval → in_progress → completed
↘ partial_failure → rollback → rolled_back
↘ failed → rollback → rolled_back
```
### Alchemical Phase
```
nigredo → albedo → citrinitas → rubedo
↑ │
└──────────────────────────────┘
(cycle continues)
```
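One compact way to enforce these diagrams is an allowed-transitions map. The sketch below (illustrative, not an existing VaultMesh module) encodes the settlement machine, reading the branch points as pending_signatures → disputed / expired; adjust to match the actual engine:

```python
# Allowed transitions for the settlement status machine above.
SETTLEMENT_TRANSITIONS = {
    "draft": {"pending_signatures"},
    "pending_signatures": {"executing", "disputed", "expired"},
    "executing": {"completed"},
    "disputed": {"resolved"},
    "resolved": {"completed"},
    "completed": set(),
    "expired": set(),
}

def advance(current: str, target: str) -> str:
    """Return the target status, raising if the move is not drawn in the diagram."""
    if target not in SETTLEMENT_TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal settlement transition: {current} -> {target}")
    return target
```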
---
## Capability Types
| Capability | Description | Typical Holders |
|------------|-------------|-----------------|
| `anchor` | Submit roots to anchor backends | Guardian nodes |
| `storage` | Store receipts and artifacts | Infrastructure nodes |
| `compute` | Execute drills, run agents | BRICK nodes |
| `oracle` | Issue compliance answers | Oracle nodes |
| `admin` | Grant/revoke capabilities | Portal, Sovereign |
| `federate` | Establish cross-mesh trust | Portal |
---
## Trust Levels (Federation)
| Level | Name | Description |
|-------|------|-------------|
| 0 | `isolated` | No federation |
| 1 | `observe` | Read-only witness |
| 2 | `verify` | Mutual verification |
| 3 | `attest` | Cross-attestation |
| 4 | `integrate` | Shared scrolls |
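Treating the levels as cumulative (each level implies the ones below it, which is an assumption rather than something the table states), a peer's permitted federation actions can be gated on a minimum level:

```python
from enum import IntEnum

class TrustLevel(IntEnum):
    ISOLATED = 0   # no federation
    OBSERVE = 1    # read-only witness
    VERIFY = 2     # mutual verification
    ATTEST = 3     # cross-attestation
    INTEGRATE = 4  # shared scrolls

# Illustrative minimum levels per action; not a normative VaultMesh mapping.
MIN_LEVEL = {
    "witness_remote_root": TrustLevel.OBSERVE,
    "cross_attest": TrustLevel.ATTEST,
    "share_scroll": TrustLevel.INTEGRATE,
}

def action_allowed(peer_level: TrustLevel, action: str) -> bool:
    return peer_level >= MIN_LEVEL[action]
```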
---
## Account Types (Treasury)
| Type | Purpose |
|------|---------|
| `operational` | Day-to-day infrastructure spend |
| `reserve` | Long-term holdings, runway |
| `escrow` | Held pending settlement |
| `external` | Counterparty accounts |
---
## Node Types (Mesh)
| Type | Purpose |
|------|---------|
| `infrastructure` | BRICK servers, compute |
| `edge` | Mobile devices, field endpoints |
| `oracle` | Compliance oracle instances |
| `guardian` | Dedicated anchor/sentinel |
| `external` | Federated nodes |
---
## Severity Levels
| Level | Description |
|-------|-------------|
| `critical` | Active breach, data exfiltration |
| `high` | Confirmed attack, potential breach |
| `medium` | Suspicious activity, policy violation |
| `low` | Anomaly, informational |


@@ -0,0 +1,711 @@
# VaultMesh Infrastructure Templates
## Kubernetes Deployment
### Namespace
```yaml
apiVersion: v1
kind: Namespace
metadata:
name: vaultmesh
labels:
app.kubernetes.io/name: vaultmesh
app.kubernetes.io/part-of: civilization-ledger
pod-security.kubernetes.io/enforce: restricted
```
### Generic Deployment Template
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: vaultmesh-{component}
namespace: vaultmesh
labels:
app.kubernetes.io/name: {component}
app.kubernetes.io/component: {role}
app.kubernetes.io/part-of: vaultmesh
spec:
replicas: {replicas}
selector:
matchLabels:
app.kubernetes.io/name: {component}
template:
metadata:
labels:
app.kubernetes.io/name: {component}
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9090"
prometheus.io/path: "/metrics"
spec:
serviceAccountName: vaultmesh-{component}
securityContext:
runAsNonRoot: true
runAsUser: 1000
fsGroup: 1000
seccompProfile:
type: RuntimeDefault
containers:
- name: {component}
image: ghcr.io/vaultmesh/{component}:{version}
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
capabilities:
drop:
- ALL
ports:
- name: http
containerPort: {http_port}
protocol: TCP
- name: metrics
containerPort: 9090
protocol: TCP
env:
- name: RUST_LOG
value: "info,vaultmesh=debug"
- name: CONFIG_PATH
value: "/config/{component}.toml"
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: vaultmesh-db-credentials
key: {component}-url
volumeMounts:
- name: config
mountPath: /config
readOnly: true
- name: receipts
mountPath: /data/receipts
- name: tmp
mountPath: /tmp
resources:
requests:
cpu: {cpu_request}
memory: {memory_request}
limits:
cpu: {cpu_limit}
memory: {memory_limit}
livenessProbe:
httpGet:
path: /health/live
port: http
initialDelaySeconds: 10
periodSeconds: 10
readinessProbe:
httpGet:
path: /health/ready
port: http
initialDelaySeconds: 5
periodSeconds: 5
volumes:
- name: config
configMap:
name: vaultmesh-{component}-config
- name: receipts
persistentVolumeClaim:
claimName: vaultmesh-receipts
- name: tmp
emptyDir: {}
```
### Service Template
```yaml
apiVersion: v1
kind: Service
metadata:
name: vaultmesh-{component}
namespace: vaultmesh
spec:
selector:
app.kubernetes.io/name: {component}
ports:
- name: http
port: 80
targetPort: http
- name: metrics
port: 9090
targetPort: metrics
type: ClusterIP
```
### ConfigMap Template
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: vaultmesh-{component}-config
namespace: vaultmesh
data:
{component}.toml: |
[server]
bind = "0.0.0.0:{port}"
metrics_bind = "0.0.0.0:9090"
[database]
max_connections = 20
min_connections = 5
[receipts]
base_path = "/data/receipts"
# Component-specific configuration
```
### PersistentVolumeClaim
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: vaultmesh-receipts
namespace: vaultmesh
spec:
accessModes:
- ReadWriteMany
storageClassName: nfs-csi
resources:
requests:
storage: 100Gi
```
### Ingress
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: vaultmesh-ingress
namespace: vaultmesh
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/proxy-body-size: "50m"
nginx.ingress.kubernetes.io/rate-limit: "100"
nginx.ingress.kubernetes.io/rate-limit-window: "1m"
spec:
ingressClassName: nginx
tls:
- hosts:
- portal.vaultmesh.io
- guardian.vaultmesh.io
- oracle.vaultmesh.io
secretName: vaultmesh-tls
rules:
- host: portal.vaultmesh.io
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: vaultmesh-portal
port:
name: http
```
---
## Component Configurations
### Portal
```yaml
# Deployment overrides
replicas: 2
http_port: 8080
cpu_request: 100m
memory_request: 256Mi
cpu_limit: 1000m
memory_limit: 1Gi
```
```toml
# portal.toml
[server]
bind = "0.0.0.0:8080"
metrics_bind = "0.0.0.0:9090"
[database]
max_connections = 20
min_connections = 5
[receipts]
base_path = "/data/receipts"
[scrolls]
enabled = [
"Drills", "Compliance", "Guardian", "Treasury", "Mesh",
"OffSec", "Identity", "Observability", "Automation",
"PsiField", "Federation", "Governance"
]
[auth]
jwt_issuer = "vaultmesh-portal"
session_ttl_hours = 24
```
### Guardian
```yaml
# Deployment overrides
replicas: 1 # Single for coordination
strategy:
type: Recreate
http_port: 8081
cpu_request: 200m
memory_request: 512Mi
cpu_limit: 2000m
memory_limit: 2Gi
```
```toml
# guardian.toml
[server]
bind = "0.0.0.0:8081"
metrics_bind = "0.0.0.0:9090"
[proofchain]
receipts_path = "/data/receipts"
roots_path = "/data/receipts"
[anchor]
primary = "ethereum"
interval_seconds = 3600
min_receipts_threshold = 10
[anchor.ethereum]
rpc_url = "https://mainnet.infura.io/v3/${INFURA_PROJECT_ID}"
contract_address = "0x..."
chain_id = 1
[anchor.ots]
enabled = true
calendar_urls = [
"https://a.pool.opentimestamps.org",
"https://b.pool.opentimestamps.org"
]
[sentinel]
enabled = true
alert_webhook = "http://alertmanager:9093/api/v2/alerts"
```
### Oracle
```yaml
# Deployment overrides
replicas: 2
http_port: 8082
mcp_port: 8083
cpu_request: 200m
memory_request: 512Mi
cpu_limit: 2000m
memory_limit: 4Gi
```
```toml
# oracle.toml
[server]
http_bind = "0.0.0.0:8082"
mcp_bind = "0.0.0.0:8083"
metrics_bind = "0.0.0.0:9090"
[corpus]
path = "/data/corpus"
index_path = "/data/cache/index"
supported_formats = ["docx", "pdf", "md", "txt"]
[llm]
primary_provider = "anthropic"
primary_model = "claude-sonnet-4-20250514"
fallback_provider = "openai"
fallback_model = "gpt-4o"
temperature = 0.1
max_tokens = 4096
[receipts]
endpoint = "http://vaultmesh-portal/api/receipts/oracle"
```
---
## Docker Compose (Development)
```yaml
version: "3.9"
services:
portal:
build:
context: .
dockerfile: docker/portal/Dockerfile
ports:
- "8080:8080"
- "9090:9090"
environment:
- RUST_LOG=info,vaultmesh=debug
- VAULTMESH_CONFIG=/config/portal.toml
- DATABASE_URL=postgresql://vaultmesh:vaultmesh@postgres:5432/vaultmesh
- REDIS_URL=redis://redis:6379
volumes:
- ./config/portal.toml:/config/portal.toml:ro
- receipts:/data/receipts
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_started
guardian:
build:
context: .
dockerfile: docker/guardian/Dockerfile
ports:
- "8081:8081"
environment:
- RUST_LOG=info,guardian=debug
- GUARDIAN_CONFIG=/config/guardian.toml
- DATABASE_URL=postgresql://vaultmesh:vaultmesh@postgres:5432/vaultmesh
volumes:
- ./config/guardian.toml:/config/guardian.toml:ro
- receipts:/data/receipts
- guardian-state:/data/guardian
depends_on:
portal:
condition: service_healthy
oracle:
build:
context: .
dockerfile: docker/oracle/Dockerfile
ports:
- "8082:8082"
- "8083:8083"
environment:
- ORACLE_CONFIG=/config/oracle.toml
- OPENAI_API_KEY=${OPENAI_API_KEY}
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
- VAULTMESH_RECEIPT_ENDPOINT=http://portal:8080/api/receipts
volumes:
- ./config/oracle.toml:/config/oracle.toml:ro
- ./corpus:/data/corpus:ro
depends_on:
portal:
condition: service_healthy
postgres:
image: postgres:16-alpine
environment:
- POSTGRES_USER=vaultmesh
- POSTGRES_PASSWORD=vaultmesh
- POSTGRES_DB=vaultmesh
volumes:
- postgres-data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U vaultmesh"]
interval: 5s
timeout: 5s
retries: 5
redis:
image: redis:7-alpine
volumes:
- redis-data:/data
command: redis-server --appendonly yes
prometheus:
image: prom/prometheus:v2.47.0
ports:
- "9091:9090"
volumes:
- ./config/prometheus.yaml:/etc/prometheus/prometheus.yml:ro
- prometheus-data:/prometheus
grafana:
image: grafana/grafana:10.1.0
ports:
- "3000:3000"
environment:
- GF_SECURITY_ADMIN_PASSWORD=admin
volumes:
- ./config/grafana/provisioning:/etc/grafana/provisioning:ro
- grafana-data:/var/lib/grafana
volumes:
receipts:
guardian-state:
postgres-data:
redis-data:
prometheus-data:
grafana-data:
networks:
default:
name: vaultmesh
```
---
## Dockerfile Templates
### Rust Service
```dockerfile
# Build stage
FROM rust:1.75-alpine AS builder
RUN apk add --no-cache musl-dev openssl-dev openssl-libs-static
WORKDIR /build
COPY Cargo.toml Cargo.lock ./
COPY src ./src
RUN cargo build --release --target x86_64-unknown-linux-musl
# Runtime stage
FROM alpine:3.19
RUN apk add --no-cache ca-certificates tzdata
RUN adduser -D -u 1000 vaultmesh
USER vaultmesh
WORKDIR /app
COPY --from=builder /build/target/x86_64-unknown-linux-musl/release/{binary} /app/
EXPOSE 8080 9090
ENTRYPOINT ["/app/{binary}"]
```
### Python Service
```dockerfile
FROM python:3.12-slim
RUN useradd -m -u 1000 vaultmesh
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY --chown=vaultmesh:vaultmesh . .
USER vaultmesh
EXPOSE 8080 9090
CMD ["python", "-m", "{module}"]
```
---
## Prometheus Rules
```yaml
groups:
- name: vaultmesh.receipts
rules:
- alert: ReceiptWriteFailure
expr: rate(vaultmesh_receipt_write_errors_total[5m]) > 0
for: 1m
labels:
severity: critical
annotations:
summary: "Receipt write failures detected"
- alert: ReceiptRateAnomaly
expr: |
abs(rate(vaultmesh_receipts_total[5m]) -
avg_over_time(rate(vaultmesh_receipts_total[5m])[1h:5m]))
> 2 * stddev_over_time(rate(vaultmesh_receipts_total[5m])[1h:5m])
for: 10m
labels:
severity: warning
annotations:
summary: "Unusual receipt rate"
- name: vaultmesh.guardian
rules:
- alert: AnchorDelayed
expr: time() - vaultmesh_guardian_last_anchor_timestamp > 7200
for: 5m
labels:
severity: warning
annotations:
summary: "Guardian anchor delayed"
- alert: AnchorCriticallyDelayed
expr: time() - vaultmesh_guardian_last_anchor_timestamp > 14400
for: 5m
labels:
severity: critical
annotations:
summary: "No anchor in over 4 hours"
- alert: ProofChainDivergence
expr: vaultmesh_guardian_proofchain_divergence == 1
for: 1m
labels:
severity: critical
annotations:
summary: "ProofChain divergence detected"
- name: vaultmesh.governance
rules:
- alert: ConstitutionalViolation
expr: increase(vaultmesh_governance_violations_total[1h]) > 0
for: 0m
labels:
severity: critical
annotations:
summary: "Constitutional violation detected"
- alert: EmergencyActive
expr: vaultmesh_governance_emergency_active == 1
for: 0m
labels:
severity: warning
annotations:
summary: "Emergency powers in effect"
```
---
## Kustomization
### Base
```yaml
# kubernetes/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: vaultmesh
resources:
- namespace.yaml
- rbac.yaml
- portal/
- guardian/
- oracle/
- database/
- storage/
- ingress/
commonLabels:
app.kubernetes.io/part-of: vaultmesh
app.kubernetes.io/managed-by: kustomize
```
### Production Overlay
```yaml
# kubernetes/overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: vaultmesh
resources:
- ../../base
patches:
- path: portal-resources.yaml
- path: guardian-resources.yaml
- path: oracle-resources.yaml
configMapGenerator:
- name: vaultmesh-portal-config
behavior: merge
files:
- portal.toml=configs/portal-prod.toml
replicas:
- name: vaultmesh-portal
count: 3
- name: vaultmesh-oracle
count: 3
```
---
## Terraform (Infrastructure)
```hcl
# main.tf
terraform {
required_providers {
kubernetes = {
source = "hashicorp/kubernetes"
version = "~> 2.23"
}
helm = {
source = "hashicorp/helm"
version = "~> 2.11"
}
}
}
resource "kubernetes_namespace" "vaultmesh" {
metadata {
name = "vaultmesh"
labels = {
"app.kubernetes.io/name" = "vaultmesh"
"app.kubernetes.io/part-of" = "civilization-ledger"
}
}
}
resource "helm_release" "vaultmesh" {
name = "vaultmesh"
namespace = kubernetes_namespace.vaultmesh.metadata[0].name
chart = "./charts/vaultmesh"
values = [
file("values-${var.environment}.yaml")
]
set {
name = "portal.replicas"
value = var.portal_replicas
}
set {
name = "guardian.anchor.ethereum.rpcUrl"
value = var.ethereum_rpc_url
}
set_sensitive {
name = "secrets.anthropicApiKey"
value = var.anthropic_api_key
}
}
variable "environment" {
type = string
default = "production"
}
variable "portal_replicas" {
type = number
default = 3
}
variable "ethereum_rpc_url" {
type = string
}
variable "anthropic_api_key" {
type = string
sensitive = true
}
```


@@ -0,0 +1,493 @@
# VaultMesh MCP Integration Patterns
## Architecture
```
┌─────────────────────────────────────────────────────────────┐
│ CLAUDE │
└───────────────────────────┬─────────────────────────────────┘
│ MCP Protocol
┌─────────────────────────────────────────────────────────────┐
│ MCP GATEWAY │
│ • Authentication (capability verification) │
│ • Rate limiting │
│ • Audit logging (all tool calls receipted) │
│ • Constitutional compliance checking │
└───────────────────────────┬─────────────────────────────────┘
┌───────────────┼───────────────┐
▼ ▼ ▼
┌───────────┐ ┌───────────┐ ┌───────────┐
│ Oracle │ │ Drills │ │ Mesh │
│ Server │ │ Server │ │ Server │
└───────────┘ └───────────┘ └───────────┘
```
## Tool Categories
### Read-Only Tools (Default Access)
| Tool | Capability | Description |
|------|------------|-------------|
| `oracle_answer` | `oracle_query` | Ask compliance questions |
| `oracle_corpus_search` | `oracle_query` | Search compliance corpus |
| `drills_status` | `drills_view` | View drill status |
| `mesh_topology` | `mesh_view` | View mesh topology |
| `mesh_node_status` | `mesh_view` | View node status |
| `treasury_balance` | `treasury_view` | View balances |
| `guardian_anchor_status` | `guardian_view` | View anchor status |
| `guardian_verify_receipt` | `guardian_view` | Verify receipts |
| `identity_resolve_did` | `identity_view` | Resolve DIDs |
| `identity_whoami` | (any) | View own identity |
| `psi_phase_status` | `psi_view` | View phase status |
| `psi_opus_status` | `psi_view` | View opus status |
| `governance_constitution_summary` | `governance_view` | View constitution |
| `receipts_search` | `receipts_view` | Search receipts |
| `system_health` | `system_view` | View system health |
### Write Tools (Elevated Access)
| Tool | Capability | Description |
|------|------------|-------------|
| `drills_create` | `drills_create` | Create new drill |
| `drills_complete_stage` | `drills_execute` | Complete drill stage |
| `treasury_record_entry` | `treasury_write` | Record financial entry |
| `guardian_anchor_now` | `anchor` | Trigger anchor cycle |
| `psi_transmute` | `psi_transmute` | Start transmutation |
## Tool Implementation Patterns
### Basic Read Tool
```python
@server.tool()
async def my_read_tool(
filter_param: str = None,
limit: int = 50,
) -> str:
"""
Description of what this tool does.
Args:
filter_param: Optional filter
limit: Maximum results
Returns:
Query results as JSON
"""
# Verify capability
caller = await get_caller_identity()
await verify_capability(caller, "my_view")
# Perform query
results = await engine.query(filter_param, limit)
return json.dumps([r.to_dict() for r in results], indent=2)
```
### Write Tool with Receipt
```python
@server.tool()
async def my_write_tool(
param1: str,
param2: int,
) -> str:
"""
Description of write operation.
Args:
param1: First parameter
param2: Second parameter
Returns:
Operation result as JSON
"""
# Verify elevated capability
caller = await get_caller_identity()
await verify_capability(caller, "my_write")
# Perform operation
result = await engine.perform_operation(param1, param2)
# Emit receipt for audit
await emit_tool_call_receipt(
tool="my_write_tool",
caller=caller,
params={"param1": param1, "param2": param2},
result_hash=result.hash,
)
return json.dumps(result.to_dict(), indent=2)
```
### Tool with Constitutional Check
```python
@server.tool()
async def sensitive_operation(
target: str,
action: str,
) -> str:
"""
Operation requiring constitutional compliance check.
"""
caller = await get_caller_identity()
await verify_capability(caller, "admin")
# Check constitutional compliance BEFORE executing
compliance = await governance_engine.check_compliance(
action=action,
actor=caller,
target=target,
)
if not compliance.allowed:
return json.dumps({
"error": "constitutional_violation",
"violated_articles": compliance.violated_articles,
"message": compliance.message,
}, indent=2)
# Execute if compliant
result = await engine.execute(target, action)
await emit_tool_call_receipt(
tool="sensitive_operation",
caller=caller,
params={"target": target, "action": action},
result_hash=result.hash,
)
return json.dumps(result.to_dict(), indent=2)
```
## Tool Call Receipt
Every MCP tool call is receipted:
```json
{
"type": "mcp_tool_call",
"call_id": "mcp-call-2025-12-06-001",
"timestamp": "2025-12-06T14:30:00Z",
"caller": "did:vm:agent:claude-session-abc123",
"tool": "oracle_answer",
"params_hash": "blake3:params...",
"result_hash": "blake3:result...",
"duration_ms": 1250,
"capability_used": "oracle_query",
"session_id": "session-xyz789",
"tags": ["mcp", "oracle", "tool-call"],
"root_hash": "blake3:aaa111..."
}
```
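The `emit_tool_call_receipt` helper used throughout these patterns is not defined elsewhere in this document. A minimal sketch, assuming it reuses the `emit_receipt` utility from the code templates and that tool-call receipts land on the automation scroll (both assumptions, as is the import path):

```python
import json
from datetime import datetime, timezone
from blake3 import blake3  # third-party blake3 package
from vaultmesh_mcp.receipts import emit_receipt  # hypothetical location of the helper

async def emit_tool_call_receipt(tool: str, caller: str, params: dict,
                                 result_hash: str, capability_used: str = "",
                                 session_id: str = "") -> dict:
    """Hash the call parameters and append an mcp_tool_call receipt."""
    params_json = json.dumps(params, sort_keys=True)
    body = {
        "call_id": f"mcp-call-{datetime.now(timezone.utc).strftime('%Y-%m-%d-%H%M%S')}",
        "caller": caller,
        "tool": tool,
        "params_hash": f"blake3:{blake3(params_json.encode()).hexdigest()}",
        "result_hash": result_hash,
        "capability_used": capability_used,
        "session_id": session_id,
    }
    return emit_receipt("automation", "mcp_tool_call", body, ["mcp", tool, "tool-call"])
```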
## Authentication
### Session Identity
```python
async def get_caller_identity() -> str:
"""Get the DID of the current MCP caller."""
session = get_current_session()
if session.authenticated_did:
return session.authenticated_did
# Anonymous callers get session-scoped agent DID
return f"did:vm:agent:mcp-session-{session.id}"
```
### Capability Verification
```python
async def verify_capability(caller: str, capability: str) -> bool:
"""Verify the caller has the required capability."""
has_cap = await identity_engine.check_capability(caller, capability)
if not has_cap:
raise PermissionError(
f"Caller {caller} lacks capability: {capability}"
)
# Log capability exercise
await identity_engine.log_capability_exercise(
caller=caller,
capability=capability,
action="mcp_tool_call",
)
return True
```
## Rate Limiting
```python
from datetime import timedelta

class RateLimiter:
def __init__(self):
self.limits = {
"oracle_answer": (10, timedelta(minutes=1)),
"guardian_anchor_now": (5, timedelta(hours=1)),
"treasury_record_entry": (100, timedelta(hours=1)),
"default": (60, timedelta(minutes=1)),
}
async def check(self, caller: str, tool: str) -> bool:
key = f"{caller}:{tool}"
limit, window = self.limits.get(tool, self.limits["default"])
# Check against limit
current_count = await self.get_count(key, window)
if current_count >= limit:
raise RateLimitExceeded(
f"Rate limit exceeded: {limit} per {window}"
)
await self.increment(key)
return True
```
## Claude Desktop Configuration
### config.json
```json
{
"mcpServers": {
"vaultmesh": {
"command": "python",
"args": ["-m", "vaultmesh_mcp.server"],
"env": {
"VAULTMESH_CONFIG": "/path/to/config.toml",
"VAULTMESH_IDENTITY": "did:vm:agent:claude-desktop"
}
}
}
}
```
### Capability Configuration
```toml
# config.toml
[mcp.capabilities]
default_capabilities = [
"oracle_query",
"drills_view",
"mesh_view",
"treasury_view",
"guardian_view",
"identity_view",
"psi_view",
"governance_view",
"receipts_view",
"system_view",
]
elevated_capabilities = [
"drills_create",
"drills_execute",
"treasury_write",
"anchor",
"psi_transmute",
]
[mcp.rate_limits]
oracle_answer = { limit = 10, window_minutes = 1 }
guardian_anchor_now = { limit = 5, window_hours = 1 }
default = { limit = 60, window_minutes = 1 }
[mcp.audit]
receipt_all_calls = true
log_params = true
log_results = false
```
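The rate-limit section of this config can be turned into the `(limit, window)` pairs the `RateLimiter` above expects. A minimal sketch using Python 3.11's standard `tomllib`; the key names follow the TOML shown, everything else is illustrative.
```python
import tomllib
from datetime import timedelta

def load_rate_limits(path: str = "config.toml") -> dict:
    """Read [mcp.rate_limits] into {tool: (limit, window)} pairs."""
    with open(path, "rb") as f:
        cfg = tomllib.load(f)
    limits = {}
    for tool, spec in cfg["mcp"]["rate_limits"].items():
        if "window_minutes" in spec:
            window = timedelta(minutes=spec["window_minutes"])
        else:
            window = timedelta(hours=spec["window_hours"])
        limits[tool] = (spec["limit"], window)
    return limits
```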
## Example Interactions
### Compliance Query
```
Human: What's our compliance status for AI Act Annex IV?
Claude: [Calls oracle_answer]
Based on the Oracle's assessment:
**Compliance Status: Partial (73%)**
Key findings:
- Technical documentation: ✅ Compliant
- Risk management: ⚠️ Needs update
- Data governance: ✅ Compliant
- Human oversight: ⚠️ Documentation incomplete
```
### Security Drill
```
Human: Start a drill for Kubernetes lateral movement
Claude: [Calls drills_create]
Created drill: drill-1764691432
**Stages:**
1. Initial Access Simulation
2. Lateral Movement Attempt
3. Detection Validation
4. Response Execution
Ready to begin?
```
### System Health
```
Human: How's the system?
Claude: [Calls system_health]
**VaultMesh Health: 🟢 Healthy**
- Nodes: 5 active
- Last anchor: 47 min ago
- Phase: RUBEDO 🜂
- Receipts today: 34
```
## Server Entry Point
```python
# vaultmesh_mcp/server.py
from mcp.server import Server
from mcp.server.stdio import stdio_server
server = Server("vaultmesh")
# Register all tools
from .tools import (
oracle_tools,
drills_tools,
mesh_tools,
treasury_tools,
guardian_tools,
identity_tools,
psi_tools,
governance_tools,
)
def main():
import asyncio
async def run():
async with stdio_server() as (read, write):
await server.run(read, write, server.create_initialization_options())
asyncio.run(run())
if __name__ == "__main__":
main()
```
## Custom VaultMesh Nodes for n8n
When integrating with n8n workflows, a custom VaultMesh node can emit receipts directly from a workflow:
```javascript
// VaultMesh Receipt Emit Node
{
name: 'vaultmesh-receipt-emit',
displayName: 'VaultMesh Receipt',
description: 'Emit a receipt to VaultMesh',
properties: [
{
displayName: 'Scroll',
name: 'scroll',
type: 'options',
options: [
{ name: 'Automation', value: 'automation' },
{ name: 'Compliance', value: 'compliance' },
// ...
],
},
{
displayName: 'Receipt Type',
name: 'receiptType',
type: 'string',
},
{
displayName: 'Body',
name: 'body',
type: 'json',
},
{
displayName: 'Tags',
name: 'tags',
type: 'string',
description: 'Comma-separated tags',
},
],
async execute() {
const scroll = this.getNodeParameter('scroll', 0);
const receiptType = this.getNodeParameter('receiptType', 0);
const body = this.getNodeParameter('body', 0);
const tags = this.getNodeParameter('tags', 0).split(',');
const receipt = await vaultmesh.emitReceipt({
scroll,
receiptType,
body,
tags,
});
return [{ json: receipt }];
},
}
```
## Error Handling
```python
@server.tool()
async def robust_tool(param: str) -> str:
"""Tool with comprehensive error handling."""
try:
caller = await get_caller_identity()
await verify_capability(caller, "required_cap")
result = await engine.operation(param)
return json.dumps(result.to_dict(), indent=2)
except PermissionError as e:
return json.dumps({
"error": "permission_denied",
"message": str(e),
"required_capability": "required_cap",
}, indent=2)
except RateLimitExceeded as e:
return json.dumps({
"error": "rate_limit_exceeded",
"message": str(e),
"retry_after_seconds": e.retry_after,
}, indent=2)
except ConstitutionalViolation as e:
return json.dumps({
"error": "constitutional_violation",
"violated_axioms": e.axioms,
"message": str(e),
}, indent=2)
except Exception as e:
logger.error(f"Tool error: {e}")
return json.dumps({
"error": "internal_error",
"message": "An unexpected error occurred",
}, indent=2)
```

docs/skill/OPERATIONS.md Normal file

@@ -0,0 +1,537 @@
# VaultMesh Operations Guide
## Daily Operations
### Morning Health Check
```bash
#!/bin/bash
# scripts/morning-check.sh
echo "=== VaultMesh Morning Health Check ==="
echo "Date: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
# 1. System health
echo -e "\n1. System Health"
vm-cli system health
# 2. Guardian status
echo -e "\n2. Guardian Status"
vm-guardian anchor-status
# 3. Phase status
echo -e "\n3. Current Phase"
vm-psi phase current
# 4. Overnight receipts
echo -e "\n4. Receipts (last 12h)"
vm-cli receipts count --since 12h
# 5. Any violations
echo -e "\n5. Governance Violations"
vm-gov violations list --since 24h --severity high,critical
# 6. Federation health
echo -e "\n6. Federation Status"
vm-federation health --all-peers
echo -e "\n=== Check Complete ==="
```
### Anchor Monitoring
```bash
# Check anchor status
vm-guardian anchor-status
# View anchor history
vm-guardian anchor-history --last 24h
# Trigger manual anchor if needed
vm-guardian anchor-now --wait
# Verify specific receipt
vm-guardian verify-receipt blake3:abc123... --scroll Compliance
```
### Receipt Queries
```bash
# Count receipts by scroll
vm-cli receipts count --by-scroll
# Search receipts
vm-cli receipts search --scroll Drills --from 2025-12-01 --to 2025-12-06
# Export receipts
vm-cli receipts export --scroll Compliance --format csv --output compliance.csv
# Verify integrity
vm-guardian verify-all --scroll all
```
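When `vm-cli` is unavailable, the per-scroll counts can be approximated straight from the ledger layout (`receipts/<scroll>/*.jsonl`); a minimal sketch, not the CLI's actual implementation:
```python
from pathlib import Path

def count_receipts_by_scroll(root: str = "receipts") -> dict:
    """Count JSONL lines under each scroll directory."""
    counts = {}
    for scroll_dir in sorted(Path(root).iterdir()):
        if not scroll_dir.is_dir():
            continue  # skip the ROOT.*.txt files stored alongside the scroll directories
        total = 0
        for jsonl in scroll_dir.glob("*.jsonl"):
            with open(jsonl) as f:
                total += sum(1 for line in f if line.strip())
        counts[scroll_dir.name] = total
    return counts

print(count_receipts_by_scroll())
```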
---
## Common Tasks
### Add New Node to Mesh
```bash
# 1. Create DID for new node
vm-identity did create --type node --id new-node-01
# 2. Issue node credential
vm-identity credential issue \
--type VaultMeshNodeCredential \
--subject did:vm:node:new-node-01 \
--issuer did:vm:node:portal-01
# 3. Add to mesh
vm-mesh node add \
--did did:vm:node:new-node-01 \
--endpoint https://new-node-01.vaultmesh.io \
--type infrastructure
# 4. Grant capabilities
vm-identity capability grant \
--subject did:vm:node:new-node-01 \
--capability storage,compute
# 5. Verify
vm-mesh node status new-node-01
```
### Key Rotation Ceremony
```bash
# 1. Initiate ceremony
vm-identity key-rotate \
--did did:vm:node:brick-01 \
--ceremony-type standard
# 2. Generate new keypair (on target node)
vm-identity key-generate --algorithm ed25519
# 3. Witness signatures (from other nodes)
vm-identity key-witness \
--ceremony ceremony-2025-12-001 \
--witness did:vm:node:brick-02
# 4. Publish new key
vm-identity key-publish --ceremony ceremony-2025-12-001
# 5. Verify propagation
vm-identity did resolve did:vm:node:brick-01
```
### Create Security Drill
```bash
# 1. Create drill from prompt
vm-drills create \
--prompt "Detect and respond to ransomware encryption" \
--severity high \
--skills detection-defense-ir,kubernetes-security
# 2. Review generated contract
vm-drills show drill-2025-12-001
# 3. Start execution
vm-drills start drill-2025-12-001
# 4. Complete stages
vm-drills complete-stage drill-2025-12-001 stage-1 \
--outputs cases/drills/drill-2025-12-001/stage-1/ \
--findings "Identified encryption patterns"
# 5. Seal drill
vm-drills seal drill-2025-12-001
```
### Initiate Transmutation
```bash
# 1. Start transmutation from incident
vm-psi transmute start \
--input INC-2025-12-001 \
--input-type security_incident \
--title "SSH Brute Force to Detection"
# 2. Extract IOCs
vm-psi transmute step transmute-2025-12-001 extract
# 3. Dissolve to standard format
vm-psi transmute step transmute-2025-12-001 dissolve
# 4. Purify (validate)
vm-psi transmute step transmute-2025-12-001 purify
# 5. Coagulate (generate rules)
vm-psi transmute step transmute-2025-12-001 coagulate
# 6. Seal
vm-psi transmute seal transmute-2025-12-001
```
---
## Troubleshooting
### Anchor Failures
**Symptom**: `vm-guardian anchor-status` shows failures
**Diagnosis**:
```bash
# Check guardian logs
kubectl logs -n vaultmesh -l app.kubernetes.io/name=guardian --tail=100
# Check anchor backend connectivity
vm-guardian test-backend ethereum
vm-guardian test-backend ots
# Check pending receipts
vm-guardian pending-receipts
```
**Common Causes**:
1. **Network issues**: Check Ethereum RPC connectivity
2. **Insufficient funds**: Check anchor wallet balance
3. **Rate limiting**: Check if backend is rate limiting
4. **Configuration**: Verify anchor config
**Resolution**:
```bash
# Retry anchor
vm-guardian anchor-now --backend ots --wait
# If Ethereum issues, switch to OTS temporarily
vm-guardian config set anchor.primary ots
# Check and top up wallet
vm-guardian wallet balance
vm-guardian wallet fund --amount 0.1
```
### Receipt Integrity Errors
**Symptom**: `verify-all` reports mismatches
**Diagnosis**:
```bash
# Identify affected scroll
vm-guardian verify-all --scroll all --verbose
# Check specific receipt
vm-guardian verify-receipt blake3:... --scroll Compliance --debug
# Compare computed vs stored root
vm-guardian compute-root --scroll Compliance
cat receipts/ROOT.compliance.txt
```
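At the file level, the per-receipt check amounts to re-hashing each JSONL line and comparing it with the recorded `root_hash`. The sketch below assumes the simple layout of the `emit_receipt` pattern (top-level `root_hash`, BLAKE3 over canonical JSON); Guardian's real verification also walks the Merkle proof, so treat this as first-pass triage only.
```python
# Triage sketch only: assumes root_hash = blake3(canonical JSON without root_hash).
import json
from pathlib import Path
from blake3 import blake3  # third-party 'blake3' package

def find_bad_lines(jsonl_path: str) -> list:
    """Return line numbers whose recorded root_hash does not match a recomputed hash."""
    bad = []
    for lineno, line in enumerate(Path(jsonl_path).read_text().splitlines(), start=1):
        if not line.strip():
            continue
        receipt = json.loads(line)
        recorded = receipt.pop("root_hash", None)
        recomputed = "blake3:" + blake3(json.dumps(receipt, sort_keys=True).encode()).hexdigest()
        if recorded != recomputed:
            bad.append(lineno)
    return bad

print(find_bad_lines("receipts/compliance/oracle_answers.jsonl"))
```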
**Common Causes**:
1. **Corrupted JSONL**: File system issues
2. **Incomplete write**: Process interrupted
3. **Manual modification**: Violation of AXIOM-001
**Resolution**:
```bash
# If corruption detected, restore from backup
vm-cli backup restore --backup-id backup-2025-12-05 --scroll Compliance
# Recompute root after restore
vm-guardian recompute-root --scroll Compliance
# Trigger anchor to seal restored state
vm-guardian anchor-now --scroll Compliance --wait
```
### Node Connectivity Issues
**Symptom**: Node showing unhealthy in mesh
**Diagnosis**:
```bash
# Check node status
vm-mesh node status brick-02
# Test connectivity
vm-mesh ping brick-02
# Check routes
vm-mesh routes list --node brick-02
# Check node logs
kubectl logs -n vaultmesh pod/brick-02 --tail=100
```
**Common Causes**:
1. **Network partition**: Firewall/network issues
2. **Resource exhaustion**: Node overloaded
3. **Certificate expiry**: TLS cert expired
4. **Process crash**: Service died
**Resolution**:
```bash
# Restart node pod
kubectl rollout restart deployment/brick-02 -n vaultmesh
# If cert expired
vm-identity cert-renew --node brick-02
# If persistent issues, remove and re-add
vm-mesh node remove brick-02 --force
vm-mesh node add --did did:vm:node:brick-02 --endpoint https://...
```
### Oracle Query Failures
**Symptom**: Oracle returning errors
**Diagnosis**:
```bash
# Check oracle health
vm-oracle health
# Check LLM connectivity
vm-oracle test-llm anthropic
vm-oracle test-llm openai
# Check corpus status
vm-oracle corpus status
# Check logs
kubectl logs -n vaultmesh -l app.kubernetes.io/name=oracle --tail=100
```
**Common Causes**:
1. **LLM API issues**: Rate limiting, key expiry
2. **Corpus empty**: Documents not loaded
3. **Index corruption**: Vector index issues
4. **Memory exhaustion**: OOM conditions
**Resolution**:
```bash
# Rotate API key if expired
kubectl create secret generic oracle-llm-credentials \
--from-literal=anthropic-key=NEW_KEY \
-n vaultmesh --dry-run=client -o yaml | kubectl apply -f -
# Reload corpus
vm-oracle corpus reload
# Rebuild index
vm-oracle corpus reindex
# Restart oracle
kubectl rollout restart deployment/vaultmesh-oracle -n vaultmesh
```
### Phase Stuck in Nigredo
**Symptom**: System remains in Nigredo for an extended period
**Diagnosis**:
```bash
# Check phase details
vm-psi phase current --verbose
# Check active incidents
vm-offsec incidents list --status open
# Check for blocking issues
vm-psi blockers
# Review phase history
vm-psi phase history --last 7d
```
**Common Causes**:
1. **Unresolved incident**: Active security issue
2. **Failed transmutation**: Stuck in process
3. **Missing witness**: Transmutation waiting for signature
4. **Metric threshold**: Health metrics below threshold
**Resolution**:
```bash
# Close incident if resolved
vm-offsec incident close INC-2025-12-001 \
--resolution "Threat neutralized, systems restored"
# Complete stuck transmutation
vm-psi transmute force-complete transmute-2025-12-001
# Manual phase transition (requires justification)
vm-psi phase transition albedo \
--reason "Incident resolved, metrics stable" \
--evidence evidence-report.md
```
### Constitutional Violation Detected
**Symptom**: `gov_violation` alert fired
**Diagnosis**:
```bash
# View violation details
vm-gov violations show VIOL-2025-12-001
# Check what was attempted
vm-gov violations evidence VIOL-2025-12-001
# Review enforcement action
vm-gov enforcement show ENF-2025-12-001
```
**Common Causes**:
1. **Agent misconfiguration**: Automation tried unauthorized action
2. **Capability expiry**: Token expired mid-operation
3. **Bug in engine**: Logic error attempting violation
4. **Attack attempt**: Malicious action blocked
**Resolution**:
```bash
# If false positive, dismiss
vm-gov violations review VIOL-2025-12-001 \
--decision dismiss \
--reason "False positive due to timing issue"
# If real, review and uphold enforcement
vm-gov enforcement review ENF-2025-12-001 --decision uphold
# Fix underlying issue
# (depends on specific violation)
```
---
## Backup & Recovery
### Scheduled Backups
```bash
# Full backup
vm-cli backup create --type full
# Incremental backup
vm-cli backup create --type incremental
# List backups
vm-cli backup list
# Verify backup integrity
vm-cli backup verify backup-2025-12-05
```
### Recovery Procedures
```bash
# 1. Stop services
kubectl scale deployment -n vaultmesh --replicas=0 --all
# 2. Restore from backup
vm-cli backup restore --backup-id backup-2025-12-05
# 3. Verify integrity
vm-guardian verify-all --scroll all
# 4. Restart services
kubectl scale deployment -n vaultmesh --replicas=2 \
vaultmesh-portal vaultmesh-oracle
kubectl scale deployment -n vaultmesh --replicas=1 vaultmesh-guardian
# 5. Trigger anchor to seal restored state
vm-guardian anchor-now --wait
```
### Disaster Recovery
```bash
# Full rebuild from backup
./scripts/disaster-recovery.sh --backup backup-2025-12-05
# Verify federation peers
vm-federation verify-all
# Re-establish federation trust if needed
vm-federation re-establish --peer vaultmesh-berlin
```
---
## Performance Tuning
### Receipt Write Optimization
```toml
# config.toml
[receipts]
# Batch writes for better throughput
batch_size = 100
batch_timeout_ms = 100
# Compression
compression = "zstd"
compression_level = 3
# Index configuration
index_cache_size_mb = 512
```
### Database Tuning
```sql
-- Vacuum and analyze
VACUUM ANALYZE receipts;
-- Check slow queries
SELECT query, calls, mean_time
FROM pg_stat_statements
ORDER BY mean_time DESC
LIMIT 10;
-- Index usage
SELECT schemaname, tablename, indexname, idx_scan
FROM pg_stat_user_indexes
ORDER BY idx_scan;
```
### Memory Optimization
```bash
# Check memory usage
kubectl top pods -n vaultmesh
# Adjust limits if needed
kubectl patch deployment vaultmesh-oracle -n vaultmesh \
-p '{"spec":{"template":{"spec":{"containers":[{"name":"oracle","resources":{"limits":{"memory":"8Gi"}}}]}}}}'
```
---
## Monitoring Dashboards
### Key Metrics to Watch
| Metric | Warning | Critical |
|--------|---------|----------|
| `vaultmesh_guardian_last_anchor_age` | > 2h | > 4h |
| `vaultmesh_receipt_write_errors_total` | > 0 | > 10/min |
| `vaultmesh_mesh_node_unhealthy` | any | multiple |
| `vaultmesh_oracle_latency_p95` | > 30s | > 60s |
| `vaultmesh_governance_violations` | any | critical |
| `vaultmesh_psi_phase` | nigredo > 24h | nigredo > 72h |
### Alert Response
```bash
# Acknowledge alert
vm-alerts ack ALERT-2025-12-001
# Silence alert (for maintenance)
vm-alerts silence --matcher 'alertname="AnchorDelayed"' --duration 2h
# View active alerts
vm-alerts list --active
```

docs/skill/PROTOCOLS.md Normal file

@@ -0,0 +1,605 @@
# VaultMesh Federation & Governance Protocols
## Federation Protocol
### Trust Establishment Flow
```
┌──────────────┐                ┌──────────────┐
│    MESH-A    │                │    MESH-B    │
│   (Dublin)   │                │   (Berlin)   │
└──────┬───────┘                └──────┬───────┘
       │                               │
       │  1. Discovery                 │
       │  GET /federation/discovery    │
       │──────────────────────────────►│
       │                               │
       │  2. Proposal                  │
       │  POST /federation/proposals   │
       │──────────────────────────────►│
       │                               │
       │  3. Counter/Accept            │
       │◄──────────────────────────────│
       │                               │
       │  4. Mutual Signature          │
       │◄─────────────────────────────►│
       │                               │
       │  5. Begin Witness Cycle       │
       │◄─────────────────────────────►│
       │                               │
```
### Trust Levels
| Level | Name | Capabilities |
|-------|------|--------------|
| 0 | `isolated` | No federation |
| 1 | `observe` | Read-only witness, public receipts only |
| 2 | `verify` | Mutual verification, receipt sampling |
| 3 | `attest` | Cross-attestation, shared roots |
| 4 | `integrate` | Shared scrolls, joint governance |
### Discovery Record
```json
{
"mesh_id": "did:vm:mesh:vaultmesh-dublin",
"display_name": "VaultMesh Dublin",
"endpoints": {
"federation": "https://federation.vaultmesh-dublin.io",
"verification": "https://verify.vaultmesh-dublin.io"
},
"public_key": "ed25519:z6Mk...",
"scrolls_available": ["Compliance", "Drills"],
"trust_policy": {
"accepts_proposals": true,
"min_trust_level": 1,
"requires_mutual": true
},
"attestations": []
}
```
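Discovery is a plain HTTPS GET against the endpoint listed in the API table below. A minimal client sketch using only the standard library, assuming the response matches the record above:
```python
import json
import urllib.request

def fetch_discovery(base_url: str) -> dict:
    """GET <peer>/federation/discovery and sanity-check the record."""
    with urllib.request.urlopen(f"{base_url}/federation/discovery", timeout=10) as resp:
        record = json.load(resp)
    for field in ("mesh_id", "endpoints", "public_key", "trust_policy"):
        if field not in record:
            raise ValueError(f"discovery record missing field: {field}")
    return record

record = fetch_discovery("https://federation.vaultmesh-dublin.io")
print(record["mesh_id"], record["trust_policy"]["min_trust_level"])
```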
### Trust Proposal
```json
{
"proposal_id": "fed-proposal-2025-12-06-001",
"proposer": "did:vm:mesh:vaultmesh-dublin",
"target": "did:vm:mesh:vaultmesh-berlin",
"proposed_at": "2025-12-06T10:00:00Z",
"expires_at": "2025-12-13T10:00:00Z",
"proposed_trust_level": 2,
"proposed_terms": {
"scrolls_to_share": ["Compliance"],
"verification_frequency": "hourly",
"retention_period_days": 365,
"data_jurisdiction": "EU",
"audit_rights": true
},
"proposer_attestations": {
"identity_proof": "...",
"compliance_credentials": ["ISO27001", "SOC2"]
},
"signature": "z58D..."
}
```
### Federation Agreement
```json
{
"agreement_id": "fed-agreement-2025-12-06-001",
"parties": [
"did:vm:mesh:vaultmesh-dublin",
"did:vm:mesh:vaultmesh-berlin"
],
"established_at": "2025-12-06T16:00:00Z",
"trust_level": 2,
"terms": {
"scrolls_shared": ["Compliance", "Drills"],
"verification_frequency": "daily",
"retention_period_days": 180,
"data_jurisdiction": "EU",
"audit_rights": true,
"dispute_resolution": "arbitration_zurich"
},
"key_exchange": {
"dublin_federation_key": "ed25519:z6MkDublin...",
"berlin_federation_key": "ed25519:z6MkBerlin..."
},
"signatures": {
"did:vm:mesh:vaultmesh-dublin": {
"signed_at": "2025-12-06T15:30:00Z",
"signature": "z58D..."
},
"did:vm:mesh:vaultmesh-berlin": {
"signed_at": "2025-12-06T16:00:00Z",
"signature": "z47C..."
}
},
"agreement_hash": "blake3:abc123..."
}
```
### Witness Protocol
```
Anchor Completes → Notify Peer → Peer Verifies → Witness Receipt
```
**Witness Receipt**:
```json
{
"type": "fed_witness_event",
"witness_id": "witness-2025-12-06-001",
"witnessed_mesh": "did:vm:mesh:vaultmesh-dublin",
"witnessing_mesh": "did:vm:mesh:vaultmesh-berlin",
"timestamp": "2025-12-06T12:05:00Z",
"scroll": "Compliance",
"witnessed_root": "blake3:aaa111...",
"witnessed_anchor": {
"backend": "ethereum",
"tx_hash": "0x123...",
"block_number": 12345678
},
"verification_method": "anchor_proof_validation",
"verification_result": "verified",
"samples_checked": 5,
"discrepancies": [],
"witness_signature": "z47C..."
}
```
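At its simplest, the verification step compares the witnessed root against what the peer currently publishes at `/federation/roots` (see the API table below). A minimal sketch that assumes the endpoint returns `{"roots": {"<scroll>": "blake3:..."}}` and skips the receipt-sampling step:
```python
import json
import urllib.request

def witness_check(peer_base: str, scroll: str, witnessed_root: str) -> str:
    """Compare a witnessed Merkle root with the peer's currently published root."""
    with urllib.request.urlopen(f"{peer_base}/federation/roots", timeout=10) as resp:
        published = json.load(resp)["roots"]  # response shape is an assumption
    return "verified" if published.get(scroll) == witnessed_root else "discrepancy"

result = witness_check("https://federation.vaultmesh-dublin.io", "Compliance", "blake3:aaa111...")
```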
### Cross-Anchor
At trust level 3+, meshes include each other's roots:
```json
{
"type": "fed_cross_anchor",
"anchoring_mesh": "did:vm:mesh:vaultmesh-berlin",
"anchored_mesh": "did:vm:mesh:vaultmesh-dublin",
"dublin_roots_included": {
"Compliance": "blake3:aaa111...",
"Drills": "blake3:bbb222..."
},
"combined_root": "blake3:ccc333...",
"anchor_proof": {
"backend": "bitcoin",
"tx_hash": "abc123..."
}
}
```
### Federation API Endpoints
| Endpoint | Method | Purpose |
|----------|--------|---------|
| `/federation/discovery` | GET | Get mesh discovery record |
| `/federation/proposals` | POST | Submit trust proposal |
| `/federation/proposals/{id}` | GET, PUT | View/respond to proposal |
| `/federation/agreements` | GET | List active agreements |
| `/federation/agreements/{id}` | GET, DELETE | View/revoke agreement |
| `/federation/notify` | POST | Notify of new anchor |
| `/federation/witness` | POST | Submit witness attestation |
| `/federation/roots` | GET | Get current Merkle roots |
| `/federation/receipts/{scroll}` | GET | Fetch receipt samples |
| `/federation/verify` | POST | Request receipt verification |
### CLI Commands
```bash
# Discovery
vm-federation discover --mesh vaultmesh-berlin.io
vm-federation list-known
# Proposals
vm-federation propose \
--target did:vm:mesh:vaultmesh-berlin \
--trust-level 2 \
--scrolls Compliance,Drills
vm-federation proposals list
vm-federation proposals accept fed-proposal-001
vm-federation proposals reject fed-proposal-001 --reason "..."
# Agreements
vm-federation agreements list
vm-federation agreements revoke fed-agreement-001 --notice-days 30
# Verification
vm-federation verify --mesh vaultmesh-berlin --scroll Compliance
vm-federation witness-history --mesh vaultmesh-berlin --last 30d
# Status
vm-federation status
vm-federation health --all-peers
```
---
## Constitutional Governance
### Hierarchy
```
┌──────────────────────────────────────────────────┐
│                 IMMUTABLE AXIOMS                  │
│            (Cannot be changed, ever)              │
└──────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────┐
│             CONSTITUTIONAL ARTICLES               │
│  (Amendable with supermajority + ratification)    │
└──────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────┐
│                 STATUTORY RULES                   │
│      (Changeable with standard procedures)        │
└──────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────┐
│                 EXECUTIVE ORDERS                  │
│          (Issued by authorized actors)            │
└──────────────────────────────────────────────────┘
```
### Immutable Axioms
| ID | Name | Statement |
|----|------|-----------|
| AXIOM-001 | Append-Only Receipts | Receipts, once written, shall never be modified or deleted |
| AXIOM-002 | Cryptographic Integrity | All receipts include cryptographic hashes |
| AXIOM-003 | Universal Receipting | All significant changes produce receipts |
| AXIOM-004 | Constitutional Supremacy | No action may violate the Constitution |
| AXIOM-005 | Axiom Immutability | These axioms cannot be amended |
### Constitutional Articles
| Article | Name | Content |
|---------|------|---------|
| I | Governance Structure | Sovereign authority, engine authorities, agent delegation |
| II | Amendment Procedure | Proposal, deliberation, ratification |
| III | Engine Governance | Engine registry, boundaries, lifecycle |
| IV | Rights and Protections | Audit rights, data sovereignty, due process |
| V | Federation | Authority, limits, termination |
| VI | Emergency Powers | Declaration, powers, duration |
### Amendment Workflow
```
PROPOSAL → DELIBERATION (7+ days) → VOTING → RATIFICATION → ACTIVATION
                                      ↘ REJECTED → Archive
```
### Proposal Receipt
```json
{
"type": "gov_proposal",
"proposal_id": "PROP-2025-12-001",
"proposal_type": "amendment",
"title": "Add Data Retention Article",
"author": "did:vm:human:sovereign",
"submitted_at": "2025-12-06T10:00:00Z",
"deliberation_ends": "2025-12-13T10:00:00Z",
"content": {
"target": "ARTICLE-VII",
"action": "add",
"text": {
"id": "ARTICLE-VII",
"name": "Data Retention",
"sections": [...]
}
},
"rationale": "Compliance with EU regulations",
"status": "deliberation"
}
```
### Vote Receipt
```json
{
"type": "gov_vote",
"vote_id": "VOTE-2025-12-001-sovereign",
"proposal_id": "PROP-2025-12-001",
"voter": "did:vm:human:sovereign",
"voted_at": "2025-12-14T10:00:00Z",
"vote": "approve",
"weight": 1.0,
"comments": "Essential for compliance",
"signature": "z58D..."
}
```
### Ratification Receipt
```json
{
"type": "gov_ratification",
"ratification_id": "RAT-2025-12-001",
"proposal_id": "PROP-2025-12-001",
"ratified_at": "2025-12-14T12:00:00Z",
"ratified_by": "did:vm:human:sovereign",
"vote_summary": {
"approve": 1,
"reject": 0,
"abstain": 0
},
"quorum_met": true,
"constitution_version_before": "1.0.0",
"constitution_version_after": "1.1.0"
}
```
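The `vote_summary` and `quorum_met` fields can be derived mechanically from the `gov_vote` receipts on the Governance scroll. A minimal sketch; the quorum threshold and weighting rules here are illustrative placeholders for whatever the Constitution actually specifies.
```python
from collections import Counter

def tally_votes(vote_receipts: list, eligible_voters: int, quorum: float = 0.5) -> dict:
    """Build the ratification summary from gov_vote receipt bodies."""
    summary = Counter()
    voters = set()
    for vote in vote_receipts:
        summary[vote["vote"]] += vote.get("weight", 1.0)
        voters.add(vote["voter"])
    return {
        "vote_summary": dict(summary),
        "quorum_met": len(voters) / max(eligible_voters, 1) >= quorum,
    }
```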
### Amendment Receipt
```json
{
"type": "gov_amendment",
"amendment_id": "AMEND-2025-12-001",
"proposal_id": "PROP-2025-12-001",
"effective_at": "2025-12-14T14:00:00Z",
"anchor_proof": {
"backend": "ethereum",
"tx_hash": "0x123..."
},
"constitution_hash_before": "blake3:const_v1.0...",
"constitution_hash_after": "blake3:const_v1.1..."
}
```
### Executive Orders
For operational decisions that do not require a full constitutional amendment:
```json
{
"type": "gov_executive_order",
"order_id": "EO-2025-12-001",
"title": "Temporary Rate Limit Increase",
"issued_by": "did:vm:human:sovereign",
"issued_at": "2025-12-06T15:00:00Z",
"authority": "ARTICLE-I.1",
"order_type": "parameter_change",
"content": {
"parameter": "guardian.anchor_rate_limit",
"old_value": "100/day",
"new_value": "500/day"
},
"duration": {
"type": "temporary",
"expires_at": "2026-01-01T00:00:00Z"
}
}
```
### Emergency Declaration
```json
{
"type": "gov_executive_order",
"order_id": "EO-2025-12-002",
"title": "Security Emergency",
"issued_by": "did:vm:human:sovereign",
"authority": "ARTICLE-VI.1",
"order_type": "emergency",
"content": {
"emergency_type": "security_incident",
"threat_description": "Active intrusion on BRICK-02",
"powers_invoked": [
"Suspend authentication delays",
"Enhanced logging",
"Immediate capability revocation"
]
},
"duration": {
"type": "emergency",
"expires_at": "2025-12-09T03:50:00Z",
"renewable": true
}
}
```
### Violation Detection
```json
{
"type": "gov_violation",
"violation_id": "VIOL-2025-12-001",
"detected_at": "2025-12-06T16:00:00Z",
"detected_by": "engine:guardian",
"violation_type": "unauthorized_action",
"severity": "high",
"details": {
"actor": "did:vm:agent:automation-01",
"action_attempted": "modify_receipt",
"rule_violated": "AXIOM-001",
"action_result": "blocked"
},
"evidence": {
"log_entries": ["..."],
"request_hash": "blake3:..."
}
}
```
### Enforcement Action
```json
{
"type": "gov_enforcement",
"enforcement_id": "ENF-2025-12-001",
"violation_id": "VIOL-2025-12-001",
"enforced_at": "2025-12-06T16:05:00Z",
"enforcement_type": "capability_suspension",
"target": "did:vm:agent:automation-01",
"action_taken": {
"capability_suspended": "write",
"scope": "all_scrolls",
"duration": "pending_review"
},
"review_required": true,
"review_deadline": "2025-12-07T16:05:00Z"
}
```
### CLI Commands
```bash
# Constitution
vm-gov constitution show
vm-gov constitution version
vm-gov constitution diff v1.0.0 v1.1.0
# Proposals
vm-gov proposal create --type amendment --file proposal.json
vm-gov proposal list --status deliberation
vm-gov proposal show PROP-2025-12-001
# Voting
vm-gov vote PROP-2025-12-001 --vote approve
vm-gov vote PROP-2025-12-001 --vote reject --reason "..."
# Ratification
vm-gov ratify PROP-2025-12-001
# Executive Orders
vm-gov order create --type parameter_change --file order.json
vm-gov order list --active
vm-gov order revoke EO-2025-12-001
# Emergencies
vm-gov emergency declare --type security_incident --description "..."
vm-gov emergency status
vm-gov emergency extend --hours 24
vm-gov emergency end
# Violations
vm-gov violations list --severity high,critical
vm-gov violations review VIOL-2025-12-001 --decision dismiss
# Enforcement
vm-gov enforcement list --pending-review
vm-gov enforcement review ENF-2025-12-001 --decision uphold
```
---
## Engine Registry
All engines must be registered in the Constitution:
```json
{
"registered_engines": [
{
"engine_id": "engine:drills",
"name": "Security Drills",
"scroll": "Drills",
"authority": "Security training and exercise management",
"status": "active"
},
{
"engine_id": "engine:oracle",
"name": "Compliance Oracle",
"scroll": "Compliance",
"authority": "Compliance question answering",
"status": "active"
},
{
"engine_id": "engine:guardian",
"name": "Guardian",
"scroll": "Guardian",
"authority": "Anchoring and sentinel",
"status": "active"
},
{
"engine_id": "engine:treasury",
"name": "Treasury",
"scroll": "Treasury",
"authority": "Financial tracking",
"status": "active"
},
{
"engine_id": "engine:mesh",
"name": "Mesh",
"scroll": "Mesh",
"authority": "Topology management",
"status": "active"
},
{
"engine_id": "engine:offsec",
"name": "OffSec",
"scroll": "OffSec",
"authority": "Security operations",
"status": "active"
},
{
"engine_id": "engine:identity",
"name": "Identity",
"scroll": "Identity",
"authority": "DID and capability management",
"status": "active"
},
{
"engine_id": "engine:observability",
"name": "Observability",
"scroll": "Observability",
"authority": "Telemetry monitoring",
"status": "active"
},
{
"engine_id": "engine:automation",
"name": "Automation",
"scroll": "Automation",
"authority": "Workflow execution",
"status": "active"
},
{
"engine_id": "engine:psi",
"name": "Ψ-Field",
"scroll": "PsiField",
"authority": "Consciousness tracking",
"status": "active"
},
{
"engine_id": "engine:federation",
"name": "Federation",
"scroll": "Federation",
"authority": "Cross-mesh trust",
"status": "active"
},
{
"engine_id": "engine:governance",
"name": "Governance",
"scroll": "Governance",
"authority": "Constitutional enforcement",
"status": "active"
}
]
}
```
### Adding New Engines
New engines require constitutional amendment:
1. Draft proposal with engine specification
2. 7-day deliberation period
3. Sovereign approval
4. Anchor confirmation activates engine
```bash
vm-gov proposal create \
--type add_engine \
--engine-id engine:analytics \
--name "Analytics" \
--scroll Analytics \
--authority "Data analysis and insights"
```


@@ -0,0 +1,196 @@
# VaultMesh Quick Reference
## Eternal Pattern
```
Intent → Engine → Receipt → Scroll → Anchor
```
## Three Layers
| Layer | Components | Artifacts |
|-------|------------|-----------|
| L1 Experience | CLI, UI, MCP | Commands, requests |
| L2 Engine | Domain logic | contract.json, state.json |
| L3 Ledger | Receipts, anchors | JSONL, ROOT.*.txt |
## Scrolls
| Scroll | Path | Root File |
|--------|------|-----------|
| Drills | `receipts/drills/` | `ROOT.drills.txt` |
| Compliance | `receipts/compliance/` | `ROOT.compliance.txt` |
| Guardian | `receipts/guardian/` | `ROOT.guardian.txt` |
| Treasury | `receipts/treasury/` | `ROOT.treasury.txt` |
| Mesh | `receipts/mesh/` | `ROOT.mesh.txt` |
| OffSec | `receipts/offsec/` | `ROOT.offsec.txt` |
| Identity | `receipts/identity/` | `ROOT.identity.txt` |
| Observability | `receipts/observability/` | `ROOT.observability.txt` |
| Automation | `receipts/automation/` | `ROOT.automation.txt` |
| PsiField | `receipts/psi/` | `ROOT.psi.txt` |
| Federation | `receipts/federation/` | `ROOT.federation.txt` |
| Governance | `receipts/governance/` | `ROOT.governance.txt` |
## DIDs
```
did:vm:<type>:<identifier>
node → did:vm:node:brick-01
human → did:vm:human:sovereign
agent → did:vm:agent:copilot-01
service → did:vm:service:oracle
mesh → did:vm:mesh:vaultmesh-dublin
```
## Phases
| Symbol | Phase | State |
|--------|-------|-------|
| 🜁 | Nigredo | Crisis |
| 🜄 | Albedo | Recovery |
| 🜆 | Citrinitas | Optimization |
| 🜂 | Rubedo | Integration |
## Axioms
1. Receipts are append-only
2. Hashes are cryptographic
3. All changes produce receipts
4. Constitution is supreme
5. Axioms are immutable
## CLI Cheatsheet
```bash
# Guardian
vm-guardian anchor-status
vm-guardian anchor-now --wait
vm-guardian verify-receipt <hash> --scroll <scroll>
# Identity
vm-identity did create --type node --id <id>
vm-identity capability grant --subject <did> --capability <cap>
vm-identity whoami
# Mesh
vm-mesh node list
vm-mesh node status <id>
vm-mesh topology
# Oracle
vm-oracle query "What are the GDPR requirements?"
vm-oracle corpus status
# Drills
vm-drills create --prompt "<scenario>"
vm-drills status <drill-id>
# Psi
vm-psi phase current
vm-psi transmute start --input <ref>
vm-psi opus status
# Treasury
vm-treasury balance
vm-treasury debit --from <acct> --amount <amt>
# Governance
vm-gov constitution version
vm-gov violations list
vm-gov emergency status
# Federation
vm-federation status
vm-federation verify --mesh <peer>
# System
vm-cli system health
vm-cli receipts count --by-scroll
```
## Receipt Structure
```json
{
"schema_version": "2.0.0",
"type": "<scroll>_<operation>",
"timestamp": "ISO8601",
"header": {
"root_hash": "blake3:...",
"tags": [],
"previous_hash": "blake3:..."
},
"meta": {
"scroll": "ScrollName",
"sequence": 0,
"anchor_epoch": null,
"proof_path": null
},
"body": {}
}
```
## Capabilities
| Capability | Description |
|------------|-------------|
| `anchor` | Submit to anchor backends |
| `storage` | Store receipts/artifacts |
| `compute` | Execute drills/agents |
| `oracle` | Issue compliance answers |
| `admin` | Grant/revoke capabilities |
| `federate` | Establish cross-mesh trust |
## Trust Levels
| Level | Name | Access |
|-------|------|--------|
| 0 | isolated | None |
| 1 | observe | Read-only |
| 2 | verify | Mutual verification |
| 3 | attest | Cross-attestation |
| 4 | integrate | Shared scrolls |
## Severity Levels
| Level | Description |
|-------|-------------|
| critical | Active breach |
| high | Confirmed attack |
| medium | Suspicious activity |
| low | Anomaly/info |
## Key Ports
| Service | HTTP | Metrics |
|---------|------|---------|
| Portal | 8080 | 9090 |
| Guardian | 8081 | 9090 |
| Oracle | 8082 | 9090 |
| MCP | 8083 | - |
## Health Endpoints
```
GET /health/live → Liveness
GET /health/ready → Readiness
GET /metrics → Prometheus
```
## Transmutation Steps
```
Extract → Dissolve → Purify → Coagulate → Seal
```
## Design Gate
- [ ] Clear entrypoint?
- [ ] Contract produced?
- [ ] State object?
- [ ] Receipts emitted?
- [ ] Append-only JSONL?
- [ ] Merkle root?
- [ ] Guardian anchor path?
- [ ] Query tool?

docs/skill/SKILL.md Normal file

@@ -0,0 +1,338 @@
# VaultMesh Architect Skill
> *Building Earth's Civilization Ledger — one receipt at a time.*
## Overview
This skill enables Claude to architect, develop, and operate VaultMesh — a sovereign digital infrastructure system that combines cryptographic proofs, blockchain anchoring, and AI governance to create durable, auditable civilization-scale evidence.
## When to Use This Skill
Activate this skill when:
- Designing or implementing VaultMesh engines or subsystems
- Creating receipts, scrolls, or anchor cycles
- Working with the Eternal Pattern architecture
- Implementing federation, governance, or identity systems
- Building MCP server integrations
- Deploying or operating VaultMesh infrastructure
- Writing code that interacts with the Civilization Ledger
## Core Architecture: The Eternal Pattern
Every VaultMesh subsystem follows this arc:
```
Real-world intent → Engine → Structured JSON → Receipt → Scroll → Guardian Anchor
```
### Three-Layer Stack
```
┌──────────────────────────────────────────────┐
│            L1 — Experience Layer              │
│              (Humans & Agents)                │
│       • CLI / UI / MCP tools / agents         │
└──────────────────────────────────────────────┘
┌──────────────────────────────────────────────┐
│              L2 — Engine Layer                │
│         (Domain Engines & Contracts)          │
│   • contract.json → state.json → outputs/     │
└──────────────────────────────────────────────┘
┌──────────────────────────────────────────────┐
│              L3 — Ledger Layer                │
│   (Receipts, Scrolls, ProofChain, Anchors)    │
│    • JSONL files → Merkle roots → anchors     │
└──────────────────────────────────────────────┘
```
## Registered Engines (Scrolls)
| Engine | Scroll | Purpose |
|--------|--------|---------|
| Drills | `Drills` | Security training and exercises |
| Oracle | `Compliance` | Regulatory compliance Q&A |
| Guardian | `Guardian` | Anchoring and sentinel |
| Treasury | `Treasury` | Financial tracking and settlement |
| Mesh | `Mesh` | Federation topology |
| OffSec | `OffSec` | Security operations and IR |
| Identity | `Identity` | DIDs, credentials, capabilities |
| Observability | `Observability` | Telemetry events |
| Automation | `Automation` | Workflow execution |
| Ψ-Field | `PsiField` | Alchemical consciousness |
| Federation | `Federation` | Cross-mesh trust |
| Governance | `Governance` | Constitutional enforcement |
## File Structure
```
vaultmesh/
├── receipts/ # Receipt storage
│ ├── drills/
│ │ └── drill_runs.jsonl
│ ├── compliance/
│ │ └── oracle_answers.jsonl
│ ├── treasury/
│ │ └── treasury_events.jsonl
│ ├── mesh/
│ │ └── mesh_events.jsonl
│ ├── [scroll]/
│ │ └── [scroll]_events.jsonl
│ ├── ROOT.drills.txt
│ ├── ROOT.compliance.txt
│ └── ROOT.[scroll].txt
├── cases/ # Artifact storage
│ ├── drills/[drill-id]/
│ ├── treasury/[settlement-id]/
│ ├── offsec/[incident-id]/
│ └── psi/[transmutation-id]/
├── corpus/ # Oracle documents
└── config/ # Configuration
```
## Receipt Schema (v2)
```json
{
"schema_version": "2.0.0",
"type": "receipt_type_name",
"timestamp": "2025-12-06T12:00:00Z",
"header": {
"root_hash": "blake3:abc123...",
"tags": ["tag1", "tag2"],
"previous_hash": "blake3:prev..."
},
"meta": {
"scroll": "ScrollName",
"sequence": 42,
"anchor_epoch": 7,
"proof_path": "cases/[scroll]/[id]/PROOF.json"
},
"body": {
// Domain-specific fields
}
}
```
## DID Format
```
did:vm:<type>:<identifier>
Types:
- node → did:vm:node:brick-01
- human → did:vm:human:sovereign
- agent → did:vm:agent:copilot-01
- service → did:vm:service:oracle-openai
- mesh → did:vm:mesh:vaultmesh-dublin
```
## Alchemical Phases
| Phase | Symbol | Meaning | Operational State |
|-------|--------|---------|-------------------|
| Nigredo | 🜁 | Blackening | Crisis, incident |
| Albedo | 🜄 | Whitening | Recovery, stabilization |
| Citrinitas | 🜆 | Yellowing | Optimization, new capability |
| Rubedo | 🜂 | Reddening | Integration, maturity |
## Constitutional Axioms (Immutable)
1. **AXIOM-001**: Receipts are append-only
2. **AXIOM-002**: Hashes are cryptographically verified
3. **AXIOM-003**: All significant changes produce receipts
4. **AXIOM-004**: Constitution is supreme
5. **AXIOM-005**: Axioms cannot be amended
## Design Gate Checklist
When creating any new feature, verify:
### Experience Layer (L1)
- [ ] Clear entrypoint (CLI, MCP tool, HTTP route)?
- [ ] Intent clearly represented in structured form?
### Engine Layer (L2)
- [ ] Produces a contract (explicit or implicit)?
- [ ] State object tracking progress/outcomes?
- [ ] Actions and outputs inspectable (JSON + files)?
### Ledger Layer (L3)
- [ ] Emits receipt for important operations?
- [ ] Receipts written to append-only JSONL?
- [ ] JSONL covered by Merkle root (ROOT.[scroll].txt)?
- [ ] Guardian can anchor the relevant root?
- [ ] Query tool exists for this scroll?
## Code Patterns
### Rust Receipt Emission
```rust
use chrono::Utc;
use vaultmesh_core::{Receipt, ReceiptHeader, ReceiptMeta, Scroll, VmHash};
let receipt_body = MyReceiptBody { /* ... */ };
let root_hash = VmHash::from_json(&receipt_body)?;
let receipt = Receipt {
header: ReceiptHeader {
receipt_type: "my_receipt_type".to_string(),
timestamp: Utc::now(),
root_hash: root_hash.as_str().to_string(),
tags: vec!["tag1".to_string()],
},
meta: ReceiptMeta {
scroll: Scroll::MyScroll,
sequence: 0, // Set by receipt store
anchor_epoch: None,
proof_path: None,
},
body: receipt_body,
};
```
### Python Receipt Emission
```python
def emit_receipt(scroll: str, receipt_type: str, body: dict, tags: list[str]) -> dict:
    import json
    from blake3 import blake3  # third-party 'blake3' package (hashlib has no BLAKE3)
    from datetime import datetime
    from pathlib import Path
    receipt = {
        "type": receipt_type,
        "timestamp": datetime.utcnow().isoformat() + "Z",
        "tags": tags,
        **body
    }
    # Compute root hash over the canonical JSON form
    receipt_json = json.dumps(receipt, sort_keys=True)
    root_hash = f"blake3:{blake3(receipt_json.encode()).hexdigest()}"
    receipt["root_hash"] = root_hash
    # Append to scroll (append-only JSONL, per AXIOM-001)
    scroll_path = Path(f"receipts/{scroll}/{scroll}_events.jsonl")
    scroll_path.parent.mkdir(parents=True, exist_ok=True)
    with open(scroll_path, "a") as f:
        f.write(json.dumps(receipt) + "\n")
    # Record the latest hash as the scroll root (simplified; Guardian computes the full Merkle root)
    root_file = Path(f"receipts/ROOT.{scroll}.txt")
    root_file.write_text(root_hash)
    return receipt
```
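For example, a treasury debit could be receipted with the helper above; the body fields are illustrative, only the `<scroll>_<operation>` type convention is prescribed.
```python
emit_receipt(
    scroll="treasury",
    receipt_type="treasury_debit",
    body={"from_account": "acct:ops", "amount": 150, "currency": "EUR"},
    tags=["treasury", "debit"],
)
```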
### MCP Tool Pattern
```python
@server.tool()
async def my_tool(param: str) -> str:
"""Tool description."""
caller = await get_caller_identity()
await verify_capability(caller, "required_capability")
result = await engine.do_operation(param)
await emit_tool_call_receipt(
tool="my_tool",
caller=caller,
params={"param": param},
result_hash=result.hash,
)
return json.dumps(result.to_dict(), indent=2)
```
## CLI Naming Convention
```bash
vm-<engine> <command> [subcommand] [options]
Examples:
vm-treasury debit --from acct:ops --amount 150 --currency EUR
vm-mesh node list
vm-identity did create --type human --id sovereign
vm-psi phase current
vm-guardian anchor-now
vm-gov proposal create --type amendment
```
## Receipt Type Naming
```
<scroll>_<operation>
Examples:
treasury_credit
treasury_debit
treasury_settlement
mesh_node_join
mesh_route_change
identity_did_create
identity_capability_grant
psi_phase_transition
psi_transmutation
gov_proposal
gov_amendment
```
## Key Integrations
### Guardian Anchor Cycle
```
Receipts → ProofChain → Merkle Root → Anchor Backend (OTS/ETH/BTC)
```
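A sketch of the Merkle step, assuming a simple pairwise BLAKE3 tree over per-line hashes; the actual ProofChain construction (leaf encoding, odd-node handling, domain separation) may differ.
```python
from blake3 import blake3  # third-party 'blake3' package

def merkle_root(leaves: list) -> bytes:
    """Pairwise BLAKE3 Merkle root; an odd node is promoted to the next level unchanged."""
    if not leaves:
        return blake3(b"").digest()
    level = leaves
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            if i + 1 < len(level):
                nxt.append(blake3(level[i] + level[i + 1]).digest())
            else:
                nxt.append(level[i])
        level = nxt
    return level[0]

def scroll_root(jsonl_path: str) -> str:
    with open(jsonl_path, "rb") as f:
        leaves = [blake3(line.rstrip(b"\n")).digest() for line in f if line.strip()]
    return "blake3:" + merkle_root(leaves).hex()
```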
### Federation Witness Protocol
```
Mesh-A anchors → Notifies Mesh-B → Mesh-B verifies → Emits witness receipt
```
### Transmutation (Tem) Pattern
```
Incident (Nigredo) → Extract IOCs → Generate rules → Integrate defenses (Citrinitas)
```
## Testing Requirements
1. **Property Tests**: All receipt operations must be tested with proptest/hypothesis (see the sketch after this list)
2. **Invariant Tests**: Core axioms verified after every test
3. **Integration Tests**: Full cycles from intent to anchored receipt
4. **Chaos Tests**: Resilience under network partition, pod failure
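A minimal `hypothesis` sketch for requirement 1, exercising the `emit_receipt` helper above against AXIOM-001; the scroll path is illustrative, and a real test would isolate it under a temporary directory.
```python
import json
from pathlib import Path
from hypothesis import given, strategies as st

SCROLL_FILE = Path("receipts/test/test_events.jsonl")

def read_lines(path: Path) -> list:
    return path.read_text().splitlines() if path.exists() else []

# JSON-friendly bodies; emit_receipt is the Python helper from the code patterns above.
bodies = st.dictionaries(st.text(min_size=1, max_size=8), st.integers(), max_size=4)

@given(body=bodies)
def test_scroll_is_append_only(body):
    before = read_lines(SCROLL_FILE)
    receipt = emit_receipt("test", "test_event", body, ["prop"])
    after = read_lines(SCROLL_FILE)
    # AXIOM-001: existing lines are untouched and exactly one line was appended
    assert after[: len(before)] == before
    assert len(after) == len(before) + 1
    assert json.loads(after[-1])["root_hash"] == receipt["root_hash"]
```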
## Deployment Targets
- **Kubernetes**: Production deployment via Kustomize
- **Docker Compose**: Local development
- **Akash**: Decentralized compute option
## Related Skills
- `sovereign-operator` — Security operations and MCP tools
- `offsec-mcp` — Offensive security tooling
- `vaultmesh-architect` — This skill
## References
- VAULTMESH-ETERNAL-PATTERN.md — Core architecture
- VAULTMESH-TREASURY-ENGINE.md — Financial primitive
- VAULTMESH-MESH-ENGINE.md — Federation topology
- VAULTMESH-OFFSEC-ENGINE.md — Security operations
- VAULTMESH-IDENTITY-ENGINE.md — Trust primitive
- VAULTMESH-OBSERVABILITY-ENGINE.md — Telemetry
- VAULTMESH-AUTOMATION-ENGINE.md — Workflows
- VAULTMESH-PSI-FIELD-ENGINE.md — Consciousness layer
- VAULTMESH-FEDERATION-PROTOCOL.md — Cross-mesh trust
- VAULTMESH-CONSTITUTIONAL-GOVERNANCE.md — Rules
- VAULTMESH-MCP-SERVERS.md — Claude integration
- VAULTMESH-DEPLOYMENT-MANIFESTS.md — Infrastructure
- VAULTMESH-MONITORING-STACK.md — Observability
- VAULTMESH-TESTING-FRAMEWORK.md — Testing
- VAULTMESH-MIGRATION-GUIDE.md — Upgrades