Initial commit - combined iTerm2 scripts

Contains:
- 1m-brag
- tem
- VaultMesh_Catalog_v1
- VAULTMESH-ETERNAL-PATTERN

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Vault Sovereign
2025-12-28 03:58:39 +00:00
commit 1583890199
111 changed files with 36978 additions and 0 deletions


@@ -0,0 +1,68 @@
Page Title: VaultMesh Infrastructure Overview (Canon v1)
Summary: VaultMesh runs on a sovereign mesh of home, cloud, and virtual nodes. Core services (GitLab, monitoring, backup, dual-vault) live on the BRICK hypervisor and v1-nl-gate, with all access flowing over a Tailscale-powered SSH fabric. The system is designed as a living "civilization ledger": verifiable, reproducible, and portable across hosts.
Key Findings:
- Core "mesh-core-01" stack runs on a Debian VM (gate-vm) hosted on brick.
- External edge/gate server (v1-nl-gate) fronts public connectivity and future tunnels.
- shield-vm acts as the OffSec / TEM / machine-secrets node.
- Dual-vault pattern: Vaultwarden for human secrets, HashiCorp Vault for machine/app secrets.
- Tailscale tailnet + per-node SSH keys provide zero-trust style access across all layers.
- Grafana + Prometheus give observability for both infrastructure and proof engines.
Components:
- Tailscale mesh network (story-ule.ts.net tailnet).
- GitLab (self-hosted) on gate-vm for source, CI, and artifacts.
- MinIO object storage for backups and artifacts.
- PostgreSQL for GitLab and future ledgers.
- Prometheus + Grafana for metrics and dashboards.
- Vaultwarden (human credentials) + HashiCorp Vault (machine secrets).
- shield-vm: OffSec agents, TEM daemon, security experiments.
- lab HV: experimental cluster for Phoenix/PSI and chaos drills.
Workflows / Pipelines:
- Forge Flow: Android/laptop → SSH (Tailscale) → nexus-0 → edit/test → git push → GitLab on gate-vm → CI → deploy to shield-vm / lab.
- Backup Flow: mesh-stack-migration bundle backs up GitLab/Postgres/Vaultwarden to MinIO with freshness monitoring and restore scripts.
- Proof Flow: VaultMesh engines emit receipts and Merkle roots; DevOps release pipeline anchors PROOF.json and ROOT.txt to external ledgers.
Inputs:
- Per-node SSH keypairs and Tailscale identities.
- Git repositories (vaultmesh, mesh-stack-migration, offsec labs).
- Docker/Compose definitions for core stack (gate-vm).
- libvirt VM definitions on brick hypervisor.
Outputs:
- Authenticated SSH sessions over Tailscale with per-node isolation.
- Reproducible infrastructure stack (mesh-stack-migration) deployable onto any compatible host.
- Cryptographically verifiable receipts, Merkle roots, and anchored proof artifacts.
- Observability dashboards for infrastructure health and backup freshness.
Security Notes:
- No password SSH: ed25519 keys only, with IdentitiesOnly enforced.
- Tailscale tailnet isolates nodes from the public internet; v1-nl-gate used as controlled edge.
- Dual-vault split: Vaultwarden for human secrets; HashiCorp Vault for machine/app secrets and CI.
- Backups stored in MinIO, monitored by backup-freshness service with Prometheus metrics and Grafana alerts.
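For illustration, a hypothetical freshness exporter could compute the age of the newest backup object and publish it as a Prometheus gauge for Grafana to alert on. A minimal sketch follows; the metric name, port, glob pattern, and path are assumptions, not the actual backup-freshness implementation:
```python
# Hypothetical freshness-exporter sketch (assumed metric name, port, and
# backup path; the real backup-freshness service may differ).
import time
from pathlib import Path

from prometheus_client import Gauge, start_http_server

BACKUP_DIR = Path("/srv/backups")  # assumed mount/sync of the MinIO backup bucket
backup_age = Gauge(
    "vaultmesh_backup_age_seconds",
    "Seconds since the newest backup object was written",
)

def refresh() -> None:
    newest = max((p.stat().st_mtime for p in BACKUP_DIR.glob("*.tar.gz")), default=0.0)
    backup_age.set(time.time() - newest if newest else float("inf"))

if __name__ == "__main__":
    start_http_server(9108)  # assumed scrape port for Prometheus
    while True:
        refresh()
        time.sleep(60)
```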
Nodes / Topology:
- Forge Node: nexus-0 (BlackArch), primary development forge.
- Mine Nodes: gamma, beta, brick, w3 (home infra, storage, hypervisor).
- Gate Nodes: v1-nl-gate (cloud edge), gate-vm (mesh-core-01 on brick).
- VM Nodes on brick: debian-golden (template), gate-vm (core stack), shield-vm (security).
- Lab HV Nodes: lab-mesh-01, lab-agent-01, lab-chaos-01, phoenix-01 (experiments and PSI/Phoenix).
- Mobile Nodes: shield (Termux), bank-mobile (iOS).
Dependencies:
- Tailscale client on all nodes (including VMs where needed).
- libvirt/QEMU on brick for virtualization.
- Docker/Compose on gate-vm for mesh-core stack.
- SSH servers on all nodes; per-node SSH keys for access.
Deployment Requirements:
- At least one capable hypervisor (brick) and one external gate (v1-nl-gate).
- DNS or MagicDNS entries for internal hostnames (e.g. gitlab.mesh.local).
- MinIO and backup-freshness configured via mesh-stack-migration bundle.
- Dual-vault services deployed according to canonical pattern.
Linked Assets:
- `/Users/sovereign/Library/CloudStorage/Dropbox/VaultMesh_Catalog_v1/VaultMesh_Infrastructure_Catalog_v1.*`
- `mesh-stack-migration/` bundle for core stack deployment.
- `vaultmesh` repo (Guardian, Console, Treasury, OffSec engines).


@@ -0,0 +1,59 @@
Page Title: Canonical Infrastructure — VaultMesh v1
Summary: This page defines the canonical infrastructure for VaultMesh as of the first full catalog: which nodes exist, what runs where, and which services are considered "core mesh". It is the reference snapshot for future migrations and evolutions.
Key Findings:
- BRICK + v1-nl-gate + nexus-0 form the spine of the system.
- gate-vm (mesh-core-01) is the canonical host for the mesh-stack-migration bundle.
- shield-vm is the canonical Shield/TEM node with OffSec tooling and machine-secrets vault.
- Dual-vault pattern is standard: Vaultwarden (human), HashiCorp Vault (machine).
- Grafana is the canonical dashboard layer; Wiki.js is explicitly **not** part of the new architecture (external portals like burocrat serve documentation).
Canonical Nodes and Roles:
| Node | Role | Description |
|--------------|------------------------------|---------------------------------------------|
| nexus-0 | Forge | Primary dev/forge node (BlackArch) |
| brick | Hypervisor | Hosts core VMs (debian-golden, gate-vm, shield-vm) |
| v1-nl-gate | External Gate | Cloud-facing edge server, future ingress |
| gate-vm | mesh-core-01 (Core Stack) | GitLab, MinIO, Postgres, Prometheus, Grafana, Vaultwarden, backup-freshness, Traefik, WG-Easy |
| shield-vm | shield-01 (Shield/TEM) | OffSec agents, TEM, HashiCorp Vault, incidents & simulations |
| lab-* | Experimental Mesh | lab-mesh-01, lab-agent-01, lab-chaos-01, phoenix-01 |
Canonical Core Services (gate-vm / mesh-core-01):
- GitLab: source control, CI/CD.
- MinIO: object storage and backups.
- PostgreSQL: GitLab and future service DBs.
- Prometheus: metrics collection.
- Grafana: dashboards (infra, backup freshness, proof metrics).
- Vaultwarden: human password vault (browsers, logins).
- backup-freshness: monitors MinIO backup age.
- Traefik: reverse proxy and ingress.
- WG-Easy (optional): simplified WireGuard access.
Canonical Security / Shield Services (shield-vm):
- HashiCorp Vault: machine/app secrets.
- TEM daemon: threat transmutation engine.
- OffSec tools and MCP: Oracle, Shield, AppSec scanners.
- Agent/task scheduler: scheduled security workflows.
- Optional: local Prometheus exporters for node/security metrics.
Explicitly Non-Core (but allowed as external):
- Wiki.js: not part of canonical infra; documentation is handled via Git-based docs/portals (e.g., burocrat, catalogs).
- Legacy projects: marked ARCHIVE (e.g., old offsec-shield architecture, sovereign-swarm).
Migration & Portability:
- `mesh-stack-migration/` enables redeploying the entire core stack (GitLab, MinIO, monitoring, backup) to a fresh host:
- Copy bundle → set `.env` → `docker compose up -d`.
- Run FIRST-LAUNCH and DRY-RUN checklists.
- VMs can be moved or recreated using debian-golden as base.
Evolution Rules:
- If a service becomes critical and stateful, it must:
- Emit receipts and have a documented backup/restore plan.
- Expose metrics consumable by Prometheus.
- Be referenced in the Canonical Infrastructure page with node placement.
- Experimental services stay on Lab HV until they prove their value.
Linked Assets:
- `mesh-stack-migration/STACK-MANIFEST.md` and `STACK-VERSION`.
- `VAULTMESH-ETERNAL-PATTERN.md` (architectural shape).
- `VaultMesh_Infrastructure_Catalog_v1.*` (this catalog).


@@ -0,0 +1,76 @@
Page Title: VaultMesh Node Topology (Canon v1)
Summary: VaultMesh spans four primary tiers—Forge, Mine, Gate, and Lab—with mobile endpoints riding on top. The BRICK hypervisor anchors the virtualization layer, while v1-nl-gate acts as the outer gate. The result is a flexible topology where code is forged on nexus-0, lands in GitLab on gate-vm, and manifests on shield-vm and lab nodes.
Key Findings:
- Clear separation between Forge (nexus-0), Core Mesh (gate-vm on brick), Edge Gate (v1-nl-gate), and Lab HV (ephemeral).
- BRICK hypervisor hosts the critical core VMs: debian-golden (template), gate-vm (mesh-core-01), shield-vm (shield-01).
- Tailscale tailnet binds everything together with MagicDNS and per-node hostnames.
- v1-nl-gate is ready to act as external ingress or exit node for future services.
- Node roles are stable but designed to evolve; lab nodes are intentionally ephemeral.
Components:
- Forge Tier: nexus-0 (BlackArch) and optional kali-forge.
- Mine Tier: gamma, beta, brick, w3 (primary physical infra).
- Gate Tier: v1-nl-gate (cloud gate), gate-vm on brick (core stack).
- VM Tier: debian-golden (golden image), gate-vm (core services), shield-vm (OffSec/TEM).
- Lab Tier: lab-mesh-01, lab-agent-01, lab-chaos-01, phoenix-01.
Node Inventory:
FORGE NODES:
| Node | Address | OS | Role |
|-----------|---------------------------|-----------|----------------------|
| nexus-0 | 100.67.39.1 (Tailscale) | BlackArch | Primary forge (dev) |
| kali-forge| (Tailscale IP) | Kali | Secondary OffSec lab |
MINE NODES (Primary Infrastructure):
| Node | Hostname | OS | Role |
|--------|---------------------------|-------------|-------------------|
| gamma | gamma.story-ule.ts.net | Arch Linux | Home primary |
| beta | beta.story-ule.ts.net | Arch Linux | Backup node |
| brick | brick.story-ule.ts.net | Debian | Dell server, HV |
| w3 | w3.story-ule.ts.net | Raspbian | Raspberry Pi node |
GATE NODES (Edge / Exit):
| Node | Hostname | OS | Role |
|------------|-------------------------------|--------|-----------------------------|
| v1-nl-gate | v1-nl-gate.story-ule.ts.net | Debian | Netherlands external gate |
| gate-vm | gate-vm.story-ule.ts.net | Debian | mesh-core-01 (core stack) |
VM NODES (on brick, libvirt/KVM):
| Node | Hostname | OS | Role |
|---------------|---------------------------------|--------|-------------------------------|
| debian-golden | debian-golden.story-ule.ts.net | Debian | Golden image / template |
| gate-vm | gate-vm.story-ule.ts.net | Debian | Core services (GitLab, etc.) |
| shield-vm | shield-vm.story-ule.ts.net | Debian | Shield / TEM / machine vault |
LAB NODES (Experimental, Lab HV):
| Node | Hostname | Role |
|--------------|---------------------|----------------------------------|
| lab-mesh-01 | lab-mesh-01 | Multi-node mesh tests |
| lab-agent-01 | lab-agent-01 | Agent/orchestration experiments |
| lab-chaos-01 | lab-chaos-01 | Chaos/failure drills |
| phoenix-01 | phoenix-01 | Phoenix/PSI prototypes |
MOBILE NODES:
| Node | Hostname | OS | SSH Port |
|-------------|-------------------------------|---------------|-------|
| shield | shield.story-ule.ts.net | Android/Termux| 22 |
| bank-mobile | bank-mobile.story-ule.ts.net | iOS | 8022 |
LAN Fallbacks:
| Node | LAN IP |
|-------|----------------|
| gamma | 192.168.0.191 |
| brick | 192.168.0.119 |
| beta | 192.168.0.236 |
Security Notes:
- Forge, Mine, Gate, and Lab communicate primarily via Tailscale; LAN is a fallback.
- VMs are isolated on libvirt NAT (192.168.122.x), with SSH + Tailscale as ingress.
- v1-nl-gate can act as a WireGuard endpoint or exit node for privacy routing.
Dependencies:
- Tailscale on all nodes (physical and virtual as required).
- libvirt/QEMU on brick for VM lifecycle.
- SSH with per-node ed25519 keys.


@@ -0,0 +1,64 @@
Page Title: VaultMesh Virtualization Layer (BRICK Hypervisor)
Summary: The BRICK server runs libvirt/KVM and hosts the core VaultMesh VMs: debian-golden (template), gate-vm (mesh-core-01), and shield-vm (shield-01). Cockpit and VNC provide management and console access, while Tailscale and SSH bring the VMs into the wider mesh.
Key Findings:
- BRICK is the single hypervisor for core VaultMesh VMs.
- debian-golden serves as a reusable golden image to clone new VMs.
- gate-vm runs the mesh-stack-migration bundle (GitLab, MinIO, Prometheus, Grafana, Vaultwarden, backup-freshness, etc.).
- shield-vm is the Shield/OffSec node and home of the machine-secrets vault and TEM stack.
- VM networking uses libvirt NAT (192.168.122.x), with VNC reachable via SSH tunnels.
Components:
- libvirt daemon (qemu-kvm backend).
- QEMU/KVM for hardware-accelerated virtualization.
- Cockpit + cockpit-machines for web-based VM management.
- VNC servers for graphical consoles.
- Tailscale agents inside VMs (optional but desired).
VM Network Layout:
| VM | NAT IP | VNC Port | Role |
|---------------|------------------|----------|------------------------------------|
| debian-golden | 192.168.122.187 | 5900 | Golden image / base template |
| gate-vm | 192.168.122.236 | 5901 | mesh-core-01 core stack host |
| shield-vm | 192.168.122.73 | 5902 | Shield/OffSec/TEM + machine vault |
Workflows / Pipelines:
- VM Management: Cockpit → https://brick:9090 → "Virtual Machines" (a libvirt sketch follows this list).
- Console Access:
- `ssh brick`
- `ssh -L 5901:localhost:5901 brick`
- `vnc://localhost:5901` (gate-vm) / `vnc://localhost:5902` (shield-vm).
- Image Pipeline:
- Update debian-golden → snapshot → clone → new VM (e.g., future lab nodes).
- Join to Mesh:
- Boot VM → configure SSH → join Tailscale → register in SSH config.
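For scripted checks alongside Cockpit and virsh, a minimal sketch using the libvirt Python bindings is shown below; it assumes python3-libvirt is installed and the caller can read qemu:///system (e.g. run on brick over SSH):
```python
# Minimal inspection sketch using the libvirt Python bindings; assumes
# python3-libvirt is installed and the caller can read qemu:///system.
# Cockpit and virsh remain the primary management paths.
import libvirt

STATE_NAMES = {
    libvirt.VIR_DOMAIN_RUNNING: "running",
    libvirt.VIR_DOMAIN_PAUSED: "paused",
    libvirt.VIR_DOMAIN_SHUTOFF: "shut off",
}

conn = libvirt.open("qemu:///system")  # needs root or libvirt group membership
try:
    for dom in conn.listAllDomains():  # debian-golden, gate-vm, shield-vm, ...
        state, _reason = dom.state()
        print(f"{dom.name()}: {STATE_NAMES.get(state, str(state))}")
finally:
    conn.close()
```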
Inputs:
- libvirt XML definitions for debian-golden, gate-vm, shield-vm.
- Debian cloud images / base images.
- SSH keys for root/debian users on each VM.
- mesh-stack-migration bundle to configure gate-vm.
Outputs:
- Running core VMs with access via SSH + Tailscale + VNC.
- Reproducible VM lifecycle (golden → clone → configure → join mesh).
- Isolated environment for Shield/TEM experiments on shield-vm.
Security Notes:
- VNC ports are not exposed directly; they're reached via SSH tunnel into brick.
- Each VM uses its own SSH host keys and per-node authorized_keys.
- NAT isolation (192.168.122.x) reduces blast radius from VM compromise.
- Installing Tailscale inside gate-vm/shield-vm avoids public exposure.
Dependencies:
- libvirt, qemu-kvm, Cockpit, cockpit-machines on brick.
- SSH and Tailscale inside each VM (where needed).
- TigerVNC or similar client on the operator's laptop.
Deployment Steps:
1. Start VM via Cockpit or `virsh`.
2. Create SSH tunnel from laptop to brick for VNC.
3. Connect via VNC for first-boot setup if needed.
4. Deploy SSH keys and install Tailscale inside the VM.
5. For gate-vm: deploy `mesh-stack-migration` and start core stack.
6. For shield-vm: deploy Shield/TEM/dual-vault components.


@@ -0,0 +1,101 @@
Page Title: SSH Key Architecture (Forge + Mesh)
Summary: VaultMesh uses a strict per-node ed25519 SSH key model with IdentitiesOnly isolation, ControlMaster multiplexing, and mesh-wide access via Tailscale. nexus-0 serves as the primary forge node; brick, v1-nl-gate, gate-vm, and shield-vm are first-class SSH targets with dedicated keys.
Key Findings:
- One keypair per destination node (id_gamma, id_brick, id_v1-nl-gate, id_gate-vm, id_shield-vm, etc.).
- IdentitiesOnly enforces key isolation and prevents cross-host key probing.
- ControlMaster/ControlPath provide fast multiplexed SSH sessions.
- Tailscale hostnames (story-ule.ts.net) give stable addressing; LAN IPs are fallback.
- External service keys (GitHub/GitLab) are separate from infra keys.
Components:
- Per-node private keys (`~/.ssh/id_{node}`).
- Public keys (`~/.ssh/id_{node}.pub`).
- SSH config with host-specific IdentityFile blocks.
- Control sockets (`~/.ssh/cm-%r@%h:%p`).
Key Inventory (Infra Nodes):
| Key File | Target Node | Algorithm |
|------------------|----------------|-----------|
| id_gamma | gamma | ed25519 |
| id_beta | beta | ed25519 |
| id_brick | brick | ed25519 |
| id_w3 | w3 | ed25519 |
| id_v1-nl-gate | v1-nl-gate | ed25519 |
| id_gate-vm | gate-vm | ed25519 |
| id_debian-golden | debian-golden | ed25519 |
| id_shield-vm | shield-vm | ed25519 |
Forge + Mobile:
| Key File | Target | Algorithm |
|------------------|--------------|-----------|
| id_nexus-0 | nexus-0 | ed25519 |
| id_kali-forge | kali-forge | ed25519 |
| id_shield | shield | ed25519 |
| id_bank-mobile | bank-mobile | ed25519 |
External Service Keys:
| Key File | Service |
|----------------------|------------|
| id_ed25519_github | GitHub |
| id_ed25519_gitlab | GitLab |
SSH Config Structure:
```sshconfig
Host *
    ServerAliveInterval 30
    ServerAliveCountMax 3
    TCPKeepAlive yes
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
    IdentitiesOnly yes
    HashKnownHosts no
    StrictHostKeyChecking accept-new
    AddKeysToAgent yes
    UseKeychain yes
    Compression yes

Host nexus-0
    HostName 100.67.39.1
    User root
    IdentityFile ~/.ssh/id_nexus-0

Host brick
    HostName brick.story-ule.ts.net
    User sovereign
    IdentityFile ~/.ssh/id_brick

Host gate-vm
    HostName gate-vm.story-ule.ts.net
    User debian
    IdentityFile ~/.ssh/id_gate-vm

Host shield-vm
    HostName shield-vm.story-ule.ts.net
    User debian
    IdentityFile ~/.ssh/id_shield-vm
```
Security Notes:
- ed25519 keys provide strong security with small keys/signatures.
- IdentitiesOnly ensures ssh never offers the wrong key to the wrong host.
- StrictHostKeyChecking=accept-new uses TOFU (trust on first use) while still catching host key changes.
- No password authentication; all critical nodes are key-only.
Key Generation:
```bash
ssh-keygen -t ed25519 -f ~/.ssh/id_{node} -C "aurion-to-{node}"
```
Key Deployment:
```bash
ssh-copy-id -i ~/.ssh/id_{node}.pub debian@{node}
# Or manually
cat ~/.ssh/id_{node}.pub | ssh debian@{node} "cat >> ~/.ssh/authorized_keys"
```
Dependencies:
- OpenSSH client (macOS/Linux/Android).
- ssh-agent and (on macOS) Keychain integration.
- Tailscale for stable hostnames and reachability.


@@ -0,0 +1,71 @@
Page Title: Cryptographic Proof System (VaultMesh Proof Spine)
Summary: VaultMesh uses a Merkle-tree-based proof system with receipts, roots, and cross-ledger anchoring. Each serious action (deploy, anchor, oracle decision, incident handling) emits a receipt. DevOps pipelines produce PROOF.json and ROOT.txt artifacts and anchor them to external ledgers, turning infrastructure history into a verifiable "civilization ledger".
Key Findings:
- All significant actions generate cryptographic receipts in append-only logs.
- Merkle trees allow efficient inclusion proofs for large sets of receipts.
- Anchors can be written to local files, Bitcoin (OTS), Ethereum, or mesh peers.
- The release pipeline for vm-spawn automatically computes Merkle roots and anchors proof artifacts.
- Braid-style interoperability allows importing and emitting foreign ledger roots.
Components:
- Proof Generator (`proof_generate`) creates signed receipts.
- Merkle Batcher (`proof_batch`) aggregates receipts into Merkle trees.
- Anchor System (`proof_anchor_*`) writes roots to durable anchors.
- Verification Engine (`proof_verify`) validates inclusion and integrity.
- Braid Protocol (`proof_braid_*`) provides cross-ledger interoperability.
Proof Lifecycle:
1. Action occurs (e.g., Guardian anchor, deployment, oracle decision).
2. `proof_generate` creates a signed receipt with a Blake3 hash of the canonical JSON.
3. Receipts accumulate until a batch threshold is reached.
4. `proof_batch` constructs a Merkle tree and computes the root.
5. `proof_anchor_*` writes the root to local files, timestamps, or blockchains.
6. `proof_verify` allows any future verifier to confirm receipt integrity against a given root.
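A conceptual sketch of steps 2-4 follows: each receipt's canonical JSON is Blake3-hashed, and the hashes are folded into a Merkle root. The pairing convention (duplicating the last node on odd levels) is an assumption; the real `proof_generate`/`proof_batch` code may differ.
```python
# Conceptual sketch: Blake3 over canonical JSON, then a simple Merkle fold.
import json
from blake3 import blake3

def receipt_hash(receipt: dict) -> bytes:
    canonical = json.dumps(receipt, sort_keys=True).encode()
    return blake3(canonical).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = list(leaves) or [blake3(b"").digest()]
    while len(level) > 1:
        if len(level) % 2:              # odd level: duplicate the last node
            level.append(level[-1])
        level = [blake3(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

receipts = [{"proof_id": "r1", "action": "guardian_anchor"},
            {"proof_id": "r2", "action": "deploy"}]
print(merkle_root([receipt_hash(r) for r in receipts]).hex())  # root for anchoring
```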
Anchoring Strategies:
| Type | Method | Durability |
|-------|---------------------------------|---------------------|
| local | Files in `data/anchors/` | Node-local |
| ots | OpenTimestamps → Bitcoin | Public blockchain |
| eth | Calldata/contract → Ethereum | Public blockchain |
| mesh | Cross-attest via other nodes | Federated durability|
Braid Protocol:
- `braid_import`: imports foreign ledger roots from other chains/nodes.
- `braid_emit`: exposes local roots for others to import.
- `braid_status`: tracks imported vs. local roots and detects regressions.
- Ensures root sequences are strictly advancing (no rollback without detection).
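The strictly-advancing rule reduces to an append-only check, sketched below; the function and field names are illustrative, not the actual `proof_braid_*` API.
```python
# Sketch of the strictly-advancing rule for imported root sequences.
def roots_advance(known: list[str], incoming: list[str]) -> bool:
    """True iff `incoming` extends `known` without rewriting or dropping roots."""
    if len(incoming) < len(known):
        return False                    # rollback: peer lost roots
    return all(a == b for a, b in zip(known, incoming))
```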
Receipt Schema (Conceptual):
```json
{
  "proof_id": "uuid",
  "action": "guardian_anchor",
  "timestamp": "ISO8601",
  "data_hash": "blake3_hex",
  "signature": "ed25519_sig",
  "witnesses": ["node_id"],
  "chain_prev": "prev_proof_id"
}
```
Security Notes:
- Blake3 hashing for speed and modern security.
- Ed25519 signatures for authenticity and non-repudiation.
- Merkle trees make inclusion proofs O(log n); see the verification sketch below.
- Multiple anchoring paths provide defense in depth against ledger loss.
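To make the O(log n) inclusion check concrete, here is an illustrative verifier that walks an audit path of (sibling, is_right) pairs from a leaf to the anchored root, reusing the concatenate-and-Blake3 convention from the batching sketch above. The path encoding is an assumption about `proof_verify`'s inputs.
```python
# Illustrative Merkle inclusion check over an audit path.
from blake3 import blake3

def verify_inclusion(leaf: bytes, path: list[tuple[bytes, bool]], root: bytes) -> bool:
    node = leaf
    for sibling, sibling_is_right in path:
        pair = node + sibling if sibling_is_right else sibling + node
        node = blake3(pair).digest()
    return node == root
```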
DevOps Integration:
- vm-spawn release pipeline:
- Computes Merkle root over build artifacts.
- Requests RFC 3161 timestamp.
- Anchors hash on Ethereum and Bitcoin.
- Emits PROOF.json and ROOT.txt alongside release assets.
- Guardian CLI (vm_cli.py guardian) provides human-readable views over roots and scrolls.
Dependencies:
- Blake3 library.
- Ed25519 signing library and key management.
- Optional OTS/BTC/ETH client libraries or APIs.
- OffSec MCP / VaultMesh services exposing proof tools.


@@ -0,0 +1,72 @@
Page Title: Lawchain Compliance Ledger
Summary: Lawchain is the compliance-focused ledger that tracks regulatory obligations, oracle answers, and audit trails via receipts. It integrates with the proof system to ensure every compliance answer has a cryptographic backbone, and it is designed to speak the language of EU AI Act, GDPR, NIS2, and future frameworks.
Key Findings:
- Oracle answers are validated against a schema before being recorded.
- Each answer is hashed and bound into a receipt, linking legal semantics to proofs.
- Federation metrics allow multi-node Lawchain sync across the mesh.
- Policy evaluation is driven by JSON inputs and produces JSON results for downstream tools.
Components:
- Lawchain Core Ledger (append-only compliance scroll).
- Oracle Answer Validator (schema enforcement).
- Compliance Scroll store (receipt logs).
- Federation Metrics emitter.
- Policy Evaluator (rule engine).
Oracle Answer Schema (vm_oracle_answer_v1):
```json
{
  "question": "string",
  "answer_text": "string",
  "citations": [{
    "document_id": "string",
    "framework": "string",
    "excerpt": "string"
  }],
  "compliance_flags": {
    "gdpr_relevant": true,
    "ai_act_relevant": false,
    "nis2_relevant": true
  },
  "gaps": ["string"],
  "insufficient_context": false,
  "confidence": "high"
}
```
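A hedged sketch of enforcing this schema with the `jsonschema` library follows; it models only a subset of fields and is not the authoritative vm_oracle_answer_v1 definition.
```python
# Partial schema-enforcement sketch; the real vm_oracle_answer_v1 is richer.
from jsonschema import ValidationError, validate

VM_ORACLE_ANSWER_V1 = {
    "type": "object",
    "required": ["question", "answer_text", "citations", "confidence"],
    "properties": {
        "question": {"type": "string"},
        "answer_text": {"type": "string"},
        "citations": {"type": "array", "items": {"type": "object"}},
        "gaps": {"type": "array", "items": {"type": "string"}},
        "insufficient_context": {"type": "boolean"},
        "confidence": {"enum": ["low", "medium", "high"]},
    },
}

def answer_is_valid(answer: dict) -> bool:
    try:
        validate(instance=answer, schema=VM_ORACLE_ANSWER_V1)
        return True
    except ValidationError:
        return False
```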
Workflows / Pipelines:
- Compliance Q&A:
1. Operator (or system) asks Lawchain a question.
2. Retrieve context (RAG) from policy docs and regulations.
3. LLM generates an answer draft.
4. Answer is validated against vm_oracle_answer_v1 schema.
5. Hash (Blake3 over canonical JSON) computed and receipt generated.
6. Receipt anchored via proof system and stored in Lawchain.
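Steps 5-6 might look like the sketch below, using the hash formula from the Security Notes (Blake3 over canonically sorted JSON, encoded to bytes). Receipt fields mirror the conceptual proof schema; anchoring is delegated to the proof spine.
```python
# Sketch: bind a validated oracle answer into a receipt.
import json
import time
import uuid

from blake3 import blake3

def lawchain_receipt(answer: dict, node_id: str) -> dict:
    answer_hash = blake3(json.dumps(answer, sort_keys=True).encode()).hexdigest()
    return {
        "proof_id": str(uuid.uuid4()),
        "action": "oracle_answer",
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "data_hash": answer_hash,
        "witnesses": [node_id],
    }
```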
Metrics Files (examples under /tmp/):
| File | Purpose |
|-------------------------|----------------------------|
| lawchain_federate.out | Federation sync output |
| lawchain_federate.err | Federation errors |
| lawchain_metrics.out | Metrics/logging output |
| policy_eval_out.json | Policy evaluation results |
| policy_input.json | Policy evaluation input |
Security Notes:
- Answer hash is computed as `blake3(json.dumps(answer, sort_keys=True))`.
- Receipts bind answer content, timestamps, and possibly node identity.
- The `gaps` and `insufficient_context` fields prevent false certainty in legal answers.
- Citations must reference real sources, enabling audit of answer provenance.
Compliance Frameworks Tracked:
- GDPR: data protection and subject rights.
- EU AI Act: risk classification, obligations, and logs.
- NIS2: network and information security.
- Custom extensions can map additional frameworks (e.g., SOC2, ISO 27001).
Dependencies:
- Lawchain service.
- Indexed oracle corpus (policies, regulations, internal docs).
- Blake3 and JSON schema validator.
- Integration with VaultMesh proof spine for receipts and anchoring.


@@ -0,0 +1,83 @@
Page Title: Oracle Engine & Shield Defense (TEM Stack)
Summary: The Oracle Engine provides structured reason → decide → act chains, while Shield and TEM form the defensive veil. Together they detect threats, log them to the proof system, and (optionally) orchestrate responses across shield-vm, lab nodes, and the wider mesh.
Key Findings:
- Oracle chains decisions through explicit reasoning steps, not opaque actions.
- Every significant decision can emit receipts into the proof spine.
- Shield monitors multiple vectors (network, process, file, device, etc.).
- Response levels span from passive logging to active isolation or countermeasures.
- Agent tasks allow scheduled or triggered operations (e.g., periodic scans).
Components:
- Oracle Reasoning Engine.
- Oracle Decision System.
- Tactical Chain Executor.
- Shield Monitor (sensors).
- Shield Responder (actions).
- TEM daemon (threat transmutation logic).
- Agent Task Scheduler.
Oracle Tools:
| Tool | Purpose |
|------------------------|--------------------------------------|
| oracle_status | Node status and capabilities |
| oracle_reason | Analyze situation, propose actions |
| oracle_decide | Make autonomous decision |
| oracle_tactical_chain | Full reason → decide → act chain |
Oracle Tactical Chain Flow:
1. **Context**: Collect current state (logs, metrics, alerts, lawchain state).
2. **Reason**: `oracle_reason` produces candidate actions with justifications.
3. **Decide**: `oracle_decide` selects an action based on risk tolerance and constraints.
4. **Act**: Execute playbooks, or keep in dry-run mode for simulation.
5. **Prove**: Generate a receipt and anchor via proof system (optional but recommended).
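As a rough illustration of the five steps, the sketch below assumes a hypothetical MCP client helper `mcp.call(tool, args)`; the argument and return shapes are assumptions based on the tool list below, and in practice `oracle_tactical_chain` bundles these steps into a single call.
```python
# Illustrative dry-run chain; mcp.call and all payload shapes are assumed.
def tactical_chain(mcp, risk_tolerance: str = "low") -> dict:
    context = mcp.call("oracle_status", {})                     # 1. Context
    plan = mcp.call("oracle_reason", {"context": context})      # 2. Reason
    decision = mcp.call("oracle_decide", {
        "options": plan["actions"],
        "risk_tolerance": risk_tolerance,
    })                                                          # 3. Decide
    outcome = mcp.call("shield_respond", {
        "action": decision,
        "dry_run": True,                                        # 4. Act (simulated)
    })
    mcp.call("proof_generate", {
        "action": "oracle_decision",
        "data": decision,                                       # 5. Prove (recommended)
    })
    return outcome
```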
Shield Monitor Vectors:
| Vector | Detection Capability |
|-----------|--------------------------------|
| network | Port scans, unusual flows |
| wifi | Rogue APs, deauth attempts |
| bluetooth | Device enumeration/anomalies |
| usb | Storage/HID abuse |
| process | Suspicious binaries, behavior |
| file | Unauthorized modifications |
Shield Response Levels:
| Level | Action |
|---------|----------------------------------------|
| log | Record event only |
| alert | Notify operator (Slack/email/etc.) |
| block | Prevent connection/action |
| isolate | Quarantine node/container/service |
| counter | Active response (e.g., honeypots) |
Agent Tasks:
```json
{
  "name": "scheduled_scan",
  "trigger": {
    "type": "schedule",
    "config": {"cron": "0 */6 * * *"}
  },
  "actions": [
    {"tool": "shield_monitor", "args": {"vectors": ["network", "wifi"]}},
    {"tool": "oracle_tactical_chain", "args": {"dry_run": true}}
  ],
  "on_complete": "mesh_broadcast"
}
```
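A scheduler could expand the cron trigger above along these lines; `croniter` is an assumed helper here, not necessarily what the agent task scheduler actually uses.
```python
# Sketch: compute the next run time for the scheduled_scan trigger.
from datetime import datetime, timezone

from croniter import croniter

trigger = {"type": "schedule", "config": {"cron": "0 */6 * * *"}}
now = datetime.now(timezone.utc)
next_run = croniter(trigger["config"]["cron"], now).get_next(datetime)
print(f"next scheduled_scan at {next_run.isoformat()}")  # next 6-hour boundary, UTC
```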
Security Notes:
- Dry-run mode is default for dangerous operations; production actions require explicit opt-in.
- Risk tolerance levels gate what Shield/TEM may do without human approval.
- All automated decisions can be bound to receipts for post-incident audit.
MCP / Mesh Tools:
- oracle_status, oracle_reason, oracle_decide, oracle_tactical_chain
- shield_status, shield_monitor, shield_respond
- Agent task management: agent_task, agent_list, agent_cancel
Dependencies:
- OffSec MCP server running on shield-vm/lab nodes.
- Proof system enabled for Oracle and Shield receipts.
- Integrations with metrics (Prometheus) and observability (Grafana).


@@ -0,0 +1,87 @@
Page Title: AppSec Toolchain (Shield / CI Integration)
Summary: VaultMesh uses an integrated application security toolchain rooted on shield-vm and CI pipelines. It combines vulnerability scanning, secret detection, SBOM generation, and IaC analysis into a coherent flow, with findings eligible to be logged into the proof spine for high-risk assets.
Key Findings:
- Nuclei, Trivy, Semgrep, TruffleHog, Gitleaks, Checkov, Syft, and Grype cover distinct layers.
- shield-vm is the natural home for heavy security scans and OffSec tooling.
- CI pipelines can call out to shield-vm or run scanners directly in job containers.
- Secret detection runs in both pre-commit and CI stages for defense-in-depth.
- SBOM generation and vulnerability scanning support long-term supply chain tracking.
Components:
- Nuclei (web and service vuln scanning).
- Trivy (container/filesystem/SBOM vulnerability scanner).
- Semgrep (static code analysis).
- TruffleHog / Gitleaks (secret discovery).
- Checkov (IaC misconfiguration scanner).
- Syft (SBOM generator).
- Grype (vulnerability scanner against SBOMs).
Tool Capabilities:
| Tool | Target Types | Output |
|------------|----------------------------|-------------------------|
| nuclei | URLs, IPs, domains | Findings by severity |
| trivy | Images, dirs, repos, SBOMs | CVEs, secrets, configs |
| semgrep | Source code directories | Security findings |
| trufflehog | Git, S3, GCS, etc. | Verified secrets |
| gitleaks | Git repos, filesystems | Secret locations |
| checkov | Terraform, K8s, Helm, etc. | Misconfigurations |
| syft | Images, dirs, archives | CycloneDX/SPDX SBOM |
| grype | Images, dirs, SBOMs | Vulnerability list |
Example Scans:
Nuclei Web Scan:
```json
{
  "targets": ["https://example.com"],
  "severity": ["high", "critical"],
  "tags": ["cve", "rce"]
}
```
Trivy Container Scan:
```json
{
  "target": "vaultmesh-core:latest",
  "scan_type": "image",
  "scanners": ["vuln", "secret"],
  "severity": ["HIGH", "CRITICAL"]
}
```
Secret Detection:
```json
{
  "target": "/srv/git/vaultmesh",
  "source_type": "git",
  "only_verified": true
}
```
MCP Tools:
- offsec_appsec_nuclei_scan
- offsec_appsec_trivy_scan
- offsec_appsec_semgrep_scan
- offsec_appsec_trufflehog_scan
- offsec_appsec_gitleaks_scan
- offsec_appsec_checkov_scan
- offsec_appsec_syft_sbom
- offsec_appsec_grype_scan
Workflows:
1. SBOM Pipeline: Syft → produce CycloneDX JSON → Grype → vulnerability report (sketched after this list).
2. Pre-merge Scans: CI job runs Semgrep, Trivy, Gitleaks on merge requests.
3. Periodic Deep Scans: shield-vm runs scheduled AppSec scans, logging high-severity findings.
4. Policy Integration: high-severity or critical findings feed into Lawchain or similar policy ledgers.
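A hedged sketch of workflow 1 follows; the CLI flags reflect current syft/grype releases and should be checked against the installed versions, and the image name reuses the Trivy example above.
```python
# Sketch: Syft -> CycloneDX SBOM -> Grype -> high/critical findings.
import json
import subprocess

IMAGE = "vaultmesh-core:latest"

# 1. Generate a CycloneDX SBOM for the image.
sbom = subprocess.run(["syft", IMAGE, "-o", "cyclonedx-json"],
                      check=True, capture_output=True, text=True).stdout
with open("sbom.cdx.json", "w") as f:
    f.write(sbom)

# 2. Scan the SBOM with Grype and keep high/critical matches.
report = subprocess.run(["grype", "sbom:sbom.cdx.json", "-o", "json"],
                        check=True, capture_output=True, text=True).stdout
matches = json.loads(report)["matches"]
bad = [m for m in matches if m["vulnerability"]["severity"] in ("High", "Critical")]
print(f"{len(bad)} high/critical findings in {IMAGE}")
```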
Security Notes:
- Nuclei and Trivy should be rate-limited when targeting external assets.
- Secret detection in CI uses only_verified where possible to reduce noise.
- Baseline files can exclude accepted findings while still tracking new issues.
- AppSec findings for high-value systems may be recorded as receipts in the proof system.
Dependencies:
- offsec-mcp server with tools installed (on shield-vm or lab nodes).
- Network access for pulling scanner templates and vulnerability databases.
- CI integration (GitLab pipelines on gate-vm) to trigger scans automatically.


@@ -0,0 +1,85 @@
Page Title: Forge Flow — From Phone to Shield
Summary: The Forge Flow describes how code moves from the Sovereign's phone and forge node (nexus-0) through GitLab on gate-vm, into CI, and finally onto shield-vm and lab nodes. It is the canonical "path of sovereign code".
Key Findings:
- Primary forge is nexus-0 (BlackArch), reachable via Tailscale from Android/laptop.
- vaultmesh repo lives on nexus-0 under `/root/work/vaultmesh`.
- Git remote points to GitLab on gate-vm (gitlab.mesh.local).
- GitLab CI handles lint → test → build → deploy.
- Production-like deployments land on shield-vm; experiments land on Lab HV nodes.
Forge Flow Diagram (Text):
```text
Android / Laptop
↓ (Tailscale SSH)
nexus-0 (BlackArch forge)
↓ (git push)
GitLab @ gate-vm (mesh-core-01)
↓ (CI: lint → test → build)
shield-vm (Shield / TEM) and Lab HV (phoenix-01, etc.)
```
Steps:
1. Inception (Connect to Forge)
- From Android or laptop:
```bash
ssh VaultSovereign@100.67.39.1 # nexus-0 via Tailscale
tmux attach -t sovereign || tmux new -s sovereign
```
2. Forge (Edit & Test)
- On nexus-0:
```bash
cd /root/work/vaultmesh
nvim .
python3 -m pytest tests/ -v
python3 cli/vm_cli.py guardian status
python3 cli/vm_cli.py console sessions
```
3. Transmit (Git Push to GitLab)
```bash
git add -A
git commit -m "feat(guardian): improve anchor receipts"
git push origin main # or feature branch
```
4. Transform (GitLab CI on gate-vm)
- .gitlab-ci.yml stages:
- lint: style and basic checks.
- test: pytest and CLI tests.
- build: container/image build.
- deploy: optional manual or automatic deployment.
5. Manifest (Deploy to Shield or Lab)
- CI deploy job:
- For main: deploy to shield-vm (production-like).
- For lab branches: deploy to lab-mesh-01 / phoenix-01.
- Manual deploy (fallback):
```bash
ssh shield-vm
cd /opt/vaultmesh
git pull
sudo systemctl restart vaultmesh-mcp vaultmesh-tem
```
6. Observe (Metrics & Proofs)
- Grafana dashboards (gate-vm) for system and proof metrics.
- Guardian CLI for roots and scrolls.
- Lawchain/oracle dashboards for compliance view.
Infrastructure Roles in the Flow:
- nexus-0 → live forge, fast iteration, experiments.
- gate-vm → GitLab + CI + registry + observability.
- shield-vm → OffSec/TEM node and primary runtime for security engines.
- Lab HV → ephemeral experimentation environment.
Security Notes:
- SSH access to nexus-0 and shield-vm uses per-node ed25519 keys.
- GitLab access uses HTTPS with tokens or SSH keys.
- Deploy stage should be limited to trusted runners/tags.
Linked Assets:
- vaultmesh/.gitlab-ci.yml (CI pipeline).
- VAULTMESH-INFRA-OVERVIEW-style documents.