VAULTMESH INFRASTRUCTURE CATALOG

Version 2.0 — Canon v1

Sovereign mesh network providing secure, cryptographically verified infrastructure across distributed nodes. Core services run on the BRICK hypervisor and v1-nl-gate, with all access flowing over a Tailscale-powered SSH fabric. Designed as a living "civilization ledger": verifiable, reproducible, and portable.

VaultMesh Technologies • Dublin, Ireland


Global Catalog Index

Complete inventory of VaultMesh infrastructure capabilities and cross-references.

| ID | Capability | Pages |
|---|---|---|
| VM-001 | Sovereign mesh network via Tailscale MagicDNS (story-ule.ts.net) | 1, 2, 3 |
| VM-002 | Per-node ed25519 SSH keys with IdentitiesOnly isolation | 1, 2, 4 |
| VM-003 | Cryptographic proof system with Merkle tree receipts | 1, 5, 6 |
| VM-004 | Multi-tier node architecture (Forge/Mine/Gate/Lab/Mobile) | 1, 2, 3 |
| VM-005 | libvirt/KVM virtualization on BRICK hypervisor | 2, 3 |
| VM-006 | Dual-vault pattern (Vaultwarden + HashiCorp Vault) | 1, 10 |
| VM-007 | Cross-platform support (Arch, Debian, BlackArch, Android/Termux, iOS) | 2, 4 |
| VM-008 | Lawchain compliance ledger integration | 5, 6 |
| VM-009 | Oracle reasoning engine with tactical chains | 5, 7 |
| VM-010 | Shield defensive monitoring system + TEM | 7, 8 |
| VM-011 | AppSec toolchain (Nuclei, Trivy, Semgrep, TruffleHog) | 7, 8 |
| VM-012 | Proof anchoring (local, OTS, ETH, mesh attestation) | 5, 6 |
| VM-013 | Braid protocol for foreign ledger interop | 5, 6 |
| VM-014 | MCP server integration for AI orchestration | 7, 8 |
| VM-015 | Cockpit/VNC console access for VMs | 3, 4 |
| VM-016 | SSH ControlMaster multiplexing for performance | 1, 4 |
| VM-017 | Forge Flow (nexus-0 → GitLab → CI → shield-vm) | 9 |
| VM-018 | LAN fallback addressing when Tailscale unavailable | 1, 2 |
| VM-019 | mesh-stack-migration portable deployment bundle | 1, 3, 10 |
| VM-020 | Agent task automation with scheduled triggers | 7, 8 |
| VM-021 | GitLab CI/CD on gate-vm (mesh-core-01) | 9, 10 |
| VM-022 | Grafana + Prometheus observability stack | 1, 10 |
| VM-023 | Lab HV experimental nodes (phoenix-01, lab-mesh-01, etc.) | 2, 10 |

1. Infrastructure Overview

VaultMesh runs on a sovereign mesh of home, cloud, and virtual nodes. Core services (GitLab, monitoring, backup, dual-vault) live on the BRICK hypervisor and v1-nl-gate, with all access flowing over a Tailscale-powered SSH fabric.

Key Findings

  • Core "mesh-core-01" stack runs on a Debian VM (gate-vm) hosted on brick
  • External edge/gate server (v1-nl-gate) fronts public connectivity and future tunnels
  • shield-vm acts as the OffSec / TEM / machine-secrets node
  • Dual-vault pattern: Vaultwarden for human secrets, HashiCorp Vault for machine/app secrets
  • Tailscale tailnet + per-node SSH keys provide zero-trust style access across all layers
  • Grafana + Prometheus give observability for both infrastructure and proof engines

Core Components

  • Tailscale mesh network (story-ule.ts.net tailnet)
  • GitLab (self-hosted) on gate-vm for source, CI, and artifacts
  • MinIO object storage for backups and artifacts
  • PostgreSQL for GitLab and future ledgers
  • Prometheus + Grafana for metrics and dashboards
  • Vaultwarden (human credentials) + HashiCorp Vault (machine secrets)
  • shield-vm: OffSec agents, TEM daemon, security experiments
  • lab HV: experimental cluster for Phoenix/PSI and chaos drills

Workflows / Pipelines

  • Forge Flow: Android/laptop → SSH (Tailscale) → nexus-0 → edit/test → git push → GitLab on gate-vm → CI → deploy to shield-vm / lab
  • Backup Flow: mesh-stack-migration bundle backs up GitLab/Postgres/Vaultwarden to MinIO with freshness monitoring and restore scripts (freshness check sketched after this list)
  • Proof Flow: VaultMesh engines emit receipts and Merkle roots; DevOps release pipeline anchors PROOF.json and ROOT.txt to external ledgers
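
The backup-freshness idea is conceptually tiny. Below is a minimal sketch, assuming a MinIO endpoint at minio.mesh.local:9000, a gitlab-backups bucket, and an illustrative metric name — all assumptions, not the actual service:

# Sketch: export the age of the newest MinIO backup object as a Prometheus gauge.
# Endpoint, bucket, port, and metric names are illustrative assumptions.
import time

from minio import Minio
from prometheus_client import Gauge, start_http_server

BACKUP_AGE = Gauge("vaultmesh_backup_age_seconds",
                   "Seconds since the newest backup object was written")

def newest_backup_age(client: Minio, bucket: str) -> float:
    objects = client.list_objects(bucket, recursive=True)
    newest = max(obj.last_modified for obj in objects)
    return time.time() - newest.timestamp()

if __name__ == "__main__":
    client = Minio("minio.mesh.local:9000",
                   access_key="…", secret_key="…", secure=False)
    start_http_server(9109)          # Prometheus scrape target
    while True:
        BACKUP_AGE.set(newest_backup_age(client, "gitlab-backups"))
        time.sleep(300)              # refresh every 5 minutes

Grafana can then alert whenever the gauge exceeds the expected backup interval.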

Security Notes

  • No password SSH: ed25519 keys only, with IdentitiesOnly enforced
  • Tailscale tailnet isolates nodes from the public internet; v1-nl-gate used as controlled edge
  • Dual-vault split: Vaultwarden for human secrets; HashiCorp Vault for machine/app secrets and CI
  • Backups stored in MinIO, monitored by backup-freshness service with Prometheus metrics and Grafana alerts

2. Node Topology

VaultMesh spans four primary tiers—Forge, Mine, Gate, and Lab—with mobile endpoints riding on top. The BRICK hypervisor anchors the virtualization layer, while v1-nl-gate acts as the outer gate.

Key Findings

  • Clear separation between Forge (nexus-0), Core Mesh (gate-vm on brick), Edge Gate (v1-nl-gate), and Lab HV (ephemeral)
  • BRICK hypervisor hosts the critical core VMs: debian-golden (template), gate-vm (mesh-core-01), shield-vm (shield-01)
  • Tailscale tailnet binds everything together with MagicDNS and per-node hostnames
  • v1-nl-gate is ready to act as external ingress or exit node for future services
  • Node roles are stable but designed to evolve; lab nodes are intentionally ephemeral

Forge Nodes

| Node | Hostname | OS | Role |
|---|---|---|---|
| nexus-0 | 100.67.39.1 (Tailscale) | BlackArch | Primary forge (dev) |
| kali-forge | (Tailscale IP) | Kali | Secondary OffSec lab |

Mine Nodes — Primary Infrastructure

| Node | Hostname | OS | Role |
|---|---|---|---|
| gamma | gamma.story-ule.ts.net | Arch Linux | Home primary |
| beta | beta.story-ule.ts.net | Arch Linux | Backup node |
| brick | brick.story-ule.ts.net | Debian | Dell server, HV |
| w3 | w3.story-ule.ts.net | Raspbian | Raspberry Pi node |

Gate Nodes — Edge / Exit

| Node | Hostname | OS | Role |
|---|---|---|---|
| v1-nl-gate | v1-nl-gate.story-ule.ts.net | Debian | Netherlands external gate |
| gate-vm | gate-vm.story-ule.ts.net | Debian | mesh-core-01 (core stack) |

VM Nodes — On brick (libvirt/KVM)

| Node | Hostname | OS | Role |
|---|---|---|---|
| debian-golden | debian-golden.story-ule.ts.net | Debian | Golden image / template |
| gate-vm | gate-vm.story-ule.ts.net | Debian | Core services (GitLab, etc.) |
| shield-vm | shield-vm.story-ule.ts.net | Debian | Shield / TEM / machine vault |

Lab Nodes — Experimental (Lab HV)

| Node | Hostname | Role |
|---|---|---|
| lab-mesh-01 | lab-mesh-01 | Multi-node mesh tests |
| lab-agent-01 | lab-agent-01 | Agent/orchestration experiments |
| lab-chaos-01 | lab-chaos-01 | Chaos/failure drills |
| phoenix-01 | phoenix-01 | Phoenix/PSI prototypes |

Mobile Nodes

| Node | Hostname | OS | SSH Port |
|---|---|---|---|
| shield | shield.story-ule.ts.net | Android/Termux | 22 |
| bank-mobile | bank-mobile.story-ule.ts.net | iOS | 8022 |

LAN Fallbacks

| Node | LAN IP |
|---|---|
| gamma | 192.168.0.191 |
| brick | 192.168.0.119 |
| beta | 192.168.0.236 |
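
A simple way to wire these fallbacks into the SSH fabric is a parallel -lan alias per node, reusing the per-node keys from section 4. The alias naming is a suggestion, not an existing convention:

Host gamma-lan
    HostName 192.168.0.191
    IdentityFile ~/.ssh/id_gamma

Host brick-lan
    HostName 192.168.0.119
    IdentityFile ~/.ssh/id_brick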

3. Virtualization Layer

The BRICK server runs libvirt/KVM and hosts the core VaultMesh VMs: debian-golden (template), gate-vm (mesh-core-01), and shield-vm (shield-01). Cockpit and VNC provide management and console access.

Key Findings

  • BRICK is the single hypervisor for core VaultMesh VMs
  • debian-golden serves as a reusable golden image to clone new VMs
  • gate-vm runs the mesh-stack-migration bundle (GitLab, MinIO, Prometheus, Grafana, Vaultwarden, backup-freshness, etc.)
  • shield-vm is the Shield/OffSec node and home of the machine-secrets vault and TEM stack
  • VM networking uses libvirt NAT (192.168.122.x), with VNC reachable via SSH tunnels

VM Network Layout

| VM | NAT IP | VNC Port | Role |
|---|---|---|---|
| debian-golden | 192.168.122.187 | 5900 | Golden image / base template |
| gate-vm | 192.168.122.236 | 5901 | mesh-core-01 core stack host |
| shield-vm | 192.168.122.73 | 5902 | Shield/OffSec/TEM + machine vault |

Workflows

  • VM Management: Cockpit → https://brick:9090 → "Virtual Machines"
  • Console Access: ssh -L 5901:localhost:5901 brick, then point a VNC client at vnc://localhost:5901
  • Image Pipeline: Update debian-golden → snapshot → clone → new VM (see the commands after this list)
  • Join to Mesh: Boot VM → configure SSH → join Tailscale → register in SSH config
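
The console-access and image-pipeline steps map to roughly these commands. This is a sketch: snapshot and clone names are placeholders.

# Console access: tunnel the VM's VNC port through brick, then attach a viewer
ssh -L 5901:localhost:5901 brick
vncviewer localhost:5901          # i.e. vnc://localhost:5901 from the laptop

# Image pipeline: snapshot the golden image, then clone it into a new VM
ssh brick 'virsh snapshot-create-as debian-golden pre-clone'
ssh brick 'virt-clone --original debian-golden --name new-vm-01 --auto-clone'
# virt-clone requires debian-golden to be shut down while cloning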

Security Notes

  • VNC ports are not exposed directly; they're reached via SSH tunnel into brick
  • Each VM uses its own SSH host keys and per-node authorized_keys
  • NAT isolation (192.168.122.x) reduces blast radius from VM compromise
  • Installing Tailscale inside gate-vm/shield-vm avoids public exposure

Dependencies

  • libvirt, qemu-kvm, Cockpit, cockpit-machines on brick
  • SSH and Tailscale inside each VM (where needed)
  • TigerVNC or similar client on the operator's laptop

4. SSH Key Architecture

VaultMesh uses a strict per-node ed25519 SSH key model with IdentitiesOnly isolation, ControlMaster multiplexing, and mesh-wide access via Tailscale.

Key Findings

  • One keypair per destination node (id_gamma, id_brick, id_v1-nl-gate, id_gate-vm, id_shield-vm, etc.)
  • IdentitiesOnly enforces key isolation and prevents cross-host key probing
  • ControlMaster/ControlPath provide fast multiplexed SSH sessions
  • Tailscale hostnames (story-ule.ts.net) give stable addressing; LAN IPs are fallback
  • External service keys (GitHub/GitLab) are separate from infra keys

Key Inventory (Infra Nodes)

| Key File | Target Node | Algorithm |
|---|---|---|
| id_gamma | gamma | ed25519 |
| id_beta | beta | ed25519 |
| id_brick | brick | ed25519 |
| id_w3 | w3 | ed25519 |
| id_v1-nl-gate | v1-nl-gate | ed25519 |
| id_gate-vm | gate-vm | ed25519 |
| id_debian-golden | debian-golden | ed25519 |
| id_shield-vm | shield-vm | ed25519 |

Forge + Mobile Keys

| Key File | Target | Algorithm |
|---|---|---|
| id_nexus-0 | nexus-0 | ed25519 |
| id_kali-forge | kali-forge | ed25519 |
| id_shield | shield | ed25519 |
| id_bank-mobile | bank-mobile | ed25519 |

External Service Keys

| Key File | Service |
|---|---|
| id_ed25519_github | GitHub |
| id_ed25519_gitlab | GitLab |

SSH Config Structure

Host *
    ServerAliveInterval 30
    ServerAliveCountMax 3
    TCPKeepAlive yes
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
    IdentitiesOnly yes
    HashKnownHosts no
    StrictHostKeyChecking accept-new
    AddKeysToAgent yes
    # UseKeychain is macOS-only; IgnoreUnknown prevents errors on other platforms
    IgnoreUnknown UseKeychain
    UseKeychain yes
    Compression yes

Host nexus-0
    HostName 100.67.39.1
    User root
    IdentityFile ~/.ssh/id_nexus-0

Host brick
    HostName brick.story-ule.ts.net
    User sovereign
    IdentityFile ~/.ssh/id_brick

Host gate-vm
    HostName gate-vm.story-ule.ts.net
    User debian
    IdentityFile ~/.ssh/id_gate-vm

Host shield-vm
    HostName shield-vm.story-ule.ts.net
    User debian
    IdentityFile ~/.ssh/id_shield-vm

Security Notes

  • ed25519 keys provide strong security with small keys/signatures
  • IdentitiesOnly ensures ssh never offers the wrong key to the wrong host
  • StrictHostKeyChecking=accept-new uses TOFU while still catching host key changes
  • No password authentication; all critical nodes are key-only
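
Minting a key for a new node follows the same pattern every time. A sketch for a hypothetical node new-node (the user and hostname are illustrative):

ssh-keygen -t ed25519 -f ~/.ssh/id_new-node -C "new-node"
ssh-copy-id -i ~/.ssh/id_new-node.pub sovereign@new-node.story-ule.ts.net

A matching Host new-node stanza with IdentityFile ~/.ssh/id_new-node then keeps the key isolated under IdentitiesOnly.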

5. Cryptographic Proof System

VaultMesh uses a Merkle-tree-based proof system with receipts, roots, and cross-ledger anchoring. Each serious action (deploy, anchor, oracle decision, incident handling) emits a receipt.

Key Findings

  • All significant actions generate cryptographic receipts in append-only logs
  • Merkle trees allow efficient inclusion proofs for large sets of receipts
  • Anchors can be written to local files, Bitcoin (OTS), Ethereum, or mesh peers
  • The release pipeline for vm-spawn automatically computes Merkle roots and anchors proof artifacts
  • Braid-style interoperability allows importing and emitting foreign ledger roots

Proof Lifecycle

  1. Action occurs (e.g., Guardian anchor, deployment, oracle decision)
  2. proof_generate creates a signed receipt with a Blake3 hash of the canonical JSON
  3. Receipts accumulate until a batch threshold is reached
  4. proof_batch constructs a Merkle tree and computes the root
  5. proof_anchor_* writes the root to local files, timestamps, or blockchains
  6. proof_verify allows any future verifier to confirm receipt integrity against a given root
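
A minimal sketch of steps 4 and 6, assuming Blake3 node hashing and duplicate-last padding; the actual tree layout is not specified in this catalog:

from blake3 import blake3

def merkle_root(leaf_hashes: list[bytes]) -> bytes:
    """Fold a batch of receipt hashes into a single root (step 4)."""
    if not leaf_hashes:
        raise ValueError("empty batch")
    level = list(leaf_hashes)
    while len(level) > 1:
        if len(level) % 2:                      # odd level: duplicate the last node
            level.append(level[-1])
        level = [blake3(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

def verify_inclusion(leaf: bytes, path: list[tuple[bytes, str]], root: bytes) -> bool:
    """Recompute the root from a leaf and its sibling path (step 6)."""
    node = leaf
    for sibling, side in path:                  # side: "L" or "R" relative to node
        node = (blake3(sibling + node).digest() if side == "L"
                else blake3(node + sibling).digest())
    return node == root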

Anchoring Strategies

| Type | Method | Durability |
|---|---|---|
| local | Files in data/anchors/ | Node-local |
| ots | OpenTimestamps → Bitcoin | Public blockchain |
| eth | Calldata/contract → Ethereum | Public blockchain |
| mesh | Cross-attest via other nodes | Federated durability |

Braid Protocol

  • braid_import: import foreign ledger roots from other chains/nodes
  • braid_emit: expose local roots for others to import
  • braid_status: track imported vs. local roots and flag regressions
  • The protocol ensures root sequences are strictly advancing (no rollback without detection)
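
A conceptual sketch of that no-rollback rule, assuming each foreign chain's roots arrive with a sequence number (the real braid state layout is not specified here):

def accept_foreign_root(state: dict, chain_id: str, seq: int, root: str) -> bool:
    """Accept a foreign root only if its sequence strictly advances (no rollback)."""
    last_seq, _ = state.get(chain_id, (-1, None))
    if seq <= last_seq:
        return False              # replay or rollback: flag for investigation
    state[chain_id] = (seq, root)
    return True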

Receipt Schema (Conceptual)

{
  "proof_id": "uuid",
  "action": "guardian_anchor",
  "timestamp": "ISO8601",
  "data_hash": "blake3_hex",
  "signature": "ed25519_sig",
  "witnesses": ["node_id"],
  "chain_prev": "prev_proof_id"
}
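
A sketch of how the signature field might bind the rest of the receipt, assuming the same canonical sorted-key JSON used elsewhere in the catalog; the cryptography library stands in for whatever the engines actually use:

import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)

def _canonical(receipt: dict) -> bytes:
    # Sign everything except the signature field itself, in canonical form
    unsigned = {k: v for k, v in receipt.items() if k != "signature"}
    return json.dumps(unsigned, sort_keys=True).encode()

def sign_receipt(receipt: dict, key: Ed25519PrivateKey) -> dict:
    receipt["signature"] = key.sign(_canonical(receipt)).hex()
    return receipt

def verify_receipt(receipt: dict, pub: Ed25519PublicKey) -> bool:
    try:
        pub.verify(bytes.fromhex(receipt["signature"]), _canonical(receipt))
        return True
    except InvalidSignature:
        return False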

Security Notes

  • Blake3 hashing for speed and modern security
  • Ed25519 signatures for authenticity and non-repudiation
  • Merkle trees make inclusion proofs O(log n)
  • Multiple anchoring paths provide defense in depth against ledger loss

6. Lawchain Compliance Ledger

Lawchain is the compliance-focused ledger that tracks regulatory obligations, oracle answers, and audit trails via receipts. It integrates with the proof system to ensure every compliance answer has a cryptographic backbone.

Key Findings

  • Oracle answers are validated against a schema before being recorded
  • Each answer is hashed and bound into a receipt, linking legal semantics to proofs
  • Federation metrics allow multi-node Lawchain sync across the mesh
  • Policy evaluation is driven by JSON inputs and produces JSON results for downstream tools

Oracle Answer Schema (vm_oracle_answer_v1)

{
  "question": "string",
  "answer_text": "string",
  "citations": [{
    "document_id": "string",
    "framework": "string",
    "excerpt": "string"
  }],
  "compliance_flags": {
    "gdpr_relevant": true,
    "ai_act_relevant": false,
    "nis2_relevant": true
  },
  "gaps": ["string"],
  "insufficient_context": false,
  "confidence": "high"
}

Compliance Q&A Workflow

  1. Operator (or system) asks Lawchain a question
  2. Retrieval (RAG) gathers context from policy docs and regulations
  3. LLM generates an answer draft
  4. Answer is validated against vm_oracle_answer_v1 schema
  5. Hash (Blake3 over canonical JSON) computed and receipt generated
  6. Receipt anchored via proof system and stored in Lawchain
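
Steps 4 and 5 reduce to a few lines. The blake3 call mirrors the hash formula in the Security Notes below; the jsonschema usage is an assumption about how validation is wired:

import json

import jsonschema                 # assumed validation library
from blake3 import blake3

def record_answer(answer: dict, schema: dict) -> str:
    jsonschema.validate(instance=answer, schema=schema)   # step 4: raises on violation
    canonical = json.dumps(answer, sort_keys=True)        # canonical JSON
    return blake3(canonical.encode()).hexdigest()         # step 5: hash for the receipt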

Compliance Frameworks Tracked

  • GDPR: data protection and subject rights
  • EU AI Act: risk classification, obligations, and logs
  • NIS2: network and information security
  • Custom extensions can map additional frameworks (e.g., SOC2, ISO 27001)

Security Notes

  • Answer hash computed as blake3(json.dumps(answer, sort_keys=True))
  • Receipts bind answer content, timestamps, and possibly node identity
  • gaps and insufficient_context prevent fake certainty in legal answers
  • Citations must reference real sources, enabling audit of answer provenance

7. Oracle Engine & Shield Defense

The Oracle Engine provides structured reason → decide → act chains, while Shield and TEM form the defensive veil. Together they detect threats, log them to the proof system, and (optionally) orchestrate responses.

Key Findings

  • Oracle chains decisions through explicit reasoning steps, not opaque actions
  • Every significant decision can emit receipts into the proof spine
  • Shield monitors multiple vectors (network, process, file, device, etc.)
  • Response levels span from passive logging to active isolation or countermeasures
  • Agent tasks allow scheduled or triggered operations (e.g., periodic scans)

Oracle Tools

| Tool | Purpose |
|---|---|
| oracle_status | Node status and capabilities |
| oracle_reason | Analyze situation, propose actions |
| oracle_decide | Make autonomous decision |
| oracle_tactical_chain | Full reason → decide → act chain |

Oracle Tactical Chain Flow

  1. Context: Collect current state (logs, metrics, alerts, lawchain state)
  2. Reason: oracle_reason produces candidate actions with justifications
  3. Decide: oracle_decide selects an action based on risk tolerance and constraints
  4. Act: Execute playbooks, or keep in dry-run mode for simulation
  5. Prove: Generate a receipt and anchor via proof system (optional but recommended)
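
As a skeleton (all names hypothetical; the real entry points are the oracle_* tools above), the chain with its dry-run default might look like:

from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    justification: str
    dry_run: bool = True            # dangerous actions default to simulation

def reason(context: dict) -> list[Decision]:
    # Stand-in for oracle_reason: weigh logs/metrics/lawchain state, propose actions
    if context.get("port_scan_detected"):
        return [Decision("block_source_ip", "repeated scans from a single source")]
    return [Decision("noop", "nothing actionable in current context")]

def decide(candidates: list[Decision], risk_tolerance: str) -> Decision:
    # Stand-in for oracle_decide: low tolerance keeps everything in dry-run
    chosen = candidates[0]
    chosen.dry_run = (risk_tolerance == "low")
    return chosen

def tactical_chain(context: dict, risk_tolerance: str = "low") -> Decision:
    decision = decide(reason(context), risk_tolerance)
    if not decision.dry_run:
        print(f"executing playbook: {decision.action}")        # step 4: act
    print("receipt:", {"action": decision.action,              # step 5: prove
                       "dry_run": decision.dry_run,
                       "why": decision.justification})
    return decision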

Shield Monitor Vectors

| Vector | Detection Capability |
|---|---|
| network | Port scans, unusual flows |
| wifi | Rogue APs, deauth attempts |
| bluetooth | Device enumeration/anomalies |
| usb | Storage/HID abuse |
| process | Suspicious binaries, behavior |
| file | Unauthorized modifications |

Shield Response Levels

| Level | Action |
|---|---|
| log | Record event only |
| alert | Notify operator (Slack/email/etc.) |
| block | Prevent connection/action |
| isolate | Quarantine node/container/service |
| counter | Active response (e.g., honeypots) |

Security Notes

  • Dry-run mode is default for dangerous operations; production actions require explicit opt-in
  • Risk tolerance levels gate what Shield/TEM may do without human approval
  • All automated decisions can be bound to receipts for post-incident audit

8. AppSec Toolchain

VaultMesh uses an integrated application security toolchain rooted on shield-vm and CI pipelines. It combines vulnerability scanning, secret detection, SBOM generation, and IaC analysis.

Key Findings

  • Nuclei, Trivy, Semgrep, TruffleHog, Gitleaks, Checkov, Syft, and Grype cover distinct layers
  • shield-vm is the natural home for heavy security scans and OffSec tooling
  • CI pipelines can call out to shield-vm or run scanners directly in job containers
  • Secret detection runs in both pre-commit and CI stages for defense-in-depth
  • SBOM generation and vulnerability scanning support long-term supply chain tracking

Tool Capabilities

| Tool | Target Types | Output |
|---|---|---|
| nuclei | URLs, IPs, domains | Findings by severity |
| trivy | Images, dirs, repos, SBOMs | CVEs, secrets, configs |
| semgrep | Source code directories | Security findings |
| trufflehog | Git, S3, GCS, etc. | Verified secrets |
| gitleaks | Git repos, filesystems | Secret locations |
| checkov | Terraform, K8s, Helm, etc. | Misconfigurations |
| syft | Images, dirs, archives | CycloneDX/SPDX SBOM |
| grype | Images, dirs, SBOMs | Vulnerability list |

MCP Tools

  • offsec_appsec_nuclei_scan
  • offsec_appsec_trivy_scan
  • offsec_appsec_semgrep_scan
  • offsec_appsec_trufflehog_scan
  • offsec_appsec_gitleaks_scan
  • offsec_appsec_checkov_scan
  • offsec_appsec_syft_sbom
  • offsec_appsec_grype_scan

Workflows

  1. SBOM Pipeline: Syft → produce CycloneDX JSON → Grype → vulnerability report (commands below)
  2. Pre-merge Scans: CI job runs Semgrep, Trivy, Gitleaks on merge requests
  3. Periodic Deep Scans: shield-vm runs scheduled AppSec scans, logging high-severity findings
  4. Policy Integration: High-severity or critical findings feed into Lawchain/Lawchain-like policies
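
The SBOM pipeline in step 1 comes down to two commands end to end; the path and severity gate here are examples:

syft dir:/opt/vaultmesh -o cyclonedx-json > sbom.cdx.json
grype sbom:sbom.cdx.json --fail-on high      # non-zero exit on High/Critical findings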

Security Notes

  • Nuclei and Trivy should be rate-limited when targeting external assets
  • Secret detection in CI prefers verified-only modes (e.g., TruffleHog's --only-verified) to reduce noise
  • Baseline files can exclude accepted findings while still tracking new issues
  • AppSec findings for high-value systems may be recorded as receipts in the proof system

9. Forge Flow — From Phone to Shield

The Forge Flow describes how code moves from the Sovereign's phone and forge node (nexus-0) through GitLab on gate-vm, into CI, and finally onto shield-vm and lab nodes.

Key Findings

  • Primary forge is nexus-0 (BlackArch), reachable via Tailscale from Android/laptop
  • vaultmesh repo lives on nexus-0 under /root/work/vaultmesh
  • Git remote points to GitLab on gate-vm (gitlab.mesh.local)
  • GitLab CI handles lint → test → build → deploy
  • Production-like deployments land on shield-vm; experiments land on Lab HV nodes

Forge Flow Diagram

Android / Laptop
    ↓ (Tailscale SSH)
nexus-0 (BlackArch forge)
    ↓ (git push)
GitLab @ gate-vm (mesh-core-01)
    ↓ (CI: lint → test → build)
shield-vm (Shield / TEM) and Lab HV (phoenix-01, etc.)

Steps

1. Inception (Connect to Forge)

ssh VaultSovereign@100.67.39.1      # nexus-0 via Tailscale
tmux attach -t sovereign || tmux new -s sovereign

2. Forge (Edit & Test)

cd /root/work/vaultmesh
nvim .
python3 -m pytest tests/ -v
python3 cli/vm_cli.py guardian status
python3 cli/vm_cli.py console sessions

3. Transmit (Git Push to GitLab)

git add -A
git commit -m "feat(guardian): improve anchor receipts"
git push origin main   # or feature branch

4. Transform (GitLab CI on gate-vm)

  • .gitlab-ci.yml stages: lint → test → build → deploy
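
A minimal shape for that pipeline, showing only the test and deploy jobs (a sketch, not the repository's actual .gitlab-ci.yml):

stages: [lint, test, build, deploy]

test:
  stage: test
  script:
    - python3 -m pytest tests/ -v              # same command used on the forge

deploy_shield:
  stage: deploy
  script:
    - ssh shield-vm 'cd /opt/vaultmesh && git pull'
  only:
    - main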

5. Manifest (Deploy to Shield or Lab)

  • CI deploy job: main → shield-vm, lab branches → lab-mesh-01 / phoenix-01
  • Manual fallback: ssh shield-vm 'cd /opt/vaultmesh && git pull' (quoted so the pull runs on shield-vm, not locally)

6. Observe (Metrics & Proofs)

  • Grafana dashboards (gate-vm) for system and proof metrics
  • Guardian CLI for roots and scrolls
  • Lawchain/oracle dashboards for compliance view

Infrastructure Roles in the Flow

  • nexus-0 → live forge, fast iteration, experiments
  • gate-vm → GitLab + CI + registry + observability
  • shield-vm → OffSec/TEM node and primary runtime for security engines
  • Lab HV → ephemeral experimentation environment

10. Canonical Infrastructure — VaultMesh v1

This page defines the canonical infrastructure for VaultMesh as of the first full catalog: which nodes exist, what runs where, and which services are considered "core mesh".

Key Findings

  • BRICK + v1-nl-gate + nexus-0 form the spine of the system
  • gate-vm (mesh-core-01) is the canonical host for the mesh-stack-migration bundle
  • shield-vm is the canonical Shield/TEM node with OffSec tooling and machine-secrets vault
  • Dual-vault pattern is standard: Vaultwarden (human), HashiCorp Vault (machine)
  • Grafana is the canonical dashboard layer; Wiki.js is explicitly not part of the new architecture

Canonical Nodes and Roles

| Node | Role | Description |
|---|---|---|
| nexus-0 | Forge | Primary dev/forge node (BlackArch) |
| brick | Hypervisor | Hosts core VMs (debian-golden, gate-vm, shield-vm) |
| v1-nl-gate | External Gate | Cloud-facing edge server, future ingress |
| gate-vm | mesh-core-01 (Core Stack) | GitLab, MinIO, Postgres, Prometheus, Grafana, Vaultwarden, backup-freshness, Traefik, WG-Easy |
| shield-vm | shield-01 (Shield/TEM) | OffSec agents, TEM, HashiCorp Vault, incidents & simulations |
| lab-* | Experimental Mesh | lab-mesh-01, lab-agent-01, lab-chaos-01, phoenix-01 |

Canonical Core Services (gate-vm / mesh-core-01)

  • GitLab: source control, CI/CD
  • MinIO: object storage & backups
  • PostgreSQL: GitLab and future service DBs
  • Prometheus: metrics
  • Grafana: dashboards (infra, backup freshness, proof metrics)
  • Vaultwarden: human password vault (browsers, logins)
  • backup-freshness: monitors MinIO backup age
  • Traefik: reverse proxy and ingress
  • WG-Easy (optional): simplified WireGuard access

Canonical Security / Shield Services (shield-vm)

  • HashiCorp Vault: machine/app secrets
  • TEM daemon: threat transmutation engine
  • OffSec tools and MCP: Oracle, Shield, and AppSec scanners
  • Agent/task scheduler: scheduled security workflows
  • Optional: local Prometheus exporters for node/security metrics

Explicitly Non-Core

  • Wiki.js is not part of canonical infra; documentation is handled via Git-based docs/portals
  • Legacy projects are marked ARCHIVE (e.g., the old offsec-shield architecture, sovereign-swarm)

Migration & Portability

mesh-stack-migration/ enables redeploying the entire core stack to a fresh host:

  1. Copy bundle → set .env → docker compose up -d (expanded below)
  2. Run FIRST-LAUNCH and DRY-RUN checklists
  3. VMs can be moved or recreated using debian-golden as base
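
Step 1, expanded into concrete commands; the host name and the .env template name are placeholders:

scp -r mesh-stack-migration/ fresh-host:~/
ssh fresh-host 'cd mesh-stack-migration && cp .env.example .env'   # fill in secrets first
ssh fresh-host 'cd mesh-stack-migration && docker compose up -d'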

Evolution Rules

If a service becomes critical and stateful, it must:

  • Emit receipts and have a documented backup/restore plan
  • Expose metrics consumable by Prometheus
  • Be referenced in the Canonical Infrastructure page with node placement

Experimental services stay on Lab HV until they prove their value.


VAULTMESH

Earth's Civilization Ledger

Solve et Coagula

vaultmesh.org • offsecshield.com • vaultmesh.earth


VaultMesh Infrastructure Catalog v2.0 — Canon v1 • VaultMesh Technologies • Dublin, Ireland