
Shield Node & TEM Engine

Summary

The Shield Node is the OffSec/TEM appliance for VaultMesh. It runs on shield-vm with a dedicated MCP backend and agent stack, and its signed activity flows back into the core ledger.


Key Findings

  • Shield Node now runs as a persistent service on shield-vm (Tailscale: 100.112.202.10).
  • MCP backend listens on :8081 with /health and /mcp/command endpoints.
  • Five core OffSec agents are available (Recon, Vuln, Exploit, CTF, DFIR).
  • VaultMesh talks to the Shield Node via offsec_node_client.py and vm_cli.py offsec … commands.
  • Shield activity is designed to be captured, analyzed, and (in the next iteration) emitted as receipts for ProofChain ingestion.

Components

Component            Description
──────────────────   ──────────────────────────────────────────────────────────
Shield Node host     shield-vm (Debian, Tailscale node)
OffSec Agents stack  /opt/offsec-agents/ (Python package + virtualenv)
MCP backend          files/offsec_mcp.py (FastAPI / uvicorn)
System service       vaultmesh-mcp.service (enabled, restart on failure)
VaultMesh client     scripts/offsec_node_client.py
CLI façade           vm_cli.py offsec agents and vm_cli.py offsec shield-status

Node & Service Layout

Item              Value
────────────────  ──────────────────────────────────────────────────────────
Host              shield-vm (Tailscale IP: 100.112.202.10)
Code root         /opt/offsec-agents/
Virtualenv        /opt/offsec-agents/.venv/
Service manager   systemd (vaultmesh-mcp.service)
Port              8081/tcp (local + tailnet access)
Local state       vaultmesh.db (SQLite, node-local)
Planned receipts  /opt/offsec-agents/receipts/ (for ProofChain ingestion)

Service Configuration (systemd)

  • Unit path: /etc/systemd/system/vaultmesh-mcp.service
  • User: sovereign
  • WorkingDirectory: /opt/offsec-agents
  • ExecStart: /opt/offsec-agents/.venv/bin/uvicorn files.offsec_mcp:app --host 0.0.0.0 --port 8081
  • Environment:
    • VAULTMESH_ROOT=/opt/vaultmesh
    • TEM_DB_PATH=/opt/offsec-agents/state/tem.db
    • TEM_RECEIPTS_PATH=/opt/offsec-agents/receipts/tem
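
A unit file assembled from the settings above would look roughly like the sketch below. Restart=on-failure matches the "restart on failure" behavior noted earlier; the Description and After= ordering are assumptions, not copied from the deployed unit.

# /etc/systemd/system/vaultmesh-mcp.service (sketch; [Unit] directives assumed)
[Unit]
Description=VaultMesh Shield Node MCP backend
After=network-online.target

[Service]
User=sovereign
WorkingDirectory=/opt/offsec-agents
Environment=VAULTMESH_ROOT=/opt/vaultmesh
Environment=TEM_DB_PATH=/opt/offsec-agents/state/tem.db
Environment=TEM_RECEIPTS_PATH=/opt/offsec-agents/receipts/tem
ExecStart=/opt/offsec-agents/.venv/bin/uvicorn files.offsec_mcp:app --host 0.0.0.0 --port 8081
Restart=on-failure

[Install]
WantedBy=multi-user.target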

API Endpoints

GET /health

Returns Shield status, node and proof counts, and uptime.

{
  "status": "ok",
  "nodes": 12,
  "proofs": 0,
  "uptime": "6m"
}

POST /mcp/command

JSON body:

{
  "session_id": "string",
  "user": "string",
  "command": "string"
}

Example commands:

  • "status"
  • "mesh status"
  • "agents list"
  • "shield status"
  • "agent spawn recon example.com"
  • "agent mission <id> <target>"

VaultMesh Integration

Environment Variable

On VaultMesh host:

export OFFSEC_NODE_URL=http://100.112.202.10:8081

Client

scripts/offsec_node_client.py

Core methods:

  • health() → calls /health
  • command(command: str, session_id: str, user: str) → calls /mcp/command
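
The client script itself is not reproduced here; the sketch below shows the same two methods under two assumptions: that the base URL comes from OFFSEC_NODE_URL, and that plain HTTP via the requests library is used (the actual transport in scripts/offsec_node_client.py may differ).

import os
import requests

class OffsecNodeClient:
    def __init__(self, base_url: str | None = None):
        # Default to the tailnet address exported as OFFSEC_NODE_URL.
        self.base_url = base_url or os.environ["OFFSEC_NODE_URL"]

    def health(self) -> dict:
        # GET /health -> {"status": "ok", "nodes": ..., "proofs": ..., "uptime": ...}
        resp = requests.get(f"{self.base_url}/health", timeout=10)
        resp.raise_for_status()
        return resp.json()

    def command(self, command: str, session_id: str = "cli", user: str = "sovereign") -> dict:
        # POST /mcp/command with the JSON body documented above.
        resp = requests.post(
            f"{self.base_url}/mcp/command",
            json={"session_id": session_id, "user": user, "command": command},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

For example, OffsecNodeClient().command("agents list") issues the same request as the curl invocation shown in the Deployment Summary below.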

CLI Commands

# List agents registered on Shield Node
python3 cli/vm_cli.py offsec agents

# Show Shield health and status
python3 cli/vm_cli.py offsec shield-status

Workflows / Pipelines

1. Operator View

vm offsec shield-status  # Confirm Shield Node is up and healthy
vm offsec agents         # Verify active agent types and readiness

2. OffSec Operations (planned expansion)

  • Trigger recon, vuln scans, and missions via offsec_node_client.py
  • Store results locally in vaultmesh.db
  • Emit receipts to /opt/offsec-agents/receipts/
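
Because receipt emission is still planned, no schema is fixed yet; the sketch below shows one plausible shape, hashing a result payload and writing a content-addressed JSON envelope into the receipts directory. The kind/sha256/emitted_at field names are assumptions, not a confirmed format.

import hashlib
import json
import time
from pathlib import Path

RECEIPTS_DIR = Path("/opt/offsec-agents/receipts")

def emit_receipt(kind: str, payload: dict) -> Path:
    # Canonicalize the payload so the digest is stable across runs.
    body = json.dumps(payload, sort_keys=True).encode()
    digest = hashlib.sha256(body).hexdigest()
    receipt = {
        "kind": kind,                   # e.g. "recon", "vuln", "mission"
        "sha256": digest,               # content address of the payload
        "emitted_at": int(time.time()),
        "payload": payload,
    }
    RECEIPTS_DIR.mkdir(parents=True, exist_ok=True)
    path = RECEIPTS_DIR / f"{digest}.json"
    path.write_text(json.dumps(receipt, indent=2))
    return path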

3. VaultMesh Ingestion (planned)

  • Guardian / automation jobs pull Shield receipts into VaultMesh ProofChain
  • Lawchain and compliance scrolls can reference Shield evidence directly
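
Ingestion is equally hypothetical at this stage. A Guardian-style job could scan the receipts directory, recompute each payload hash against the recorded digest, and hand verified receipts to ProofChain; this sketch assumes the envelope from the previous example and leaves the ProofChain handoff as a placeholder.

import hashlib
import json
from pathlib import Path

def verify_receipt(path: Path) -> bool:
    # Recompute the payload digest and compare it to the recorded sha256.
    receipt = json.loads(path.read_text())
    body = json.dumps(receipt["payload"], sort_keys=True).encode()
    return hashlib.sha256(body).hexdigest() == receipt["sha256"]

def ingest(receipts_dir: Path) -> None:
    for path in sorted(receipts_dir.glob("*.json")):
        if verify_receipt(path):
            # Handoff to ProofChain would happen here (ingestion API not documented).
            print(f"verified: {path.name}")
        else:
            print(f"rejected (hash mismatch): {path.name}")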

Security Notes

  • Shield Node is an OffSec/TEM surface, isolated on its own host (shield-vm)
  • Access path is limited to Tailscale + SSH; no public internet exposure
  • SQLite DB and receipts directory are kept local to /opt/offsec-agents
  • Systemd ensures automatic restart on crash or failure
  • TEM-oriented commands (tem status, tem recall) reserved for future expansion

Dependencies

  • Python 3.13, python3-venv, and python3-pip on shield-vm
  • offsec-agents installed editable in /opt/offsec-agents
  • MCP dependencies from files/requirements-mcp.txt
  • Tailscale client running on shield-vm
  • VaultMesh core with OFFSEC_NODE_URL configured

Deployment Summary

  1. Code synced to /opt/offsec-agents on shield-vm
  2. Virtualenv .venv created and offsec-agents installed editable
  3. MCP dependencies installed from files/requirements-mcp.txt
  4. vaultmesh-mcp.service installed, enabled, and started under the sovereign user
  5. Health verified via:
    curl http://localhost:8081/health
    curl -X POST http://localhost:8081/mcp/command \
      -H "Content-Type: application/json" \
      -d '{"session_id":"test","user":"sovereign","command":"agents list"}'
    

Position in Overall Architecture

VaultMesh (core ledger)              Shield Node (offsec-agents)
─────────────────────────            ───────────────────────────
Rust engines                         Python agents + TEM
ProofChain/Guardian                  MCP backend (:8081)
vm_cli.py                            Nexus consoles
offsec_node_client.py ─────────────► /mcp/command
receipt ingestion ◄────────────────── /opt/offsec-agents/receipts/

VaultMesh: "What happened is provable." Shield Node: "What happens at the edge is observed, remembered, and signed."

The link between them is a narrow, explicit HTTP + receipts bridge, not shared mutable state.