# Pipelock: AI Agent Security Proxy
Pipelock is an optional security proxy that scans AI agent traffic flowing through Sure. It protects against secret exfiltration, prompt injection, and tool poisoning.
## What Pipelock does
Pipelock runs as a separate proxy service alongside Sure with two listeners:
| Listener | Port | Direction | What it scans |
|---|---|---|---|
| Forward proxy | 8888 | Outbound (Sure to LLM) | DLP (secrets in prompts), response injection |
| MCP reverse proxy | 8889 | Inbound (agent to Sure /mcp) | Prompt injection, tool poisoning, DLP |
### Forward proxy (outbound)

When `HTTPS_PROXY=http://pipelock:8888` is set, outbound HTTPS requests from Faraday-based clients (like ruby-openai) are routed through Pipelock. It scans request bodies for leaked secrets and response bodies for prompt injection.
**Covered:** OpenAI API calls via ruby-openai (uses Faraday).

**Not covered:** SimpleFIN, Coinbase, Plaid, or anything using Net::HTTP/HTTParty directly. These bypass `HTTPS_PROXY`.
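In the Docker Compose setup, this routing is just the proxy env vars on the app services. A minimal sketch, assuming the service and hostname names used in the example compose file (`web`, `pipelock`, `db`, `redis` are assumptions here):

```yaml
# Sketch: route outbound LLM traffic through Pipelock (names are assumptions)
services:
  web:
    environment:
      HTTPS_PROXY: http://pipelock:8888
      HTTP_PROXY: http://pipelock:8888
      NO_PROXY: localhost,127.0.0.1,db,redis  # keep internal hosts direct
```

`NO_PROXY` matters: without it, in-cluster traffic (database, cache) would also be pushed through the proxy.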
### MCP reverse proxy (inbound)

External AI assistants that call Sure's `/mcp` endpoint should connect through Pipelock on port 8889 instead of directly to port 3000. Pipelock scans:
- Tool call arguments (DLP, shell obfuscation detection)
- Tool responses (injection payloads)
- Session binding (detects tool inventory manipulation)
- Tool call chains (multi-step attack patterns like recon then exfil)
## Docker Compose setup

The `compose.example.ai.yml` file includes Pipelock. To use it:
1. Download the compose file and Pipelock config:

   ```bash
   curl -o compose.ai.yml https://raw.githubusercontent.com/we-promise/sure/main/compose.example.ai.yml
   curl -o pipelock.example.yaml https://raw.githubusercontent.com/we-promise/sure/main/pipelock.example.yaml
   ```

2. Start the stack:

   ```bash
   docker compose -f compose.ai.yml up -d
   ```

3. Verify Pipelock is healthy:

   ```bash
   docker compose -f compose.ai.yml ps pipelock  # Should show "healthy"
   ```
## Connecting external AI agents

External agents should use the MCP reverse proxy port:

```
http://your-server:8889
```
The agent must include the `MCP_API_TOKEN` as a Bearer token in requests. Set this in your `.env`:

```bash
MCP_API_TOKEN=generate-a-random-token
MCP_USER_EMAIL=your@email.com
```
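Many agent frameworks accept MCP server details as a small config block. A hypothetical agent-side entry might look like the following — the key names vary by framework, and `your-server` is a placeholder; only the URL and Bearer header requirement come from this guide:

```yaml
# Hypothetical agent-side MCP server entry -- key names vary by framework
mcp_servers:
  sure:
    url: http://your-server:8889
    headers:
      Authorization: "Bearer ${MCP_API_TOKEN}"
```

The important parts are that the agent targets port 8889 (Pipelock), not 3000 (Sure directly), and sends the token as a Bearer header.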
## Running without Pipelock

To use `compose.example.ai.yml` without Pipelock, remove the `pipelock` service and its `depends_on` entries from `web` and `worker`, then unset the proxy env vars (`HTTPS_PROXY`, `HTTP_PROXY`).
Or use the standard `compose.example.yml`, which does not include Pipelock.
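As a rough sketch, the edits to the compose file look like this (service names are assumptions based on the example file):

```yaml
# Sketch: removing Pipelock from compose.ai.yml (service names are assumptions)
services:
  web:
    # depends_on: [pipelock]              # remove this entry
    environment: {}
      # HTTPS_PROXY: http://pipelock:8888 # remove
      # HTTP_PROXY: http://pipelock:8888  # remove
  # pipelock: ...                         # remove the whole service block
```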
## Helm (Kubernetes) setup

Enable Pipelock in your Helm values:

```yaml
pipelock:
  enabled: true
  image:
    tag: "0.3.2"
  mode: balanced
```
This creates a separate Deployment, Service, and ConfigMap. The chart auto-injects `HTTPS_PROXY`/`HTTP_PROXY`/`NO_PROXY` into web and worker pods.
## Exposing MCP to external agents (Kubernetes)

In Kubernetes, external agents cannot reach the MCP port by default. Enable the Pipelock Ingress:

```yaml
pipelock:
  enabled: true
  ingress:
    enabled: true
    className: nginx
    hosts:
      - host: pipelock.example.com
        paths:
          - path: /
            pathType: Prefix
    tls:
      - hosts: [pipelock.example.com]
        secretName: pipelock-tls
```
Or port-forward for testing:

```bash
kubectl port-forward svc/sure-pipelock 8889:8889 -n sure
```
## Monitoring

Enable the ServiceMonitor for Prometheus scraping:

```yaml
pipelock:
  serviceMonitor:
    enabled: true
    interval: 30s
    additionalLabels:
      release: prometheus
```

Metrics are available at `/metrics` on the forward proxy port (8888).
## Eviction protection

For production, enable the PodDisruptionBudget:

```yaml
pipelock:
  pdb:
    enabled: true
    maxUnavailable: 1
```

See the Helm chart README for all configuration options.
## Pipelock configuration file

The `pipelock.example.yaml` file (Docker Compose) or ConfigMap (Helm) controls scanning behavior. Key sections:
| Section | What it controls |
|---|---|
| `mode` | `strict` (block threats), `balanced` (warn + block critical), `audit` (log only) |
| `forward_proxy` | Outbound HTTPS scanning (tunnel timeouts, idle timeouts) |
| `dlp` | Data loss prevention (scan env vars, built-in patterns) |
| `response_scanning` | Scan LLM responses for prompt injection |
| `mcp_input_scanning` | Scan inbound MCP requests |
| `mcp_tool_scanning` | Validate tool calls, detect drift |
| `mcp_tool_policy` | Pre-execution rules (shell obfuscation, etc.) |
| `mcp_session_binding` | Pin tool inventory, detect manipulation |
| `tool_chain_detection` | Multi-step attack patterns |
| `websocket_proxy` | WebSocket frame scanning (disabled by default) |
| `logging` | Output format (json/text), verbosity |
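Taken together, the config file is shaped roughly like this. This is an abridged sketch, not the shipped defaults — the `enabled` flags and `version` value here are illustrative assumptions; consult `pipelock.example.yaml` for the real file:

```yaml
# Abridged sketch of pipelock.yaml -- values are illustrative, not defaults
version: 1
mode: balanced
dlp:
  enabled: true
  include_defaults: true  # merge your patterns with the built-in ones
response_scanning:
  enabled: true
mcp_input_scanning:
  enabled: true
logging:
  format: json
```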
For the Helm chart, most sections are configurable via `values.yaml`. For additional sections not covered by structured values (session profiling, data budgets, kill switch), use the `extraConfig` escape hatch:
```yaml
pipelock:
  extraConfig:
    session_profiling:
      enabled: true
      max_sessions: 1000
```
## Modes

| Mode | Behavior | Use case |
|---|---|---|
| `strict` | Block all detected threats | Production with sensitive data |
| `balanced` | Warn on low-severity, block on high-severity | Default; good for most deployments |
| `audit` | Log everything, block nothing | Initial rollout, testing |
Start with `audit` mode to see what Pipelock detects without blocking anything. Review the logs, then switch to `balanced` or `strict`.
## Limitations

- The forward proxy only covers Faraday-based HTTP clients. `Net::HTTP`, HTTParty, and other libraries ignore `HTTPS_PROXY`.
- Docker Compose has no egress network policies. The `/mcp` endpoint on port 3000 is still reachable directly (auth token required). For enforcement, use Kubernetes NetworkPolicies.
- Pipelock scans text content. Binary payloads (images, file uploads) are passed through by default.
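As a sketch of that NetworkPolicy enforcement, the policy below would allow only Pipelock pods to reach the app pods on port 3000. The label selectors and namespace are assumptions and must be adjusted to the labels your chart actually sets; note also that if the web UI is served from the same port, you would need an additional `from` rule allowing your ingress controller:

```yaml
# Hypothetical NetworkPolicy -- label values and namespace are assumptions
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: sure-restrict-app-port
  namespace: sure
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: sure  # the web/worker pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app.kubernetes.io/component: pipelock
      ports:
        - protocol: TCP
          port: 3000
```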
## Troubleshooting

### Pipelock container not starting

Check that the config file is mounted correctly:

```bash
docker compose -f compose.ai.yml logs pipelock
```
Common issues:

- Missing `pipelock.example.yaml` file
- YAML syntax errors in the config
- Port conflicts (8888 or 8889 already in use)
### LLM calls failing with proxy errors

If AI chat stops working after enabling Pipelock, check the Pipelock logs for blocked requests:

```bash
docker compose -f compose.ai.yml logs pipelock --tail=50
```
If requests are being incorrectly blocked, switch to audit mode in the config file and restart:

```yaml
mode: audit
```
### MCP requests not reaching Sure

Verify the MCP upstream is configured correctly:

```bash
# Test from inside the Pipelock container
docker compose -f compose.ai.yml exec pipelock /pipelock healthcheck --addr 127.0.0.1:8888
```
Check that `MCP_API_TOKEN` and `MCP_USER_EMAIL` are set in your `.env` file and that the email matches an existing Sure user.