sluice
safe channels for untrusted work
github
v1 · shipping · April 2026

Feed untrusted content
to local LLMs — safely.

A suite of CLI tools that sits between the messy outside world and your model. Every byte passes through a shared three-phase guard pipeline before your LLM ever sees it. Local inference. No cloud safety APIs. Guards on by default.

Constructed wetlands use reed beds to filter water until it is safe to use. Sluice does the same for content before it reaches an LLM.
get started →
~/work/agent zsh
 $ reed search "prompt injection attacks"
[leakage] Gemma 4 26B · pass
[brave] context call · 12 urls
[shield] ShieldGemma 9B · pass
[canary] Gemma 4 26B · pass
✓ safe — 3.2kb markdown → stdout
 $ cellmate enrich --in reviews.csv --schema r.yaml
row 1/240 · guards pass · enrich ok
row 2/240 · [BLOCKED: shield: override-attempt]
row 3/240 · guards pass · enrich ok
...
✓ 239/240 enriched · 1 blocked for triage 
01 · why sluice

Agents act. Every action is a liability.

Letting an LLM call out to the world surfaces four distinct failure modes. Each has a different shape — and each deserves a different fix. Existing tooling either sends your data to a cloud safety API or just doesn’t check at all.

02 · the pipeline

Defense in depth across the LLM‑to‑world boundary.

Three guard phases, shared across every Sluice binary. Each layer narrows what the next has to worry about — no single layer has to be perfect.

INPUT
query + untrusted content
LEAKAGE
Gemma 4 26B — blocks PII / credentials before anything leaves your machine
WORK
tool-specific: reed fetches the web · cellmate fills cells
SHIELD
ShieldGemma 9B — fast heuristic scan for known injection patterns
CANARY
Gemma 4 26B — sandwich pattern; model self-reports override attempts
OUTPUT
safe markdown / enriched cells
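
The flow above can be sketched as plain function composition. This is an illustrative sketch only, not Sluice's code: the guards are stubbed with toy checks where the real tools call local models, and the canary token is a made-up value.

```python
# Minimal, illustrative sketch of the three-phase guard pipeline.
# Not Sluice's actual implementation: each guard is a toy stand-in for a
# local model call (Gemma 4 26B, ShieldGemma 9B).

CANARY = "CANARY-7f3a"  # hypothetical token; the real tools pick their own

def leakage_guard(text):
    # LEAKAGE: block credential-like material before anything leaves the machine.
    ok = "BEGIN PRIVATE KEY" not in text and "AKIA" not in text
    return ok, None if ok else "leakage: credential-like content"

def shield_guard(text):
    # SHIELD: fast heuristic scan for known injection phrasings.
    ok = "ignore previous instructions" not in text.lower()
    return ok, None if ok else "shield: override-attempt"

def canary_prompt(untrusted):
    # CANARY: sandwich pattern. Trusted instructions sit on both sides of the
    # untrusted content, plus a token the model must echo back; a reply that
    # drops or mangles the token is treated as an override attempt.
    return (
        f"Summarize the content between the markers. Canary token: {CANARY}\n"
        f"--- BEGIN UNTRUSTED ---\n{untrusted}\n--- END UNTRUSTED ---\n"
        "Repeat the canary token first, then give your summary."
    )

def guarded(query, fetch, model):
    ok, why = leakage_guard(query)
    if not ok:
        return None, why
    content = fetch(query)                 # WORK: reed fetches / cellmate fills
    ok, why = shield_guard(content)
    if not ok:
        return None, why
    reply = model(canary_prompt(content))
    if CANARY not in reply:                # missing token => treat as override
        return None, "canary: override-attempt"
    return reply, None
```

Each stage only sees what the previous one let through, which is why no single layer has to be perfect.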
03 · the tools

Two binaries. One job each. Compose freely.

Small, sharp, Unix-shaped. Nothing bundled that can be done with a pipe. Same guards, same models, same audit trail across both.

cellmate

safe AI enrichment for spreadsheet data

v1 · ready

Fills new CSV columns with LLM-generated values — classification, sentiment, translation, moderation notes, any prompt-driven task — with the guard pipeline applied to every row before the enrichment model sees it. Idempotent, resumable, reproducible.

# enrich with a YAML schema
$ cellmate enrich \
    --in reviews.csv --out reviews-enriched.csv \
    --schema reviews.yaml

# test on 20 random rows before committing
$ cellmate enrich ... --sample 20

# the full toolbelt:
# enrich · describe · strip · reorder · collapse
# extract · join · rename · check
  • YAML schemas
  • Parallel workers
  • Blocked-row triage
  • Reuse prior runs
  • AI column rename
  • Describe a CSV
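
The --schema file drives the enrichment. Sluice's schema format isn't documented on this page, so the fragment below is a hypothetical sketch only: every key name is an assumption, not cellmate's real syntax.

```yaml
# Hypothetical sketch of a reviews.yaml enrichment schema.
# All key names here are assumptions, not cellmate's documented format.
columns:
  sentiment:
    prompt: "Classify the sentiment of this review as positive, negative, or mixed."
    type: enum
    values: [positive, negative, mixed]
  language:
    prompt: "Name the language the review is written in."
    type: string
```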

reed

safe web search for AI agents

v1 · ready

Fetches web content as clean, source-attributed markdown and runs the full guard pipeline on every query and every byte returned before it reaches your model.

# basic search — omlx on macOS, Ollama elsewhere
$ reed search "prompt injection attacks"

# restrict to trusted sources
$ reed search "indirect prompt injection survey" \
    --goggle '!boost site:arxiv.org'

# composable in pipelines — exit 2 = injection quarantined
$ reed search "AAPL Q1" > context.md 2> scan.json
  • Brave LLM Context
  • Depth presets
  • Goggle support
  • Freshness windows
  • JSON scan report
  • Exit-coded for pipes
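
Exit codes make the guards scriptable from any language. A hedged sketch of the dispatch logic in Python, assuming the documented convention (exit 0 = clean, exit 2 = injection quarantined); a shell stub stands in for the real reed binary so the sketch runs without it installed.

```python
import subprocess

# Dispatch on reed's documented exit codes (assumption: 0 = clean,
# 2 = injection quarantined, anything else = guard error).
# REED is a stand-in command so this sketch runs without the real binary;
# in practice it would be: ["reed", "search", "AAPL Q1"]
REED = ["sh", "-c", "printf '# results\\n'; exit 0"]

proc = subprocess.run(REED, capture_output=True, text=True)
if proc.returncode == 0:
    context = proc.stdout          # safe markdown, ready for the model
elif proc.returncode == 2:
    context = None                 # quarantined; stderr carries the scan report
else:
    raise RuntimeError("guard error")
```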
04 · under the hood

Three layers between your model and the mess.

Inside reed and cellmate sit three security primitives. Guard ships today and powers every LLM call. Heron and Convict land in v2 to bound what untrusted code can reach and what it can do when it runs.

05 · try it

Run a guarded query. See what stops.

Every example below runs end-to-end on a laptop with Ollama or omlx installed. No cloud API, no accounts. Guards on by default.

Safe web search

reed returns clean, source-attributed markdown with every byte scanned through leakage → shield → canary before you see it.

~/work/research zsh
 $ reed search "OWASP LLM Top 10" --depth shallow
[leakage] query ok
[brave] 5 urls · 3.2s
[shield] pass
[canary] pass
✓ 4.1kb markdown → stdout 

Guarded CSV enrichment

cellmate fills AI-computed columns with guards on every row. Blocked rows stay in the output, annotated, for an operator to triage.

~/work/reviews zsh
 $ cellmate enrich --in reviews.csv --schema r.yaml
row 1/12 · guards pass · enrich ok
row 2/12 · guards pass · enrich ok
row 3/12 · [BLOCKED: canary: exfil-attempt]
row 4/12 · guards pass · enrich ok
...
✓ 11/12 enriched · 1 blocked for triage 

Guards in action

A quick way to verify that the guard pipeline is wired and the models respond, and to see what a blocked row looks like, before you touch production data.

~/work/agent zsh
 $ cellmate check
[backend] omlx · 8000 · reachable
[shield] ShieldGemma 9B · 237ms
[canary] Gemma 4 26B    · 412ms
✓ guards ready
 $ cellmate enrich --in adversarial.csv --schema s.yaml
row 1/3 · guards pass · enrich ok
row 2/3 · [BLOCKED: shield: override-attempt · 0.94]
row 3/3 · [BLOCKED: canary: exfil-attempt · 0.81]
✓ 1/3 enriched · 2 blocked for triage
where we are

Small pieces, shipping in order.

v1 is about making the composition credible. v2 adds the workbench and the sandbox. Beyond is deliberately vague: the shape of the problem will change once v2 is in use.

v1 — shipping

  • cellmate CSV mode
  • cellmate pipeline
  • reed (web search)
  • guard (shield + canary)

v2 — design-complete

  • cellmate workbench (SQLite)
  • reed MCP mode
  • heron capability broker
  • convict microVM isolation

Beyond

  • Streaming inference
  • Capability composition
  • Learned schemas
  • Pre-warmed VM pool

Full roadmap in docs.