Transport Layer for Multi-Agent Systems

Kill the token tax.

Slipstream 3.1 turns verbose agent coordination into a compact wire protocol: SLIP v3 <src> <dst> <Force> <Object> [payload...].

Use it when your agents exchange a lot of routine coordination traffic: requests, status updates, handoffs, approvals, and errors. Slipstream keeps those messages compact, explicit, and easy to route. Start with LangGraph adapters and fallback. Train later only if your workload needs it.

  • 82% smaller coordination messages
  • 579 tests passing in 3.1
  • 0 core dependencies

Agent systems waste money on coordination syntax.

In real multi-agent systems, the expensive part is often not reasoning. It is agents repeatedly sending routing metadata, task wrappers, and status envelopes across every hop. Slipstream exists to compress that layer without turning the protocol into unreadable symbol soup.

What breaks

Verbose coordination scales badly.

  • Every agent-to-agent handoff pays the JSON tax again.
  • Special characters fragment under BPE tokenizers.
  • Coordination overhead eats context that should go to reasoning.
  • Flat intent vocabularies are harder for small models to learn.

Why teams adopt it

Smaller messages without hiding the intent.

  • Force + Object keeps the wire compact and still readable.
  • Alphanumeric tokens avoid the tokenizer fragmentation that kills syntactic compression.
  • Strict parsing and fallback refs make the behavior predictable in production.
  • LangGraph adapters let you pilot at the orchestration layer before you retrain anything.

Readable on the wire. Strict in the runtime.

At a glance, Slipstream is just a short, human-readable wire format. Under the hood, 3.1 adds the hard edges needed for production use: strict validation, closed core semantics, and explicit fallback behavior.

Wire grammar

SLIP v3 <src> <dst> <Force> <Object> [payload...]
  • alphanumeric tokens only
  • src and dst: 1-20 characters each
  • payload: 0-20 tokens
  • each payload token: at most 30 characters

Closed Force vocabulary

Observe Inform Ask Request Propose Commit Eval Meta Accept Reject Error Fallback
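The grammar and vocabulary rules above can be sketched as a strict parser. This is an illustrative stand-in, not the actual slipcore implementation, but it enforces the same limits:

```python
import re

# Closed Force vocabulary from the spec above.
FORCES = frozenset({
    "Observe", "Inform", "Ask", "Request", "Propose", "Commit",
    "Eval", "Meta", "Accept", "Reject", "Error", "Fallback",
})
TOKEN = re.compile(r"^[A-Za-z0-9]+$")  # alphanumeric tokens only


def parse_slip(wire: str) -> dict:
    """Parse a SLIP v3 message, rejecting anything outside the grammar."""
    parts = wire.split()
    if len(parts) < 6 or parts[0] != "SLIP" or parts[1] != "v3":
        raise ValueError("not a SLIP v3 message")
    src, dst, force, obj, *payload = parts[2:]
    for tok in (src, dst, force, obj, *payload):
        if not TOKEN.match(tok):
            raise ValueError(f"non-alphanumeric token: {tok!r}")
    if not (1 <= len(src) <= 20 and 1 <= len(dst) <= 20):
        raise ValueError("src and dst must be 1-20 characters")
    if force not in FORCES:
        raise ValueError(f"unknown Force: {force!r}")
    if len(payload) > 20 or any(len(t) > 30 for t in payload):
        raise ValueError("payload exceeds 20 tokens of at most 30 chars")
    return {"src": src, "dst": dst, "force": force,
            "object": obj, "payload": payload}


msg = parse_slip("SLIP v3 devops sre Request Review build4421")
# msg["force"] == "Request", msg["payload"] == ["build4421"]
```

Because the vocabulary is closed, an unknown Force is a hard parse error rather than a silently routed message.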

Fallback stays off-wire

SLIP v3 devops sre Fallback Generic ref7f3a1b2c

Fallback content is stored out-of-band. The wire only carries a short ref token. That keeps edge cases safe without giving up the compact transport format.
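A minimal sketch of the pointer-based fallback described above. The ref scheme here (the literal prefix "ref" plus the first 8 hex characters of a SHA-256) and the in-memory store are assumptions for illustration; your fallback store can be any keyed storage:

```python
import hashlib


class FallbackStore:
    """Toy out-of-band store: full content stays here, only a ref hits the wire."""

    def __init__(self):
        self._content = {}

    def put(self, text: str) -> str:
        # Assumed ref scheme: "ref" + 8 hex chars (alphanumeric, well under
        # the 30-char payload token limit).
        ref = "ref" + hashlib.sha256(text.encode()).hexdigest()[:8]
        self._content[ref] = text
        return ref

    def get(self, ref: str) -> str:
        return self._content[ref]


store = FallbackStore()
ref = store.put("Deploy failed: upstream returned 503 during canary rollout")
wire = f"SLIP v3 devops sre Fallback Generic {ref}"
```

The arbitrarily long natural-language content never appears on the wire, so worst-case message size stays bounded.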

Add Slipstream to LangGraph without retraining your agents first.

Slipstream fits between your graph logic and your transport. Keep your existing nodes, prompts, and tools. Add encode and decode nodes around handoffs, then route on Force:Object. The built-in keyword quantizer gets you started, and fallback refs catch everything that does not quantize cleanly.

01

Keep your graph state

Your nodes still produce natural-language intent and context. Slipstream does not require a rewrite of your agent logic.

02

Encode at the boundary

The adapter turns that intent into SLIP v3 messages or fallback refs before the message leaves the handoff boundary.

03

Route and observe

Dispatch on Request:Review, Inform:Status, or Fallback:Generic, then measure fallback rate before deciding whether to train.

Install
pip install slipcore
pip install langgraph
Minimal adapter use
from slipcore import (
    LangGraphSlipstreamAdapter,
    make_encode_node,
    make_decode_node,
    make_force_object_router,
)

Try the wire format, inspect the message shape, and estimate the savings.

If you are evaluating fit for your stack, start here. Build a message, decode one back into intent, and sanity-check whether the coordination savings are material for your workload.
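A round trip might look like the sketch below. The intent table is a toy stand-in for slipcore's keyword quantizer, and the builder does no validation; it only shows the message shape:

```python
# Toy intent table (assumption): maps (Force, Object) back to readable intent.
INTENTS = {
    ("Request", "Review"): "please review",
    ("Inform", "Status"):  "status update",
}


def build(src, dst, force, obj, *payload):
    """Assemble a SLIP v3 message from its parts."""
    return " ".join(["SLIP", "v3", src, dst, force, obj, *payload])


def decode(wire):
    """Split a wire message and recover a coarse intent."""
    _, _, src, dst, force, obj, *payload = wire.split()
    intent = INTENTS.get((force, obj), "unquantized (see fallback ref)")
    return {"src": src, "dst": dst, "intent": intent, "payload": payload}


wire = build("planner", "coder", "Request", "Review", "pr512")
decoded = decode(wire)
# wire is 7 short alphanumeric tokens; decoded["intent"] == "please review"
```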

Wire builder

Wire decoder

Coordination cost calculator

Assumes the average coordination message drops from 41.9 tokens to 7.4 tokens, the 82% reduction quoted above.
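The same back-of-envelope arithmetic as the calculator, using the averages quoted above (the per-token price and message volume are example inputs, not product claims):

```python
VERBOSE_TOKENS = 41.9  # avg verbose coordination message, per the figure above
SLIP_TOKENS = 7.4      # avg SLIP v3 message

# 1 - 7.4 / 41.9 is roughly 0.82: the 82% reduction headline.
reduction = 1 - SLIP_TOKENS / VERBOSE_TOKENS


def monthly_savings(msgs_per_day: int, price_per_1k_tokens: float) -> float:
    """Dollar savings over 30 days from smaller coordination messages."""
    tokens_saved = (VERBOSE_TOKENS - SLIP_TOKENS) * msgs_per_day * 30
    return tokens_saved / 1000 * price_per_1k_tokens
```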

Everything you need to evaluate, pilot, and ship Slipstream 3.1.

Protocol you can trust

  • Strict parser and validator for the SLIP v3 wire format.
  • Closed Force vocabulary and immutable core anchors.
  • Fallback refs stay off-wire and keep long-tail messages bounded.
  • Conformance tests and release gates back the implementation.

Adoption path that starts where you are

  • Use the LangGraph adapter at the orchestration layer first.
  • Keep your existing prompts, tools, and agent behaviors.
  • Start with keyword quantization plus fallback and measure real traffic.
  • Train a model later only if your workload needs higher recall.

Do I need to retrain my agents or models before I can use Slipstream?

No. Start with orchestrator-layer encoding and decoding, route on Force:Object, and let fallback handle the long tail. Training is optional and usually comes later, after you have measured real traffic.

What happens when a message does not fit the vocabulary?

Slipstream uses pointer-based fallback. The wire carries a short ref token and the full content stays in your fallback store, so edge cases do not break the protocol or blow up message size.

Where should Slipstream sit in my stack?

Between your agent logic and your transport or message bus. In LangGraph, that usually means encode and decode nodes at the boundaries where agents hand work to each other.