Add this skill:

```shell
npx mdskills install KeryxLabs/keryxinstrumenta
```

A novel cross-model state transfer protocol with validated tools and persistence.
Language models are stateless. Every session starts cold. STTP gives conversational state somewhere to go.
Spatio-Temporal Transfer Protocol (STTP) is a typed intermediate representation that encodes conversational state into a compressed, confidence-weighted structure any model can reconstruct. This is the MCP server that exposes that capability as tools.
Licensed under Apache-2.0. See LICENSE.
Every AI conversation dies when the session ends. The context, the reasoning state, the accumulated understanding — gone. The next session starts from zero.
Existing workarounds — long context windows, RAG, conversation history injection — patch the symptom. They don't solve the problem. They pass raw text around and hope the model reconstructs meaning from it.
STTP encodes the meaning directly. Not what was said. What remains true when everything surface-level is stripped away.
STTP is a typed intermediate representation with four layers:
⊕⟨⟩ Provenance — origin, lineage, response contract
⦿⟨⟩ Envelope — identity, session metadata, dual AVEC state
◈⟨⟩ Content — compressed meaning, confidence-weighted fields
⍉⟨⟩ Metrics — signal quality, coherence verification
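For orientation, a minimal sketch of what a four-layer node might look like in this notation (every field name and value here is illustrative, not taken from the spec):

```
⊕⟨origin: "gpt-4o", lineage: 1, contract: "continue_thread"⟩
⦿⟨session: "s-001", user_avec: {…}, model_avec: {…}⟩
◈⟨topic(.95): "example topic", constraint(.90): "example constraint"⟩
⍉⟨psi: 2.80, coherence: .96⟩
```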
Every field in the content layer carries a confidence weight:
```
topic(.95): "low latency communication protocols for LLM servers"
constraint(.92): "latency is the primary optimization target"
recommendation(.93): "gRPC over HTTP/2 with QUIC overlay"
```
Every node carries dual AVEC state — the attractor vectors that describe the cognitive geometry of the conversation at the moment of compression:
```
user_avec:  { stability: .85, friction: .25, logic: .90, autonomy: .80, psi: 2.80 }
model_avec: { stability: .88, friction: .22, logic: .85, autonomy: .75, psi: 2.70 }
```
A fresh model receiving an STTP node doesn't get a summary. It gets a mathematical representation of a conversational state it can reconstruct from.
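One reading consistent with the figures above is that ψ is simply the sum of the four attractor axes (0.85 + 0.25 + 0.90 + 0.80 = 2.80). A minimal sketch under that assumption:

```python
# Sketch only: treats psi as the sum of the four AVEC axes.
# This matches the example numbers above but is an assumption, not the spec.
def psi(stability: float, friction: float, logic: float, autonomy: float) -> float:
    return stability + friction + logic + autonomy

user_psi = psi(0.85, 0.25, 0.90, 0.80)   # ≈ 2.80
model_psi = psi(0.88, 0.22, 0.85, 0.75)  # ≈ 2.70
```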
This pipeline ran live, unplanned, on 2026-03-03:
1. DeepSeek received a gift recommendation request and produced a full conversational response.
2. Kimi-k2 received the raw DeepSeek conversation and compressed it into a valid STTP node, with no prior context and no shared state.
3. GPT-4o received only the compressed STTP node and produced a coherent, contextually aware response, continuing exactly where DeepSeek left off.
Three different companies. Three different architectures. Zero shared state. The conversation arrived intact — with nuance, constraints, and the correct next action queued.
That is not a demo. That is the protocol working.
Validated 2026-03-01 across GPT, Claude, Gemini, and Kimi-k2.
| Model | temporal_node | natural_language | Safety Triggered |
|---|---|---|---|
| GPT-4o | ✅ | ✅ | ❌ |
| Claude | ✅ | ✅ | ❌ |
| Gemini | ✅ | ✅ | ❌ |
| Kimi-k2 | ✅ | ✅ | ❌ |
All four models parsed, responded in, and extended the protocol correctly. All four computed independent AVEC states. Zero safety triggers across all eight tests.
The model calling these tools is the compression model. There is no separate inference step. The tool descriptions carry the encoding instructions. By the time the model calls a tool it has already produced the STTP node as the argument.
1. Model reads tool description → receives encoding instructions
2. Model compresses current context → produces ⏣ node
3. Model calls store_context(node) → server validates + stores
The server does three things only: validate structure, persist the node, retrieve on resonance. The intelligence stays in the model.
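Concretely, over MCP's JSON-RPC transport the final step is an ordinary tools/call request. The node payload below is elided, and the argument name follows the store_context(node) shape above:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "store_context",
    "arguments": {
      "node": "⊕⟨…⟩ ⦿⟨…⟩ ◈⟨…⟩ ⍉⟨…⟩"
    }
  }
}
```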
sttp-mcp provides five MCP tools that enable models to persist and retrieve conversational state:
`calibrate_session`

Call at session start and any time reasoning state may have shifted — after heavy code generation, extended analysis, or complex problem solving. The model measures its current AVEC state honestly, and the server returns the last stored state for this session. The delta is the drift signal.
Users can trigger this naturally:
"We're going in circles, can you recalibrate?" "That last hour of coding has you in a weird place, reset."
The model knows what to do.
`store_context`

Call when context should be preserved. The model compresses the current conversational state into a single valid STTP node and passes it to the server. The server runs light tree-sitter structural validation, persists the node, and returns the node ID and Ψ coherence checksum.
`get_context`

Call at session start after calibration, or any time prior context should be retrieved. The model passes its current AVEC state. The server returns the most resonant stored nodes for that attractor configuration. The model rehydrates from them directly — the nodes are self-sufficient.
`list_nodes`

Call to retrieve all stored nodes, optionally filtered by session ID or limited by count. Returns nodes with full metadata (AVEC states, timestamps, compression depth, Ψ values). Useful for exploring what's in memory, verifying cross-instance persistence, or auditing stored state.
Arguments:

- sessionId (optional): filter nodes to a specific session
- limit (optional): maximum number of nodes to return (default: 50, max: 200)

`get_moods`

Call to retrieve AVEC mood presets and apply ad-hoc state swaps intentionally. Returns named presets (focused, creative, analytical, exploratory, collaborative, defensive, passive) plus application guidance.
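As a concrete MCP tools/call request, a filtered listing might look like this (the session ID value is hypothetical; sessionId and limit are the documented arguments):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "list_nodes",
    "arguments": { "sessionId": "s-001", "limit": 10 }
  }
}
```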
Supports optional swap preview by passing:

- targetMood (optional): preset to move toward
- blend (optional): 0..1 blend factor (1 = hard swap, 0 = no change)
- currentStability, currentFriction, currentLogic, currentAutonomy (optional): current AVEC values for blend preview

Use case: pull presets, choose mode, apply a hard or soft swap, then call calibrate_session after meaningful reasoning shifts.
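Read literally, the blend factor describes linear interpolation between the current state and the preset. A sketch of that semantics (the preset values shown are hypothetical, and the server may blend differently):

```python
# Sketch: blend=0 keeps the current AVEC state, blend=1 hard-swaps to the target.
def blend_avec(current: dict, target: dict, blend: float) -> dict:
    axes = ("stability", "friction", "logic", "autonomy")
    return {a: current[a] * (1 - blend) + target[a] * blend for a in axes}

current = {"stability": 0.85, "friction": 0.25, "logic": 0.90, "autonomy": 0.80}
focused = {"stability": 0.95, "friction": 0.10, "logic": 0.95, "autonomy": 0.70}  # hypothetical preset
half = blend_avec(current, focused, 0.5)  # soft swap halfway toward the preset
```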
Calibration reports drift as:

- Per-axis deltas (Δstability, Δfriction, Δlogic, Δautonomy).
- Total drift (Δψ): scalar shift in total attractor magnitude.
- Drift classified as Intentional or Uncontrolled based on deviation thresholds.
- Elevated friction relative to stability is flagged.

```shell
# 1) Build the image
docker build -t sttp-mcp:local .

# 2) Run over stdio (for quick local verification)
docker run --rm -i -v "$PWD/data:/data" sttp-mcp:local
```
Requirements: Docker (or the .NET SDK, if running from source).
If your MCP client supports command-based servers, run STTP through Docker so users don't need a local .NET runtime:
```json
{
  "mcpServers": {
    "sttp-mcp": {
      "command": "docker",
      "args": [
        "run",
        "--rm",
        "-i",
        "-v",
        "/absolute/path/to/sttp-data:/data",
        "sttp-mcp:local"
      ]
    }
  }
}
```
Or run from source:

```shell
dotnet restore
dotnet build
dotnet run --project ./sttp-mcp.csproj
```
By default, embedded storage resolves under STTP_MCP_DATA_ROOT (defaults to ~/.sttp-mcp).
sttp-mcp uses SurrealDB as its storage layer — document, graph, vector, and time-series in a single binary. No separate database server. Runs embedded alongside the MCP server.
Resonance retrieval is a single SurrealQL query: graph traversal + AVEC vector similarity + document retrieval. One round trip.
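A sketch of what such a query could look like; the table and field names here are hypothetical, though `vector::similarity::cosine` is a real SurrealQL function:

```sql
-- Hypothetical: rank stored nodes by cosine similarity between their
-- stored AVEC vector and the caller's current AVEC state.
SELECT *,
       vector::similarity::cosine(avec_vec, $current_avec) AS resonance
FROM sttp_node
ORDER BY resonance DESC
LIMIT 5;
```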
Nodes stored by one session are immediately available to all other sessions sharing the same storage path. Multiple MCP instances, different chat windows, different model providers, different architectures — all can read and write to the same memory substrate, giving sessions, models, and providers a shared memory.
Validated with live cross-model reads across Claude, GPT-4o, DeepSeek, Gemini, Kimi-k2, Llama, Mistral, Qwen, and Groq models (see example_data/).
sttp-mcp is infrastructure. The protocol is the contract. The implementation is replaceable.
- KeryxFlux: Herald. Orchestration.
- KeryxMemento: Memory. Full persistence substrate. ← coming
- KeryxCortex: Mind. Multi-agent intelligence. ← private
- KeryxInstrumenta: Tools. You are here.
sttp-mcp is the entry point. KeryxMemento is the full memory layer — hierarchical temporal compression, resonance retrieval, session continuity, AVEC drift tracking across time. This tool demonstrates the protocol. Memento operationalizes it.
Full STTP protocol specification, grammar decisions, and validation results:
Part of KeryxInstrumenta, the open source tooling layer of the KeryxLabs ecosystem: KeryxFlux → KeryxMemento → KeryxCortex. Herald. Memory. Mind.
Install via CLI
Sttp MCP is a free, open-source AI agent skill.
Install Sttp MCP with a single command:

```shell
npx mdskills install KeryxLabs/keryxinstrumenta
```

This downloads the skill files into your project, and your AI agent picks them up automatically.
Sttp MCP works with Claude Code, Claude Desktop, Cursor, VS Code Copilot, Windsurf, Continue.dev, Gemini CLI, Amp, Roo Code, and Goose. Skills use the open SKILL.md format, which is compatible with any AI coding agent that reads markdown instructions.