<div align="center">
  <p>
    <!-- Light mode logo -->
    <a href="https://s2.dev#gh-light-mode-only">
      <img src="https://raw.githubusercontent.com/s2-streamstore/s2-sdk-rust/main/assets/s2-black.png" height="60">
    </a>
    <!-- Dark mode logo -->
    <a href="https://s2.dev#gh-dark-mode-only">
      <img src="https://raw.githubusercontent.com/s2-streamstore/s2-sdk-rust/main/assets/s2-white.png" height="60">
    </a>
  </p>

  <h1>TypeScript SDK for S2</h1>

  <p>
    <!-- npm -->
    <a href="https://www.npmjs.com/package/@s2-dev/streamstore"><img src="https://img.shields.io/npm/v/@s2-dev/streamstore.svg" alt="npm version" /></a>
    <!-- Discord (chat) -->
    <a href="https://discord.gg/vTCs7kMkAf"><img src="https://img.shields.io/discord/1209937852528599092?logo=discord" alt="Discord" /></a>
  </p>
</div>

This repo contains the official TypeScript SDK for [S2](https://s2.dev), a serverless data store for streams, built on the service's [REST API](https://s2.dev/docs/rest/protocol).

S2 is a managed service that provides unlimited, durable streams.

Streams can be appended to, with all new records added to the tail of the stream. You can read from any portion of a stream – indexing by record sequence number, or timestamp – and follow updates live.

See it in action on the [playground](https://s2.dev/playground).

**Quick links:**

- Runnable [examples](./examples) directory
- Patterns [package](packages/patterns)
- SDK [documentation](https://s2-streamstore.github.io/s2-sdk-typescript/)
- S2 REST API [documentation](https://s2.dev/docs/rest/protocol)

> **Note:** The repository for releases prior to 0.16.x can be found at this [link](https://github.com/s2-streamstore/s2-sdk-typescript-old).

## Install

```bash
npm add @s2-dev/streamstore
# or
yarn add @s2-dev/streamstore
# or
bun add @s2-dev/streamstore
```

## Quick start

Want to get up and running?
Head to the [S2 dashboard](https://s2.dev/dashboard) to sign up, grab an access token, and create a new "basin" from the UI.

Then set the following environment variables:

```bash
export S2_ACCESS_TOKEN="<token>"
export S2_BASIN="<basin>"
```

From there, you can run the following snippet (or any of the other [examples](./examples)).

<!-- snippet:start quick-start -->
```ts
import {
  AppendAck,
  AppendInput,
  AppendRecord,
  S2,
  S2Environment,
} from "@s2-dev/streamstore";

const basinName = process.env.S2_BASIN ?? "my-existing-basin";
const streamName = process.env.S2_STREAM ?? "my-new-stream";

const s2 = new S2({
  ...S2Environment.parse(),
  accessToken: process.env.S2_ACCESS_TOKEN ?? "my-access-token",
});

// Create a basin (namespace) client for basin-level operations.
const basin = s2.basin(basinName);

// Make a new stream within the basin, using the default configuration.
const streamResponse = await basin.streams.create({ stream: streamName });
console.dir(streamResponse, { depth: null });

// Create a stream client on our new stream.
const stream = basin.stream(streamName);

// Make a single append call.
const append: Promise<AppendAck> = stream.append(
  // `append` expects an input batch of one or many records.
  AppendInput.create([
    // Records can use a string encoding...
    AppendRecord.string({
      body: "Hello from the docs snippet!",
      headers: [["content-type", "text/plain"]],
    }),
    // ...or contain raw binary data.
    AppendRecord.bytes({
      body: new TextEncoder().encode("Bytes payload"),
    }),
  ]),
);

// When the promise resolves, the data is fully durable and present on the stream.
const ack = await append;
console.log(
  `Appended records ${ack.start.seqNum} through ${ack.end.seqNum} (exclusive).`,
);
console.dir(ack, { depth: null });

// Read the two records back as binary.
const batch = await stream.read(
  {
    start: { from: { seqNum: ack.start.seqNum } },
    stop: { limits: { count: 2 } },
  },
  { as: "bytes" },
);

for (const record of batch.records) {
  console.dir(record, { depth: null });
  console.log("decoded body: %s", new TextDecoder().decode(record.body));
}
```
<!-- snippet:end quick-start -->

## Development

Run examples:

```bash
export S2_ACCESS_TOKEN="<token>"
export S2_BASIN="<basin>"
export S2_STREAM="<stream>" # optional per example
npx tsx examples/<example>.ts
```

Run tests:

```bash
bun run test
```

The SDK also ships with a basic browser example, to experiment with using the SDK directly from the web:

```bash
bun run --cwd packages/streamstore example:browser
```

## Using S2

S2 SDKs, including this TypeScript one, provide high-level abstractions and conveniences over the core [REST API](https://s2.dev/docs/rest/protocol).

### Account and basin operations

The account and basin APIs allow for CRUD operations on basins (namespaces of streams), streams, granular access tokens, and more.

### Data plane (stream) operations

The core SDK verbs revolve around appending data to streams and reading data from them.

See the examples and documentation for more details.

Below are some high-level notes on how to interact with the data plane.

#### Appends

The atomic unit of append is an `AppendInput`, which contains a batch of `AppendRecord`s and some optional additional parameters.

Records contain a body and optional headers.
After an append completes, each record will have been assigned a sequence number (and a timestamp).

<!-- snippet:start data-plane-unary -->
```ts
// Append a mixed batch: string + bytes with headers.
console.log("Appending two records (string + bytes).");
const mixedAck = await stream.append(
  AppendInput.create([
    AppendRecord.string({
      body: "string payload",
      headers: [
        ["record-type", "example"],
        ["user-id", "123"],
      ],
    }),
    AppendRecord.bytes({
      body: new TextEncoder().encode("bytes payload"),
      headers: [[new Uint8Array([1, 2, 3]), new Uint8Array([4, 5, 6])]],
    }),
  ]),
);
console.dir(mixedAck, { depth: null });
```
<!-- snippet:end data-plane-unary -->

### Append sessions (ordered, stateful appends)

Use an `AppendSession` when you want higher throughput and ordering guarantees:

- It is stateful and enforces that the order in which you submit batches becomes their order on the stream.
- It supports pipelining submissions while still preserving ordering (especially with the `s2s` transport).

<!-- snippet:start data-plane-append-session -->
```ts
console.log("Opening appendSession with maxInflightBytes=1MiB.");
const appendSession = await stream.appendSession({
  // This caps the amount of unacknowledged, pending append data that can be
  // outstanding at any given time, and is used to apply backpressure.
  maxInflightBytes: 1024 * 1024,
});

const startSeq = mixedAck.end.seqNum;
// Submit an append batch.
// This returns a promise that resolves into a `BatchSubmitTicket` once the session has
// capacity to send it.
const append1: BatchSubmitTicket = await appendSession.submit(
  AppendInput.create([
    AppendRecord.string({ body: "session record A" }),
    AppendRecord.string({ body: "session record B" }),
  ]),
);
const append2: BatchSubmitTicket = await appendSession.submit(
  AppendInput.create([AppendRecord.string({ body: "session record C" })]),
);

// The tickets can be used to wait for the append to become durable (acknowledged by S2).
console.dir(await append1.ack(), { depth: null });
console.dir(await append2.ack(), { depth: null });

console.log("Closing append session to flush outstanding batches.");
await appendSession.close();
```
<!-- snippet:end data-plane-append-session -->

### Producer

A single stream can support up to 200 appended batches per second, but tens of MiB per second.

For throughput, you therefore typically want fewer, larger batches.

The `Producer` API simplifies this by connecting an `appendSession` with an auto-batcher (via `BatchTransform`), which lingers and accumulates records for a configurable amount of time.
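To see why fewer, larger batches matter, here is a back-of-the-envelope sketch. The 200 batches/second figure is the per-stream ceiling stated above; the 1 KiB record size and the 500-record batch are hypothetical workload numbers chosen for illustration.

```typescript
// Rough throughput arithmetic. Only the 200 batches/second ceiling comes
// from the S2 docs; record and batch sizes here are hypothetical.
const maxBatchesPerSecond = 200;
const recordSizeBytes = 1024; // assume 1 KiB records

// One record per batch: throughput is capped at ~200 KiB/s.
const unbatchedBytesPerSec = maxBatchesPerSecond * 1 * recordSizeBytes;

// 500 records per batch: the same stream carries ~100 MiB/s.
const batchedBytesPerSec = maxBatchesPerSecond * 500 * recordSizeBytes;

console.log(`${unbatchedBytesPerSec / 1024} KiB/s unbatched`);
console.log(`${batchedBytesPerSec / (1024 * 1024)} MiB/s batched`);
```

Batching raises throughput by a factor of the batch size, which is exactly the knob `BatchTransform` tunes for you.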
The `Producer` is the recommended path for most high-throughput writers.

<!-- snippet:start producer-core -->
```ts
const producer = new Producer(
  new BatchTransform({
    // Linger and collect new records for up to 25ms per batch.
    lingerDurationMillis: 25,
    maxBatchRecords: 200,
  }),
  await stream.appendSession(),
);

const tickets = [];
for (let i = 0; i < 10; i += 1) {
  const ticket = await producer.submit(
    AppendRecord.string({
      body: `record-${i}`,
    }),
  );
  tickets.push(ticket);
}

const acks = await Promise.all(tickets.map((ticket) => ticket.ack()));
for (const ack of acks) {
  console.log("Record durable at seqNum:", ack.seqNum());
}

// Use the seqNum of the ack at index 3 as a coordinate for reading that record back.
const record3 = await stream.read({
  start: { from: { seqNum: acks[3].seqNum() } },
  stop: { limits: { count: 1 } },
});
console.dir(record3, { depth: null });

await producer.close();
await stream.close();
```
<!-- snippet:end producer-core -->

### Read sessions

Read operations, similarly, can be done via individual `read` calls or via a `readSession`.

Use a session whenever you want:

- to read more than a single response batch (responses larger than 1 MiB), or
- to keep a session open and tail for new data (omit the stop criteria).

<!-- snippet:start read-session-core -->
```ts
const readSession = await stream.readSession({
  start: { from: { tailOffset: 10 }, clamp: true },
  stop: { waitSecs: 10 },
});

for await (const record of readSession) {
  console.log(record.seqNum, record.body);
}
```
<!-- snippet:end read-session-core -->

## Client configuration

### Retries and append retry policy

<!-- snippet:start client-config -->
```ts
import { S2, S2Environment, S2Error } from "@s2-dev/streamstore";

const accessToken = process.env.S2_ACCESS_TOKEN;
if (!accessToken) {
  throw new Error("Set S2_ACCESS_TOKEN to configure the SDK.");
}

const basinName = process.env.S2_BASIN;
if (!basinName) {
  throw new Error("Set S2_BASIN so we know which basin to inspect.");
}

const streamName = process.env.S2_STREAM ?? "docs/client-config";

// Global retry config applies to every stream/append/read session created via this client.
const s2 = new S2({
  ...S2Environment.parse(),
  accessToken,
  retry: {
    maxAttempts: 3,
    minBaseDelayMillis: 100,
    maxBaseDelayMillis: 500,
    appendRetryPolicy: "all",
    requestTimeoutMillis: 5_000,
  },
});

const basin = s2.basin(basinName);
await basin.streams.create({ stream: streamName }).catch((error: unknown) => {
  if (!(error instanceof S2Error && error.status === 409)) {
    throw error;
  }
});

const stream = basin.stream(streamName);
const tail = await stream.checkTail();
console.log("Tail info:");
console.dir(tail, { depth: null });
```
<!-- snippet:end client-config -->

- `appendRetryPolicy: "noSideEffects"` only retries appends that are naturally idempotent via `matchSeqNum`.
- `appendRetryPolicy: "all"` can retry any failure (higher durability, but can duplicate data without idempotency).

### Session transports

Sessions can use either:

- `fetch` (HTTP/1.1)
- `s2s` (S2's streaming protocol over HTTP/2)

You can force a transport per stream:

<!-- snippet:start force-transport -->
```ts
// Override the automatic transport detection to force the fetch transport.
const stream = basin.stream(streamName, {
  forceTransport: "fetch",
});
```
<!-- snippet:end force-transport -->

...or rely on the default behavior, which auto-detects a suitable transport from the environment.

> [!IMPORTANT]
> HTTP/2 library use, required for `s2s`, is currently only enabled by default for Node.js and Deno.
> Bun defaults to HTTP/1 (see the [tracking issue](https://github.com/s2-streamstore/s2-sdk-typescript/issues/113)), but a transport can be forced using the mechanism described above.

## Patterns

For higher-level, more opinionated building blocks (typed append/read sessions, framing, dedupe helpers), see the [patterns](packages/patterns/README.md) package.

## Feedback

We use [GitHub Issues](https://github.com/s2-streamstore/s2-sdk-typescript/issues) to track feature requests and issues with the SDK. If you wish to provide feedback, report a bug, or request a feature, feel free to open a GitHub issue.

## Reach out to us

Join our [Discord](https://discord.gg/vTCs7kMkAf) server. We would love to hear from you.

You can also email us at [hi@s2.dev](mailto:hi@s2.dev).