
Bundler-Agnostic RSC Serialization

A Standalone Flight Protocol Implementation Without Bundler Coupling, Environment Restrictions, or React Imports

A technical deep-dive into @lazarv/rsc — a from-scratch implementation of React's Flight protocol that removes the three coupling assumptions present in React's official serializer: dependency on a specific bundler, dependency on specific runtime environments, and dependency on the react-server Node.js export condition. The result is an RSC serializer that runs identically in Node.js, Deno, Bun, Cloudflare Workers, the browser, and any environment that provides Web Platform APIs.

React Server Components (RSC) rely on the Flight protocol — a line-delimited streaming format that serializes React element trees, data structures, and client/server reference metadata across runtime boundaries. The official implementation, react-server-dom-webpack, is tightly coupled to three infrastructure assumptions:

  1. Bundler coupling. The serializer depends on Webpack manifests for client and server reference resolution. Alternative bundlers (Vite, Rollup, esbuild, Rspack) require adapter packages or compatibility shims.
  2. Environment coupling. The server entry point uses Node.js-specific APIs (stream.Readable, Buffer) and the client entry point assumes a browser context. Running the same code in Deno, Bun, or Cloudflare Workers requires separate builds or polyfills.
  3. Export condition coupling. React's internal Flight server code is gated behind the react-server Node.js export condition. Environments that do not support or configure this condition — worker threads, custom runtimes, embedded engines — cannot load the serializer without pre-bundling a condition-resolved build of the serializer and React.

This paper presents @lazarv/rsc, a standalone implementation of the Flight protocol that removes all three couplings. It achieves full wire-format parity with react-server-dom-webpack while introducing several architectural innovations: abstract module resolver/loader interfaces, Web Platform API-only I/O, Symbol.for()-based React integration without direct imports, reference-counting for data object inline/outline decisions, microtask-coalesced chunk flushing, a synchronous serialization mode, and a zero-copy element tuple scanner on the deserialization path.

Across 13 benchmark scenarios, @lazarv/rsc outperforms react-server-dom-webpack by 1.1×–6.5× on serialization and 1.0×–11.2× on deserialization, with roundtrip improvements of 1.0×–6.1×.

Bundler Coupling

react-server-dom-webpack requires a Webpack plugin that generates a client and a server manifest — JSON mappings from module specifiers to their bundled output paths. The serializer reads this manifest at runtime to resolve "use client" and "use server" references into chunk metadata that the client and the server can use to load the correct module.

// react-server-dom-webpack/server — requires Webpack manifest
import { renderToReadableStream } from "react-server-dom-webpack/server";

// The manifest is generated by the Webpack plugin
const manifest = require("./react-client-manifest.json");
const stream = renderToReadableStream(<App />, manifest);

For non-Webpack bundlers, the React team provides react-server-dom-esm (incomplete), and the community has built various adapter shims (react-server-dom-vite, etc.). Each adapter must reverse-engineer the manifest format and provide its own Webpack plugin equivalent. This creates a fragmented ecosystem where every bundler needs custom integration code.

Environment Coupling

The official package ships four entry points gated by environment:

| Entry | Platform | APIs Used |
|---|---|---|
| server.node | Node.js | stream.Readable, stream.Writable |
| server.edge | Edge runtimes | ReadableStream |
| client.browser | Browser | ReadableStream, fetch |
| client.node | Node.js SSR | stream.Readable |

This four-way split creates conditional import complexity. A framework that targets multiple environments (SSR + edge + browser) must dynamically select the correct entry point, handle the API surface differences, and test each combination separately.

Export Condition Coupling

React's Flight server code lives under the react-server Node.js export condition:

{
  "exports": {
    ".": {
      "react-server": "./server.react-server.js",
      "default": "./server.js"
    }
  }
}

This condition must be configured in the bundler or runtime (--conditions=react-server in Node.js, resolve.conditions in Webpack/Vite). Environments that do not support export conditions — or that have not been configured with this specific condition — cannot import the Flight server at all. This is the most insidious coupling because it is invisible: the import succeeds but loads the wrong entry point, producing cryptic runtime errors.

For a project like @lazarv/react-server that uses the Flight protocol in diverse contexts — worker threads for SSR, cache providers for snapshot storage, logger proxies for structured cross-environment logging — the export condition restriction is a fundamental architectural barrier.

@lazarv/rsc was designed with the following invariants:

  1. Full Flight protocol parity. Every type supported by react-server-dom-webpack — elements, fragments, Suspense, lazy, memo, forwardRef, context, Activity, ViewTransition, Promises, Map, Set, Date, BigInt, RegExp, Symbol, URL, URLSearchParams, FormData, TypedArrays, ArrayBuffer, DataView, Blob, ReadableStream, async iterables, client/server references, bound actions, temporary references, error digest propagation — must serialize and deserialize identically.
  2. Bundler-agnostic. No Webpack plugin, no Vite plugin, no bundler manifest. Module resolution is an abstract interface that the consumer provides.
  3. Environment-agnostic. A single code path for all environments. Built exclusively on Web Platform APIs: ReadableStream, TextEncoder, TextDecoder, FormData, Blob, URL. No stream.Readable, no Buffer, no AsyncLocalStorage.
  4. No react-server condition. The serializer must work in any environment without requiring special export condition configuration.
  5. No direct React imports. The package must not import from react at any level. React integration happens through Symbol.for() and an optional React instance passed at call time.
  6. Performance parity or better. The implementation must be at least as fast as the official package across representative workloads.
  7. Synchronous mode. For use cases that do not involve Promises or streaming (cache snapshots, logger payloads), a fully synchronous serialize/deserialize path must be available.

The package consists of two entry points and four source files:

| Entry | Source | Role |
|---|---|---|
| @lazarv/rsc/server | server/index.mjs, server/shared.mjs | Serialization: renderToReadableStream, syncToBuffer, prerender, decodeReply, reference registration |
| @lazarv/rsc/client | client/index.mjs, client/shared.mjs | Deserialization: createFromReadableStream, createFromFetch, syncFromBuffer, encodeReply, server reference proxies |

The index.mjs files are re-export barrels. All logic lives in the shared.mjs files — approximately 3,300 lines for the server and 3,500 lines for the client. There are no platform-conditional branches, no dynamic require() calls, no environment detection beyond a dev-mode flag.

The React Decoupling Strategy

The most fundamental design decision is how to interact with React without importing it.

React's Flight protocol operates on React element types ($$typeof symbols), internal data structures (lazy payloads, context objects), and — for client component rendering — React's internal hooks dispatcher. The official react-server-dom-webpack imports these directly from react, which creates the react-server condition dependency.

@lazarv/rsc uses three strategies to avoid this:

Strategy 1: Symbol.for() for type detection.

React element types are global symbols registered via Symbol.for(). Any code — regardless of which React copy is loaded — can detect them:

const REACT_ELEMENT_TYPE = Symbol.for("react.element");
const REACT_TRANSITIONAL_ELEMENT_TYPE = Symbol.for("react.transitional.element");
const REACT_FRAGMENT_TYPE = Symbol.for("react.fragment");
const REACT_SUSPENSE_TYPE = Symbol.for("react.suspense");
const REACT_CLIENT_REFERENCE = Symbol.for("react.client.reference");
const REACT_SERVER_REFERENCE = Symbol.for("react.server.reference");
// ... 15+ additional type symbols

Because Symbol.for() returns the same symbol across all realms (including worker threads and iframes), this approach is immune to multiple-React-copy issues that plague instanceof checks.

Strategy 2: Structural duck-typing for elements.

Instead of calling React.createElement(), the serializer inspects objects structurally:

function isReactElement(value) {
  return (
    value !== null &&
    typeof value === "object" &&
    (value.$$typeof === REACT_ELEMENT_TYPE ||
      value.$$typeof === REACT_TRANSITIONAL_ELEMENT_TYPE)
  );
}

This works with elements created by any React version (18, 19, experimental) and any JSX transform (classic, automatic, manual createElement calls).

Strategy 3: Optional React instance for hooks.

Client components that use hooks (use(), useId(), useMemo(), useCallback(), useEffect()) require React's internal dispatcher. Rather than importing React, @lazarv/rsc accepts an optional react instance in the options:

const stream = renderToReadableStream(<App />, {
  react: React, // Optional — only needed if components use hooks
});

When provided, the serializer accesses React's internal dispatcher via React.__SERVER_INTERNALS_DO_NOT_USE_OR_WARN_USERS_THEY_CANNOT_UPGRADE (or the client equivalent). When not provided, pure server components (no hooks) work normally; hook usage throws a clear error.

This opt-in approach means @lazarv/rsc can serialize plain data structures, React elements, and even re-serialize existing Flight payloads without any React dependency whatsoever.

Abstract Module Interfaces

Where react-server-dom-webpack uses Webpack manifests, @lazarv/rsc uses two abstract interfaces:

// Server-side: how to resolve references to metadata
interface ModuleResolver {
  resolveClientReference?(reference: unknown): ClientReferenceMetadata | null;
  resolveServerReference?(reference: unknown): ServerReferenceMetadata | null;
}

// Client-side: how to load modules from metadata
interface ModuleLoader {
  preloadModule?(metadata: ClientReferenceMetadata): Promise<void> | void;
  requireModule(metadata: ClientReferenceMetadata): unknown;
  loadServerAction?(id: string): Promise<Function> | Function;
}

The framework (or any consumer) provides these implementations. @lazarv/react-server implements them by connecting to its Vite-generated module graph. Another framework could implement them against Rspack manifests, import maps, or any other module system. The Flight protocol itself is agnostic.
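As an illustration, a minimal resolver/loader pair can be backed by nothing more than a plain Map registry. Everything here — the metadata shape `{ id, name }`, the registry contents, the `$$id` tag on the reference — is a hypothetical sketch, not the library's actual wiring; a real framework would connect these to its own module graph:

```javascript
// Hypothetical registries standing in for a bundler manifest or module graph.
const clientModules = new Map([
  ["#counter", { id: "/assets/counter.js", name: "default" }],
]);
const loadedModules = new Map([
  ["/assets/counter.js", { default: () => "Counter component" }],
]);

const resolver = {
  // Server side: map a tagged "use client" reference to wire metadata.
  resolveClientReference(reference) {
    return clientModules.get(reference.$$id) ?? null;
  },
};

const loader = {
  // Client side: turn the wire metadata back into a live module export.
  requireModule(metadata) {
    return loadedModules.get(metadata.id)[metadata.name];
  },
};

const meta = resolver.resolveClientReference({ $$id: "#counter" });
const Component = loader.requireModule(meta);
console.log(Component()); // "Counter component"
```

The serializer only ever sees the metadata the resolver returns, and the deserializer only ever calls the loader — neither side cares where the mapping comes from.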

Reference Counting and Deduplication

Before serializing, @lazarv/rsc performs a pre-scan of the entire model tree to count how many times each object or array is referenced:

function countReferences(model) {
  const counts = new Map();
  const stack = [model];

  while (stack.length > 0) {
    const value = stack.pop();
    if (value === null || value === undefined) continue;
    if (typeof value !== "object") continue;

    // Skip types always emitted as separate chunks
    if (value instanceof Date || value instanceof RegExp ||
        ArrayBuffer.isView(value) || /* ... */) continue;

    const count = (counts.get(value) || 0) + 1;
    counts.set(value, count);
    if (count > 1) continue; // Already walked children

    // Walk children based on type (array, Map, Set, element, object)
    // ...
  }
  return counts;
}

This O(n) pre-scan enables a key optimization: inline vs. outline decision. Objects referenced exactly once are inlined directly in the parent JSON row. Objects referenced more than once are emitted as separate chunks with their own IDs, and subsequent references use $<id> back-references. This preserves object identity on the client while minimizing chunk count and payload size.

Note that client and server reference deduplication (collapsing repeated I rows for the same client component, or repeated server reference chunks for the same action) is standard behavior shared with react-server-dom-webpack. The pre-scan reference counting is a separate optimization that operates on the data layer — plain objects and arrays — determining whether they can be inlined or must be outlined as separate chunks.
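The inline/outline decision can be demonstrated in isolation. This is a simplified walk covering only plain objects (the library's actual scanner also handles arrays, Map, Set, and elements, as the snippet above indicates):

```javascript
// Simplified reference-counting walk: objects seen once are inlined into
// their parent row; objects seen twice or more would be outlined as their
// own chunk and referenced via "$<id>".
function countReferences(model) {
  const counts = new Map();
  const stack = [model];
  while (stack.length > 0) {
    const value = stack.pop();
    if (value === null || typeof value !== "object") continue;
    const count = (counts.get(value) || 0) + 1;
    counts.set(value, count);
    if (count > 1) continue; // children already walked on the first visit
    for (const key of Object.keys(value)) stack.push(value[key]);
  }
  return counts;
}

const shared = { theme: "dark" };
const model = { header: { settings: shared }, footer: { settings: shared } };
const counts = countReferences(model);

console.log(counts.get(shared));       // 2 — outlined as its own chunk
console.log(counts.get(model.header)); // 1 — inlined into the parent row
```

Because `shared` is outlined once and back-referenced twice, the deserialized `header.settings` and `footer.settings` are the same object on the client, matching the identity on the server.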

The serializeValue Dispatch

The core serialization function is a 360-line type dispatcher that handles every Flight-serializable type. The dispatch order is performance-critical — the most common types are checked first:

null → undefined → boolean → number (with NaN/Infinity/−0) →
string (with large-string TEXT row optimization) → bigint → RegExp → symbol →
temporary references → client references → server references → functions →
arrays (inline vs. outline) → React elements → Promises → Date → Map → Set → ReadableStream → Blob → async iterables → TypedArrays → ArrayBuffer → FormData → URL → URLSearchParams → Error → plain objects (inline vs. outline)

Each type maps to a specific wire-format encoding. The encodings use single-character prefixes ($D for Date, $Q for Map, $W for Set, $n for BigInt, $S for Symbol, etc.) that match the official protocol.

Large String Optimization

Strings above 1KB are serialized using a length-prefixed binary row format instead of JSON:

if (value.length >= TEXT_CHUNK_SIZE) {
  const id = request.getNextChunkId();
  const textBytes = encoder.encode(value);
  const hexLength = textBytes.byteLength.toString(16);
  const headerStr = `${id}:T${hexLength},`;
  // ... emit as binary chunk ...
  return "$" + id;
}

This avoids the overhead of JSON.stringify() for large strings — no quoting, no escape character processing, no extra allocations. The hex-length prefix tells the parser exactly how many bytes to read, enabling zero-copy consumption on the deserialization side.
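A simplified round-trip sketch of this framing — the function names are illustrative, and the real parser counts bytes off the raw stream rather than re-encoding the header as done here:

```javascript
// Length-prefixed text row: "<id>:T<hex_byte_length>,<utf8 bytes>"
const encoder = new TextEncoder();
const decoder = new TextDecoder();

function encodeTextRow(id, value) {
  const textBytes = encoder.encode(value);
  const header = encoder.encode(`${id}:T${textBytes.byteLength.toString(16)},`);
  const row = new Uint8Array(header.length + textBytes.length);
  row.set(header, 0);
  row.set(textBytes, header.length);
  return row;
}

function decodeTextRow(row) {
  const text = decoder.decode(row);
  const colon = text.indexOf(":");
  const comma = text.indexOf(","); // header is digits + ":T" + hex, so the
  const id = Number(text.slice(0, colon)); // first comma ends the header
  const byteLength = parseInt(text.slice(colon + 2, comma), 16);
  const headerBytes = encoder.encode(text.slice(0, comma + 1)).length;
  // A subarray view over the payload bytes — no copy, no unescaping.
  const body = row.subarray(headerBytes, headerBytes + byteLength);
  return { id, value: decoder.decode(body) };
}

const row = encodeTextRow(7, "hello flight");
console.log(decodeTextRow(row)); // { id: 7, value: "hello flight" }
```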

Microtask-Coalesced Chunk Flushing

During synchronous serialization, multiple rows are produced (the root model, shared object chunks, client reference chunks, server reference chunks). The official implementation flushes each row individually, producing one ReadableStream.enqueue() call per row.

@lazarv/rsc suppresses flushing during synchronous work and batches all rows into a single enqueue() call:

function startWork(request) {
  // Suppress per-writeChunk flushing
  const wasFlowing = request.flowing;
  request.flowing = false;

  try {
    const serialized = serializeValue(request, request.model, null, null);
    const row = request.serializeModelRow(0, serialized);
    request.writeChunk(row);

    // Restore flowing and flush ALL rows in one batch
    request.flowing = wasFlowing;
    if (request.flowing && request.destination) {
      request.flushChunks();
    }
  } catch (error) {
    // ...
  }
}

The flush itself coalesces all buffered chunks into a single Uint8Array:

flushChunks() {
  if (this.completedChunks.length === 0) return;
  const chunks = this.completedChunks;
  this.completedChunks = []; // Swap-first for re-entrancy safety

  // Encode and merge
  const encoded = Array.from({ length: chunks.length });
  let totalLength = 0;
  for (let i = 0; i < chunks.length; i++) {
    const chunk = chunks[i];
    encoded[i] = chunk instanceof Uint8Array ? chunk : encoder.encode(chunk);
    totalLength += encoded[i].length;
  }

  if (encoded.length === 1) {
    this.destination.enqueue(encoded[0]);
  } else {
    const merged = new Uint8Array(totalLength);
    let offset = 0;
    for (const e of encoded) { merged.set(e, offset); offset += e.length; }
    this.destination.enqueue(merged);
  }
}

This produces fewer ReadableStream reads on the consumer, fewer <script> tags in SSR HTML (when inlining flight data), and less cross-thread MessagePort traffic when the stream is transferred to an SSR worker.

The Swap-First Re-Entrancy Guard

The flushChunks method uses a swap-first pattern: it snapshots the current queue into a local variable and atomically replaces this.completedChunks with a fresh empty array before iterating.

This is not a theoretical concern — it is a battle-tested fix for a production bug. Node.js's ReadableStream implementation can fire pull() as a synchronous microtask during controller.enqueue() when a pending reader is waiting. Without the swap, the re-entrant pull() handler calls flushChunks() again, sees the same unflushed array, and writes the in-flight chunk a second time — producing duplicate rows on the flight stream.

The swap makes re-entrant flushes no-ops: they see an empty queue and return immediately. New chunks produced during the re-entrant path push to the fresh array, which is drained by the next flush cycle.

Lazy Promise Allocation

The official deserializer creates a Promise for every chunk, even when the chunk resolves synchronously (which is the common case — most chunks are resolved during the same processData() call that creates them).

@lazarv/rsc defers Promise allocation:

createChunk(id) {
  return {
    id,
    status: PENDING,
    value: undefined,
    _promise: null,   // No Promise allocated
    _resolve: null,
    _reject: null,
  };
}

_ensurePromise(chunk) {
  if (chunk._promise !== null) return chunk._promise;

  if (chunk.status === RESOLVED) {
    // Already resolved — return pre-settled promise
    const p = Promise.resolve(chunk.value);
    p.status = "fulfilled";
    p.value = chunk.value;
    chunk._promise = p;
  } else {
    // Still pending — allocate real promise
    const p = new Promise((res, rej) => {
      chunk._resolve = res;
      chunk._reject = rej;
    });
    p.catch(() => {}); // suppress unhandled rejection
    p.status = "pending";
    chunk._promise = p;
  }
  return chunk._promise;
}

For a typical flight payload with hundreds of chunks, the vast majority resolve synchronously and never need a Promise. This saves two closure allocations and one Promise allocation per chunk — a measurable improvement on large payloads.

Element Tuple Scanner

React elements are serialized as JSON arrays: ["$", type, key, props]. When a row contains a large props object (200KB+ is common for content-heavy pages), JSON.parse() of the entire row is expensive.

@lazarv/rsc implements a custom scanner that extracts the element header fields without parsing the props:

function _scanElementTuple(str) {
  // Caller verified: str starts with '["$",'
  let i = 5; // past '["$",'

  // Field 1: type — usually short ("div", "$L1")
  const typeStart = i;
  i = _skipJsonValue(str, i);
  const typeEnd = i;

  // ... skip comma, whitespace ...

  // Field 2: key — usually null or a short string
  const keyStart = i;
  i = _skipJsonValue(str, i);
  const keyEnd = i;

  // Parse only the tiny header fields
  const type = JSON.parse(str.slice(typeStart, typeEnd));
  const key = JSON.parse(str.slice(keyStart, keyEnd));

  // Everything after is the raw props string — NOT parsed yet
  const len = str.length; // the row's final character is the closing ']'
  return { type, key, rawPropsStr: str.slice(i + 1, len - 1) };
}

The _skipJsonValue helper scans past a JSON value character-by-character without allocating any objects. For element headers (type + key), this is O(1) with respect to props size. The raw props string is parsed lazily — only when the deserialized element's props are actually accessed.

A companion _scanObjectRow function applies the same technique to plain object rows, identifying key→value boundaries without parsing large nested values eagerly.
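The skipping helper itself is not shown above. A hypothetical, simplified version of such a scanner — handling strings with escapes, nested objects/arrays, and bare literals, but not every JSON edge case — might look like:

```javascript
// Advance index i past exactly one JSON value, allocating nothing.
// Simplified sketch; not the library's actual _skipJsonValue.
function skipJsonValue(str, i) {
  const c = str[i];
  if (c === '"') { // string: honor backslash escapes
    i++;
    while (str[i] !== '"') i += str[i] === "\\" ? 2 : 1;
    return i + 1;
  }
  if (c === "{" || c === "[") { // object/array: track nesting depth
    const open = c, close = c === "{" ? "}" : "]";
    let depth = 0;
    do {
      const ch = str[i++];
      if (ch === '"') { // skip strings so braces inside them don't count
        while (str[i] !== '"') i += str[i] === "\\" ? 2 : 1;
        i++;
      } else if (ch === open) depth++;
      else if (ch === close) depth--;
    } while (depth > 0);
    return i;
  }
  // number / true / false / null: run to the next structural character
  while (i < str.length && !",]}".includes(str[i])) i++;
  return i;
}

const row = '["$","div",null,{"children":"hi"}]';
const end = skipJsonValue(row, 5); // skip the type field
console.log(row.slice(5, end)); // '"div"'
```

The key property is that scanning the type and key fields never touches the props payload, however large it is.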

Module Import Buffering

The Flight protocol emits I (Import/Module) rows before the model rows that reference them. When module loading is synchronous (as in production bundles), the module chunk is resolved before the model row that references it. But when module loading is asynchronous (as in Vite's dev server, where import() is used), the module chunk may still be PENDING when the model row arrives.

@lazarv/rsc handles this by buffering model rows when async imports are in flight:

// In the consume loop:
if (this.pendingModuleImports.length > 0) {
  this.pendingRows.push(rowData);
} else {
  processRow(rowData);
}

// After imports settle:
async flushPendingRows() {
  await Promise.all(this.pendingModuleImports);
  this.pendingModuleImports.length = 0;
  for (const row of this.pendingRows) {
    processRow(row);
  }
  this.pendingRows.length = 0;
}

This avoids the need for lazy wrappers or deferred resolution — when the buffered rows are replayed, all module chunks are already resolved, and element construction proceeds synchronously.

@lazarv/rsc provides a synchronous round-trip path that no other Flight implementation offers:

import { syncToBuffer } from "@lazarv/rsc/server";
import { syncFromBuffer } from "@lazarv/rsc/client";

// Serialize to Uint8Array — no Promises, no streams
const buffer = syncToBuffer(model, options);

// Deserialize from Uint8Array — synchronous, returns value directly
const value = syncFromBuffer(buffer, options);

Motivation

Not all Flight protocol use cases involve network streaming. In @lazarv/react-server, the synchronous mode powers two systems:

  1. Cache providers. UI and data snapshots are serialized to binary buffers for storage in any backend (Redis, filesystem, SQLite). The Flight protocol's rich type support (Map, Set, Date, BigInt, TypedArrays, React elements with client references) makes it superior to JSON.stringify() for caching — and the synchronous API avoids the complexity of stream management in cache read/write paths.
  2. Logger proxy. Structured log data is serialized across environment boundaries using the Flight protocol, preserving type fidelity that JSON.stringify() would destroy (Date becomes an ISO string, Map and Set become {}, BigInt throws).

Implementation

syncToBuffer uses the same FlightRequest and startWork machinery as renderToReadableStream, but replaces the ReadableStream controller with a simple array collector:

export function syncToBuffer(model, options = {}) {
  const request = new FlightRequest(model, options);
  const chunks = [];

  request.destination = {
    enqueue(chunk) {
      chunks.push(chunk instanceof Uint8Array ? chunk : encoder.encode(chunk));
    },
    close() {},
    error() {},
  };
  request.flowing = true;

  startWork(request);
  request.flushChunks();

  // Concatenate into single Uint8Array
  let totalLength = 0;
  for (const chunk of chunks) totalLength += chunk.length;
  const result = new Uint8Array(totalLength);
  let offset = 0;
  for (const chunk of chunks) { result.set(chunk, offset); offset += chunk.length; }
  return result;
}

syncFromBuffer feeds the entire buffer into the Flight parser in one call and returns the resolved root value directly. Because there are no Promises or async iterables in the input, all chunks resolve synchronously and the result is available immediately.

Type Preservation vs. JSON

The synchronous mode preserves types that JSON.stringify() / JSON.parse() destroys:

| Type | JSON.stringify → JSON.parse | syncToBuffer → syncFromBuffer |
|---|---|---|
| Date | ISO string (lossy) | Date object |
| Map | {} (lossy) | Map |
| Set | {} (lossy) | Set |
| BigInt | throws | BigInt |
| undefined | omitted | undefined |
| -0 | 0 (lossy) | -0 |
| NaN | null (lossy) | NaN |
| Infinity | null (lossy) | Infinity |
| RegExp | {} (lossy) | RegExp |
| Symbol.for() | omitted (lossy) | Symbol |
| Uint8Array | {0:…, 1:…} (lossy) | Uint8Array |
| Circular refs | throws | Preserved via $<id> |

This makes syncToBuffer / syncFromBuffer a drop-in replacement for JSON.stringify() / JSON.parse() that handles the full JavaScript type system.
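The JSON column of the table can be verified directly in any JavaScript runtime — these are exactly the lossy conversions the synchronous Flight mode avoids:

```javascript
// What JSON.stringify actually produces for non-JSON types:
console.log(JSON.stringify(new Date(0)));            // '"1970-01-01T00:00:00.000Z"' — a string, not a Date
console.log(JSON.stringify(new Map([["a", 1]])));    // '{}' — entries lost
console.log(JSON.stringify(new Set([1, 2, 3])));     // '{}' — items lost
console.log(JSON.stringify({ a: undefined }));       // '{}' — key omitted
console.log(JSON.stringify([NaN, Infinity, -0]));    // '[null,null,0]'
console.log(JSON.stringify(/abc/g));                 // '{}' — pattern and flags lost
console.log(JSON.stringify(new Uint8Array([1, 2]))); // '{"0":1,"1":2}'

try {
  JSON.stringify(123n); // BigInt is not JSON-serializable
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```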

Row Format

Each row in the Flight stream follows the format:

<id>:<tag><payload>\n

Where <id> is a decimal chunk identifier, <tag> is a single character indicating the row type, and <payload> is the row content (typically JSON). The root model is always chunk ID 0.

| Tag | Name | Payload |
|---|---|---|
| (empty) | Model | JSON value |
| I | Import | [moduleId, chunks, exportName] or [moduleId, chunks, exportName, 1] (async) |
| E | Error | {"message": "...", "stack": "...", "digest": "..."} |
| H | Hint | Preload hint metadata |
| D | Debug | Debug info (dev mode only) |
| T | Text | Length-prefixed raw UTF-8 text |
| B | Binary | Base64-encoded binary chunk |
| P | Postpone | PPR postpone marker |
| W | Console | Console log replay data |
| N | Nonce | Timestamp (dev mode timing) |

Binary rows (TypedArrays, ArrayBuffer, Text) use a length-prefixed format instead of newline termination:

<id>:<tag><hex_length>,<binary_data>
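A minimal sketch of splitting and parsing newline-delimited rows — illustrative only; binary length-prefixed rows (T, B) require the byte-counting framing above instead of newline splitting:

```javascript
// Parse "<id>:<tag><payload>" framing; model rows have no tag character.
function parseRow(line) {
  const colon = line.indexOf(":");
  const id = Number(line.slice(0, colon));
  const tagChar = line[colon + 1];
  const hasTag = /[A-Z]/.test(tagChar); // row tags are single uppercase letters
  return {
    id,
    tag: hasTag ? tagChar : "",
    payload: line.slice(colon + (hasTag ? 2 : 1)),
  };
}

const stream = '1:I["/client.js",[],"default"]\n0:["$","$L1",null,{}]\n';
const rows = stream.split("\n").filter(Boolean).map(parseRow);
console.log(rows[0]); // { id: 1, tag: "I", payload: '["/client.js",[],"default"]' }
console.log(rows[1]); // { id: 0, tag: "",  payload: '["$","$L1",null,{}]' }
```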

Value Encoding Prefixes

Serialized values within JSON payloads use single-character prefixes to encode non-JSON types:

| Prefix | Type | Example |
|---|---|---|
| $undefined | undefined | — |
| $NaN | NaN | — |
| $Infinity | Infinity | — |
| $-Infinity | -Infinity | — |
| $-0 | Negative zero | — |
| $n | BigInt | $n12345678901234567890 |
| $D | Date | $D2024-06-15T12:00:00.000Z |
| $S | Symbol | $Sbench.symbol |
| $R | RegExp | $R/pattern/flags |
| $Q | Map | $Q<id> (entries in separate chunk) |
| $W | Set | $W<id> (items in separate chunk) |
| $L | Client reference | $L<id> |
| $h | Server reference | $h<id> |
| $T | Temporary reference | $T<path> |
| $l | URL | $lhttps://example.com |
| $U | URLSearchParams | $U[["a","1"],["b","2"]] |
| $K | FormData | $K[["field","value"]] |
| $Z | Error | $Z{"name":"...","message":"..."} |
| $<id> | Back-reference | Points to previously emitted chunk |
| @<id> | Promise | Points to async chunk |

When the serializer encounters a React element whose type is a function (not a client reference), it renders it as a server component — calling the function with its props and serializing the return value.
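Conceptually (illustrative sketch only, using a hand-built element object rather than JSX — async components are awaited before their result is serialized):

```javascript
// A server component is just a function. The serializer calls it with its
// props and then recursively serializes whatever it returns.
function Greeting({ name }) {
  return {
    $$typeof: Symbol.for("react.transitional.element"),
    type: "p",
    key: null,
    props: { children: `Hello, ${name}` },
  };
}

// What the serializer conceptually does on meeting <Greeting name="RSC" />:
const rendered = Greeting({ name: "RSC" });
console.log(rendered.type);           // "p"
console.log(rendered.props.children); // "Hello, RSC"
```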

Hooks Dispatcher

Server components can use a subset of React hooks: use(), useId(), useMemo(), useCallback(), useMemoCache(). These require React's internal dispatcher (internals.H) to be set correctly during the component call.

@lazarv/rsc implements a minimal dispatcher that supports these hooks without importing React:

const HooksDispatcher = {
  use(usable) {
    if (typeof usable.then === "function") {
      return trackUsedThenable(thenableState, usable, thenableIndexCounter++);
    }
    // ...
  },
  useId() {
    return "_" + (currentRequest.identifierPrefix || "S") + "_" +
           currentRequest.identifierCount++.toString(32) + "_";
  },
  useMemo(nextCreate) { return nextCreate(); },
  useCallback(callback) { return callback; },
  useMemoCache(size) {
    const data = Array(size);
    for (let i = 0; i < size; i++) data[i] = REACT_MEMO_CACHE_SENTINEL;
    return data;
  },
  // Unsupported hooks throw clear errors
  useState: unsupportedHook,
  useEffect: unsupportedHook,
  useReducer: unsupportedHook,
  // ...
};

The dispatcher is installed before each component call and restored afterwards via callComponentWithDispatcher, which also handles the SuspenseException thrown by use() when it encounters an unresolved thenable.
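The install/call/restore pattern can be sketched as follows. Names and the `internals` stand-in are illustrative; the real callComponentWithDispatcher also catches the SuspenseException thrown by use():

```javascript
const internals = { H: null }; // stands in for React's shared internals object

function callComponentWithDispatcher(dispatcher, Component, props) {
  const prevDispatcher = internals.H;
  internals.H = dispatcher; // hooks called inside Component hit this dispatcher
  try {
    return Component(props);
  } finally {
    internals.H = prevDispatcher; // always restore, even if the component throws
  }
}

// A toy dispatcher: server-side useMemo just invokes the factory.
const dispatcher = { useMemo: (create) => create() };

function Title({ text }) {
  return internals.H.useMemo(() => text.toUpperCase());
}

console.log(callComponentWithDispatcher(dispatcher, Title, { text: "rsc" })); // "RSC"
console.log(internals.H); // null — dispatcher restored after the call
```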

Suspense and Retry

When use() encounters a pending Promise, it throws a SuspenseException — a sentinel error that signals the serializer to retry the component after the Promise resolves. @lazarv/rsc handles this with a retry chain:

function retryComponentRender(request, type, props, savedState, ...) {
  return new Promise((resolve, reject) => {
    function attempt(prevThenableState, waitFor) {
      waitFor.then(() => {
        try {
          const result = callComponentWithDispatcher(
            request, type, props, prevThenableState
          );
          resolve(result);
        } catch (error) {
          if (isThenable(error)) {
            // Component suspended again — chain another retry
            attempt(error._savedThenableState, error);
          } else {
            reject(error);
          }
        }
      }, reject);
    }
    attempt(savedState, blockedThenable);
  });
}

Each retry restores the thenableState from the previous attempt, ensuring that use() calls before the suspension point return their cached values. This matches React's per-task retry semantics.

All benchmarks were run locally using Vitest's bench mode against identical fixtures. The react-server-dom-webpack package uses the same React experimental version (0.0.0-experimental-ab18f33d-20260220). Each scenario runs for at least 100 iterations with warmup.

Serialization (renderToReadableStream)

| Scenario | @lazarv/rsc (ops/s) | webpack (ops/s) | Speedup |
|---|---|---|---|
| react: minimal element | 659,111 | 101,468 | 6.5× |
| react: shallow wide (1,000) | 4,594 | 777 | 5.9× |
| react: deep nested (100) | 31,043 | 13,457 | 2.3× |
| react: product list (50) | 13,423 | 4,709 | 2.8× |
| react: large table (500×10) | 650 | 212 | 3.1× |
| data: primitives | 470,899 | 95,603 | 4.9× |
| data: large string (100KB) | 16,230 | 12,671 | 1.3× |
| data: nested objects (20) | 118,966 | 65,035 | 1.8× |
| data: large array (10K) | 299 | 296 | 1.0× |
| data: Map & Set | 23,300 | 14,179 | 1.6× |
| data: Date/BigInt/Symbol | 442,356 | 91,292 | 4.8× |
| data: typed arrays | 120,916 | 28,574 | 4.2× |
| data: mixed payload | 17,901 | 10,868 | 1.6× |

The largest gains appear on workloads dominated by React element construction and type dispatch overhead. The minimal element benchmark — a single <div> with text — isolates per-element overhead, where @lazarv/rsc is 6.5× faster. The large array benchmark (10,000 plain objects) is dominated by JSON.stringify() cost in both implementations, so the two show near-parity.

Deserialization (createFromReadableStream)

| Scenario | @lazarv/rsc (ops/s) | webpack (ops/s) | Speedup |
|---|---|---|---|
| react: minimal element | 476,960 | 381,668 | 1.2× |
| react: shallow wide (1,000) | 44,071 | 3,947 | 11.2× |
| react: deep nested (100) | 239,453 | 41,968 | 5.7× |
| react: product list (50) | 99,581 | 28,174 | 3.5× |
| react: large table (500×10) | 9,313 | 3,233 | 2.9× |
| data: primitives | 371,182 | 346,218 | 1.1× |
| data: large string (100KB) | 75,716 | 75,692 | 1.0× |
| data: nested objects (20) | 193,801 | 160,777 | 1.2× |
| data: large array (10K) | 687 | 657 | 1.0× |
| data: Map & Set | 33,833 | 30,537 | 1.1× |
| data: Date/BigInt/Symbol | 401,727 | 311,983 | 1.3× |
| data: typed arrays | 125,078 | 105,250 | 1.2× |
| data: mixed payload | 49,114 | 35,550 | 1.4× |

Deserialization gains are most dramatic on wide trees: shallow wide (1,000 siblings) is 11.2× faster. This is where the element tuple scanner and lazy promise allocation have the greatest impact — 1,000 elements means 1,000 chunks where @lazarv/rsc avoids Promise allocation for synchronously-resolved chunks.

Roundtrip (serialize + deserialize)

| Scenario | @lazarv/rsc (ops/s) | webpack (ops/s) | Speedup |
|---|---|---|---|
| react: minimal element | 348,494 | 84,554 | 4.1× |
| react: shallow wide (1,000) | 4,039 | 668 | 6.0× |
| react: deep nested (100) | 27,359 | 10,081 | 2.7× |
| react: product list (50) | 11,887 | 3,969 | 3.0× |
| react: large table (500×10) | 621 | 209 | 3.0× |
| data: primitives | 231,206 | 78,626 | 2.9× |
| data: large string (100KB) | 14,046 | 12,174 | 1.2× |
| data: nested objects (20) | 77,477 | 47,783 | 1.6× |
| data: large array (10K) | 205 | 204 | 1.0× |
| data: Map & Set | 13,606 | 9,829 | 1.4× |
| data: Date/BigInt/Symbol | 238,041 | 72,715 | 3.3× |
| data: typed arrays | 84,489 | 26,101 | 3.2× |
| data: mixed payload | 12,938 | 8,383 | 1.5× |

Prerender (prerender)

@lazarv/rsc also provides a prerender API that serializes a model to a static prelude ReadableStream, waiting for all async work to complete before returning. Selected results:

| Scenario | ops/s | Mean latency |
|---|---|---|
| react: minimal element | 719,453 | 1.4 µs |
| react: product list (50) | 12,670 | 78.9 µs |
| react: large table (500×10) | 613 | 1.63 ms |
| data: primitives | 519,470 | 1.9 µs |
| data: mixed payload | 18,621 | 53.7 µs |

A Note on webpack Benchmark Stability

The react-server-dom-webpack benchmarks exhibit significantly higher variance (RME of 9–31%) compared to @lazarv/rsc (RME of 0.2–2.6%). This suggests architectural differences in allocation patterns — higher GC pressure leads to more variable latency. The @lazarv/rsc numbers are more stable, indicating fewer and more predictable allocations per operation.

Additionally, react-server-dom-webpack's serializer detaches ArrayBuffer backing stores during TypedArray serialization, requiring fresh fixture construction on each benchmark iteration for TypedArray scenarios. @lazarv/rsc uses non-destructive reads (new Uint8Array(value.buffer, value.byteOffset, value.byteLength)), allowing fixture reuse.

| Dimension | react-server-dom-webpack | @lazarv/rsc |
|---|---|---|
| Bundler | Webpack only (plugin + manifest) | Any (abstract ModuleResolver interface) |
| Runtime | Node.js (server) + browser (client) | Any Web Platform runtime |
| Export condition | Requires react-server condition | No condition required |
| React dependency | Direct import from react | Symbol.for() + optional instance |
| Entry points | 4 (server.node, server.edge, client.browser, client.node) | 2 (server, client) — same code everywhere |
| Synchronous mode | Not available | syncToBuffer / syncFromBuffer |
| Object deduplication | Client/server reference dedup | Client/server reference dedup + pre-scan reference counting for data objects |
| Chunk flushing | Per-row | Microtask-coalesced with swap-first re-entrancy guard |
| Promise allocation | Eager (every chunk) | Lazy (only when awaited) |
| Element parsing | Full JSON.parse per row | O(1) header scan + deferred props parse |
| TypedArray handling | Destructive (detaches ArrayBuffer) | Non-destructive (view-based read) |
| Flight protocol | Full | Full parity |
| renderToReadableStream | ✅ | ✅ |
| renderToPipeableStream | ✅ (Node.js only) | — (use ReadableStream everywhere) |
| createFromReadableStream | ✅ | ✅ |
| createFromNodeStream | ✅ (Node.js only) | — (use ReadableStream everywhere) |
| encodeReply / decodeReply | ✅ | ✅ |
| Temporary references | ✅ | ✅ |
| Bound actions (.bind()) | ✅ | ✅ |
| Error digest propagation | ✅ | ✅ |
| Prerender | ✅ | ✅ |

The decoupled design of @lazarv/rsc enables Flight protocol usage beyond the traditional "server renders, client hydrates" pattern:

| Direction | Use Case | Why Flight over JSON |
|---|---|---|
| Server → Client | Streaming RSC rendering | Standard use case |
| Server → Cache | Snapshot storage in Redis/filesystem/SQLite | Type preservation (Map, Set, Date, BigInt, client refs) |
| Cache → Server | Snapshot restoration | Reconstructs full React element trees with references |
| Worker → Main | Cross-thread React tree transfer | Structured clone alternative with streaming |
| Process → Process | IPC with typed data | Flight handles types that JSON.stringify() destroys |
| Server → Logger | Structured log serialization | Preserves type fidelity across environment boundaries |
| Any → Any | General-purpose serialization | syncToBuffer/syncFromBuffer as a better JSON.stringify/JSON.parse |

@lazarv/rsc demonstrates that the Flight protocol — React's serialization format for Server Components — can be cleanly separated from its three historical coupling points: Webpack, Node.js, and the react-server export condition. The key architectural insights are:

  1. Symbol.for() replaces imports. React's type system is built on global symbols. Any code can detect React elements, client references, and server references without importing React — making the react-server export condition unnecessary for serialization.
  2. Abstract interfaces replace manifests. Bundler-specific manifest formats are an implementation detail. An abstract ModuleResolver / ModuleLoader interface lets any bundler or runtime provide its own module resolution logic without adapter shims.
  3. Web Platform APIs replace Node.js APIs. ReadableStream runs everywhere. A single entry point per side (server/client) eliminates the four-way platform split and the conditional import complexity it creates.
  4. Pre-scan reference counting enables inline/outline decisions for data objects. A single O(n) walk before serialization determines which objects can be inlined and which must be outlined as separate chunks to preserve identity — reducing chunk count and payload size.
  5. Microtask-coalesced flushing with swap-first re-entrancy safety produces fewer, larger stream chunks — reducing downstream costs in SSR HTML injection, cross-thread transfer, and consumer iteration.
  6. Lazy Promise allocation and O(1) element scanning remove per-chunk overhead on the deserialization path, with the largest impact on wide trees common in real applications.
  7. Synchronous mode unlocks the Flight protocol for use cases (caching, logging, IPC) that do not involve network streaming, making it a general-purpose typed serialization format.

The result is a Flight protocol implementation that is faster, more portable, and more versatile than the official package — while maintaining full wire-format compatibility. Any framework, runtime, or tool can adopt RSC serialization by depending on @lazarv/rsc and implementing two interfaces.

Viktor Lázár