pretable.v0.0.0

$ pretable — vol. 2 · no. 1

The fastest data grid for React.
Built for the AI era.

60fps under streaming load. Zero row drift. A deterministic engine designed for live data, agent output, and real-time telemetry — not retrofitted from a batch-era grid.

MIT licensed · open source

  • 01

    Performance

    The fastest grid in independent benchmarks.

16 ms frame p95 on wrapped-text scroll — 4× faster than AG Grid (67 ms) at 3,000 rows over 5 repeats. AG Grid also clips 152 px of wrapped content; pretable renders every cell in full. 9 ms frame p95 under 1,000 patches/sec streaming (tied with AG Grid and TanStack; MUI X collapses). Zero long tasks, zero blank gaps, zero anchor shift. Verify it yourself: pnpm bench:matrix.

  • 02

    AI-native

    AI isn't a feature. It's the data model.

    Pretable's engine was designed around streaming and partial data — the shape AI agents and live feeds actually produce. Most grids retrofit a streaming adapter onto a batch-era data model. Pretable doesn't.

  • 03

    Wrapped text

    Multi-line cells, no layout thrash.

    Auto-height rows with wrapped content — at 60fps under streaming. No row-jump on hover, no layout shift on scroll, no row-height recalc churn when an agent writes longer text mid-stream. Most grids force fixed heights to avoid this.

  • 04

    Ecosystem

    Drops into the AI SDKs you already use.

    Vercel AI SDK · OpenAI Responses · Anthropic streams · LangGraph · your own SSE. One import. The streaming pipeline is purpose-built — every other grid leaves it to you.


01 · why now

Data grids were built for the batch era.
Then AI showed up.

Every popular React data grid was designed when data arrived in one shape: a complete array, fetched once, rendered. AI agents, streaming APIs, and live telemetry don't work that way. They produce data over time — token by token, patch by patch, partial first.

  1. 1995

    Spreadsheet

  2. 2010

    Data grid (batch)

  3. 2024

    Streaming AI

  4. NOW

    Pretable

Every team building AI-driven dashboards has shipped these three failure modes. They are symptoms of one root cause: a render path that assumes data arrives all at once.

  • Row vanishes mid-stream.

    Selection breaks on the first patch. Trust evaporates with it.

  • Stream speeds up, frames drop.

    Demos handle 100/sec. Production at 1k breaks. Users notice.

  • Wrapped text jumps on update.

    Row heights recalc, viewport shifts. No reading rhythm survives.

02 · built for

If you're shipping live data, you're shipping this.

  • Use case 01

    AI-driven analytics dashboards.

    Your product asks an LLM to summarize, classify, or rank data. Results stream into a table users actually scroll, sort, and filter. Selection survives the next streaming patch.

    • OpenAI Responses
    • Vercel AI SDK
    • Anthropic
  • Use case 02

    Agent traces and tool-call output.

    LangGraph or your own agent runtime emits structured events — node transitions, tool calls, intermediate state. Pretable renders the live trace as it happens.

    • LangGraph
    • CrewAI
    • your own SSE
  • Use case 03

    Real-time financial dashboards.

    Trading floors, portfolio analytics, risk monitors — thousands of patches/sec, multi-line annotations, no row drift when the market moves. The dashboards capital-markets and asset-management teams already need.

    • Market data feeds
    • WebSocket
    • Server-Sent Events
Google Developer Experts · cacheplane, Inc.

Pretable is built by cacheplane — Google Developer Experts behind production data and analytics interfaces at:

Santander · M&T Bank · The Motley Fool · AG Grid · Google · FedEx · ClickUp · Runway

Yes, that AG Grid — we helped build the grid we now compete with.

Receipts, not claims.

  • 4×
    faster scroll vs ag-grid
  • 16ms
    frame p95 / wrapped scroll
  • 0
    long tasks / streaming
  • 25k/s
    max sustained update rate

See them re-run in the bench →

03 · how we compare

How we compare.

Two windows: wrapped-text scroll at 3,000 rows (5 repeats) and streaming updates at 1,000 patches/sec (3 repeats), all on Chromium. Numbers come from pnpm bench:matrix; committed evidence lives in status/milestones, and the full streaming sweep is at docs/streaming-rate-envelope.

metric                           | pretable | ag-grid  | tanstack | mui-x   | budget
frame p95 (ms) — wrapped scroll  | 16       | 67       | 17       | n/a     | ≤ 16
row-height fidelity (px error)   | 0        | 152      | 0        | n/a     | ≤ 1
frame p95 (ms) — streaming       | 9        | 9        | 9        | 100     | ≤ 16
long task ms / 3 s test          | 0        | 0        | 0        | 5,341   | 0
visible row drift                | ≤ 1      | 28       | ≤ 2      | 2       | ≤ 1
max sustained rate               | 25,000/s | 25,000/s | 25,000/s | < 500/s | —
purpose-built streaming pipeline | yes      | no       | no       | no      | —

Re-run the comparison → /bench

04 · how it works

A deterministic pipeline. No magic.

The benchmarks aren't a coincidence. They follow from a render path designed around five stages — each one readable, each one verifiable in source. Engine and viewport are pure functions; data flows one way; the DOM is touched exactly once per frame.

  1. 01

    Source

    Streaming patches and static rows treated identically.

    • Token-by-token patches via SSE, WebSocket, or any async iterable
    • Static Row[] arrays use the same input shape
    • No "streaming mode" toggle — adapters convert both to engine input
    Row[] | Patch · stream-adapter
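One way to picture the "no streaming mode toggle" claim is a sketch in which a static array and a streamed patch reduce to the same keyed row state. The names here (Patch, EngineInput, applyPatch) are illustrative, not pretable's public API:

```typescript
// Illustrative only — these names are not pretable's public API.
type Row = { id: string; [key: string]: unknown };
type Patch = { rowId: string; field: string; value: unknown };

// Static arrays and streaming patches collapse into the same engine
// input: rows keyed by a stable row id.
type EngineInput = Map<string, Row>;

function fromRows(rows: Row[]): EngineInput {
  return new Map(rows.map((r) => [r.id, r] as [string, Row]));
}

function applyPatch(input: EngineInput, p: Patch): EngineInput {
  const prev = input.get(p.rowId) ?? { id: p.rowId };
  // Pure update: return a new map rather than mutating in place.
  return new Map(input).set(p.rowId, { ...prev, [p.field]: p.value });
}

// A static load and a streamed patch flow through the same shape:
let state = fromRows([{ id: "a", status: "queued" }]);
state = applyPatch(state, { rowId: "a", field: "status", value: "running" });
```

Because both inputs land in one shape, downstream stages never branch on where the data came from.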
  2. 02

    Engine

    Pure reducer. Sort, filter, selection, row-id stability.

    • (rows, columns, sort, filter, selection) → Snapshot
    • Deterministic — same inputs always produce the same output, every frame
    • Row-id keys are first-class — selection survives filters, sorts, and live patches
    • Under 3,000 lines. Read it end-to-end in one sitting.
    Snapshot · grid-core
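The signature above can be sketched as a pure function. This is a minimal stand-in, not pretable's grid-core source; the key property it demonstrates is that selection is keyed by row id, so reordering or filtering rows never touches it:

```typescript
// Illustrative sketch of a pure snapshot reducer — not pretable's grid-core source.
type Row = { id: string; [k: string]: unknown };
type Sort = { field: string; dir: "asc" | "desc" } | null;

interface Snapshot {
  rows: Row[];           // visible rows in display order
  selected: Set<string>; // keyed by row id, so it survives sorts, filters, and patches
}

function snapshot(
  rows: Row[],
  filter: (r: Row) => boolean,
  sort: Sort,
  selection: Set<string>,
): Snapshot {
  // Pure: no DOM reads, no mutation of inputs, same inputs → same output.
  const visible = rows.filter(filter);
  if (sort) {
    const { field, dir } = sort;
    visible.sort((a, b) => {
      const cmp = String(a[field]).localeCompare(String(b[field]));
      return dir === "asc" ? cmp : -cmp;
    });
  }
  // Selection is copied forward by id, never re-derived from rendered state.
  return { rows: visible, selected: new Set(selection) };
}
```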
  3. 03

    Viewport

    Row-height plan + virtualization range. Off-DOM measurement.

    • Wrapped row heights computed with character-width tables and font metrics — pure arithmetic
    • No getBoundingClientRect, no forced reflow, no measure-on-mount
    • Virtualization range derived from scroll position + total planned height
    • Off-screen rows excluded from the plan — no phantom DOM
    RenderPlan · layout-core + text-core
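Height-from-arithmetic can be sketched in a few lines. This is a simplified stand-in for the idea (a real character-width table is measured per font; the constants and greedy word wrap here are assumptions, not pretable's layout-core):

```typescript
// Illustrative sketch of off-DOM height planning — not pretable's layout-core source.
// Assumes a per-character advance-width table measured once per font; here a
// single 8 px default stands in for the real table.
const CHAR_WIDTH: Record<string, number> = { default: 8 };

function textWidth(text: string): number {
  let w = 0;
  for (const ch of text) w += CHAR_WIDTH[ch] ?? CHAR_WIDTH.default;
  return w;
}

// Greedy word wrap: pure arithmetic, no getBoundingClientRect, no reflow.
function wrappedLineCount(text: string, maxWidth: number): number {
  let lines = 1;
  let lineWidth = 0;
  for (const word of text.split(" ")) {
    const w = textWidth(word + " ");
    if (lineWidth + w > maxWidth && lineWidth > 0) {
      lines += 1;      // word doesn't fit: start a new line
      lineWidth = 0;
    }
    lineWidth += w;
  }
  return lines;
}

function rowHeight(cellText: string, colWidth: number, lineHeight = 20, pad = 8): number {
  return wrappedLineCount(cellText, colWidth) * lineHeight + pad;
}
```

Summing planned heights also yields the total scroll height and the virtualization range, with the DOM never consulted.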
  4. 04

    Renderer

    The only stage that touches the DOM.

    • Diffs the previous RenderPlan against the new one
    • Patches affected rows; reuses unchanged DOM nodes
    • Selection, sort indicators, filter chips all data-driven from the snapshot — no imperative state
    Element[] · renderer-dom
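Plan-against-plan diffing can be sketched as follows. The shapes (PlanRow, Op) are hypothetical stand-ins, not renderer-dom's actual types; the point is that unchanged rows emit no operation, so their DOM nodes are reused untouched:

```typescript
// Illustrative sketch of plan diffing — not pretable's renderer-dom source.
type PlanRow = { id: string; top: number; height: number; html: string };
type RenderPlan = Map<string, PlanRow>;

type Op =
  | { kind: "mount"; row: PlanRow }
  | { kind: "patch"; row: PlanRow }
  | { kind: "unmount"; id: string };

// Diff two plans into the minimal set of DOM operations.
// Rows identical in both plans produce no op at all.
function diff(prev: RenderPlan, next: RenderPlan): Op[] {
  const ops: Op[] = [];
  for (const [id, row] of next) {
    const old = prev.get(id);
    if (!old) ops.push({ kind: "mount", row });
    else if (old.top !== row.top || old.height !== row.height || old.html !== row.html)
      ops.push({ kind: "patch", row });
  }
  for (const id of prev.keys()) {
    if (!next.has(id)) ops.push({ kind: "unmount", id });
  }
  return ops;
}
```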
  5. 05

    Frame

    RAF coalesces patches per animation frame.

    • 100 to 25,000 patches/sec all collapse to one snapshot per frame
    • Long tasks: zero across the operating envelope
    • Selection, cursor, scroll position never lost mid-frame
    60fps · browser
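The coalescing step can be sketched as a frame batcher. This is an assumed shape, not pretable's source — commit stands in for "build one snapshot and run one DOM commit", and the scheduler is injectable so the same logic runs outside a browser:

```typescript
// Illustrative sketch of per-frame patch coalescing — not pretable's source.
type Patch = { rowId: string; field: string; value: unknown };
type Schedule = (cb: () => void) => void;

// Falls back to a ~16 ms timer where requestAnimationFrame is unavailable (e.g. Node).
const raf: Schedule =
  typeof (globalThis as any).requestAnimationFrame === "function"
    ? (cb) => (globalThis as any).requestAnimationFrame(cb)
    : (cb) => setTimeout(cb, 16);

function makeFrameBatcher(commit: (patches: Patch[]) => void, schedule: Schedule = raf) {
  let pending: Patch[] = [];
  let scheduled = false;

  // However many patches arrive between frames, exactly one commit runs per frame.
  return function push(p: Patch) {
    pending.push(p);
    if (scheduled) return;
    scheduled = true;
    schedule(() => {
      const batch = pending;
      pending = [];
      scheduled = false;
      commit(batch); // one snapshot, one DOM commit
    });
  };
}
```

Pushing 100 or 25,000 patches between two frames changes only the batch size handed to commit, never the number of commits.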
  • DOM is expensive. We use math instead.

    Wrapped row heights computed with character-width tables and font metrics — pure arithmetic. No getBoundingClientRect, no forced reflow, no measure-on-mount. The DOM is touched exactly once per frame, at commit.

  • Engine is a pure function.

    (rows, columns, sort, filter, selection) → Snapshot. No imperative DOM. Streaming patches and batch arrays hit the same reducer — that's why selection survives every update.

  • RAF batches the stream.

    100 to 25,000 patches/sec all collapse to one snapshot per animation frame. Long tasks: zero across the operating envelope.

  • Telemetry stays off-DOM.

    Render counts, viewport range, planned height — all data emitted by the engine, never read from the DOM. Zero measurement-induced thrash.

Read the source: packages/grid-core, layout-core, text-core, renderer-dom — under 3,000 lines combined.

05 · for engineers

For engineers: how it looks in your codebase.

Connect any token-streaming source — OpenAI Responses, Anthropic, or your own SSE — to a pretable grid. Selection survives every chunk.

"use client";
import { useEffect, useState } from "react";
import { connectElementStream } from "@pretable-internal/stream-adapter";
import { PretableGrid } from "@pretable/react";
import { columns } from "./columns";
import { openai } from "./openai-client";

export function ChatGrid({ prompt }: { prompt: string }) {
  const [rows, setRows] = useState([]);

  useEffect(() => {
    setRows([]); // reset when the prompt changes so stale rows don't accumulate
    void (async () => {
      const stream = await openai.responses.stream({
        model: "gpt-5",
        input: prompt,
      });
      connectElementStream(stream, {
        onElement: (row) => setRows((r) => [...r, row]),
      });
    })();
  }, [prompt]);

  return <PretableGrid rows={rows} columns={columns} />;
}

Full example: apps/streaming-demo

06 · what's in the box

Engineering credibility points.

Each feature backed by a bench scenario or demo. No claim without a click-to-prove.

07 · ready to ship

Run the benchmarks. Then ship.

The grid is in your hands at the top of this page. The numbers are reproducible at /bench. The source reads cleanly. Star, install, ship.

MIT licensed · Built in the open · No telemetry.