$ pretable — vol. 2 · no. 1
The fastest data grid for React.
Built for the AI era.
60fps under streaming load. Zero row drift. A deterministic engine designed for live data, agent output, and real-time telemetry — not retrofitted from a batch-era grid.
MIT licensed · open source
- 01
Performance
The fastest grid in independent benchmarks.
16 ms frame p95 on wrapped-text scroll — 4× faster than AG Grid (67 ms) at 3,000 rows, 5 repeats. AG Grid also clips 152 px of wrapped content; pretable's rows hold their cell content. 9 ms frame p95 under 1,000 patches/sec streaming (tied with AG Grid + TanStack; MUI X collapses). Zero long tasks, zero blank gaps, zero anchor shift. Verifiable:
pnpm bench:matrix.
- 02
AI-native
AI isn't a feature. It's the data model.
Pretable's engine was designed around streaming and partial data — the shape AI agents and live feeds actually produce. Most grids retrofit a streaming adapter onto a batch-era data model. Pretable doesn't.
- 03
Wrapped text
Multi-line cells, no layout thrash.
Auto-height rows with wrapped content — at 60fps under streaming. No row-jump on hover, no layout shift on scroll, no row-height recalc churn when an agent writes longer text mid-stream. Most grids force fixed heights to avoid this.
- 04
Ecosystem
Drops into the AI SDKs you already use.
Vercel AI SDK · OpenAI Responses · Anthropic streams · LangGraph · your own SSE. One import. The streaming pipeline is purpose-built — every other grid leaves it to you.
01 · why now
Data grids were built for the batch era.
Then AI showed up.
Every popular React data grid was designed when data arrived in one shape: a complete array, fetched once, rendered. AI agents, streaming APIs, and live telemetry don't work that way. They produce data over time — token by token, patch by patch, partial first.
1995: Spreadsheet · 2010: Data grid (batch) · 2024: Streaming AI · Now: Pretable
Three failure modes every team building AI-driven dashboards has watched ship — symptoms of the same root cause: a render path that assumed data arrives all at once.
Row vanishes mid-stream.
Selection breaks on the first patch. Trust evaporates with it.
Stream speeds up, frames drop.
Demos handle 100/sec. Production at 1k breaks. Users notice.
Wrapped text jumps on update.
Row heights recalc, viewport shifts. No reading rhythm survives.
02 · built for
If you're shipping live data, you're shipping this.
Use case 01
AI-driven analytics dashboards.
Your product asks an LLM to summarize, classify, or rank data. Results stream into a table users actually scroll, sort, and filter. Selection survives the next streaming patch.
- OpenAI Responses
- Vercel AI SDK
- Anthropic
Use case 02
Agent traces and tool-call output.
LangGraph or your own agent runtime emits structured events — node transitions, tool calls, intermediate state. Pretable renders the live trace as it happens.
- LangGraph
- CrewAI
- your own SSE
Use case 03
Real-time financial dashboards.
Trading floors, portfolio analytics, risk monitors — thousands of patches/sec, multi-line annotations, no row drift when the market moves. The dashboards capital-markets and asset-management teams already need.
- Market data feeds
- WebSocket
- Server-Sent Events
Pretable is built by cacheplane — Google Developer Experts behind production data and analytics interfaces, including AG Grid.
↳ yes, that AG Grid. We helped build the grid we're now competing with.
Receipts, not claims.
- 4× faster scroll vs ag-grid
- 16 ms frame p95 / wrapped scroll
- 0 long tasks / streaming
- 25k/s max sustained update rate
03 · how we compare
How we compare.
Two windows: wrapped-text scroll at 3,000 rows (5 repeats), and streaming updates at 1,000 patches/sec (3 repeats). All on Chromium. Numbers come from pnpm bench:matrix; committed evidence in status/milestones; full streaming sweep at docs/streaming-rate-envelope.
| metric | pretable | ag-grid | tanstack | mui-x | budget |
|---|---|---|---|---|---|
| frame p95 (ms) — wrapped scroll | 16 | 67 | 17 | n/a | ≤ 16 |
| row-height fidelity (px error) | 0 | 152 | 0 | n/a | ≤ 1 |
| frame p95 (ms) — streaming | 9 | 9 | 9 | 100 | ≤ 16 |
| long task ms / 3 s test | 0 | 0 | 0 | 5,341 | 0 |
| visible row drift | ≤ 1 | 28 | ≤ 2 | 2 | ≤ 1 |
| max sustained rate | 25,000/s | 25,000/s | 25,000/s | < 500/s | — |
| purpose-built streaming pipeline | yes | no | no | no | — |
04 · how it works
A deterministic pipeline. No magic.
The benchmarks aren't a coincidence. They follow from a render path designed around five stages — each one readable, each one verifiable in source. Engine and viewport are pure functions; data flows one way; the DOM is touched exactly once per frame.
- 01
Source
Streaming patches and static rows treated identically.
- Token-by-token patches via SSE, WebSocket, or any async iterable
- Static Row[] arrays use the same input shape
- No "streaming mode" toggle — adapters convert both to engine input
→ Row[] | Patch · stream-adapter
- 02
Engine
Pure reducer. Sort, filter, selection, row-id stability.
(rows, columns, sort, filter, selection) → Snapshot
- Deterministic — same inputs always produce the same output, every frame
- Row-id keys are first-class — selection survives filters, sorts, and live patches
- Under 3,000 lines. Read it end-to-end in one sitting.
→ Snapshot · grid-core
- 03
Viewport
Row-height plan + virtualization range. Off-DOM measurement.
- Wrapped row heights computed with character-width tables and font metrics — pure arithmetic
- No getBoundingClientRect, no forced reflow, no measure-on-mount
- Virtualization range derived from scroll position + total planned height
- Off-screen rows excluded from the plan — no phantom DOM
→ RenderPlan · layout-core + text-core
- 04
Renderer
The only stage that touches the DOM.
- Diffs the previous RenderPlan against the new one
- Patches affected rows; reuses unchanged DOM nodes
- Selection, sort indicators, filter chips all data-driven from the snapshot — no imperative state
→ Element[] · renderer-dom
- 05
Frame
RAF coalesces patches per animation frame.
- 100 to 25,000 patches/sec all collapse to one snapshot per frame
- Long tasks: zero across the operating envelope
- Selection, cursor, scroll position never lost mid-frame
→ 60fps · browser
DOM is expensive. We use math instead.
Wrapped row heights computed with character-width tables and font metrics — pure arithmetic. No getBoundingClientRect, no forced reflow, no measure-on-mount. The DOM is touched exactly once per frame, at commit.
Engine is a pure function.
(rows, columns, sort, filter, selection) → Snapshot. No imperative DOM. Streaming patches and batch arrays hit the same reducer — that's why selection survives every update.
RAF batches the stream.
100 to 25,000 patches/sec all collapse to one snapshot per animation frame. Long tasks: zero across the operating envelope.
Telemetry stays off-DOM.
Render counts, viewport range, planned height — all data emitted by the engine, never read from the DOM. Zero measurement-induced thrash.
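The pure-reducer idea can be shown with a minimal stand-in. Row, Snapshot, and snapshot() here are simplified assumptions for illustration, not pretable's actual types.

```typescript
// Minimal stand-in for a pure engine reducer: same inputs, same output,
// and selection keyed by row id so it survives sorts, filters, and patches.
type Row = { id: string; [field: string]: unknown };

type Snapshot = { visible: Row[]; selectedIds: Set<string> };

function snapshot(
  rows: Row[],
  sortKey: string,
  filter: (row: Row) => boolean,
  selectedIds: Set<string>,
): Snapshot {
  const visible = [...rows]
    .filter(filter)
    .sort((a, b) => Number(a[sortKey]) - Number(b[sortKey]));
  // Selection is a set of ids, never indices: reordering or patching rows
  // cannot point it at the wrong row.
  const stillPresent = new Set(
    [...selectedIds].filter((id) => rows.some((r) => r.id === id)),
  );
  return { visible, selectedIds: stillPresent };
}
```

Select row "b", stream in a patch that reorders the data, recompute the snapshot: the selection still names "b".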
↳ Read the source: packages/grid-core, layout-core, text-core, renderer-dom — under 3,000 lines combined.
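The character-width arithmetic described above can be sketched as two pure functions. The widths in CHAR_WIDTH_PX are invented illustration values, not real font metrics, and the function names are assumptions rather than pretable's API.

```typescript
// Illustrative sketch: estimate a wrapped row's height from a character-width
// table. Pure arithmetic, no DOM reads. Width values are made up for the demo.
const CHAR_WIDTH_PX: Record<string, number> = { default: 8, i: 4, l: 4, m: 12, w: 12, " ": 4 };

function textWidth(text: string): number {
  let w = 0;
  for (const ch of text) w += CHAR_WIDTH_PX[ch] ?? CHAR_WIDTH_PX.default;
  return w;
}

function plannedRowHeight(
  cellText: string,
  columnWidth: number,
  lineHeight = 20,
  padding = 8,
): number {
  // Planned line count is width-over-column-width, with at least one line.
  const lines = Math.max(1, Math.ceil(textWidth(cellText) / columnWidth));
  return lines * lineHeight + padding;
}
```

Because the height is a function of the text and the column width alone, an agent appending longer text mid-stream changes the plan, never triggers a layout read.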
05 · for engineers
For engineers: how it looks in your codebase.
Connect any token-streaming source — OpenAI Responses, Anthropic, or your own SSE — to a pretable grid. Selection survives every chunk.
"use client";
import { useEffect, useState } from "react";
import { connectElementStream } from "@pretable-internal/stream-adapter";
import { PretableGrid } from "@pretable/react";
import { columns } from "./columns";
import { openai } from "./openai-client";
export function ChatGrid({ prompt }: { prompt: string }) {
  const [rows, setRows] = useState<unknown[]>([]);
  useEffect(() => {
    void (async () => {
      const stream = await openai.responses.stream({
        model: "gpt-5",
        input: prompt,
      });
      connectElementStream(stream, {
        onElement: (row) => setRows((r) => [...r, row]),
      });
    })();
  }, [prompt]);
  return <PretableGrid rows={rows} columns={columns} />;
}
Full example: apps/streaming-demo
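The same wiring covers the "your own SSE" case. A hedged sketch: parseSseChunk and the /api/rows-feed path are assumptions for illustration, while EventSource is the standard browser API.

```typescript
// Illustrative sketch for a plain SSE feed; parseSseChunk and the endpoint
// path are assumptions, not part of pretable's API.
// Parse the `data:` lines of an SSE chunk into JSON rows.
function parseSseChunk(chunk: string): unknown[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => JSON.parse(line.slice("data: ".length)));
}

// Inside the component, the OpenAI wiring becomes a plain EventSource:
//   const source = new EventSource("/api/rows-feed");
//   source.onmessage = (e) => setRows((r) => [...r, JSON.parse(e.data)]);
//   return () => source.close(); // useEffect cleanup
```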
06 · what's in the box
Engineering credibility points.
Each feature backed by a bench scenario or demo. No claim without a click-to-prove.
01
60fps performance
500k rows render at frame p95 ≤ 16ms on the S7 stress scenario.
→ receipt: /bench?s=S7&scale=stress
02
Selection survives filters
Row-id keys persist across filter, sort, and live updates. Click a row, filter the grid, the selection sticks.
→ receipt: live demo above
03
Deterministic engine
The render path is readable. packages/grid-core ships fewer than 3,000 lines.
→ receipt: github.com/cacheplane/pretable
04
No-flash hydration
SSR-safe initial paint. Selection state survives hydration. Works in Next.js App Router.
→ receipt: this page
07 · ready to ship
Run the benchmarks. Then ship.
The grid is in your hands at the top of this page. The numbers are reproducible at /bench. The source reads cleanly. Star, install, ship.
MIT licensed · Built in the open · No telemetry.