Projecting React: A Scientific Exploration in AI-Native Framework Development
I've been working on TanStack Start for a while now, and one point of friction has been nagging at me the whole time: React is big. Bundled through Vite, it lands at ~60 KB gzip on the client before a single line of app code runs. (The ~45 KB figure commonly cited is the CDN UMD build; modern ESM bundlers don't tree-shake down that far.) The rest of the TanStack toolchain (Router, Query, Store, Form, Virtual) is collectively a fraction of that, and it felt off to ship a stack where the smallest piece you can't remove is also the largest.
The obvious move was Preact. preact/compat has been the pragmatic "React but tiny" path for years, and I sat down to wire it up.
It didn't work. preact/compat has drifted from React 19 enough that it's no longer a true drop-in. Small frictions stacked up around use() semantics, the React 19 server-action surface, portals, error boundaries, and hydration edges. Every fix was a shim on top of a shim, and the delta kept growing. Somewhere around the fifth patch I stopped and asked the uncomfortable question.
The problem wasn't Preact. The problem was that I wanted React's public API, projected at a different scope.
Code as a materialized view
Around the same time, my good friend and collaborator Kyle Mathews posted something that's been rattling around my head for weeks:
Coding agents turn code from artifact into materialized view. The base tables are the ideas — algorithms, protocols, semantic contracts. The code is one projection. For decades we treated the artifact as authoritative because regenerating was expensive. It isn't anymore.
We only treat code as the authoritative artifact because regenerating it is expensive. Flip that cost curve and the whole model inverts. The ideas become the base table. The code becomes one view among many. "N specialized projections of the same ideas" is suddenly possible.
React's public API is the base table: the element model, the hooks contract, Suspense semantics, the hydration lifecycle, the SSR streaming protocol. A decade of real-world use. A massive test suite. Stable enough to build on. React-the-repo is one projection of that API, optimized for a fleet of consumers TanStack Start doesn't share: concurrent mode, time slicing, DevTools, a full React Server Components runtime, a long tail of DOM quirks.
The question I couldn't stop asking: what if I asked an AI agent to produce a different projection, one scoped to exactly what TanStack Start needs?
A note on vinext
As it happens, I wasn't the first person to ask the projection question. A few weeks earlier Cloudflare shipped vinext, a Vite plugin that reimplements the Next.js API surface, built by one engineer with AI assistance in one week for about $1,100. 1,700+ Vitest tests, 380 Playwright runs, production builds 4× faster than Next.js 16 + Turbopack. The discourse called it a "slop-fork."
That label stuck partly because it wasn't random. Vinext is real engineering, and Cloudflare has a clear commercial motive: making Next.js easier to deploy on Workers pulls Vercel customers onto their platform. Both things are true at once. The label ended up mixing "made with AI" with "made to capture market share," and the second half is what people were actually reacting to. Fair enough.
But motive doesn't make something slop. It makes it a product. And this thing I built doesn't have the market-share axis at all. It's an experiment, running on two of my own websites and nothing else. Different motive, same technique. I'll borrow Kyle's word for both of them and move on: projections, not forks.
The shape: core plus toggleable features
TanStack Start is synchronous-friendly. Router state is owned by @tanstack/router-core. External store tearing is handled by @tanstack/react-store. We have our own devtools. The RSC pipeline uses @vitejs/plugin-rsc against real react-server-dom for Flight serialization. Suspense we need. SSR streaming we need. Concurrent scheduling for the rendering path? We don't.
Some of that gets cut permanently. Concurrent rendering, time slicing, the lane-based scheduler, React DevTools, the Flight client deserializer: none of these are implemented at all. useTransition and useDeferredValue run synchronously. startTransition is fn(). The scheduler is a microtask wrapper. Those are product decisions: TanStack Start either doesn't need them, or another piece of the stack handles them.
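To make "runs synchronously" concrete, here is a hedged sketch of what that stubbed concurrency surface amounts to. This is not the actual source, just the shape described above; `scheduleWork` is a hypothetical name for the microtask wrapper.

```typescript
// startTransition just invokes its callback immediately — no lanes, no priority.
function startTransition(fn: () => void): void {
  fn()
}

// useTransition never reports a pending state; its starter is the same
// synchronous startTransition.
function useTransition(): [boolean, (fn: () => void) => void] {
  return [false, startTransition]
}

// useDeferredValue returns its input unchanged — nothing is ever deferred.
function useDeferredValue<T>(value: T): T {
  return value
}

// The entire "scheduler" is a microtask wrapper (hypothetical name).
function scheduleWork(fn: () => void): void {
  queueMicrotask(fn)
}
```

The point of the sketch: these aren't degraded implementations, they're the honest answer when another piece of the stack owns the scheduling question.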
Everything else got split into two layers. The first is an irreducible core (~6.71 KB gzip): fiber reconciler with keyed child diffing, host DOM mount/update, the standard hook surface (useState, useReducer, useEffect, useLayoutEffect, useInsertionEffect, useMemo, useCallback, useRef, useId, useSyncExternalStore, use), native event binding, Fragments, JSX runtime. Every React app needs all of it.
On top of the core sit eight toggleable features, each with a real implementation and a stub:
| Feature | Stubbed behavior | Savings (gzip) |
|---|---|---|
| `portal` | Children render in place, container ignored | ~30 B |
| `context` | Provider → Fragment, `useContext` returns default | ~80 B |
| `suspense` | Suspense → Fragment, thenables retry on settle | ~640 B |
| `memo` | Pass-through every render | ~80 B |
| `forwardRef` | Ref dropped (React 19 "refs as props" still works) | ~70 B |
| `lazy` | Sync-resolvable payloads work; async retries on settle | ~20 B |
| `classComponents` | constructor + render + setState only | ~200 B |
| `hydration` | `hydrateRoot` throws (use `createRoot` for SPA) | ~1270 B |
The Vite plugin swaps index.js → stub.js for any feature flagged off, so the full code never enters the module graph and tree-shaking strips it. No user-code changes. No runtime branching.
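A hedged sketch of the swap mechanism, assuming a hypothetical module layout where each feature lives at features/&lt;name&gt;/index.js with a sibling stub.js. The plugin and function names here are illustrative, not the actual source:

```typescript
type FeatureFlags = Record<string, boolean>

// Vite calls resolveId for every import specifier. Rewriting the id here
// means the real index.js never enters the module graph at all — there is
// nothing for the bundler to even consider keeping.
function redactStubs(features: FeatureFlags) {
  return {
    name: 'redact-stubs',
    resolveId(source: string): string | null {
      const match = source.match(/features\/([\w-]+)\/index\.js$/)
      if (match && features[match[1]] === false) {
        return source.replace(/index\.js$/, 'stub.js')
      }
      return null // fall through to default resolution
    },
  }
}
```

Because the substitution happens at resolve time rather than via runtime flags, tree-shaking only ever sees the stub's exports.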
Two starting points, not a spectrum:
```ts
import { redact } from '@tanstack/redact/vite'

redact({ preset: 'full' }) // 9.03 KB — drop-in React, opt OUT what you don't need
redact({ preset: 'nano' }) // 6.71 KB — irreducible core, opt IN what you do need
```
Per-feature overrides merge on top of either preset. full minus hydration for an SPA. nano plus context plus suspense for a small interactive app. Most apps want either close-to-React or close-to-minimal; the toggles let you land precisely on the shape your app needs.
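The merge semantics are just an object spread over a preset baseline. A minimal sketch, assuming the feature names from the table above and a hypothetical `resolveFeatures` helper:

```typescript
const FEATURES = [
  'portal', 'context', 'suspense', 'memo',
  'forwardRef', 'lazy', 'classComponents', 'hydration',
] as const

type Feature = (typeof FEATURES)[number]

function resolveFeatures(
  preset: 'full' | 'nano',
  overrides: Partial<Record<Feature, boolean>> = {},
): Record<Feature, boolean> {
  // full = everything on (opt OUT); nano = everything off (opt IN).
  const base = Object.fromEntries(
    FEATURES.map((f) => [f, preset === 'full']),
  ) as Record<Feature, boolean>
  // Per-feature overrides win over the preset.
  return { ...base, ...overrides }
}
```

So an SPA would be `resolveFeatures('full', { hydration: false })`, and a small interactive app `resolveFeatures('nano', { context: true, suspense: true })`.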
This isn't scientific pruning. It's product decisions all the way down. Everything I made toggleable, I made toggleable because some consumer might not need it. Everything I left out entirely, I left out because my consumer never will. That's the point of projections: each one is scoped to its consumer, not to some abstract ideal.
What actually happened
It took one day. One day of prompting, in the iterative-exploration sense Kyle means: shaping a projection against a spec, not writing code from scratch.
By the end of that day: element model, JSX runtime (classic + automatic), Fragment, memo, lazy, forwardRef. useState, useEffect, useLayoutEffect, useRef, useMemo, useCallback, useContext with correct semantics. Class components. Error boundaries. useSyncExternalStore. use() for context and promises. SSR with Suspense streaming. Hydration. All passing the test suite.
The work that came after wasn't building. It was using. Once tannerlinsley.com and tanstack.com were actually running on the shim, real traffic surfaced the bugs that no amount of unit testing was going to catch:
- Reconciliation order. `placeChildrenInOrder` had to iterate in reverse. Forward iteration cascades failures on every out-of-order child insertion.
- Effect cleanup timing. `useEffect` cleanup had to run at effect-run time (the passive drain), not at dispatch time. Otherwise coalesced renders, the kind TanStack Router triggers on every user action, leak side effects into the DOM.
- Deferred hydration. `use(promise)` and `lazy` components suspending mid-hydration needed matching ancestor-Suspense guards in two code paths.
- Controlled inputs. Every keystroke had to fire `onChange` with `event.nativeEvent` aliased on the dispatched event for library compat.
- SSR streaming. Shell + bootstrap had to be buffered into a single `TextEncoder.encode` + `enqueue` instead of per-chunk. Cut Node stream overhead measurably in the CPU profile.
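The reconciliation-order bug is the easiest to see in miniature. Here's a toy model (not the actual source; the real function works against DOM nodes) that reorders keyed children with insertBefore-style moves by iterating the desired order in reverse. Walking backwards means each move's anchor — the next sibling in final order — is already in its final position, so every move is final; forward iteration keeps invalidating its own anchors and cascades extra moves on every out-of-order insertion.

```typescript
// Reorder `current` in place to match `desired`, counting insertBefore-style
// moves. Arrays stand in for a parent's child list.
function placeChildrenInOrder(current: string[], desired: string[]): number {
  let moves = 0
  let anchor: string | null = null // next sibling in the final order
  for (let i = desired.length - 1; i >= 0; i--) {
    const key = desired[i]
    const at = current.indexOf(key)
    const anchorAt = anchor === null ? current.length : current.indexOf(anchor)
    if (at !== anchorAt - 1) {
      // "insertBefore(key, anchor)": remove, then splice back in before anchor.
      current.splice(at, 1)
      const insertAt = anchor === null ? current.length : current.indexOf(anchor)
      current.splice(insertAt, 0, key)
      moves++
    }
    anchor = key
  }
  return moves
}
```

An already-ordered list costs zero moves, and a single out-of-place child costs exactly one — the property forward iteration loses.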
Each one was a one-shot fix the moment I described it. The pattern was always the same: spot the bug in production, write down what I saw, get the fix. No rabbit holes. Claude already knew the right thing to do. It just needed to know which thing to do, and that information only came from running the code in the wild.
Every one of these is a real React bug shape. Any React core maintainer would recognize them. That's what makes this a real projection instead of a convincing knockoff. Same invariants, same failure modes.
The numbers
Measured against React 19.2.3:
| Entry | React 19 gzip | Projection (full) | Projection (nano) |
|---|---|---|---|
| `react-dom/client` | 60.3 KB | 9.03 KB | 6.71 KB |
| `react-dom/server` | 61.1 KB | 4.55 KB | — |
| Full client runtime (react + react-dom/client + jsx-runtime) | ~60 KB | ~10.94 KB | ~9.30 KB |
In either preset, ~80–85% smaller than stock React. And because there's no scheduler, no lanes, no fiber work loop, the render path itself is simpler:
| Benchmark | Real React | Projection | Speedup |
|---|---|---|---|
| client-nav (router-driven navigation loop) | 34.9 Hz | 78.1 Hz | 2.24× |
| SSR (request loop) | ~48 Hz | 168 Hz | ~3× |
700/700 unit + integration tests pass. Then I measured it against real React on two actual production TanStack Start apps, one small and one large, using an identical Lighthouse protocol: 30-run median, 3 URLs × 2 form factors, `wrangler dev --local` serving a production build for both variants.
tannerlinsley.com, shipping on it today
If you're reading this post in your browser right now, you're running on the projection. The HTML, the JavaScript that hydrated it, the runtime handling your scroll and your clicks. Not real React. I moved this site over the same day I drafted this post.
| Metric | React | Projection | Δ |
|---|---|---|---|
| Lighthouse perf score (median) | 99 | 100 | +1 |
| FCP | 1.22s | 1.00s | −18.1% |
| LCP | 1.42s | 1.24s | −12.1% |
| TBT | 0ms | 0ms | — |
| CLS | 0 | 0 | — |
| Speed Index | 1.22s | 1.00s | −18.1% |
| JS transferred over the wire | 144.1 KB | 96.5 KB | −33.0% |
A third off the JavaScript payload on a personal blog. A site that's almost entirely prerendered, with barely any interactivity. Even on a site where React is doing close to nothing, it still accounted for 33% of what was on the wire. You're shipping plumbing by default, not by need.
Mobile is where the wins widened: FCP dropped 18–22%, LCP 12–13%, across the home page, the index, and a representative post. TBT and CLS stayed flat at zero. No interactivity regression, no layout shift. Desktop hits the Lighthouse ceiling on both variants (100/100 across all three URLs), so the score table understates the delta. The raw timings are the story.
No RSC on this site, so no LCP regression anywhere. Clean across the board.
tanstack.com, capable of running it end-to-end
tannerlinsley.com is a personal blog. tanstack.com is the stress test: TanStack Router, Query, Store, Form, Table, Virtual, the RSC-rendered blog, the docs renderer, every Suspense boundary, every hydration edge, every third-party integration TanStack Start apps routinely pull in. The projection can drop in and run the whole thing with no regressions we've been able to find. If it were going to break on something real, it would have by now.
Same protocol, same measurement discipline:
| Metric | vs. real React |
|---|---|
| Lighthouse performance score | parity (±2, within run noise) |
| FCP (desktop) | −4% to −17% |
| FCP (mobile) | up to −14% |
| TBT | 0ms → 0ms |
| CLS | 0 → 0 |
| LCP (non-RSC pages) | parity |
| LCP (RSC-heavy pages) | +8% to +43% |
| Total app client JS | −980 KB (−4.7% of full app bundle) |
Lighthouse performance score lands at parity. FCP wins across both form factors, with the biggest gains on the content-heavy docs and blog pages. TBT and CLS zero. The one real regression is LCP on RSC-heavy pages. The LCP element lives in the Flight-streamed subtree, and the projection's use(pendingPromise) + deferred-resume path adds latency vs. React's battle-tested RSC client. All affected pages still score "Good" on Core Web Vitals (<2.5s LCP), and the fix path is clear. It's on the list, not a blocker.
Net: nearly a megabyte of client JS off the wire on a full-scale app, parity or better on every other metric, one known regression with a clear fix. Against stock React 19.2.3.
Two production-scale sites with very different shapes. Both hold up.
Is it irresponsible not to project?
If projecting your dependencies down to your actual shape now takes days instead of months (vinext shows it, this experiment shows it), then shipping upstream's full general-purpose library is itself a decision with consequences. You're betting that upstream understands your shape better than you do. For most apps, that bet is just right. They benefit from the generality.
But for libraries that define their own world (TanStack Start defines a pretty opinionated one), the bet starts to look strange. You're shipping ~52 KB of plumbing to every user for features none of your users will use, because the cost of not shipping it used to be too high to justify. That cost just dropped by two orders of magnitude.
If I don't explore this shape myself, someone else will. Not maliciously. The wins are sitting there and the cost is now low. Someone will find a 2× render-path speedup, or a 50 KB bundle savings, or a hydration shape that fits their world better than the upstream default. And if a projection like that ever ends up shipping to my users because I didn't build my own understanding first, the tradeoffs are theirs, not mine. I'll have given up the chance to know my own shape.
At that point, not projecting stops being a conservative choice and starts being a cop-out.
Why I'm not releasing this
Projecting is cheap. Releasing a projection publicly is not.
A public "alternative React," even positioned carefully, even labeled experimental, even explicitly scoped, is a community cost. People will benchmark it. People will compare. People will ask the maintainers of the real thing to respond. Some fraction of developers will try it out, hit a concurrent-mode edge case, not understand what they're seeing, and blame React. Confusion compounds. Community attention is finite.
I don't want to pay that cost, and I don't think anyone would benefit from me paying it. This isn't an alternative React. It's a narrow experiment shaped around the needs of a specific kind of app, and the narrowness is what makes it work. Releasing it would invite interpretations that aren't true.
So it stays private and experimental. It isn't going into TanStack Start. It's not a dependency of any TanStack package. Right now it runs on exactly two sites, this one and tanstack.com, and that's the entire surface area. Maybe it stays that way forever. Maybe at some point it's worth formalizing into something people can opt into. That's a later conversation.
Vinext chose the other path: public, documented, a plugin anyone can install. That's a valid choice and Cloudflare has the shoulders for it. It's not the one I'm making here. The same technique can be released very differently, and personal software is a concept that's going to matter more as projection costs drop. Not every derivation wants to be a product.
On whether React core should care
I don't expect the React core team to care about this, and I think that's the right reaction.
The honest framing: this is software tailor-built for exactly one consumer. It works better for me because it was shaped to my specific usage, and the generality I gave up is exactly what you'd have to give up to get my numbers. If there's anything here for React core, it's that their public API is well-specified enough to project against. That's a compliment.
The React team's job is to ship a general React. Mine is to ship TanStack Start. Those jobs used to overlap more than they do now, because running your own projection of React used to be impossible. Now it isn't. That doesn't mean React has to change. It means consumers like me have more responsibility for our own shape.
If anything, projections like this are useful feedback for a core team. Evidence of how a public API gets used in the wild, which parts are load-bearing, which parts go unused in specific domains. But they aren't proposals. Treating them that way misses what they are.
Distros, remixes, and the shape of the next few years
The analogy I keep coming back to is Linux distros.
There's no "real" Linux in the sense people sometimes imply when they use the word "fork." There's a kernel, and then there are hundreds of distributions projecting that kernel into whatever shape their users want: Debian, Arch, Alpine, NixOS, whatever runs on your router. Nobody thinks Arch is hurting Linux by existing. The pluralism is the point.
Song remixes are the same structure. There's the original track, and then there are derivative arrangements, and sometimes the derivative is the version people prefer. That doesn't diminish the original. It extends what the original can be.
I think the next few years of web development look more like distros and remixes than anyone's prepared for. People will build their own projections of the libraries they depend on, shaped around what they actually use. Some will ship as public products like vinext. Most shouldn't. The artifact stops being authoritative. The ideas do.
What's next
@tanstack/react is a private experiment. It doesn't live in TanStack Start, it's not a dependency of any TanStack package, and I'm not publishing it. Right now it runs on my personal site and on tanstack.com. That's the whole deployment footprint, and it may stay that way indefinitely. The project exists because I wanted to know what a version of React scoped to the shape of a Start app would cost to build, and once I saw the cost, not building it looked like the stranger choice.
Use the upstream. That's almost always right. But pay attention to the shape of your own consumers, and notice when you're shipping generalizations you no longer need. The cost of owning your own projection has dropped far enough that "just use the default" is no longer the automatic answer it used to be.
A year ago this post would have been science fiction. Today it's a weekend of work. That's the shift worth paying attention to.