The React Compiler Changed How I Write Components

React · Frontend · Performance

For years, writing performant React meant carrying a mental checklist into every component. Is this callback going to cause a child re-render? Does this computed value need wrapping? Should this list item be wrapped in React.memo? The ceremony was real, and getting it wrong was easy.

The React Compiler changes that contract. With v1.0 released in October 2025 and now enabled by default in new Next.js and Vite projects, the compiler handles memoization automatically at build time. You write the straightforward version of your component. The compiler figures out what to cache.

I've been running it in production for six months on performance-sensitive trading UIs: dashboards with hundreds of rapidly updating cells, order books re-rendering dozens of times per second. Here's what I've learned.

What the Compiler Actually Does

The short version: the compiler is a Babel plugin that analyzes your components at build time and injects memoization logic before your code ships. It does not use React.memo, useMemo, or useCallback. Instead, it inserts a lower-level caching mechanism (internally called _useMemoCache) that tracks reactive values at a more granular level than those APIs ever could.

The compiler works in three phases. First, it parses your component into an AST and classifies every expression as either static (constants, imports, things that never change), reactive (values derived from props or state), or derived (computed from reactive values). Then it infers what depends on what. Finally, it injects cache slots at the right boundaries.

The result is that a component like this:

function PriceRow({ instrument, price, onSelect }) {
  const formatted = formatCurrency(price, instrument.currency);
  const label = `${instrument.name} - ${instrument.exchange}`;

  return (
    <div className="price-row" onClick={() => onSelect(instrument.id)}>
      <span>{label}</span>
      <span>{formatted}</span>
    </div>
  );
}

...gets compiled such that formatted, label, and the onClick handler are each cached individually. If price changes but instrument doesn't, only formatted is recomputed. The label doesn't change, and the onClick callback stays stable, so children receiving those values skip re-rendering.
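To make the slot mechanism concrete, here's a hand-written model of it in plain JavaScript. This is a sketch, not actual compiler output: the real thing uses an internal runtime hook, and formatCurrency is replaced with a trivial stand-in. But the shape is the same: one set of cache slots per component instance, and each derived value is recomputed only when its own inputs change.

```javascript
// Simplified model of the compiler's per-component cache slots.
// Real compiled output uses an internal React runtime hook; this
// standalone version only illustrates the recompute-on-change idea.

function createMemoCache(size) {
  return new Array(size).fill(undefined);
}

function compiledPriceRow(cache, { instrument, price }) {
  // Slot group 1: `formatted` depends on price and instrument.currency.
  let formatted;
  if (cache[0] !== price || cache[1] !== instrument.currency) {
    formatted = `${instrument.currency} ${price.toFixed(2)}`; // stand-in for formatCurrency
    cache[0] = price;
    cache[1] = instrument.currency;
    cache[2] = formatted;
  } else {
    formatted = cache[2];
  }

  // Slot group 2: `label` depends only on instrument.
  let label;
  if (cache[3] !== instrument) {
    label = `${instrument.name} - ${instrument.exchange}`;
    cache[3] = instrument;
    cache[4] = label;
  } else {
    label = cache[4];
  }

  return { formatted, label };
}

const cache = createMemoCache(5);
const aapl = { name: "AAPL", exchange: "NASDAQ", currency: "USD" };

const first = compiledPriceRow(cache, { instrument: aapl, price: 100 });
const second = compiledPriceRow(cache, { instrument: aapl, price: 101 });

console.log(second.formatted);              // "USD 101.00" — recomputed
console.log(second.label === first.label);  // true — label slot reused
```

Each "render" is just another call with the same cache; only the slot group whose inputs changed does any work.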

Before the compiler, you'd have written something like:

const PriceRow = React.memo(function PriceRow({ instrument, price, onSelect }) {
  const formatted = useMemo(
    () => formatCurrency(price, instrument.currency),
    [price, instrument.currency]
  );
  const label = useMemo(
    () => `${instrument.name} - ${instrument.exchange}`,
    [instrument.name, instrument.exchange]
  );
  const handleClick = useCallback(
    () => onSelect(instrument.id),
    [onSelect, instrument.id]
  );

  return (
    <div className="price-row" onClick={handleClick}>
      <span>{label}</span>
      <span>{formatted}</span>
    </div>
  );
});

Both versions behave the same. One is a lot easier to read and change.

What It Handles Well

The compiler is genuinely good at the common cases. If your components are pure functions that follow the Rules of React, it'll optimize most of what you write without any help.

Specifically, it handles:

  • Expensive calculations inside render: memoized automatically when dependencies haven't changed
  • Stable function references passed as props: no more useCallback wrappers to prevent child re-renders
  • Inline objects and arrays: the compiler doesn't create a new reference unless the underlying data changed
  • JSX sub-trees: individual elements get their own cache slots, so a stable header doesn't re-render just because the body changed

The real-world numbers are meaningful. Instagram saw a 3% improvement across pages when Meta rolled this out internally. Sanity Studio reported 20-30% better rendering. Wakelet measured a 10% improvement in LCP and 15% in INP. These aren't lab benchmarks. They're production apps.

On our trading dashboards, the impact was most visible in the order book component, a wide grid updating at high frequency. After enabling the compiler, we removed about 200 lines of manual memoization that we'd accumulated over two years of performance work. The rendering profile looked roughly the same. That's a good sign: the compiler is doing what we were doing manually, but without the maintenance cost.

Where It Falls Short

I want to be honest about the gaps, because there are real ones.

The library problem is the biggest one. The compiler relies on referential stability: a value counts as changed when its reference changes. Some third-party hooks break that assumption. useMutation from TanStack Query, useTheme from Material UI, and useLocation from React Router all returned new object references on every render in the versions we tested (those libraries have since shipped compiler-compatible updates). When a hook returns a new reference even though the underlying data hasn't changed, the compiler can't do anything. It bails out of optimizing that component.
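The failure mode is easy to reproduce outside React. The compiler, like React.memo, compares dependencies by reference, so "equal" data in a fresh wrapper object still looks changed on every render:

```javascript
// Why a hook returning a fresh object defeats memoization:
// dependency checks use referential equality, so a new wrapper
// around unchanged data is indistinguishable from a real change.

function unstableHookResult(data) {
  return { data }; // new wrapper object on every call ("every render")
}

const payload = { price: 100 };
const a = unstableHookResult(payload);
const b = unstableHookResult(payload);

console.log(Object.is(a, b));           // false — cache miss every render
console.log(Object.is(a.data, b.data)); // true — the payload never changed
```

Compiler-compatible hooks avoid this by returning the same object reference until the data actually changes.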

The library ecosystem is catching up, but if your codebase is heavy with older third-party hooks, you'll see less benefit than you expect.

The Rules of React are non-negotiable. The compiler requires that your components be pure functions. If you're mutating state directly, calling hooks conditionally, or reading from non-deterministic sources like Math.random() or Date.now() during render, the compiler silently skips those components. It doesn't crash. It just doesn't optimize. That's the right failure mode, but it means you might think you have the compiler working when a bunch of your components are opted out.

The ESLint plugin (eslint-plugin-react-compiler) surfaces these cases. Running it on our codebase when we first enabled the compiler found about 15 components with violations we didn't know about. Finding them was useful. Fixing them took a sprint.
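Enabling the plugin is a small config change. A minimal sketch using ESLint's flat config, assuming eslint-plugin-react-compiler is installed (check the plugin's README for the current rule name and any recommended preset):

```javascript
// eslint.config.js — flat config sketch for eslint-plugin-react-compiler
import reactCompiler from "eslint-plugin-react-compiler";

export default [
  {
    plugins: { "react-compiler": reactCompiler },
    rules: {
      // Flags Rules-of-React violations that would make the
      // compiler bail out of a component.
      "react-compiler/react-compiler": "error",
    },
  },
];
```

Running it once across the whole codebase gives you the opt-out inventory up front, rather than discovering bailouts one profiler session at a time.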

Non-trivial data flows confuse it. The compiler tracks data flow statically. If you have a mutable ref that you write to in one effect and read from in a child component, the compiler can't reliably trace that path. It'll bail out. Same if you're accessing values through dynamic property access patterns. In practice, this comes up more in older codebases than in greenfield code.

The debugging story is worse. Manual memoization is explicit. When something re-renders unexpectedly, you look at the useMemo or useCallback and check the dependencies. With the compiler, it's a black box. You're looking at compiled output in React DevTools and trying to infer what the compiler decided. The DevTools "Memo ✨" badge tells you the compiler processed a component, but it does not tell you whether the memoization is actually working. I've had more than one session staring at the profiler trying to understand why something was re-rendering when it shouldn't.

When You Still Need Manual useMemo and useCallback

I thought I'd be removing all my manual memoization hooks. That turned out to be naive.

There are a few cases where you still want them:

Effect dependencies. If a value is used as a dependency in a useEffect, you sometimes need a guarantee about its reference stability that goes beyond what the compiler infers. The compiler is good at this, but I've had cases where a refactor quietly changed how it classified a value, and the effect started firing more than expected. A manual useMemo in those spots is a contract: this value should be stable across renders unless these specific dependencies change. The compiler respects manual memoization and won't override it.
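That "contract" can be isolated outside React. The sketch below models the guarantee useMemo gives you, same dependencies in, same reference out, as a plain function. memoizeOne is a name I'm using for illustration, not a React API:

```javascript
// A plain-JS model of the useMemo contract: if the dependency
// array is referentially unchanged, you get back the exact same
// result reference — which is what keeps a downstream effect
// from re-firing.
function memoizeOne(fn) {
  let lastDeps = null;
  let lastResult;
  return (deps) => {
    if (
      lastDeps !== null &&
      deps.length === lastDeps.length &&
      deps.every((d, i) => Object.is(d, lastDeps[i]))
    ) {
      return lastResult; // stable reference: dependency check passes
    }
    lastDeps = deps.slice();
    lastResult = fn(...deps);
    return lastResult;
  };
}

const sortedKey = memoizeOne((symbols) => symbols.slice().sort());
const syms = ["MSFT", "AAPL"];

console.log(sortedKey([syms]) === sortedKey([syms])); // true — same reference
```

Writing the useMemo yourself pins this guarantee in place; a compiler reclassification can't quietly take it away.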

Genuinely expensive computations. I'm talking about parsing 100,000 rows, running a graph layout algorithm, doing heavy filtering and sorting on unvirtualized data. The compiler will memoize these, but an explicit useMemo communicates intent to your teammates. It's documentation as much as it is optimization.

External library boundaries. If you're passing values into a library that hasn't been updated for compiler compatibility, the stability guarantees need to be explicit. A useCallback wrapping the handler is still the clearest way to signal "this reference is intentionally stable."

The mental shift I've landed on: use the compiler as your default, and reach for manual hooks when you need a contract, whether for correctness or for communication.

How the Mental Model Changes for New Developers

This is where I think the long-term impact is most interesting.

Before the compiler, React had a split audience. Developers who understood useMemo and useCallback deeply could write efficient code. Developers who didn't (or who were newer to React) either wrote slow components or added memoization cargo-cult style, wrapping everything just in case. Both outcomes were common.

The compiler collapses that split. You can now write the simple, clear version of your component and trust the compiler to handle the performance side. A junior developer can write:

function UserCard({ user, onEdit }) {
  const displayName = `${user.firstName} ${user.lastName}`;
  return (
    <div>
      <h2>{displayName}</h2>
      <button onClick={() => onEdit(user.id)}>Edit</button>
    </div>
  );
}

...and get the same rendering behavior as the version with all the memoization wrappers. That's genuinely good for the ecosystem.

That said, I don't think it makes React easier. It makes the common cases easier. But the compiler's rules, the bailout behavior, the library compatibility issues, the debugging story: these things require understanding what the compiler actually does. New developers who learn React in 2026 won't need to know useMemo on day one. But they'll need to understand purity, referential stability, and the Rules of React eventually. The complexity moved, but it didn't disappear.

Migrating an Existing Codebase

I'll skip the obvious "enable the compiler in your build config" step and focus on what actually took time.

The 'use no memo' directive is your friend during migration. You add it to a component's function body and the compiler skips it entirely:

function LegacyWidget() {
  'use no memo';
  // compiler ignores this component
  // safe to migrate piece by piece
}

This lets you enable the compiler project-wide without a big-bang audit. Start with new components. Run the ESLint plugin. Fix violations as you go. Remove 'use no memo' as you review each component.

One thing I wish someone had told me earlier: don't remove your existing useMemo and useCallback calls during the initial migration. Let the compiler run alongside them first. The compiler will double-memoize (unnecessary but harmless) and you can observe the behavior before cleaning up. Once you've confirmed the compiler is handling things correctly, remove the manual hooks one component at a time. It's slower, but you catch regressions.

Is React Simpler Now?

Honestly? Not simpler. Differently complex.

The surface-area complexity went down. You write less code to express the same intent. The boilerplate is gone. That's real.

But the underlying conceptual model got more opaque. Before, you could trace exactly why something was or wasn't memoized. Now you're working with a tool that makes decisions for you, and understanding why those decisions are right or wrong requires knowing the compiler's internals. That's a higher bar, not a lower one.

I think that's the right tradeoff. Most of the time, the compiler's decisions are correct, and writing less boilerplate code produces fewer bugs. The cases where you need to understand the compiler deeply are fewer than the cases where you used to need to understand memoization deeply. So on average, it's a win.

But I'd push back on the framing that the compiler is a solution to React's complexity. It's a solution to one specific category of complexity: the mechanical, repetitive parts. The thinking is still yours.

