React Performance Patterns & State Management

Suspense, React.lazy, and Component-Level Code Splitting

React.lazy and <Suspense> are React’s built-in primitives for component-level code splitting. React.lazy(() => import('./HeavyComponent')) creates a dynamically imported component that loads only when rendered, producing a separate chunk that’s fetched on demand. <Suspense fallback={<Loading />}> wraps lazy components to show a loading indicator while the chunk downloads. This is the foundation for reducing initial bundle size without any external library.

Route-based splitting is the highest-impact starting point: wrap each route’s component in React.lazy so users only download code for the page they’re visiting. In React Router v7, the lazy route property handles this natively. In Next.js App Router, route-level splitting happens automatically. Component-based splitting targets heavy widgets loaded on interaction — modals, rich text editors, chart libraries, date pickers. Render them conditionally with React.lazy + a state toggle so the chunk downloads only when the user triggers the component.
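A minimal sketch of route-based splitting with React.lazy and Suspense. The page modules (./pages/Dashboard, ./pages/Settings) and the route prop are hypothetical stand-ins for your router's own matching; each page module is assumed to have a default export, which React.lazy requires.

```typescript
import { lazy, Suspense } from "react";

// Each lazy() call produces a separate chunk, fetched the first time the
// component actually renders.
const Dashboard = lazy(() => import("./pages/Dashboard"));
const Settings = lazy(() => import("./pages/Settings"));

export function App({ route }: { route: string }) {
  const Page = route === "/settings" ? Settings : Dashboard;
  return (
    // The fallback renders while the active route's chunk downloads.
    <Suspense fallback={<p>Loading…</p>}>
      <Page />
    </Suspense>
  );
}
```

In a real app the `route → component` mapping comes from the router; the point is that the Suspense boundary sits above the lazy components so any of them can suspend into the same fallback.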

In React 19, Suspense has evolved beyond lazy loading into a general-purpose async coordination mechanism. It works with the use() hook for data fetching (suspend until a Promise resolves), with streaming SSR (show fallbacks for slow server-rendered sections), and with startTransition for non-urgent updates. You can nest Suspense boundaries for granular control — an outer boundary for the page layout and inner boundaries for individual data-dependent sections. Always pair Suspense with Error Boundaries to gracefully handle network failures during chunk loading.

A key performance pattern: preload on hover, render on click. When a user hovers over a button that will open a lazy-loaded modal, fire import('./Modal') on onMouseEnter — this starts the download. By the time the user clicks, the chunk is likely cached and the modal appears instantly. This combines the bandwidth efficiency of lazy loading with the perceived speed of eager loading (see Section 6 for more on this pattern).
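The hover-preload trick can be sketched as a small wrapper around a dynamic-import thunk that runs at most once; `preloadOnce` and the './Modal' path in the usage comment are hypothetical names, not a library API.

```typescript
type Loader<T> = () => Promise<T>;

// Wrap a dynamic-import thunk so it fires at most once; every later call
// reuses the same in-flight (or resolved) promise.
function preloadOnce<T>(load: Loader<T>): Loader<T> {
  let pending: Promise<T> | undefined;
  // `??=` assigns only on the first call; later calls return the cached promise.
  return () => (pending ??= load());
}

// Usage sketch in a component:
//   const loadModal = preloadOnce(() => import("./Modal"));
//   <button onMouseEnter={() => loadModal()} onClick={() => setOpen(true)}>
```

Because the thunk is idempotent, it is safe to call it from onMouseEnter, onFocus, and onTouchStart alike; only the first call triggers a network request.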

Memoization in the Compiler Era

Before the React Compiler, preventing unnecessary re-renders required manual memoization: React.memo() to skip re-rendering when props haven’t changed, useMemo() to cache expensive computations, and useCallback() to stabilize function references passed as props. Getting this right was error-prone — missing a dependency, over-memoizing (adding complexity for no benefit), or memoizing the wrong thing were all common pitfalls.

The React Compiler (covered in Section 9) changes this calculus fundamentally. It analyzes component code at build time and automatically inserts optimizations where they produce a measurable benefit. Early adopters report 25–40% fewer re-renders without any code changes, and some teams have removed thousands of lines of manual memoization. For new projects using the React Compiler, the guidance is simple: don’t bother with useMemo, useCallback, or memo unless profiling reveals a specific problem. Write clear, readable components and let the compiler optimize.

For projects not yet using the compiler, manual memoization is still valuable but should be guided by profiling, not intuition. Use the React DevTools Profiler to identify components that re-render frequently with the same output. Focus memoization on: expensive computations (sorting/filtering large arrays), components receiving new object/array references as props on every render, and components passed to memoized children (where an unstable callback prop would defeat the child’s memo). The most common anti-pattern is wrapping everything in memo — this adds overhead for components that almost always receive different props and would re-render anyway.
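The "new object/array references as props" pitfall follows directly from how memo compares props. The sketch below models that shallow comparison (it is not React's actual source) to show why a freshly created array defeats memoization even when its contents are identical.

```typescript
// React.memo decides whether to skip a re-render with a shallow comparison
// of old and new props: same keys, and Object.is on each value.
function shallowEqual(
  a: Record<string, unknown>,
  b: Record<string, unknown>,
): boolean {
  const aKeys = Object.keys(a);
  const bKeys = Object.keys(b);
  if (aKeys.length !== bKeys.length) return false;
  // Object.is compares references for objects, values for primitives.
  return aKeys.every((key) => Object.is(a[key], b[key]));
}

const items = [1, 2, 3];
// Stable reference across renders: memo skips the re-render.
const stable = shallowEqual({ items }, { items });
// Fresh array each render: same contents, different reference, so memo
// cannot skip anything.
const unstable = shallowEqual({ items: [1, 2, 3] }, { items: [1, 2, 3] });
```

This is also why an unstable inline callback passed to a memoized child defeats the child's memo: the function is a new reference on every parent render.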

Concurrent Rendering and startTransition

React 18’s concurrent rendering enables features that keep the UI responsive during heavy updates. startTransition marks a state update as non-urgent — React will begin the update but can interrupt it to handle more important work (like user input). If the user types in a search box while a large list is filtering, the typing stays responsive because the list re-render is a transition that can be interrupted and restarted.

useTransition provides a pending boolean alongside startTransition, letting you show a loading indicator for the transition. useDeferredValue defers updates to a value, showing stale data while the fresh data renders in the background — ideal for search-as-you-type interfaces where the input should never lag. These patterns compose naturally with Suspense: a transition that triggers a Suspense boundary will keep the previous UI visible (rather than showing a fallback) until the new content is ready.
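A sketch of the search-as-you-type pattern with useDeferredValue; the `allItems` prop and component names are hypothetical.

```typescript
import { useDeferredValue, useState } from "react";

export function Search({ allItems }: { allItems: string[] }) {
  const [query, setQuery] = useState("");
  // Typing updates `query` immediately; React lets `deferredQuery` lag
  // behind during heavy renders, so the input never feels sluggish.
  const deferredQuery = useDeferredValue(query);
  const results = allItems.filter((item) => item.includes(deferredQuery));
  return (
    <>
      <input value={query} onChange={(e) => setQuery(e.target.value)} />
      <ul>
        {results.map((item) => (
          <li key={item}>{item}</li>
        ))}
      </ul>
    </>
  );
}
```

For the full benefit, the expensive list is usually extracted into a memo-wrapped child that receives only deferredQuery, so the urgent re-render (the keystroke) can reuse the previous list output entirely.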

For performance, the key insight is that concurrent features let you prioritize responsiveness over completeness. Instead of blocking the UI until a heavy render finishes, you render what you can immediately and fill in the rest asynchronously. This directly improves INP because user interactions are never blocked by ongoing renders.

Virtual Lists for Large Datasets

Rendering thousands of DOM nodes kills performance — each node consumes memory, triggers layout calculations, and slows paint. Virtual lists (also called “windowed” rendering) solve this by rendering only the items currently visible in the viewport, plus a small overscan buffer. As the user scrolls, items that leave the viewport are unmounted and new items are mounted — maintaining a constant DOM size regardless of list length.

TanStack Virtual (formerly React Virtual) is the current recommendation. It’s headless (render-agnostic), supports vertical, horizontal, and grid layouts as well as dynamic row heights, and integrates with any scrollable container. react-window (by Brian Vaughn, formerly of the React core team) is a lighter alternative for simpler cases. For very large datasets (100K+ items), combine virtual lists with content-visibility: auto on the container, and consider pagination or infinite scroll as a UX pattern rather than loading all data upfront.
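The core arithmetic behind a fixed-height virtual list fits in a few lines: from the scroll position and viewport height, compute which item indices to mount. This is a sketch of the underlying idea only; libraries like TanStack Virtual add dynamic measurement, scroll margins, and grid support on top.

```typescript
interface VisibleRange {
  start: number;   // first item index to mount
  end: number;     // last item index to mount (inclusive)
  offsetY: number; // translateY for the first mounted item, in px
}

function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  itemHeight: number,
  itemCount: number,
  overscan = 3, // extra rows above/below the viewport to avoid flicker
): VisibleRange {
  const first = Math.floor(scrollTop / itemHeight);
  const visibleCount = Math.ceil(viewportHeight / itemHeight);
  const start = Math.max(0, first - overscan);
  const end = Math.min(itemCount - 1, first + visibleCount + overscan);
  // offsetY positions the first mounted item at its absolute scroll position
  // inside a container sized to itemCount * itemHeight.
  return { start, end, offsetY: start * itemHeight };
}

// 100K rows of 30px in a 600px viewport scrolled to 30,000px: only a few
// dozen rows are mounted instead of 100,000.
const range = visibleRange(30_000, 600, 30, 100_000);
```

The component then renders only items `start..end` inside a spacer div of full list height, translated down by `offsetY`, which keeps the scrollbar accurate while the DOM stays constant-size.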

The Re-Render Problem: Context, Prop Drilling, and State Architecture

Understanding React’s re-rendering model is essential for performance. A component re-renders when: its state changes, its parent re-renders (passing new props), or a Context it consumes changes. The third case is the most dangerous for performance because any change to a Context value re-renders every consumer of that context, even if the specific piece of data a component uses hasn’t changed. This is the fundamental limitation of React’s built-in Context API for global state.

Prop drilling (passing props through multiple component layers) is often maligned but is actually the most performant pattern — components only re-render when the props they receive actually change. The downsides are ergonomic, not performance-related: deep prop chains are verbose and couple intermediate components to data they don’t use. Context is the React-blessed escape hatch, but it should be used for low-frequency, read-heavy state like theme, locale, authenticated user, and feature flags — not for rapidly changing data like form input values, animation state, or real-time counters.

When Context performance becomes a problem, the solutions are: split contexts (separate ThemeContext from UserContext from CartContext so changes to one don’t re-render consumers of the others), memoize the context value (useMemo on the value object to prevent new references on every render), or graduate to an external store that provides selector-based subscriptions.
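A sketch of the "memoize the context value" fix, with a hypothetical ThemeContext. Without the useMemo, the Provider would pass a new object identity on every render, re-rendering every consumer even when theme itself is unchanged.

```typescript
import { createContext, useMemo, useState, type ReactNode } from "react";

interface Theme {
  theme: "light" | "dark";
  toggle: () => void;
}

const ThemeContext = createContext<Theme | null>(null);

export function ThemeProvider({ children }: { children: ReactNode }) {
  const [theme, setTheme] = useState<"light" | "dark">("light");
  // Stable identity: consumers re-render only when `theme` actually changes,
  // not whenever ThemeProvider happens to re-render.
  const value = useMemo(
    () => ({
      theme,
      toggle: () => setTheme((t) => (t === "light" ? "dark" : "light")),
    }),
    [theme],
  );
  return <ThemeContext.Provider value={value}>{children}</ThemeContext.Provider>;
}
```

Note that memoizing the value only helps consumers skip renders caused by the Provider re-rendering for unrelated reasons; when theme does change, every consumer still re-renders, which is why splitting contexts or moving to a selector-based store remains the fix for high-frequency state.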

State Management Libraries: Performance Characteristics

Zustand: The Sweet Spot

Zustand (~3KB) has become the default choice for most React applications in 2025–2026. It creates a store with hooks — components subscribe to specific slices of state via selectors, and only re-render when their selected slice changes. No Provider wrapper needed (it works at module level), no action types or reducers unless you want them, and the API feels like a natural extension of useState. Zustand uses useSyncExternalStore under the hood for correct concurrent rendering behavior. Its middleware ecosystem supports persistence (localStorage/sessionStorage), Redux DevTools integration, and Immer for immutable updates.

Jotai: Atomic Precision

Jotai (~1.2KB) takes the opposite approach: instead of one store, you create many small atoms — independent pieces of state that components subscribe to individually. A component using useAtom(countAtom) re-renders only when countAtom changes, regardless of what other atoms do. Derived atoms (computed from other atoms) automatically recompute when dependencies change, similar to MobX computed values or SolidJS signals. Jotai integrates natively with Suspense for async atoms. This “bottom-up” model produces the most granular re-rendering of any library, making it ideal for complex, interdependent state like form builders, design tools, or IoT dashboards.
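A toy sketch of the atomic model (not Jotai's implementation): primitive atoms hold values, derived atoms recompute from whatever atoms they read. Jotai's real atom() covers both cases in one function and adds subscriptions, async support, and Suspense integration; the names here are illustrative.

```typescript
type Get = <U>(a: Atom<U>) => U;

interface Atom<T> {
  init?: T;                  // primitive atom: initial value
  read?: (get: Get) => T;    // derived atom: recompute from dependencies
}

// Primitive atom: an independent piece of state.
function atom<T>(init: T): Atom<T> {
  return { init };
}

// Derived atom: recomputes from its dependencies on every read.
function derived<T>(read: (get: Get) => T): Atom<T> {
  return { read };
}

function createAtomStore() {
  const values = new Map<Atom<any>, any>();
  const get: Get = (a) => {
    if (a.read) return a.read(get);            // derived: follow dependencies
    if (!values.has(a)) values.set(a, a.init); // primitive: lazy init
    return values.get(a);
  };
  return {
    get,
    set<T>(a: Atom<T>, value: T) {
      values.set(a, value);
    },
  };
}

const countAtom = atom(1);
const doubledAtom = derived((get) => get(countAtom) * 2);
```

In React, a component using a hook like useAtom(countAtom) subscribes to that one atom, so updating countAtom re-renders it (and anything reading doubledAtom) while components on unrelated atoms stay untouched.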

Redux Toolkit: Enterprise Predictability

Redux remains relevant for large enterprise applications where strict unidirectional data flow, time-travel debugging, and middleware ecosystems (thunks, sagas) provide value. Redux Toolkit (RTK) has dramatically reduced boilerplate with createSlice and createAsyncThunk. Performance-wise, Redux uses react-redux’s useSelector with shallow equality checks — components only re-render when their selected slice changes. The risk is poorly written selectors that return new object references on every call, defeating the equality check. Use createSelector from Reselect for memoized derived data.
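The "selector returns a new reference every call" failure mode, and the memoization that fixes it, can be sketched with a single-input version of what Reselect's createSelector does (the real createSelector accepts multiple input selectors):

```typescript
// Cache the last input and result: when the input slice is unchanged,
// return the cached result reference so useSelector's equality check passes.
function createSelector<S, I, R>(
  input: (state: S) => I,
  combine: (input: I) => R,
): (state: S) => R {
  let lastInput: I | undefined;
  let lastResult: R | undefined;
  let called = false;
  return (state) => {
    const next = input(state);
    if (!called || !Object.is(next, lastInput)) {
      lastInput = next;
      lastResult = combine(next);
      called = true;
    }
    return lastResult as R;
  };
}

// Naive `state.todos.filter(...)` in useSelector would build a new array on
// every store update and force a re-render; this recomputes only when
// state.todos itself changes.
const selectDone = createSelector(
  (s: { todos: { done: boolean }[] }) => s.todos,
  (todos) => todos.filter((t) => t.done),
);
```

The cache is keyed by reference equality of the input slice, which is why Redux's immutable-update discipline matters: an unchanged slice keeps its reference, so the memoized selector can short-circuit.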

MobX: Observable Reactivity

MobX takes an object-oriented, reactive approach — mark state as observable, and any component that reads an observable value (wrapped in observer()) automatically re-renders when that value changes. Computed values (derived data that recalculates only when dependencies change) are MobX’s killer feature for performance — they cache results and only recompute when their observable inputs change, with no manual memoization needed. MobX’s fine-grained tracking means components re-render with surgical precision. The trade-off is the “magic” of automatic tracking, which can be harder to debug than explicit subscriptions.

React’s Built-in: useReducer and the Store Pattern

For applications that don’t warrant an external library, useReducer provides Redux-like structured updates (dispatch actions to a reducer function) with React’s built-in state management. Combine with Context for shared state, but be aware of Context’s re-render limitations. The useReducer + Context pattern works well for moderate complexity (authentication flows, multi-step wizards, shopping carts with a handful of consumers) but should be graduated to Zustand or Jotai when profiling reveals excessive re-renders.
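Because a reducer is a pure function, the state logic can be written and tested entirely outside React. A sketch with a hypothetical shopping-cart reducer:

```typescript
interface CartState {
  items: { id: string; qty: number }[];
}

type CartAction =
  | { type: "add"; id: string }
  | { type: "remove"; id: string };

// Pure function: (state, action) -> new state, never mutating the input.
function cartReducer(state: CartState, action: CartAction): CartState {
  switch (action.type) {
    case "add": {
      const existing = state.items.find((i) => i.id === action.id);
      return existing
        ? {
            items: state.items.map((i) =>
              i.id === action.id ? { ...i, qty: i.qty + 1 } : i,
            ),
          }
        : { items: [...state.items, { id: action.id, qty: 1 }] };
    }
    case "remove":
      return { items: state.items.filter((i) => i.id !== action.id) };
  }
}

// In a component:
//   const [cart, dispatch] = useReducer(cartReducer, { items: [] });
//   dispatch({ type: "add", id: "sku-123" });
```

Keeping the reducer pure is also what makes the later graduation path cheap: the same function can become a Zustand action or an RTK createSlice case reducer with minimal change.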

Server State vs Client State: TanStack Query and Apollo

A critical distinction for performance: server state (data fetched from APIs — cached, stale, asynchronous) and client state (UI state — themes, open/closed toggles, form input) should be managed separately. Mixing them in a single store leads to unnecessary complexity and re-renders.

TanStack Query (React Query) is the gold standard for server state. It handles caching, background refetching, stale-while-revalidate, pagination, infinite scroll, optimistic updates, and request deduplication. Importantly, it caches at the query level — a component that reads useQuery({ queryKey: ['user', id] }) only re-renders when that specific query’s data changes. This eliminates the “global store re-renders everything” problem for server data. TanStack Query also integrates with Suspense for declarative loading states.
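Request deduplication by query key can be sketched in a few lines (a toy, not TanStack Query's implementation): concurrent requests with the same serialized queryKey share one in-flight promise, so two components reading ['user', 1] trigger a single fetch.

```typescript
function createQueryCache() {
  const inFlight = new Map<string, Promise<unknown>>();
  return function fetchQuery<T>(
    queryKey: unknown[],
    fetcher: () => Promise<T>,
  ): Promise<T> {
    // Serialize the structured key so ['user', 1] always maps to the same entry.
    const key = JSON.stringify(queryKey);
    if (!inFlight.has(key)) inFlight.set(key, fetcher());
    return inFlight.get(key) as Promise<T>;
  };
}

const fetchQuery = createQueryCache();
```

The real library layers staleness metadata on each cache entry (staleTime, gcTime) to decide when a mount should reuse cached data silently versus refetch in the background.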

Apollo Client is the equivalent for GraphQL applications. Its normalized cache prevents redundant network requests by deduplicating entities across queries — if a User object appears in both a list query and a detail query, Apollo caches it once and updates both views. However, Apollo’s hooks (useQuery, useMutation) historically tie fetching to the component tree more tightly than the client’s imperative cache operations do. For performance, prefer watchQuery with fetchPolicy: 'cache-and-network' for data that should show cached results immediately while refreshing in the background, and useLazyQuery for queries triggered by user interaction rather than component mount.

SWR (by Vercel) is a lighter alternative to TanStack Query with a focus on simplicity and the stale-while-revalidate pattern. It’s particularly well-integrated with Next.js but has a smaller feature set than TanStack Query.

Profiling and Identifying Bottlenecks

The React DevTools Profiler is your primary tool for diagnosing re-render performance. Record a session, interact with the app, and examine the flamegraph to see which components rendered, why they rendered (props changed, parent rendered, context changed, hooks changed), and how long each render took. The “Highlight updates” feature visually flashes components that re-render, making it immediately obvious when something is re-rendering excessively.

Key patterns to look for: components that render frequently with identical output (candidates for memo or moving state down), context providers whose value changes on every render (need useMemo on the value), and expensive components that re-render due to parent re-renders (consider extracting into a separate component with memo, or restructuring to lift the changing state into a sibling). The Chrome DevTools Performance panel complements this with main-thread flame charts, long task identification, and LoAF attribution (see Section 2).
