
React Server Components + Streaming in Production: A Deep Dive (App Router 2025–2026)

How we migrated 400+ pages to RSC, what trade-offs actually materialized, how to measure and fix waterfalls, and why Suspense boundary placement is an engineering choice — not something the framework figures out for you.

We started our App Router migration with one expectation: RSC would make our data fetching simpler and our pages faster. Twelve months later, both are true — but not for the reasons we originally thought, and not without a significant detour through waterfall hell and Suspense boundary rethinks.

This is the article I wish had existed before we started.

Why We Migrated

Our product is a B2B SaaS dashboard with heavy data requirements per page — typically 4 to 8 independent data sources per route. In the Pages Router, this meant either over-fetching everything in getServerSideProps and passing a giant props blob down, or managing a patchwork of client-side data fetching with loading states everywhere.

RSC offered a different model: fetch data where it is used, at the component level, without client-side JavaScript. The pitch was compelling. The reality was more nuanced.

The Migration: What 400+ Pages Actually Means

Our 400+ pages number is slightly misleading — many are dynamic routes that resolve to a handful of distinct layouts. The actual migration work broke down into three categories:

  • Layout components — navigation, sidebars, breadcrumbs. These were the easiest: mostly server-rendered with no interactivity requirements.
  • Data-heavy page components — dashboards, tables, detail views. High value from RSC; also highest risk of waterfall introduction.
  • Interactive leaf components — forms, modals, real-time widgets. These stay as Client Components and required the most rethinking.
Note: The most common migration mistake is adding 'use client' too high in the tree. This pushes the Client Component boundary up and eliminates the RSC benefit for everything below it. Push the boundary as far down as possible.

Waterfalls: How They Form and How to Detect Them

A waterfall in RSC occurs when a Server Component awaits data before rendering its children, and those children also await data. Each await is sequential — they do not run in parallel unless you explicitly make them parallel.

Here is the pattern that creates a waterfall:

// BAD — sequential fetches
async function ProductPage({ id }: { id: string }) {
  const product = await fetchProduct(id);          // waits ~120ms
  const reviews = await fetchReviews(id);          // waits another ~80ms
  const related = await fetchRelatedProducts(id);  // waits another ~90ms
  // Total: ~290ms before any HTML is sent
  return <Layout product={product} reviews={reviews} related={related} />;
}

And here is how to fix it:

// GOOD — parallel fetches with Promise.all
async function ProductPage({ id }: { id: string }) {
  const [product, reviews, related] = await Promise.all([
    fetchProduct(id),
    fetchReviews(id),
    fetchRelatedProducts(id),
  ]);
  // Total: ~120ms (the slowest of the three)
  return <Layout product={product} reviews={reviews} related={related} />;
}

But Promise.all only helps when the data is truly independent. When one fetch depends on the result of another, you have a legitimate waterfall — and the solution is different.
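For a legitimate dependency chain, the goal is to start every *independent* fetch before you begin awaiting the chain. A minimal sketch, using hypothetical fetchers (fetchUser, fetchOrgSettings, fetchAnnouncements — not helpers from our codebase):

```typescript
// Hypothetical stubs: fetchOrgSettings depends on the user's orgId,
// fetchAnnouncements depends on nothing.
async function fetchUser(id: string): Promise<{ orgId: string }> {
  return { orgId: `org-${id}` };
}
async function fetchOrgSettings(orgId: string): Promise<{ theme: string }> {
  return { theme: "dark" };
}
async function fetchAnnouncements(): Promise<string[]> {
  return ["maintenance window"];
}

async function loadPage(userId: string) {
  // Start the independent fetch immediately — do NOT await it yet.
  const announcementsPromise = fetchAnnouncements();

  // This chain is a legitimate waterfall: settings need the user's orgId.
  const user = await fetchUser(userId);
  const settings = await fetchOrgSettings(user.orgId);

  // The independent fetch has been running the whole time.
  const announcements = await announcementsPromise;
  return { user, settings, announcements };
}
```

The total latency becomes the dependent chain plus nothing extra for the independent fetch, instead of all three stacked end to end.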

Detecting Waterfalls in Production

We instrumented our fetch layer to log timing data to our observability platform. Each fetch is wrapped in a lightweight tracer:

// lib/traced-fetch.ts

export async function tracedFetch<T>(
  key: string,
  fetcher: () => Promise<T>,
): Promise<T> {
  const start = performance.now();
  try {
    return await fetcher();
  } finally {
    const duration = performance.now() - start;
    if (duration > 200) {
      console.warn(`[slow-fetch] ${key} took ${duration.toFixed(0)}ms`);
    }
  }
}

Combined with Next.js's built-in request logging in development, this made waterfalls visible within the first day of a new page's deployment.
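In page code, the tracer composes naturally with Promise.all. A sketch, reproducing tracedFetch so it is self-contained and standing in hypothetical stubs for the real data-layer calls:

```typescript
// tracedFetch as defined above, reproduced so this sketch runs on its own.
async function tracedFetch<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
  const start = performance.now();
  try {
    return await fetcher();
  } finally {
    const duration = performance.now() - start;
    if (duration > 200) {
      console.warn(`[slow-fetch] ${key} took ${duration.toFixed(0)}ms`);
    }
  }
}

// Hypothetical stubs standing in for real data-layer calls.
const fetchProduct = async (id: string) => ({ id, name: "Widget" });
const fetchReviews = async (id: string) => [{ id, rating: 5 }];

// Parallel fetches, each individually traced — so when a waterfall or a
// slow source appears, the logs name the offending key.
async function loadProductPage(id: string) {
  const [product, reviews] = await Promise.all([
    tracedFetch("product", () => fetchProduct(id)),
    tracedFetch("reviews", () => fetchReviews(id)),
  ]);
  return { product, reviews };
}
```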

Suspense Boundary Placement: The Real Engineering Work

Streaming with Suspense is where RSC delivers its most significant UX improvement — the server can send HTML in chunks as data becomes available, rather than waiting for the slowest fetch before sending anything.

But Suspense boundary placement is a product decision masquerading as a technical one. Ask the wrong question ("where should I put Suspense?") and you get the wrong answer. The right question is: what is the minimum useful unit of UI that can be shown without this data?

Mental model: Think of each Suspense boundary as a contract: "the user will see this shell immediately, and this data-dependent section will fill in as soon as it is ready." Design the shell first, then decide where the data boundary falls.

In practice, we settled on three boundary categories:

  • Page shell — navigation, header, layout chrome. Always immediate, never behind Suspense.
  • Primary content — the main data section. Behind a single Suspense with a content-shaped skeleton.
  • Secondary panels — related data, activity feeds, analytics widgets. Each behind its own Suspense, can arrive independently.
// app/dashboard/[id]/page.tsx

export default function DashboardPage({ params }: { params: { id: string } }) {
  return (
    <DashboardShell>
      <Suspense fallback={<MetricsSkeleton />}>
        <MetricsPanel id={params.id} />
      </Suspense>
      <Suspense fallback={<ActivitySkeleton />}>
        <ActivityFeed id={params.id} />
      </Suspense>
    </DashboardShell>
  );
}

Trade-offs We Did Not Expect

Cache invalidation got harder

RSC with Next.js caches aggressively. The revalidate option and cache: 'no-store' give you control, but stale data bugs are now harder to diagnose — the cache lives on the server, not in the browser DevTools you are used to inspecting.
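To make the failure mode concrete, here is a conceptual model of time-based revalidation — this is NOT Next.js's implementation, just an illustration of why stale reads happen on the server where DevTools cannot see them:

```typescript
// Conceptual sketch of a server-side TTL cache (not Next.js internals).
type Entry<T> = { value: T; fetchedAt: number };

function createTtlCache<T>(
  ttlMs: number,
  fetcher: (key: string) => Promise<T>,
) {
  const entries = new Map<string, Entry<T>>();
  return async (key: string, now = Date.now()): Promise<T> => {
    const hit = entries.get(key);
    // Within the revalidate window the cached value is served as-is.
    // The browser never sees this decision — it happens on the server.
    if (hit && now - hit.fetchedAt < ttlMs) return hit.value;
    const value = await fetcher(key);
    entries.set(key, { value, fetchedAt: now });
    return value;
  };
}
```

When a bug report says "the data is stale," the first question is now which server-side cache entry is inside its window — a question your browser tooling cannot answer.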

Error boundaries need more thought

In the Pages Router, a data fetching error in getServerSideProps could redirect to a 500 page. In RSC, an unhandled error in a Server Component crashes the nearest error boundary. If that boundary is too high, you lose the entire page. If it is too granular, you end up with many small broken UI sections that confuse users.

Testing became simpler in some ways, harder in others

Server Components are pure functions of their props and environment — easy to unit test. But integration tests that span the server/client boundary require either a full Next.js test environment or careful mocking of the RSC runtime, neither of which is trivial.
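One mitigation that worked for us, sketched with hypothetical names: keep data access and normalization in a plain loader function and make the Server Component a thin wrapper around it, so unit tests target the loader with no React runtime at all.

```typescript
// Hypothetical loader extracted from a Server Component. The component
// body shrinks to roughly: `const data = await loadMetrics(id, source)`.
type Metrics = { id: string; activeUsers: number };

type MetricsSource = (id: string) => Promise<{ active_users: number }>;

export async function loadMetrics(
  id: string,
  source: MetricsSource,
): Promise<Metrics> {
  const raw = await source(id);
  // Normalization lives here, where it is trivially unit-testable —
  // the source is injected, so tests pass a stub instead of mocking fetch.
  return { id, activeUsers: raw.active_users };
}
```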

Watch out: Do not use React Query or SWR in Server Components — they are designed for client-side data synchronization. In RSC, fetch directly and let Next.js handle deduplication and caching at the fetch layer.
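What that deduplication buys you, conceptually: two components rendering in the same request that ask for the same resource share one in-flight promise. A sketch of the idea (not Next.js's implementation):

```typescript
// Conceptual per-request deduper: identical keys share one promise,
// so two Server Components asking for the same data cause one fetch.
function dedupePerRequest<T>(fetcher: (key: string) => Promise<T>) {
  const inFlight = new Map<string, Promise<T>>();
  return (key: string): Promise<T> => {
    const existing = inFlight.get(key);
    if (existing) return existing; // second caller reuses the same promise
    const promise = fetcher(key);
    inFlight.set(key, promise);
    return promise;
  };
}
```

This is why fetching "where the data is used" does not multiply network calls: co-located fetches for the same key collapse into one.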

Results After 12 Months in Production

Across our 10 highest-traffic routes, moving to RSC + streaming delivered:

  • TTFB improved by an average of 340ms — the server sends the shell before all data is ready
  • JavaScript bundle size reduced by 18% — less client-side data fetching code
  • LCP improved on 8 of 10 routes — primary content streams earlier
  • INP was unchanged — our interactive components are still Client Components

The biggest win was not in the numbers. It was in the codebase: data fetching logic is now co-located with the components that use it, and the props-drilling chains that previously spanned 4–5 component layers are gone.

Key Takeaways

  • Use Promise.all for independent fetches — never await independent data sequentially in the same component
  • Push the 'use client' boundary as far down the tree as possible
  • Design Suspense boundaries around "minimum useful UI units", not technical convenience
  • Instrument your fetch layer from day one — waterfalls are invisible without tracing
  • Plan your error boundary strategy before you migrate, not after your first production incident