Web Vitals in 2026: Keeping LCP Under 1.5s With 500k+ Users and Heavy SSR
Real cases from production: how font loading strategy, hero image delivery, third-party scripts, partial prerendering, and edge caching interact — and how to win each battle without sacrificing the others.
Our product serves 500k+ monthly active users through a Next.js SSR application with significant page weight — many pages load data from 5+ services, render complex UI, and include third-party scripts for analytics, support chat, and A/B testing. Keeping LCP under 1.5 seconds in that environment is an ongoing engineering discipline, not a one-time fix.
Here is what the optimization work actually looks like, problem by problem.
The LCP Candidate Problem
Before optimizing LCP, you need to know what the LCP element actually is on your page. A common mistake is assuming it is the hero image. In our case, on mobile viewports, the LCP element on several high-traffic pages was a text heading — because the image loaded below the fold.
Use the Chrome DevTools Performance panel and CrUX field data together. Lab data tells you what LCP is in ideal conditions; field data tells you what real users experience across device classes and connection speeds. The two are often different.
The web-vitals JavaScript library lets you collect field data in your own analytics. Send the LCP metric with its element attribution to understand which element is slow across real user sessions, segmented by device type.
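A sketch of what that collection can look like. The payload shape, thresholds, and the /vitals endpoint here are illustrative; the element selector comes from the library's attribution build (web-vitals/attribution):

```typescript
// Minimal shape of the fields read from web-vitals' LCP metric.
// `attribution.element` (a CSS selector for the LCP element) assumes
// the attribution build: `import { onLCP } from 'web-vitals/attribution'`.
type LcpMetric = {
  name: 'LCP';
  value: number; // milliseconds
  rating: 'good' | 'needs-improvement' | 'poor';
  attribution?: { element?: string };
};

// Build an analytics payload that records *which* element was the LCP
// candidate, segmented by a coarse device class.
function buildLcpPayload(metric: LcpMetric, viewportWidth: number) {
  return {
    metric: metric.name,
    value: Math.round(metric.value),
    rating: metric.rating,
    element: metric.attribution?.element ?? 'unknown',
    device: viewportWidth < 768 ? 'mobile' : 'desktop',
  };
}

// Browser wiring (illustrative endpoint):
//   onLCP(m => navigator.sendBeacon('/vitals',
//     JSON.stringify(buildLcpPayload(m, window.innerWidth))));
```

Slicing the resulting field data by `element` and `device` is what revealed, in our case, that mobile LCP was a heading rather than the hero image.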
Font Loading: The Hidden LCP Killer
Fonts block LCP in two distinct ways: the font file itself delays text rendering, and layout shift from font swap pushes other elements around. Both hurt CWV.
In Next.js 16, the right approach is next/font/google — it downloads the font at build time and serves it from your own domain. No Google CDN involved at runtime, no manual <link rel="preconnect"> to fonts.gstatic.com needed:
// app/layout.tsx
import { Inter } from 'next/font/google';

const inter = Inter({
  subsets: ['latin'],
  display: 'optional', // Never causes layout shift
  preload: true, // Auto-generates <link rel="preload"> in <head>
});

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en" className={inter.className}>
      <body>{children}</body>
    </html>
  );
}
next/font handles preloading and inlining the @font-face declaration automatically. display: 'optional' means the browser uses the font only if it loads before the render deadline — otherwise it uses the fallback and never swaps. This eliminates CLS from font swap entirely, at the cost of occasionally showing the fallback on slow connections. For our use case, that tradeoff is correct.
Set size-adjust, ascent-override, and descent-override on your fallback font face to match the metrics of your custom font. This minimizes the visual difference between the fallback and the loaded font, making font-display: optional nearly invisible to users.
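next/font generates these overrides for you (its adjustFontFallback option is on by default for next/font/google), but if you manage @font-face yourself the fallback face looks roughly like this. The percentage values below are placeholders, not measured Inter metrics:

```css
/* Fallback face with metric overrides. Values are illustrative;
   compute real ones from your font's metrics (tools like fontaine
   or capsize automate this). */
@font-face {
  font-family: 'Inter Fallback';
  src: local('Arial');
  size-adjust: 107%;     /* scale fallback glyph widths toward the custom font */
  ascent-override: 90%;
  descent-override: 22%;
  line-gap-override: 0%;
}

/* List the fallback after the custom font in the stack */
body {
  font-family: Inter, 'Inter Fallback', sans-serif;
}
```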
Hero Images: Every Millisecond Counts
For pages where the LCP element is an image, the delivery chain matters end to end:
- Format — AVIF first, WebP fallback, JPEG last. AVIF is 40–50% smaller than WebP for equivalent quality.
- Responsive sizing — serve the correct size for the viewport. A mobile user should not download a 1400px image.
- Preload hint — add a <link rel="preload"> for the LCP image in the document head. This starts the download before the browser parses the body.
- No lazy loading on LCP — loading="lazy" on the LCP image delays it. This is the most common single mistake we see in audits.
// In Next.js App Router — LCP image component
import Image from 'next/image';

export function HeroImage() {
  return (
    <Image
      src="/hero.jpg"
      alt="Dashboard overview"
      width={1200}
      height={600}
      priority // Generates <link rel="preload"> in <head>
      // Do NOT add loading="lazy" here
    />
  );
}
Third-Party Scripts and INP
Interaction to Next Paint (INP) replaced FID as a Core Web Vital. INP measures the full duration from user interaction to the next paint — including any long tasks that block the main thread during that window.
Third-party scripts are the most common INP killer. Analytics libraries, support chat widgets, and A/B testing SDKs all run on the main thread and frequently cause long tasks during user interaction.
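To see in the field where interaction time actually goes, the web-vitals attribution build breaks INP into input delay, processing, and presentation phases. A sketch; the phase field names assume the v4 attribution build, and the wiring comment is illustrative:

```typescript
// Phase breakdown reported by web-vitals' INP attribution
// (field names assume the v4 attribution build).
type InpBreakdown = {
  inputDelay: number;         // main thread busy before handlers ran
  processingDuration: number; // time spent inside event handlers
  presentationDelay: number;  // handlers finished until the next paint
};

// Pick the dominant phase so dashboards show *why* an interaction was
// slow. A high inputDelay usually means long tasks, often from
// third-party scripts, were already occupying the main thread.
function dominantInpPhase(b: InpBreakdown): keyof InpBreakdown {
  const phases = Object.entries(b) as [keyof InpBreakdown, number][];
  phases.sort((a, z) => z[1] - a[1]);
  return phases[0][0];
}

// Browser wiring:
//   import { onINP } from 'web-vitals/attribution';
//   onINP(m => sendToAnalytics({ value: m.value, phase: dominantInpPhase(m.attribution) }));
```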
Our approach is to load all non-critical third-party scripts after the page is interactive, via a deferred loader:
// components/ThirdPartyScripts.tsx
'use client';
import { useEffect } from 'react';

export function ThirdPartyScripts() {
  useEffect(() => {
    // Load only after first interaction or 5s timeout
    let loaded = false;
    const load = () => {
      if (loaded) return; // Guard: timeout and interaction must not both inject
      loaded = true;
      const script = document.createElement('script');
      script.src = 'https://analytics.example.com/sdk.js';
      script.async = true;
      document.head.appendChild(script);
    };
    const events = ['pointerdown', 'keydown', 'touchstart'];
    const onInteraction = () => {
      load();
      events.forEach(e => window.removeEventListener(e, onInteraction));
    };
    events.forEach(e => window.addEventListener(e, onInteraction, { passive: true }));
    const timeout = setTimeout(load, 5000);
    return () => {
      clearTimeout(timeout);
      events.forEach(e => window.removeEventListener(e, onInteraction));
    };
  }, []);
  return null;
}
This pattern reduced our 75th-percentile INP from 280ms to 140ms on our highest-traffic pages.
Partial Prerendering
Next.js Partial Prerendering (PPR) — fully stable in Next.js 16, no longer behind an experimental flag — lets you prerender the static shell of a page at build time while streaming dynamic content from the server. This is the best of both worlds for pages that have a stable chrome and variable content.
// next.config.ts — Next.js 16: ppr is a top-level option
import type { NextConfig } from 'next';

const config: NextConfig = {
  ppr: true,
  // or 'incremental' to opt in per route with export const experimental_ppr = true
};

export default config;
// app/dashboard/page.tsx
import { Suspense } from 'react';
import { DashboardShell } from './shell'; // Static — prerendered at build time
import { DynamicMetrics } from './metrics'; // Dynamic — streamed per request
import { MetricsSkeleton } from './skeleton'; // Fallback shown while metrics stream

export default function DashboardPage() {
  return (
    <DashboardShell>
      <Suspense fallback={<MetricsSkeleton />}>
        <DynamicMetrics />
      </Suspense>
    </DashboardShell>
  );
}
On our dashboard route, PPR brought TTFB from ~380ms to ~60ms for the shell. Users see the navigation and layout immediately from CDN cache, while the data-heavy content streams in.
Edge Caching Strategy
SSR pages are cache-unfriendly by default — each request hits the server. But most SSR pages have components that are identical across users: navigation, feature banners, marketing copy. Caching these at the edge dramatically reduces server load and TTFB.
We use a combination of strategies:
- Stale-while-revalidate at the edge for public pages — serve cached, revalidate in background
- Vary: Cookie header segmentation — different cache buckets for authenticated vs. anonymous users
- Fragment caching — cache individual RSC payloads for shared components separately from the page response
A caution on the Vary strategy: if the cache key does not cleanly separate authenticated from anonymous traffic, one user's personalized response can be served to another. That is a data leak risk, so treat cache-key configuration as security-sensitive and test it explicitly.
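The cache-key decision can be sketched as a pure header-mapping function. The header values (60s edge TTL, 5-minute revalidation window) and the session cookie name are illustrative; in Next.js this logic would typically run in middleware or a route handler:

```typescript
// Decide edge-cache headers from auth state. Anonymous pages get
// stale-while-revalidate; authenticated responses are never stored
// at the edge. The TTL values and cookie name are illustrative.
function edgeCacheHeaders(hasSessionCookie: boolean): Record<string, string> {
  if (hasSessionCookie) {
    // Authenticated: personalized content must not be shared via cache.
    return { 'Cache-Control': 'private, no-store', 'Vary': 'Cookie' };
  }
  // Anonymous: serve from the edge for 60s, then revalidate in the
  // background for up to 5 minutes while still serving the stale copy.
  return {
    'Cache-Control': 'public, s-maxage=60, stale-while-revalidate=300',
    'Vary': 'Cookie',
  };
}

// Next.js middleware wiring (sketch):
//   const res = NextResponse.next();
//   for (const [k, v] of Object.entries(edgeCacheHeaders(req.cookies.has('session')))) {
//     res.headers.set(k, v);
//   }
```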
Key Takeaways
- Identify your actual LCP element with field data — it is often not what you expect
- Use next/font/google to self-host fonts automatically; set display: 'optional' to eliminate font-related CLS
- Never use loading="lazy" on the LCP image; always use priority in Next.js Image
- Load third-party scripts after first interaction to protect INP
- Use Partial Prerendering for pages with a stable shell — CDN-cached shell + streamed dynamic content
- Segment edge cache by user state with Vary headers; never cache authenticated user data naively