Performance & Core Web Vitals

Performance is a user-experience concern first and a Google Search ranking signal second — slow pages lose users, and Google's page experience signals factor Core Web Vitals (CWV) into ranking. This page is the performance-testing pillar that complements Testing Strategy: it covers what Lighthouse measures, what the Core Web Vitals are and how they're scored, the difference between lab and field data, how to actually run audits (DevTools, CLI, PageSpeed Insights, Lighthouse CI, RUM), and a recommended Aliz workflow. Per-metric optimisation guidance stays as short checklists with links out to web.dev — no need to duplicate what's already well-documented there.

What Lighthouse is

Lighthouse is Google's open-source auditing tool for web pages. The same engine powers Chrome DevTools' Lighthouse panel, the lighthouse Node CLI, PageSpeed Insights (PSI), and Lighthouse CI — so a result you see in any of them is reproducible in the others.

Lighthouse currently reports four scored categories:

  • Performance — page-load and runtime metrics (FCP, LCP, TBT, CLS, Speed Index).
  • Accessibility — automated checks via axe-core; see Accessibility for the manual side.
  • Best Practices — HTTPS, console errors, deprecated APIs, image aspect ratios, etc.
  • SEO — basic crawlability and meta-tag checks.

Each category is scored 0–100 with the conventional bands: red 0–49, orange 50–89, green 90–100.

note

The standalone PWA category was removed in Lighthouse v12 (June 2024). PWA installability is now surfaced as individual audits rather than a scored category.

Core Web Vitals

Core Web Vitals are Google's user-centric performance metrics, covering the three dimensions that most affect perceived quality: loading, responsiveness, and visual stability. They are evaluated at the 75th percentile of real-user mobile traffic, and a URL only "passes" CWV if all three are in the "good" band.

| Metric | Measures | Good | Needs Improvement | Poor |
| --- | --- | --- | --- | --- |
| LCP | Loading — render time of the largest visible element | ≤ 2.5 s | ≤ 4.0 s | > 4.0 s |
| INP | Responsiveness — worst-case input-to-next-paint latency | ≤ 200 ms | ≤ 500 ms | > 500 ms |
| CLS | Visual stability — largest burst of unexpected layout shifts | ≤ 0.1 | ≤ 0.25 | > 0.25 |
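
To make the pass condition concrete, here is a minimal TypeScript sketch of the all-three-must-be-good rule, assuming p75 values are already computed for a URL (the interface and function names are illustrative):

```ts
// Illustrative p75 values for one URL, e.g. aggregated from RUM data.
interface P75 {
  lcpMs: number; // Largest Contentful Paint, milliseconds
  inpMs: number; // Interaction to Next Paint, milliseconds
  cls: number;   // Cumulative Layout Shift, unitless
}

// A URL passes CWV only when all three metrics are in the "good" band.
function passesCWV({ lcpMs, inpMs, cls }: P75): boolean {
  return lcpMs <= 2500 && inpMs <= 200 && cls <= 0.1;
}
```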

LCP — Largest Contentful Paint

LCP marks the moment the largest visible element in the viewport — usually a hero image, video poster, or a large block of text — has rendered. It's the closest single number to "the user feels the page is loaded." Common causes of poor LCP are slow servers, render-blocking resources, lazy-loaded hero images, and oversized image payloads.

INP — Interaction to Next Paint

INP measures the worst-case latency between a user input (click, tap, keypress) and the next frame the browser paints in response. Unlike its predecessor, INP samples every interaction across the page's lifetime and reports a near-worst-case value, so a single janky click can dominate the score. It is dominated by long tasks on the main thread — heavy JavaScript handlers, large React renders, and synchronous layout work.

note

INP replaced FID as a Core Web Vital on 12 March 2024. FID is no longer reported.

CLS — Cumulative Layout Shift

CLS measures the largest burst of unexpected layout shifts during the page's lifetime — content jumping around as images load, fonts swap, or late-injected banners push everything down. It's a pure visual-stability metric: the fix is almost always reserving space ahead of time rather than making things faster.

Supporting metrics and the Lighthouse score

Lighthouse reports more than CWV because lab tools can't measure INP — there are no real users in a synthetic run. Instead, Lighthouse uses Total Blocking Time (TBT) as a lab proxy and rounds out the picture with FCP, TTFB, and Speed Index.

| Metric | Meaning | Good | Poor |
| --- | --- | --- | --- |
| FCP — First Contentful Paint | First DOM content rendered | ≤ 1.8 s | > 3.0 s |
| TTFB — Time to First Byte | Server response latency; correlates with LCP | ≤ 0.8 s | > 1.8 s |
| TBT — Total Blocking Time | Lab proxy for INP — sum of long-task blocking time | ≤ 200 ms | > 600 ms |
| Speed Index | How quickly content visually populates | ≤ 3.4 s | > 5.8 s |

The Performance score in Lighthouse 10+ is a weighted average of five metrics:

| Metric | Weight |
| --- | --- |
| First Contentful Paint (FCP) | 10% |
| Speed Index (SI) | 10% |
| Largest Contentful Paint (LCP) | 25% |
| Total Blocking Time (TBT) | 30% |
| Cumulative Layout Shift (CLS) | 25% |

The cause-and-effect chain is straightforward: fixing TBT in the lab usually moves INP in the field, and fixing TTFB usually moves LCP — so when you optimise lab metrics, you're indirectly optimising the field CWV that ranking actually uses.
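
As a rough illustration of how the weights combine, the sketch below turns five per-metric scores into the overall number. It assumes each metric has already been normalised to a 0–1 score by Lighthouse's log-normal scoring curves, which the sketch does not reproduce:

```ts
// Per-metric scores as Lighthouse computes them, each in the range 0–1.
interface MetricScores {
  fcp: number;
  si: number;
  lcp: number;
  tbt: number;
  cls: number;
}

// Lighthouse 10+ weights: FCP 10%, SI 10%, LCP 25%, TBT 30%, CLS 25%.
function performanceScore(s: MetricScores): number {
  const weighted =
    0.1 * s.fcp + 0.1 * s.si + 0.25 * s.lcp + 0.3 * s.tbt + 0.25 * s.cls;
  return Math.round(weighted * 100); // reported as 0–100
}
```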

Lab data vs. field data

Lighthouse produces lab data: a synthetic run on a throttled CPU and network, reproducible and great for pre-launch debugging — but it can't measure INP and doesn't reflect the long tail of real devices, networks, or user behaviour. Field data (also called RUM — real user monitoring) is what actual visitors experience. Google Search ranking uses field data, not your Lighthouse score.

Field CWV are available in two places:

  • The Chrome UX Report (CrUX), Google's public field dataset, surfaced in PageSpeed Insights alongside the lab results.
  • Your own RUM: the web-vitals library or Vercel Speed Insights in production (see Field RUM in production below).

caution

A single Lighthouse run is noisy — CPU contention, network jitter, and ad/script variability can swing scores by 10+ points. Always trust the median of multiple runs, or field data, over a single number.

The rule of thumb: optimise in lab, validate in field.

How to run Lighthouse

Chrome DevTools

Open DevTools → Lighthouse panel → choose categories and Mobile/Desktop → Analyze page load. Best for one-off debugging when you can already reproduce the issue locally.

Node CLI

The lighthouse CLI is convenient for ad-hoc audits, especially with the desktop preset, and for saving HTML reports to share.

```bash
npx --yes lighthouse@12.6.0 https://example.com --view --preset=desktop
```

For whole-site audits, see Unlighthouse, which crawls every page and aggregates Lighthouse results.

PageSpeed Insights

PageSpeed Insights runs Lighthouse against a public URL and layers in CrUX field data for the same origin. It's the best single check before shipping a change to production — you see lab and field side by side.
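
For dashboards or scheduled checks, the public PageSpeed Insights v5 API returns the Lighthouse lab result and the CrUX field data for a URL in one response. A minimal sketch (field paths follow the v5 response shape; an API key is only needed for higher request quotas):

```ts
// Query the PageSpeed Insights v5 API for one URL (mobile strategy).
async function fetchPsi(url: string) {
  const endpoint =
    'https://www.googleapis.com/pagespeedonline/v5/runPagespeed' +
    `?url=${encodeURIComponent(url)}&strategy=mobile`;
  const data = await (await fetch(endpoint)).json();
  return {
    // Lab: Lighthouse Performance category score, 0–1.
    labPerformance: data.lighthouseResult?.categories?.performance?.score,
    // Field: CrUX p75 LCP in ms, if the origin has enough traffic.
    fieldLcpP75:
      data.loadingExperience?.metrics?.LARGEST_CONTENTFUL_PAINT_MS?.percentile,
  };
}
```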

Lighthouse CI on pull requests

For a regression-proof setup, run Lighthouse on every PR against the Vercel preview URL with @lhci/cli and the treosh/lighthouse-ci-action. Define budgets and assertions in lighthouserc.json:

lighthouserc.json

```json
{
  "ci": {
    "collect": {
      "url": ["https://your-preview.vercel.app/"],
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "categories:performance": ["error", { "minScore": 0.9 }],
        "categories:accessibility": ["error", { "minScore": 0.95 }],
        "largest-contentful-paint": ["warn", { "maxNumericValue": 2500 }],
        "cumulative-layout-shift": ["warn", { "maxNumericValue": 0.1 }]
      }
    }
  }
}
```
.github/workflows/lighthouse.yml

```yaml
- name: Lighthouse CI
  uses: treosh/lighthouse-ci-action@3e7e23fb74242897f95c0ba9cabad3d0227b9b18 # v12
  with:
    configPath: ./lighthouserc.json
    uploadArtifacts: true
    temporaryPublicStorage: false
```

caution

Avoid temporary-public-storage for private apps. Setting temporaryPublicStorage: true (or "upload": { "target": "temporary-public-storage" } in lighthouserc.json) uploads the full HTML report to a public Google Cloud bucket — anyone with the URL can view it. That leaks preview URLs, page contents, and any tokens in query strings. The example above keeps results private: uploadArtifacts: true stores the report as a GitHub Actions artifact scoped to the repo. For historical tracking across runs, self-host an LHCI server.

tip

Run Lighthouse CI against your Vercel preview URL on every PR — see Deploy.

Field RUM in production

In production, ship the web-vitals library or enable Vercel Speed Insights. The web-vitals library is ~2 KB and reports each metric as it stabilises:

src/reportWebVitals.ts

```ts
import { onCLS, onINP, onLCP } from 'web-vitals';

function send(metric: { name: string; value: number; id: string }) {
  // Forward to your analytics endpoint of choice
  navigator.sendBeacon('/analytics/vitals', JSON.stringify(metric));
}

onCLS(send);
onINP(send);
onLCP(send);
```
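
navigator.sendBeacon is a deliberate choice here: CLS and INP often only finalise when the user backgrounds or leaves the page, and sendBeacon queues the payload so it survives unload, where an ordinary fetch might be cancelled.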

Improving each Core Web Vital

These are checklists, not tutorials — for the full guidance, follow the web.dev links.

Improving LCP

  • Cut TTFB: serve from a CDN/edge, cache aggressively, choose the right rendering strategy (Rendering Strategies).
  • Prioritise the LCP image: add fetchpriority="high", never loading="lazy" on the LCP element, preconnect to its origin, use AVIF/WebP, and serve a responsive srcset (see the sketch after this list).
  • Unblock the critical path: inline critical CSS, defer non-critical JS, and avoid render-blocking third-party scripts above the fold.
  • Mind the LCP sub-parts: TTFB → resource load delay → resource load duration → element render delay. Each is a distinct fix.
  • See Optimize LCP — web.dev.
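
As a concrete example of the image bullet above, a prioritised hero image could look like the following JSX sketch. It assumes React 19+, where the fetchPriority prop is forwarded to the DOM, and the file names are illustrative:

```tsx
// The LCP element: eagerly loaded, high priority, responsive formats.
export function Hero() {
  return (
    <img
      src="/hero-1200.avif"
      srcSet="/hero-600.avif 600w, /hero-1200.avif 1200w"
      sizes="100vw"
      width={1200} // intrinsic dimensions also prevent layout shift
      height={600}
      alt="Product hero"
      fetchPriority="high" // tell the browser this image wins the bandwidth race
      decoding="async"
      // deliberately NO loading="lazy": lazy-loading the LCP image delays it
    />
  );
}
```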

Improving INP

  • Break up long tasks: use scheduler.yield() where supported, otherwise setTimeout(fn, 0) or requestIdleCallback to yield to the main thread (see the sketch after this list).
  • Offload heavy work to Web Workers — anything CPU-bound that doesn't touch the DOM.
  • In React: wrap non-urgent state updates in useTransition, defer derived values with useDeferredValue, memoise expensive children, and virtualise long lists.
  • Tame third-party scripts: defer or lazy-load analytics, chat widgets, and ad tags.
  • Reduce DOM size and avoid forced synchronous layout (e.g., reading offsetHeight after a write) inside input handlers.
  • See Optimize INP — web.dev.
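
A common pattern for the first bullet is a tiny helper that prefers scheduler.yield() where the browser supports it and falls back to a macrotask. A minimal sketch (the helper name and chunk size are illustrative):

```ts
// Yield to the main thread so pending input can be handled between chunks.
async function yieldToMain(): Promise<void> {
  const s = (globalThis as { scheduler?: { yield?: () => Promise<void> } })
    .scheduler;
  if (s?.yield) return s.yield();
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Example: process a large list without one long task blocking interaction.
async function processItems(items: unknown[], handle: (item: unknown) => void) {
  for (let i = 0; i < items.length; i++) {
    handle(items[i]);
    if (i % 50 === 49) await yieldToMain(); // break the work into short tasks
  }
}
```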

Improving CLS

  • Always set dimensions: width/height attributes or aspect-ratio on <img>, <video>, and <iframe>.
  • Reserve space for ads, embeds, cookie banners, and any late-injected content — use a min-height container (see the sketch after this list).
  • Stabilise fonts: font-display: optional (or swap paired with size-adjust fallback metrics) to avoid late re-layout when web fonts arrive.
  • Never inject DOM above existing content unless it's a response to user interaction.
  • Animate via transform / opacity only — never top, left, width, or height.
  • See Optimize CLS — web.dev.
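
For the reserved-space bullet, the fix is usually a container with explicit dimensions. A minimal JSX sketch (the size is illustrative; use your slot's real rendered height):

```tsx
// Reserve the slot's final height up front so the late-loading embed
// cannot push surrounding content down when it arrives.
export function AdSlot() {
  return (
    <div style={{ minHeight: 250 }} aria-label="Advertisement">
      {/* third-party script injects the ad here; space is already reserved */}
    </div>
  );
}
```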

Recommended workflow

A pragmatic, four-stage workflow that catches regressions early without becoming noise:

  1. During development — open DevTools Lighthouse on the key pages: home, top conversion route, and the heaviest content page. Fix obvious red flags before they leave your machine.
  2. On every PR — run Lighthouse CI against the Vercel preview URL. Encode budgets and assertions in lighthouserc.json. Fail the build on regression so reviewers see the impact in the PR check.
  3. In production — collect field CWV continuously via Vercel Speed Insights or web-vitals shipped to your analytics. Watch the p75 — that's the number Search uses.
  4. Before launch — run PageSpeed Insights on the production URLs and confirm CrUX field data is in the "good" band for all three CWV.

tip

Set Lighthouse CI assertions as warnings first, then promote them to errors once your team has a stable baseline — otherwise noisy failures will train people to ignore them.

Remember: Core Web Vitals are a Google Search ranking signal via the page experience system. Performance work is product work.

Resources