What Are Core Web Vitals

The Three Metrics Google Uses to Judge Your Site Experience

Core Web Vitals are three specific performance metrics that Google uses to measure how real users experience your website: how quickly the main content loads, how fast the page responds to interaction, and how visually stable the page is while loading. They directly influence your search rankings, and more importantly, they quantify the experience your visitors are actually having. If your site feels sluggish, jumpy, or unresponsive, these metrics will tell you exactly where and why.

Google introduced Core Web Vitals in 2020 and made them a ranking signal in 2021. Since then, they have evolved slightly (one metric was replaced in 2024), but the core idea remains the same: measure what users actually feel, not just what servers report. These are field metrics, meaning they come from real Chrome users visiting your site, not from synthetic lab tests alone. That distinction matters enormously, and we will get into why shortly.

The Three Metrics Explained

Each Core Web Vital targets a different dimension of user experience. Think of them as measuring three distinct moments in a page visit: the moment content appears, the moment the page reacts to you, and whether things stay put while the page finishes loading.

Largest Contentful Paint (LCP)

LCP measures how long it takes for the largest visible element on the page to fully render. This is usually a hero image, a large heading, or a featured video thumbnail. It captures the moment when a visitor feels like the page has “loaded” in any meaningful sense. Google considers an LCP of 2.5 seconds or less to be good, between 2.5 and 4 seconds to need improvement, and anything over 4 seconds to be poor.

In practice, LCP is the metric most mid-market B2B sites struggle with. What we see on most projects is a homepage hero image that weighs 1.5MB, served without proper sizing or modern format support, sitting behind three render-blocking stylesheets and a tag manager that fires before any content appears. The server might respond in 200ms, but the browser cannot paint anything meaningful for another 3 to 5 seconds because it is busy downloading and processing things the user does not yet need.

LCP is not just about image optimisation, though that is often the most visible fix. It is also about server response time (how fast your hosting delivers the first byte), render-blocking resources (CSS and JavaScript files the browser must download before it can show anything), and resource load priority (whether the browser knows which element is most important to paint first). A fast LCP requires getting all of these right simultaneously, which is why retroactively fixing a slow site is so much harder than building speed in from the start.
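Resource load priority is the least-known of these levers. As a rough sketch (file names, sizes, and alt text are illustrative), preloading the hero image and marking it high priority tells the browser what to paint first:

```html
<!-- Sketch: helping the browser prioritise the LCP image.
     All paths and dimensions here are illustrative. -->
<head>
  <!-- Announce the hero image early, before the browser discovers it in the HTML -->
  <link rel="preload" as="image" href="/img/hero-800.webp"
        imagesrcset="/img/hero-800.webp 800w, /img/hero-1600.webp 1600w">
</head>
<body>
  <!-- fetchpriority="high" marks this as the most important image to fetch;
       explicit width/height also prevent layout shift when it arrives -->
  <img src="/img/hero-800.webp"
       srcset="/img/hero-800.webp 800w, /img/hero-1600.webp 1600w"
       sizes="100vw" width="800" height="450"
       fetchpriority="high" alt="Homepage hero">
</body>
```

Without hints like these, browsers often queue the hero image behind stylesheets and scripts, which is exactly the 3-to-5-second gap described above.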

Interaction to Next Paint (INP)

INP measures how responsive your page is to user interactions throughout its entire lifespan. When someone clicks a button, taps a menu, or types into a form field, INP captures the delay between that action and the next visual update the browser produces. Google considers an INP of 200 milliseconds or less to be good, and anything over 500 milliseconds to be poor.

INP replaced the original First Input Delay (FID) metric in March 2024. The key difference is that FID only measured the first interaction on a page, while INP considers all interactions and reports roughly the worst one. This was an important change because many sites passed FID easily but felt sluggish after the initial load, particularly once heavy JavaScript bundles had executed and event listeners were fighting for the main thread.

If your site relies on complex interactive elements, animated menus, dynamic filtering, live search, or chat widgets, INP is where problems surface. The typical culprit is excessive JavaScript execution on the main thread. Every time a user interacts with the page, the browser needs the main thread to process the event and update the display. If that thread is busy running analytics scripts, re-rendering a React component tree, or processing a third-party widget’s callback, the user’s click just waits in a queue. Two hundred milliseconds sounds generous until you realise that a single poorly optimised scroll handler or a synchronous analytics call can consume that budget entirely.
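A common mitigation is breaking long tasks into chunks that yield back to the main thread between batches, so queued input events can run in the gaps. A minimal sketch, where `processInChunks` and `CHUNK_SIZE` are illustrative names (newer Chrome versions also offer `scheduler.yield()` in place of the `setTimeout` fallback):

```javascript
const CHUNK_SIZE = 50;

// Resolve on the next event-loop turn, letting pending input events run first.
function yieldToMain() {
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process a large array in small batches instead of one long main-thread task.
async function processInChunks(items, work) {
  const results = [];
  for (let i = 0; i < items.length; i += CHUNK_SIZE) {
    for (const item of items.slice(i, i + CHUNK_SIZE)) {
      results.push(work(item));
    }
    await yieldToMain(); // user clicks queued during this batch get handled here
  }
  return results;
}
```

The total work is the same, but no single task blocks the thread long enough to push an interaction past the 200ms budget.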

Cumulative Layout Shift (CLS)

CLS measures how much the visible content on your page shifts around unexpectedly during loading. If you have ever tried to tap a button on a mobile page only to have an ad or image load above it, pushing the button down and causing you to click the wrong thing, that is exactly what CLS quantifies. Google considers a CLS score of 0.1 or less to be good, and anything above 0.25 to be poor.

CLS is unitless. Each shift is scored by multiplying the fraction of the viewport affected by the shift (the impact fraction) by the distance moved as a fraction of the viewport (the distance fraction), and CLS reports the worst burst of these shifts during the page's lifetime. A small text block nudging down 20 pixels barely registers. An entire page content area jumping 300 pixels because a late-loading banner ad inserted itself at the top will blow your CLS score immediately.
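A worked example makes the scale concrete. Assume a full-width block 400 pixels tall, in an 800-pixel-tall viewport, gets pushed down 300 pixels by a late-loading banner (all numbers illustrative):

```javascript
const viewportHeight = 800;

// Impact fraction: the viewport area touched by the element before and
// after the shift (400px of content + 300px of travel, full width).
const impactFraction = (400 + 300) / viewportHeight;   // 0.875

// Distance fraction: how far it moved, relative to the viewport.
const distanceFraction = 300 / viewportHeight;          // 0.375

const score = impactFraction * distanceFraction;
```

That single shift scores roughly 0.33, well past the 0.25 "poor" line on its own.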

Common causes of layout shift include images and videos without explicit width and height attributes, web fonts that swap in and change the size of text blocks, dynamically injected content like cookie consent banners or promotional bars, and third-party embeds (social media widgets, maps, review carousels) that load asynchronously and claim space after the page has already painted. On the sites we audit, CLS problems are often the easiest to diagnose but require touching the most templates to fix, because the root cause tends to be repeated across dozens of page types.
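The fixes all share one idea: claim the space before the content arrives. A sketch covering the three most common causes (class names, paths, and sizes are illustrative):

```html
<!-- 1. Explicit dimensions let the browser reserve the image's box
        before the file has downloaded -->
<img src="/img/team.jpg" width="1200" height="675" alt="Team photo">

<style>
  /* 2. Reserve a fixed slot for a third-party embed that loads late */
  .review-widget { min-height: 320px; }

  /* 3. Show fallback text immediately and swap the web font in when ready,
        rather than reflowing invisible text */
  @font-face {
    font-family: "Brand Sans";
    src: url("/fonts/brand-sans.woff2") format("woff2");
    font-display: swap;
  }
</style>

<div class="review-widget"><!-- widget injects itself here --></div>
```

Note that `font-display: swap` prevents invisible text but can still shift layout if the fallback and web font differ in size; matching their metrics is the follow-up fix.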

Why These Metrics Matter Beyond SEO

The search ranking impact of Core Web Vitals gets all the attention, but the real business case is conversion performance. Google has published data showing that visitors are 24% less likely to abandon a page load when the site meets all three Core Web Vitals thresholds. Our own project data aligns with this: when we have taken a B2B site from poor to good across all three metrics, we typically see a 15 to 30% improvement in form completion rates and a measurable reduction in bounce rate on key landing pages.

The reason is straightforward. A slow LCP means visitors are staring at a blank or half-rendered page and deciding whether to wait or hit the back button. A poor INP means their clicks feel unresponsive, which erodes trust, particularly on pages where you are asking someone to submit information or make a purchasing decision. High CLS creates a sense of instability that makes your site feel broken or unfinished, even if the content is excellent.

For B2B companies running paid campaigns, poor Core Web Vitals effectively increase your cost per acquisition. You are paying the same amount to drive a visitor to a page, but fewer of those visitors convert because the experience pushes them away. Fixing Core Web Vitals does not just improve your organic rankings; it improves the return on every traffic source pointing at your site.

How Core Web Vitals Are Measured

There are two fundamentally different ways to measure these metrics, and understanding the distinction is critical to diagnosing problems correctly.

Field Data (Real User Metrics)

Field data comes from real users visiting your site in real conditions. Google collects this through the Chrome User Experience Report (CrUX), which aggregates anonymised data from Chrome users who have opted into usage statistic reporting. This is the data Google uses for ranking purposes. It is collected over a rolling 28-day window and broken down by page URL and by origin (your whole domain).

You can access your field data through Google Search Console (the Core Web Vitals report), PageSpeed Insights (the top section showing “Discover what your real users are experiencing”), and the CrUX Dashboard on BigQuery. The important thing to understand is that this data reflects the actual devices, network connections, and geographical locations of your visitors. If most of your traffic comes from users on mid-range Android phones over 4G connections, that is the experience being measured, not the experience you see on your office MacBook over fibre broadband.

Field data has a significant lag. Changes you make today will not fully appear in your CrUX data for 28 days, and Google’s ranking systems may take additional time to process the improvement. This is why we always tell clients not to expect overnight SEO gains from performance work; the improvements are real but they build over weeks.

Lab Data (Synthetic Testing)

Lab data comes from running your page through a controlled test environment. Tools like Lighthouse (built into Chrome DevTools), WebPageTest, and the “Diagnose performance issues” section of PageSpeed Insights generate lab data. These tools simulate a specific device and network speed, then measure performance under those controlled conditions.

Lab data is essential for diagnosing issues because it is reproducible and gives you detailed diagnostic information: which resources blocked rendering, which scripts ran on the main thread and for how long, which elements triggered layout shifts. But lab data does not directly influence your Google rankings. Your Lighthouse score is not your Core Web Vitals score. We regularly see sites with a Lighthouse performance score of 85 that fail Core Web Vitals in the field, because the lab test uses a simulated environment that does not match real-world conditions.

The most productive workflow is to use field data to identify which metrics and which pages have problems, then use lab tools to diagnose the specific causes and validate your fixes before deploying them.

What “Good” Actually Looks Like

Google assesses your Core Web Vitals at the 75th percentile of page loads. This is a frequently misunderstood detail. It means that at least 75% of your real user visits need to meet the threshold for the page to be classified as “good.” You cannot have a fast median experience and pass; you need the vast majority of visits to be fast, including those from users on slower devices and connections.
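The 75th-percentile rule is easy to see with numbers. A minimal sketch, using an illustrative nearest-rank `percentile` helper (CrUX's exact aggregation differs, but the principle holds):

```javascript
// Nearest-rank percentile over a sorted copy of the samples.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  return sorted[Math.max(0, Math.ceil(sorted.length * p) - 1)];
}

// Ten hypothetical LCP samples (seconds) for one page:
const lcpSamples = [1.2, 1.5, 1.8, 1.9, 2.1, 2.2, 2.4, 3.1, 4.0, 5.2];

percentile(lcpSamples, 0.5);  // 2.1 — the median comfortably passes 2.5s
percentile(lcpSamples, 0.75); // 3.1 — what Google evaluates, and it fails
```

A page can feel fast for most visitors and still fail: the slowest quarter of visits is what decides the classification.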

Here are the thresholds in summary:

  • LCP: Good at 2.5s or less. Poor above 4s.
  • INP: Good at 200ms or less. Poor above 500ms.
  • CLS: Good at 0.1 or less. Poor above 0.25.

A page passes Core Web Vitals only when all three metrics are in the “good” range. Passing two out of three still means you fail. Google Search Console groups your pages into “Good,” “Needs Improvement,” and “Poor” categories, and a single failing metric on a page puts it into one of the latter two buckets.

For most B2B sites in the 10 to 250 employee range, the pattern we see is fairly consistent: LCP is the primary failure, often caused by unoptimised hero images and slow server response times. CLS is the second most common issue, typically caused by web font loading behaviour and late-injecting third-party scripts. INP tends to be better on these sites because B2B pages are generally less JavaScript-heavy than e-commerce or media sites, but it can still fail when marketing teams have accumulated numerous tracking scripts and chat tools over time.

The Most Common Reasons B2B Sites Fail

After auditing hundreds of mid-market sites, certain patterns repeat so often they are almost predictable.

Oversized, unoptimised images are the single most frequent LCP killer. Marketing teams upload the original photograph from a camera or stock library, the CMS creates a few size variants, but the hero image still renders as a 2000-pixel-wide JPEG at quality 90 when the viewport only needs an 800-pixel-wide WebP at quality 75. The difference can easily be 500KB per image, which at mobile 4G speeds adds over a second to LCP.
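That "over a second" figure is a straightforward back-of-envelope calculation, assuming an effective 4G throughput of around 4 Mbps (an assumption for illustration; real connections vary widely):

```javascript
// Extra download time attributable to 500KB of avoidable image weight.
const extraBytes = 500 * 1024;                 // 500 KB
const throughputBytesPerSec = (4 * 1e6) / 8;   // 4 Mbps expressed in bytes/second (assumed)
const addedSeconds = extraBytes / throughputBytesPerSec; // ≈ 1.0s added to LCP
```

One oversized hero image can therefore consume 40% of the entire 2.5-second LCP budget by itself.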

Slow hosting and lack of caching create a poor foundation that no amount of front-end optimisation can overcome. If your server takes 800ms to generate a page response (common on shared hosting running WordPress with a dozen plugins), you have already consumed a third of your LCP budget before the browser has received a single byte of HTML. Moving to a properly configured hosting environment with server-level caching, or placing a CDN in front of the origin, typically cuts that to under 200ms.

Third-party script accumulation is the silent killer. Google Tag Manager, analytics, heatmaps, A/B testing tools, chatbots, social proof widgets, retargeting pixels. Each one adds JavaScript that competes for the main thread. Individually, most are small. Collectively, they can add 2 to 4 seconds of processing time on a mid-range mobile device. The challenge is that these scripts are typically added by marketing teams over months or years, with no one tracking the cumulative performance cost. Our team typically recommends a third-party script audit as one of the first steps in any performance project, removing what is unused and deferring what remains.

Render-blocking CSS and JavaScript prevent the browser from showing any content until they have fully downloaded and executed. A common pattern is a monolithic CSS file containing styles for every page template on the site, loaded on every page. The browser must download and parse all of it before rendering anything, even though only 15 to 20% of those styles apply to the current page. Critical CSS extraction, where you inline the styles needed for above-the-fold content and defer the rest, is one of the most effective LCP improvements available, but it requires thoughtful implementation during the build process.
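The critical-CSS pattern looks roughly like this in the page head (file names and styles are illustrative; the `onload` trick is a widely used way to load a stylesheet without blocking rendering):

```html
<head>
  <!-- Inline only the styles needed for above-the-fold content,
       extracted per template at build time -->
  <style>
    header { background: #fff; }
    .hero { min-height: 60vh; }
  </style>

  <!-- Fetch the full stylesheet at low priority, then apply it on load -->
  <link rel="preload" href="/css/main.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/css/main.css"></noscript>
</head>
```

The page paints with the inlined styles immediately; the remaining 80 to 85% of the CSS arrives without delaying first render.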

This is precisely why performance needs to be an architectural decision, not an afterthought. When image handling, hosting infrastructure, CSS strategy, and script management are planned before design and development begin, hitting Core Web Vitals thresholds is straightforward. When they are addressed after launch, you are retrofitting solutions onto a structure that was not designed for them, which is slower, more expensive, and often produces fragile results. We cover this approach in depth in our performance architecture guide, which walks through how to set performance budgets and make infrastructure decisions early.

How to Check Your Core Web Vitals Right Now

You do not need any special tools or technical knowledge to get a baseline reading. Here is the fastest path to understanding where you stand.

Google Search Console is the definitive source. Log in, navigate to “Core Web Vitals” in the left sidebar under “Experience,” and you will see your pages grouped by status for both mobile and desktop. This uses real field data from your visitors, so it reflects actual experience. If you see a large number of URLs marked “Poor” or “Needs improvement” on mobile, that is your starting point.

PageSpeed Insights (pagespeed.web.dev) lets you test individual URLs. Enter a page URL and the tool will show both field data (if available for that URL) and lab data with specific diagnostic recommendations. Pay attention to the field data section first. The lab diagnostics underneath are useful for identifying causes, but do not fixate on the overall Lighthouse score; focus on the three Core Web Vitals metrics specifically.

For ongoing monitoring, real user monitoring (RUM) tools give you granular, page-by-page data with the ability to segment by device type, connection speed, geography, and other dimensions. Tools like SpeedCurve, Calibre, and web-vitals.js (Google’s open-source library) let you collect and analyse this data in much more detail than CrUX provides. For sites with sufficient traffic, this level of detail helps you prioritise which pages and which issues to fix first based on actual business impact.
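A minimal RUM setup with the web-vitals library can be sketched as follows. The `/rum` endpoint is hypothetical, and the CDN version is pinned only for illustration:

```html
<script type="module">
  import { onCLS, onINP, onLCP } from 'https://unpkg.com/web-vitals@4?module';

  // Send each final metric value to your own collection endpoint.
  // sendBeacon survives the page being closed mid-request.
  const send = (metric) =>
    navigator.sendBeacon('/rum', JSON.stringify({
      name: metric.name,     // "LCP", "INP", or "CLS"
      value: metric.value,
      page: location.pathname,
    }));

  onCLS(send);
  onINP(send);
  onLCP(send);
</script>
```

Even this simple version gives you per-page, per-visit data immediately, rather than waiting 28 days for CrUX to update.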

Prioritising Fixes for Maximum Impact

When you have a site with dozens or hundreds of pages failing Core Web Vitals, the temptation is to try to fix everything at once. That approach usually stalls because the work feels overwhelming and progress is hard to measure. A more effective strategy is to prioritise by traffic and business value.

Start with the pages that receive the most organic search traffic and the pages that drive the most conversions (contact forms, demo requests, pricing pages). Improving Core Web Vitals on your top 10 to 15 pages by traffic will often cover 60 to 80% of your total page views, which shifts your origin-level CrUX data meaningfully and gives you ranking benefits across the site.

Within each page, the order of fixes that typically yields the highest return is: first address server response time and caching (this benefits every page), then tackle LCP issues (image optimisation, critical CSS, preloading key resources), then resolve CLS problems (dimension attributes, font loading strategy, reserved space for dynamic content), and finally optimise INP (deferring non-critical JavaScript, breaking up long tasks, reducing main-thread work).

This sequencing works because each layer builds on the previous one. There is no point optimising JavaScript execution if the server takes a full second to respond, and there is no point fine-tuning font loading if the hero image alone consumes your entire LCP budget.

Core Web Vitals and Site Redesigns

If you are planning a website redesign or migration, Core Web Vitals should be a defined requirement in your project brief, not something you hope to achieve after launch. We have seen too many redesigns where the new site looks polished but performs worse than the one it replaced, because nobody set performance targets or validated design decisions against them during the build process.

Specifically, this means setting LCP, INP, and CLS targets before wireframing begins, choosing a hosting platform and CDN configuration based on performance requirements rather than convenience, defining an image and media handling strategy (formats, compression, responsive sizing, lazy loading rules) as part of the technical specification, and auditing third-party scripts before migrating them to the new site. Scripts that were added to the old site two years ago may no longer serve any purpose, and a redesign is the perfect opportunity to audit and remove them.

Performance budgets are the mechanism that makes this work. A performance budget is a quantified limit on page weight, request count, and key timing metrics that every page template must meet. When a designer proposes a full-width video background or a developer suggests a new animation library, the performance budget gives you an objective way to evaluate whether it fits within your constraints or needs to be rethought.
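A budget is most useful when it is machine-checkable. As a sketch, Lighthouse CI accepts a budgets file along these lines (paths and numbers are illustrative; timing budgets are in milliseconds, resource sizes in KB, and supported metric names depend on your Lighthouse version):

```json
[
  {
    "path": "/*",
    "timings": [
      { "metric": "largest-contentful-paint", "budget": 2500 },
      { "metric": "cumulative-layout-shift", "budget": 0.1 }
    ],
    "resourceSizes": [
      { "resourceType": "image", "budget": 300 },
      { "resourceType": "script", "budget": 200 }
    ]
  }
]
```

Wired into CI, a file like this turns "the page got slower" from a post-launch surprise into a failed build during development.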

What to Do Next

Check your Core Web Vitals in Google Search Console today. Look at the mobile report first, because that is where most failures occur and where Google's mobile-first indexing focuses its attention. If you see pages in the "Poor" or "Needs improvement" categories, run your highest-traffic failing page through PageSpeed Insights and read the diagnostic details for LCP, INP, and CLS specifically. That will give you a concrete understanding of what is causing the failures and whether the fixes are quick wins (image optimisation, adding dimension attributes) or structural issues (hosting, CSS architecture, JavaScript dependencies) that require deeper work. Knowing the difference is the first step toward building a site that is genuinely fast for the people you are trying to reach.
