How To Audit Your Website Performance Before Spending Money

Start With What You Can Measure for Free

Before you spend a penny on a new website, a redesign, or a performance optimisation project, you need to know exactly where you stand. A proper performance audit takes about two hours and costs nothing except your time. It will tell you whether your site has structural problems that require architectural changes, or surface-level issues that a developer can fix in a few days. That distinction alone can save you tens of thousands of pounds in misdirected budget.

Most mid-market companies we work with come to us after they’ve already spent money on the wrong thing. They’ve paid for a CDN they didn’t need, or hired a freelancer to “speed up” their site without diagnosing what was actually slow. A proper audit prevents that. It gives you a factual baseline, a prioritised list of problems, and the language to have an informed conversation with whoever you hire next.

This article walks you through a complete self-audit process. You won’t need any special tools beyond free ones from Google and a few open-source alternatives. By the end, you’ll have a clear picture of your site’s performance health and a much better sense of where your money should go.

Gather Your Core Web Vitals Data First

Google’s Core Web Vitals are the single most important set of metrics to check first, because they represent how real users experience your site, not just how fast it loads in a lab. The three metrics you need are Largest Contentful Paint (LCP), which measures how quickly the main content becomes visible; Interaction to Next Paint (INP), which measures responsiveness when someone clicks or taps something; and Cumulative Layout Shift (CLS), which measures how much the page layout jumps around during loading.

Go to Google PageSpeed Insights and enter your five most important pages: your homepage, your main service or product page, a blog post that gets decent traffic, your contact or demo request page, and one other high-traffic landing page. Don’t just test the homepage. We see this mistake constantly. A homepage might score well because it was the focus of your last redesign, while your product pages, which actually generate leads, are three seconds slower.

For each page, look at two separate sections in the report. The top section, labelled “Discover what your real users are experiencing”, shows field data collected from actual Chrome users over the previous 28 days. This is the data that matters most. The bottom section shows lab data from a simulated test. Lab data is useful for diagnosing specific issues, but field data tells you the truth about real-world performance.
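
If you’d rather script this than run five separate reports by hand, the PageSpeed Insights API returns the same field data as the web interface. Below is a minimal browser-console sketch, assuming your page has enough Chrome traffic to appear in the dataset; the URL is a placeholder, and an API key becomes necessary for anything beyond occasional use.

    const pageUrl = 'https://www.example.com/'; // replace with one of your key pages
    fetch('https://www.googleapis.com/pagespeedonline/v5/runPagespeed?strategy=mobile&url=' + encodeURIComponent(pageUrl))
      .then(r => r.json())
      .then(data => {
        const m = (data.loadingExperience || {}).metrics;
        if (!m) { console.log('No field data available for this URL'); return; }
        console.log('LCP (ms):', m.LARGEST_CONTENTFUL_PAINT_MS.percentile);
        console.log('INP (ms):', m.INTERACTION_TO_NEXT_PAINT.percentile);
        console.log('CLS:', m.CUMULATIVE_LAYOUT_SHIFT_SCORE.percentile / 100); // reported as score x 100
      });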

What the numbers actually mean

For LCP, you want to see 2.5 seconds or less. Anything between 2.5 and 4 seconds needs improvement. Above 4 seconds means your visitors are actively leaving before the page finishes loading. For INP, the threshold is 200 milliseconds. If your site takes longer than that to respond to interactions, forms feel sluggish, menus feel broken, and users lose confidence. For CLS, you want a score below 0.1. Above 0.25 means elements are visibly jumping around the page, which is particularly damaging on mobile.

Record all of these numbers in a simple spreadsheet. You’ll refer back to them when evaluating proposals from developers or agencies. If someone tells you they can “fix your speed” but can’t explain which of these metrics they’re targeting and by how much, that’s a red flag.

Run a Waterfall Analysis to Find the Real Bottlenecks

PageSpeed Insights tells you what is slow. A waterfall chart tells you why. Open your browser’s developer tools (right-click anywhere on your page, select “Inspect”, then click the “Network” tab), reload the page, and look at the waterfall view. This shows every single file your page requests, in what order, and how long each one takes.

What you’re looking for are three specific patterns:

  • Render-blocking resources at the top of the waterfall. These are CSS and JavaScript files that load before anything appears on screen. Every render-blocking file adds delay to your LCP. On most mid-market sites we audit, there are between 5 and 15 render-blocking scripts, many of which are from plugins the team installed years ago and forgot about.
  • Large files that take a disproportionate time to download. Sort by file size and look for anything over 500KB. Uncompressed images, unminified JavaScript bundles, and web fonts loaded in multiple weights are the usual suspects.
  • Long chains of sequential requests. This happens when file A has to finish loading before file B can start, which has to finish before file C can start. Each link in the chain adds latency. A common example is a CSS file that references a web font, which then triggers a font file download. That’s three sequential requests before text can even render.

Take screenshots of the waterfall for your key pages. These will be invaluable when briefing a developer. Instead of saying “make the site faster,” you can point to specific resources and say, “this 1.2MB JavaScript bundle is blocking rendering for 2.3 seconds. Can we defer or split it?”
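
To put rough numbers on the “large files” pattern without reading the waterfall line by line, the browser’s Resource Timing API can rank everything the page loaded. A quick console sketch (note that cross-origin files served without a Timing-Allow-Origin header report a transfer size of zero):

    // Ten largest resources on the current page, by bytes over the wire
    performance.getEntriesByType('resource')
      .sort((a, b) => b.transferSize - a.transferSize)
      .slice(0, 10)
      .forEach(r => console.log(
        Math.round(r.transferSize / 1024) + ' KB,',
        Math.round(r.duration) + ' ms:',
        r.name));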

Audit Your Third-Party Scripts

Third-party scripts are the silent killers of website performance, and most businesses have no idea how many they’re running. These are scripts loaded from external domains: analytics tools, chat widgets, heatmap trackers, advertising pixels, A/B testing platforms, social media embeds, and cookie consent banners.

To audit yours, open the Network tab in your browser’s developer tools again, but this time filter by “third-party” or sort by domain. Count how many external domains your page is calling. On the average mid-market B2B site, we typically find between 8 and 25 third-party scripts. Many of our clients are shocked when they see the list, because marketing teams add tracking scripts over time without ever removing old ones.
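
If counting domains by eye is tedious, a short console sketch does the same tally from the Resource Timing data:

    // Requests per external domain on the current page
    const counts = {};
    performance.getEntriesByType('resource').forEach(r => {
      const host = new URL(r.name).hostname;
      if (host !== location.hostname) counts[host] = (counts[host] || 0) + 1;
    });
    console.table(counts);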

For each third-party script, ask three questions:

  • Is anyone actually using the data this collects? If you have a heatmap tool installed but nobody has looked at a heatmap in six months, remove it. If you’re running a Facebook pixel but haven’t run Facebook ads in a year, remove it.
  • How much does this script cost in load time? Check the size and the time it takes to execute. Some chat widgets add 300-500KB of JavaScript and 200ms of main-thread blocking time. That’s a meaningful performance cost.
  • Can this be loaded later instead of immediately? Many scripts don’t need to run until after the page has finished its initial render. Analytics, for instance, can fire a few seconds after the page loads without losing data accuracy.

In our experience, simply removing unused third-party scripts and deferring the rest typically improves LCP by 0.5 to 1.5 seconds. That’s often the single biggest quick win in any audit.
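
One common deferral pattern, sketched below with a placeholder URL, is to inject a non-critical script only after the load event, so it never competes with the initial render:

    // Load a non-critical third-party script after initial render
    window.addEventListener('load', () => {
      const s = document.createElement('script');
      s.src = 'https://example.com/widget.js'; // hypothetical analytics or chat script
      s.async = true;
      document.head.appendChild(s);
    });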

Check Your Image Situation

Images are usually the largest assets on any web page, and they’re also the easiest to get wrong. Open your page and check three things about every visible image.

First, check the file format. If your images are still being served as JPEG or PNG when they could be WebP or AVIF, you’re sending files that are 25-50% larger than they need to be. Most modern browsers support WebP. AVIF support is growing rapidly. Your CMS should be converting images to these formats automatically on upload. If it isn’t, that’s a quick fix worth making.
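
If your CMS can’t convert formats automatically, the standard HTML fallback pattern serves modern formats to browsers that support them and JPEG to everyone else (filenames here are placeholders):

    <picture>
      <source srcset="hero.avif" type="image/avif">
      <source srcset="hero.webp" type="image/webp">
      <img src="hero.jpg" alt="Product hero" width="1600" height="900">
    </picture>

The explicit width and height attributes also reserve layout space, which helps your CLS score.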

Second, check the dimensions. Right-click an image, inspect it, and compare its intrinsic size (the actual pixel dimensions of the file) with its displayed size on screen. We regularly find hero images that are 3000 pixels wide being displayed at 800 pixels. That means the browser downloaded roughly 10x more data than it needed. This is especially painful on mobile, where a 2MB hero image over a 4G connection can add several seconds to your load time.
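
A console sketch can flag the worst offenders automatically. The factor of two below is a rough heuristic that leaves headroom for high-density (retina) screens:

    // Flag images whose intrinsic width is more than double their displayed width
    document.querySelectorAll('img').forEach(img => {
      if (img.clientWidth > 0 && img.naturalWidth > img.clientWidth * 2) {
        console.log(img.currentSrc,
          img.naturalWidth + 'px intrinsic vs ' + img.clientWidth + 'px displayed');
      }
    });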

Third, check whether images below the fold are using lazy loading. Images that aren’t visible when the page first loads should have the loading="lazy" attribute, which tells the browser to only fetch them when the user scrolls near them. Conversely, your LCP image, the main hero image or banner that’s visible immediately, should not be lazy loaded. It should be loaded eagerly, ideally with a fetchpriority="high" attribute to tell the browser to prioritise it.
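
In markup, the two cases look like this (filenames and dimensions are illustrative):

    <!-- Below the fold: fetch only when the user scrolls near it -->
    <img src="case-study.jpg" loading="lazy" alt="Case study chart" width="800" height="600">

    <!-- The LCP hero image: load eagerly, at high priority -->
    <img src="hero.jpg" fetchpriority="high" alt="Product hero" width="1600" height="900">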

A quick way to quantify the opportunity: add up the total image weight on your five key pages. If the average is over 1MB per page, there are meaningful savings available. Most well-optimised B2B pages keep total image weight under 400KB without sacrificing visual quality.

Evaluate Your Hosting and Server Response Time

Time to First Byte (TTFB) measures how long it takes your server to begin sending a response after the browser requests a page. You can find this in PageSpeed Insights under the diagnostics section, or in the Network tab of your developer tools (look at the “Waiting” or “TTFB” column for your initial HTML document request).
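
You can also read it directly from the Navigation Timing API in the console; responseStart is measured from the start of the navigation, so it approximates TTFB for the current page:

    // TTFB for the current page, in milliseconds
    const nav = performance.getEntriesByType('navigation')[0];
    console.log('TTFB:', Math.round(nav.responseStart), 'ms');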

A good TTFB is under 200 milliseconds. Acceptable is under 600ms. Anything over 800ms means your server is genuinely struggling, and no amount of front-end optimisation will compensate. If your site takes a full second just to start sending HTML, your LCP cannot possibly be under 2.5 seconds. The maths doesn’t work.

Poor TTFB usually comes from one of four sources: shared hosting that’s overcrowded, a CMS that runs expensive database queries on every page load without caching, a server that’s geographically far from your audience, or missing page-level caching. WordPress sites without object caching or full-page caching frequently show TTFB above one second, particularly on pages with complex layouts or many plugin-driven elements.

If your TTFB is consistently high across multiple pages, this is a hosting or caching problem, not a design problem. Fixing it might mean upgrading your hosting plan, adding a caching layer, or in some cases, rethinking your CMS approach entirely. This is one of those areas where understanding the root cause before spending money is critical. A £200/month hosting upgrade might solve the problem, or it might not, depending on whether the bottleneck is actually at the server level or in the application layer.

Test Mobile Performance Separately

Desktop and mobile performance are not the same, and the gap is often larger than people expect. PageSpeed Insights defaults to showing mobile results, which is intentional: Google’s ranking algorithms use mobile performance data. But many companies only test on their desktop machines, see a fast-loading page, and assume everything is fine.

Mobile devices have slower processors, less memory, and often run on variable-quality network connections. A JavaScript bundle that executes in 200ms on your MacBook Pro might take 1.5 seconds on a mid-range Android phone. This is why INP problems often show up exclusively in mobile field data.

Test your key pages in PageSpeed Insights with the mobile toggle selected. Then, for a reality check, open Chrome’s developer tools and use the Performance tab with CPU throttling set to 4x slowdown and network throttling set to “Fast 3G.” This simulates the experience of your actual mobile visitors far more accurately than testing on your high-end laptop connected to your office Wi-Fi.

If your mobile scores are dramatically worse than desktop (which they usually are), note which metrics diverge most. If LCP is the problem on mobile but fine on desktop, the issue is likely image sizes or render-blocking resources that are tolerable on a fast connection but devastating on a slow one. If INP is poor on mobile, you’re probably shipping too much JavaScript that chokes slower processors.

Assess Your CMS and Plugin Overhead

If you’re running WordPress, Drupal, or any plugin-based CMS, your next step is to audit what’s actually installed and active. Log into your admin panel and review every plugin or module. For each one, note whether it adds front-end assets (CSS or JavaScript) to your pages.

Many plugins inject their scripts on every single page, even when they’re only used on one. A contact form plugin that loads its CSS and JavaScript on your homepage, your blog posts, and your about page, despite only being used on your contact page, is adding unnecessary weight everywhere. A slider plugin you used once for a campaign landing page two years ago might still be loading its 150KB JavaScript bundle on every page load.
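
On WordPress specifically, plugin assets are served from a predictable directory, so a console sketch can list exactly which plugins are loading files on any given page:

    // WordPress only: plugin assets loaded on the current page, largest first
    performance.getEntriesByType('resource')
      .filter(r => r.name.includes('/wp-content/plugins/'))
      .sort((a, b) => b.transferSize - a.transferSize)
      .forEach(r => console.log(Math.round(r.transferSize / 1024) + ' KB', r.name));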

Count your active plugins. On mid-market WordPress sites, we typically see somewhere between 15 and 35. Not all of these cause performance issues, but each one is a potential contributor. The cumulative effect matters. Ten plugins each adding 30KB of JavaScript means 300KB of additional scripting that the browser has to download, parse, and execute.

A useful test: temporarily deactivate all non-essential plugins on a staging environment (never do this on your live site) and re-run your PageSpeed Insights test. The difference in score will tell you exactly how much overhead your plugin stack is adding. We’ve seen this exercise reveal improvements of 20-40 points on the PageSpeed score, which translates directly into faster load times for real users.

Document What You Find in a Way That’s Actually Useful

An audit is only valuable if it produces a clear, actionable document you can hand to a developer, agency, or internal team. Avoid the trap of generating a 30-page report full of screenshots but no prioritisation. Instead, organise your findings into three tiers.

Tier 1: Quick wins that cost little and improve performance measurably. These typically include removing unused third-party scripts, enabling lazy loading on images, converting images to WebP, minifying CSS and JavaScript, and enabling browser caching headers. A competent developer can handle all of these in a day or two.

Tier 2: Medium-effort improvements that require some development work. This includes deferring render-blocking scripts, implementing critical CSS, adding proper image srcset attributes for responsive images, setting up a CDN, and optimising web font loading. These might take a week of focused work and often require testing across browsers and devices.
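
For reference, the responsive-image markup mentioned above looks like this (filenames and breakpoints are illustrative); the browser downloads only the smallest candidate that fits the displayed size:

    <img src="hero-800.jpg"
         srcset="hero-400.jpg 400w, hero-800.jpg 800w, hero-1600.jpg 1600w"
         sizes="(max-width: 600px) 100vw, 800px"
         alt="Product hero">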

Tier 3: Architectural issues that need strategic decisions. These are the expensive ones: migrating hosting, replacing a bloated page builder with cleaner templates, refactoring a JavaScript-heavy front-end, or rebuilding sections of the site to reduce server-side processing. These require planning, budget, and a clear understanding of the expected return. For a deeper look at how to approach these structural decisions, our performance architecture guide covers the methodology of building speed into a site’s foundation from the start.

For each finding, record the specific metric it affects (LCP, INP, CLS, or TTFB), the estimated impact (even a rough guess like “likely saving 0.3-0.8s on LCP”), and the page or pages affected. This level of specificity turns your audit from a vague “the site is slow” complaint into a precise technical brief.

Know When the Audit Points to Bigger Problems

Sometimes an audit reveals that your performance issues aren’t fixable with tweaks. There are a few signals that suggest you’re dealing with a fundamentally slow architecture rather than a collection of fixable issues.

If your TTFB is over one second and doesn’t improve with caching enabled, your server or CMS is doing too much work on every request. If your total JavaScript payload exceeds 500KB after minification, you likely have framework or plugin bloat that can’t be trimmed without rethinking your approach. If your page makes more than 80 requests on initial load, you have a complexity problem that no single optimisation will solve. If your CLS is above 0.25 across multiple pages, your layout approach is structurally unstable, usually because images lack dimensions or dynamic content is being injected without reserved space.

Recognising these patterns before you start spending money is the entire point of the audit. A site with fundamentally slow architecture will eat optimisation budget without delivering lasting results. You’ll fix one thing, and another bottleneck will emerge. In these situations, the more cost-effective path is often to rebuild the affected pages (or the whole site) with performance as a design constraint from the beginning, rather than trying to retrofit speed onto a structure that was never built for it.

What To Do With Your Audit Results

You now have a prioritised list of performance issues, organised by effort level, with specific metrics and affected pages documented. Here’s how to use it.

If your audit mostly reveals Tier 1 issues, you probably don’t need an expensive engagement. A skilled freelance developer working from your audit document can handle these in a few focused days. Expect to pay £500-£2,000 depending on complexity, and expect to see measurable improvements within a week of deployment.

If you have a mix of Tier 1 and Tier 2 issues, you need someone with front-end performance expertise, not just general web development skills. Ask candidates to explain what they’d do about your specific render-blocking resources or your INP problem. If they can’t speak to those specifics, they’re not the right fit.

If your audit surfaces Tier 3 architectural problems, resist the urge to start patching. Every hour spent optimising a fundamentally slow architecture is an hour you’ll lose when you eventually rebuild. Instead, use your audit as the foundation for a proper performance-focused rebuild brief. You’ll get better proposals from agencies and developers because you’re giving them concrete data instead of a vague sense that “the site feels slow.”

The audit itself is the best money-saving tool you have. It ensures that whatever you spend next is directed at the actual problems, with clear before-and-after metrics to prove the work delivered results. Two hours of careful measurement now will prevent months of expensive guesswork later.
