How to Measure Content Performance on Your Website

Start With What “Performance” Actually Means for Your Content

Measuring content performance means connecting each piece of content on your website to a specific business outcome, then tracking whether it delivers. That sounds obvious, but most mid-market teams we work with are doing something different: they’re looking at pageviews, maybe bounce rate, and hoping the numbers go up. That isn’t measurement. That’s watching dials move without knowing what the dials control.

The real work starts before you open any analytics tool. You need to define what each piece of content is supposed to do, choose metrics that reflect that purpose, set up tracking that captures those metrics reliably, and then build a habit of reviewing the data at a cadence that lets you act on it. This article walks through each of those steps in practical detail, using the approaches we apply in our own projects at NexusBond.

Define the Job Each Piece of Content Is Doing

Not all content serves the same purpose, and treating it all the same is the most common measurement mistake we see. A blog post designed to attract organic traffic has a completely different job than a product comparison page designed to help someone in an active buying cycle. If you measure both with the same metrics, you’ll draw wrong conclusions about both.

Before you set up any tracking, categorise your content by its role in the customer journey. We typically use four categories:

  • Awareness content brings new visitors to your site. Blog posts, guides, thought leadership pieces. The job is reach and engagement.
  • Consideration content helps visitors evaluate your offering. Case studies, comparison pages, detailed service pages. The job is deepening interest and moving people toward a decision.
  • Conversion content gets visitors to take a specific action. Pricing pages, demo request pages, contact pages. The job is generating a measurable lead or sale.
  • Retention content keeps existing customers engaged. Help documentation, product updates, resource libraries. The job is reducing churn and increasing lifetime value.

Once you’ve mapped your content to these categories, you can assign metrics that actually match the job. This is the step most teams skip, and it’s why they end up with dashboards full of numbers that don’t inform any decision.

Choose Metrics That Match Content Purpose

Here’s where specificity matters. A vague metric like “engagement” means nothing until you define it in terms of observable user behaviour. Let’s go category by category.

Awareness Content Metrics

For content whose job is bringing new people to your site, you want to track new user sessions, organic search impressions and clicks (from Google Search Console), and referral sources. Pageviews matter here, but only in context. A blog post getting 5,000 pageviews from paid social is performing very differently from one getting 5,000 pageviews from organic search. The source tells you whether the content is working on its own or only when you push traffic to it.

You should also track scroll depth on awareness content. If 80% of visitors leave before reaching the halfway point of a 2,000-word guide, the content isn’t doing its job regardless of how many people land on it. In our projects at NexusBond, we set up scroll depth tracking as a standard event in Google Tag Manager, typically firing at 25%, 50%, 75%, and 90% thresholds. This takes about ten minutes to configure and gives you dramatically better insight than pageviews alone.
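Once those scroll events are flowing into your analytics export, the drop-off pattern is easy to compute. Here is a minimal sketch, assuming you have exported scroll events as (page, threshold) rows alongside a pageview count per page; the data shapes are illustrative, not a GA4 API:

```python
from collections import Counter

def scroll_dropoff(events, pageviews):
    """Share of pageviews that reached each scroll threshold.

    `events` is a list of (page, threshold) tuples, e.g. exported scroll
    events; `pageviews` maps page -> total pageviews for that page.
    """
    counts = Counter(events)
    report = {}
    for (page, threshold), n in sorted(counts.items()):
        report.setdefault(page, {})[threshold] = round(n / pageviews[page], 2)
    return report

# 80% reach the 25% mark, but only 12% make it three-quarters through
events = [("/guide", 25)] * 80 + [("/guide", 50)] * 30 + [("/guide", 75)] * 12
print(scroll_dropoff(events, {"/guide": 100}))
# {'/guide': {25: 0.8, 50: 0.3, 75: 0.12}}
```

A sharp cliff between two thresholds tells you roughly where readers give up, which is far more actionable than a single bounce rate figure.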

Consideration Content Metrics

For content designed to deepen interest, the key metrics shift toward engagement quality and navigation behaviour. You want to know whether someone who reads a case study then visits your pricing page, or whether someone who lands on a comparison page subsequently requests a demo. These are content-assisted conversions, and they’re invisible unless you’ve set up event tracking and defined conversion paths in your analytics tool.

Average engagement time (GA4’s replacement for the old time-on-page metric) is genuinely useful here. A case study with a 45-second average engagement time is probably being skimmed and abandoned. One with three minutes is being read carefully. That distinction matters for deciding whether to invest more in similar content.

Conversion Content Metrics

This is the most straightforward category. Conversion rate is the primary metric, and it should be tracked at the page level. What percentage of visitors to your pricing page end up submitting a contact form? What percentage of visitors to your demo request page actually complete the form? If you don’t know these numbers for your most important pages, you’re flying blind on the content that directly generates revenue.

But don’t stop at the conversion itself. Track form abandonment rate (how many people start filling in a form but don’t submit it) and micro-conversions along the way. A visitor clicking from a service page to the contact page is a micro-conversion. If lots of people make that click but few complete the form, you know the problem is the form or the contact page, not the service page that sent them there.
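The micro-conversion funnel above can be sketched as a simple step-completion calculation. This assumes hypothetical event names (`contact_click`, `form_start`, `form_submit`) recorded per visitor; substitute whatever events your tag setup actually fires:

```python
def funnel_rates(journeys):
    """Step-completion rates for a simple micro-conversion funnel.

    `journeys` is a list of per-visitor event-name lists (event names
    here are illustrative, not standard GA4 events).
    """
    steps = ["contact_click", "form_start", "form_submit"]
    total = len(journeys)
    return {s: round(sum(1 for j in journeys if s in j) / total, 2)
            for s in steps}

journeys = (
    [["contact_click", "form_start", "form_submit"]] * 5
    + [["contact_click", "form_start"]] * 10   # started the form, abandoned it
    + [["contact_click"]] * 25
    + [[]] * 60
)
print(funnel_rates(journeys))
# {'contact_click': 0.4, 'form_start': 0.15, 'form_submit': 0.05}
```

In this example, two thirds of people who start the form never submit it, which points the investigation at the form itself rather than the pages feeding it.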

Retention Content Metrics

For help docs and resource content aimed at existing customers, track returning user sessions, support ticket deflection (if you can correlate), and search behaviour within the help section. If users are consistently searching for terms your help content doesn’t address, that’s a clear signal about what to create next. Internal site search data is one of the most underused sources of content insight we encounter on mid-market sites.
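Mining that internal search data can be as simple as counting terms and flagging the frequent ones you have no content for. A rough sketch, assuming you can export search terms and maintain a set of topics your help section already covers:

```python
from collections import Counter

def search_gaps(search_terms, covered_topics, min_count=3):
    """Frequent internal search terms with no matching help content."""
    counts = Counter(t.lower().strip() for t in search_terms)
    return [(term, n) for term, n in counts.most_common()
            if term not in covered_topics and n >= min_count]

terms = ["sso setup"] * 7 + ["export data"] * 5 + ["billing"] * 4 + ["api keys"] * 2
print(search_gaps(terms, covered_topics={"billing"}))
# [('sso setup', 7), ('export data', 5)]
```

The output is effectively a demand-ranked content backlog for your retention content.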

Set Up Tracking That Actually Works

Choosing the right metrics is only half the battle. You need reliable data flowing into your analytics platform. And in our experience, most mid-market websites have significant tracking gaps that make their data unreliable from the start.

The most common issues we find during measurement audits include Google Analytics tags firing twice on some pages (inflating pageview counts), conversion events not being configured at all (meaning you’re tracking visits but not outcomes), and no event tracking for meaningful interactions like video plays, file downloads, or scroll depth. These aren’t exotic requirements. They’re fundamentals that get missed when tracking is treated as an afterthought rather than designed into the site architecture.
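Double-firing tags usually show up as two pageview hits from the same client on the same page within a second or so. A quick audit sketch over raw hit data (the tuple shape and one-second window are assumptions; adjust to your export format):

```python
def duplicate_pageviews(hits, window=1.0):
    """Flag likely double-fired pageview tags: two hits from the same
    client on the same page within `window` seconds of each other.

    `hits` is a list of (client_id, page, timestamp_seconds) tuples.
    """
    hits = sorted(hits)
    dupes = []
    for prev, cur in zip(hits, hits[1:]):
        if prev[0] == cur[0] and prev[1] == cur[1] and cur[2] - prev[2] <= window:
            dupes.append(cur)
    return dupes

hits = [("c1", "/pricing", 10.0), ("c1", "/pricing", 10.2),  # double fire
        ("c2", "/pricing", 10.0), ("c2", "/blog", 55.0)]
print(duplicate_pageviews(hits))
# [('c1', '/pricing', 10.2)]
```

If this turns up more than a handful of hits, your pageview counts are inflated and every downstream rate is suspect until the tagging is fixed.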

For a thorough look at how to build measurement into your website from the ground up, see our measurement systems guide, which covers the full approach we use with clients.

The Minimum Viable Tracking Setup

If you’re starting from scratch or suspect your current setup has gaps, here’s the baseline we recommend for content performance measurement:

  • Google Analytics 4 installed correctly via Google Tag Manager, with enhanced measurement enabled. This gives you pageviews, scroll events, outbound clicks, and file downloads without additional configuration.
  • Conversion events defined for every action that matters to your business. Form submissions, demo requests, purchases, email signups. Each one should be configured as a conversion in GA4.
  • Google Search Console connected to GA4, so you can see which queries drive traffic to specific content and how your click-through rates compare across pages.
  • Custom scroll depth events via Google Tag Manager, firing at 25%, 50%, 75%, and 90% scroll thresholds. GA4’s built-in scroll event only fires at 90%, which is nearly useless for understanding reading behaviour.
  • Internal link click tracking for key navigation paths. If you want to know whether your blog readers click through to service pages, you need to track those specific link clicks as events.

This setup covers 80% of what you need for content performance measurement. It can be implemented in a day or two by someone who knows Google Tag Manager well, and it doesn’t require any third-party tools beyond the free Google stack.

Build Content-Specific Reports, Not Generic Dashboards

One of the patterns we see repeatedly is teams building a single “content dashboard” that shows the same metrics for every page. This produces a wall of data that nobody acts on. The reports that drive decisions are the ones built around specific questions.

Instead of one dashboard, build three or four focused reports, each answering a distinct question your team actually asks.

Report 1: What’s Bringing People In?

This report shows your top landing pages by new users, segmented by traffic source. You want to see which content is earning organic traffic, which is performing on social, and which only gets visits when you send your email list to it. Sort by new users rather than total pageviews, because the question is specifically about acquisition. Pull in Search Console data to show average position and click-through rate for each page’s primary keyword. A page ranking position 11 for a high-volume keyword is one small optimisation away from significant traffic growth. A page ranking position 3 with a 1.5% click-through rate has a title tag or meta description problem.
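The two diagnostic patterns above can be automated once GA4 and Search Console data are joined per page. A sketch with illustrative thresholds (the position and CTR cut-offs are judgment calls, not official guidance):

```python
def flag_opportunities(pages):
    """Classify pages from combined landing-page + Search Console data.

    `pages` maps URL -> {"position": avg position, "ctr": click-through rate}.
    """
    flags = {}
    for url, d in pages.items():
        if 8 <= d["position"] <= 15:
            flags[url] = "striking distance: small on-page gains could lift rank"
        elif d["position"] <= 5 and d["ctr"] < 0.02:
            flags[url] = "ranks well, low CTR: rework title tag / meta description"
    return flags

pages = {
    "/guide-a": {"position": 11.0, "ctr": 0.010},  # page two of results
    "/guide-b": {"position": 3.0, "ctr": 0.015},   # visible but unclicked
    "/guide-c": {"position": 2.0, "ctr": 0.080},   # healthy, leave alone
}
for url, note in flag_opportunities(pages).items():
    print(url, "->", note)
```

Run over your top landing pages, this turns the report from a list of numbers into a short queue of specific fixes.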

Report 2: What’s Keeping People Engaged?

This report focuses on engagement metrics for consideration content. Show average engagement time, scroll depth distribution, and the percentage of readers who navigate to another page on your site after reading. A blog post where 60% of readers visit a second page is vastly more valuable than one where 95% leave immediately, even if the second post has higher raw traffic. This report helps you understand which content formats, topics, and lengths actually hold attention and drive deeper exploration.
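The second-page navigation rate mentioned above is straightforward to compute from session page sequences. A sketch, assuming sessions are exported as ordered lists of pages viewed:

```python
def second_page_rate(sessions, page):
    """Share of sessions landing on `page` that viewed at least one
    further page afterwards."""
    landed = [s for s in sessions if s and s[0] == page]
    if not landed:
        return 0.0
    return round(sum(1 for s in landed if len(s) > 1) / len(landed), 2)

sessions = [["/post", "/services"], ["/post"], ["/post", "/pricing"],
            ["/post"], ["/other", "/post"]]
print(second_page_rate(sessions, "/post"))  # 0.5
```

Note that sessions where the page appears later in the journey are excluded; the question here is specifically about what landing readers do next.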

Report 3: What’s Converting?

This is your content attribution report. For every conversion event (form submission, demo request, purchase), show the content pages that appeared in the user’s journey before converting. GA4’s path exploration report is useful here, though it requires some patience to configure. The goal is to answer: which content pages appear most frequently in the journeys of people who eventually convert? This is different from asking which pages have the highest conversion rate, because it includes pages that assist conversions even if the final conversion happens elsewhere.

What we typically find on mid-market sites is that 3-5 pieces of content are doing the heavy lifting for pipeline generation, and they’re rarely the pieces the marketing team would have guessed. One client we worked with discovered that a technical FAQ page, which nobody on the marketing team had touched in two years, appeared in 40% of converting journeys. That single insight led to a complete overhaul of their content strategy.
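The FAQ-page discovery above falls out of a simple frequency count over converting journeys. A sketch, assuming each journey is the list of pages a converter visited before converting:

```python
from collections import Counter

def assist_frequency(journeys):
    """How often each page appears in converting journeys, as a share
    of all converting journeys. Each page counts once per journey."""
    total = len(journeys)
    page_counts = Counter(p for j in journeys for p in set(j))
    return {p: round(n / total, 2) for p, n in page_counts.most_common()}

converting = [["/blog/a", "/faq", "/pricing"],
              ["/faq", "/pricing"],
              ["/case-study", "/pricing"],
              ["/blog/b", "/faq", "/demo"],
              ["/faq", "/demo"]]
print(assist_frequency(converting))
```

Here `/faq` shows up in 80% of converting journeys despite never being the conversion page itself, which is exactly the kind of assist that a page-level conversion rate report would hide.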

Use Content Grouping to See Patterns

Analysing content page by page works when you have twenty pages. When you have two hundred, you need a higher-level view. Content grouping lets you aggregate performance by category, topic, format, or any other dimension that matters to your business.

In GA4, you can create content groups using the content_group parameter, which you set via Google Tag Manager based on URL patterns or page metadata. For example, you might group all blog posts together, all case studies together, and all product pages together. Then you can compare entire categories: do case studies generate more consideration-stage engagement than blog posts? Do product pages convert at a higher rate when visited after a specific type of content?

We typically advise clients to create at least two grouping dimensions. The first is content type (blog, case study, service page, landing page). The second is topic or product area (so you can see, for example, whether content about a particular service line is outperforming others). With these two dimensions, you can answer questions like “are our blog posts about cybersecurity driving more qualified traffic than our blog posts about compliance?” That’s the kind of question that actually changes how you allocate content resources.
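The grouping logic itself is usually just URL-prefix rules, whether you implement it in Google Tag Manager or apply it after export. A sketch with illustrative prefixes (match these to your own site's URL structure):

```python
from collections import defaultdict

def content_group(path):
    """Assign a content_group value from URL patterns (rules are
    illustrative; mirror whatever your GTM variable does)."""
    rules = [("/blog/", "blog"), ("/case-studies/", "case study"),
             ("/services/", "service page")]
    for prefix, group in rules:
        if path.startswith(prefix):
            return group
    return "other"

def engagement_by_group(pages):
    """Average engagement time (seconds) aggregated per content group."""
    buckets = defaultdict(list)
    for path, seconds in pages:
        buckets[content_group(path)].append(seconds)
    return {g: round(sum(v) / len(v), 1) for g, v in buckets.items()}

pages = [("/blog/one", 40), ("/blog/two", 80),
         ("/case-studies/acme", 180), ("/services/audit", 95)]
print(engagement_by_group(pages))
# {'blog': 60.0, 'case study': 180.0, 'service page': 95.0}
```

Keeping the rules in one function means the grouping applied in your reports always matches the grouping applied at tag time.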

Establish Baselines Before You Optimise

One of the most practical things you can do when starting to measure content performance is resist the urge to optimise immediately. You need at least 4-6 weeks of clean data to establish reliable baselines, assuming your tracking is set up correctly. Without baselines, you have no way to know whether a change improved things or not.

During the baseline period, document the current performance of each content category against the metrics you’ve chosen. What’s the average engagement time for blog posts? What percentage of case study readers navigate to a contact page? What’s the organic click-through rate for your top 20 pages? Write these numbers down. They become the “before” in every future before-and-after comparison.
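A baseline only needs to capture the typical level and the normal range of variation, so that later changes can be judged against it. A minimal sketch over weekly values from the baseline window:

```python
def baseline(values):
    """Record the mean and observed range of a metric over the
    baseline window."""
    mean = sum(values) / len(values)
    return {"mean": round(mean, 1), "min": min(values), "max": max(values)}

# Weekly average engagement time (seconds) for blog posts, six weeks
weeks = [48, 52, 45, 50, 55, 50]
blog_engagement = baseline(weeks)
print(blog_engagement)  # {'mean': 50.0, 'min': 45, 'max': 55}

# A later reading counts as a real improvement only if it clears the
# normal range, not just the mean
print(62 > blog_engagement["max"])  # True
```

The range matters as much as the mean: a post-change value inside the baseline range is noise, not improvement.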

Teams that skip this step end up in a frustrating cycle of making changes and never knowing whether they worked. A client recently told us they’d “tried everything” to improve their blog performance, but when we asked what their engagement metrics looked like before each change, they had no answer. They’d been making changes in the dark for eighteen months.

Set a Review Cadence That Matches Your Publishing Pace

Data that nobody reviews is the same as data that doesn’t exist. But the right review frequency depends on how often you publish and how quickly your traffic patterns shift.

If you publish 2-4 pieces per month, a monthly content performance review is appropriate. If you publish daily, you’ll want a weekly check on high-level metrics with a deeper monthly analysis. The review should be a structured meeting with a fixed agenda, not a casual glance at a dashboard. What we recommend to our clients is a simple three-part structure:

  • What performed above baseline? Identify the specific content pieces or categories that exceeded expectations, and discuss why.
  • What underperformed? Identify content that fell below baseline, and determine whether it’s a content quality issue, a distribution issue, or a tracking issue.
  • What action do we take? Every review should produce at least one concrete action: update a page, change a distribution approach, create a new piece on a topic that’s showing demand, or fix a tracking gap.

The action step is what separates measurement from reporting. Reporting tells you what happened. Measurement tells you what to do about it. If your content reviews aren’t producing actions, you’re over-complicating the data or under-empowering the team.

Avoid the Vanity Metrics Trap

Some metrics feel good but tell you nothing useful. Total pageviews is the classic example. Your pageviews went up 20% this month. Great. But was it because you published five new posts that each got modest traffic? Because a single post went viral on social media but attracted an audience that will never buy from you? Because a bot was crawling your site? Without context, the number is meaningless.

Similarly, average time on page can be wildly misleading. In older versions of Google Analytics, this metric was calculated in a way that excluded single-page sessions entirely, meaning it dramatically overestimated actual reading time. GA4’s engagement time metric is better but still has quirks. A user who opens your page in a background tab and comes back to it twenty minutes later will register a very different engagement time than one who reads it straight through, even if their actual reading behaviour was identical.

The antidote to vanity metrics is always to ask: “What decision would this metric change?” If you can’t name a specific decision that would go differently based on whether the metric is high or low, you don’t need it in your report. This filter alone typically eliminates about half the metrics teams track, freeing up attention for the numbers that actually matter.

Connect Content Performance to Revenue

The ultimate measure of content performance for most B2B companies is its contribution to revenue. This is also the hardest thing to measure, because the path from “someone read our blog post” to “someone signed a contract” can span weeks or months and involve multiple touchpoints.

The practical approach is to work backwards from closed deals. If you have a CRM (HubSpot, Salesforce, Pipedrive, or similar), tag incoming leads with their initial landing page and the content they engaged with before converting. Over time, this gives you a dataset that shows which content pieces appear most frequently in the journeys of people who eventually become customers, not just leads.

This isn’t perfect attribution. No attribution model is. But it gives you a directional understanding that’s far more useful than guessing. When you can say “prospects who read our implementation guide before requesting a demo close at twice the rate of those who don’t,” you’ve identified a piece of content that deserves prominent placement, regular updates, and possibly a paid distribution budget. That’s a business decision based on evidence, and it’s exactly what content performance measurement is for.
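The close-rate comparison in that last example is a simple split over CRM records. A sketch, assuming each lead is tagged with the content pages they engaged with and whether the deal closed:

```python
def close_rate_by_content(leads, page):
    """Compare close rates for leads who did vs did not engage with
    `page`. `leads` is a list of (pages_engaged, closed) tuples, e.g.
    pulled from CRM records tagged with content touchpoints."""
    def rate(group):
        return round(sum(closed for _, closed in group) / len(group), 2)
    engaged = [(p, c) for p, c in leads if page in p]
    rest = [(p, c) for p, c in leads if page not in p]
    return {"engaged": rate(engaged), "not_engaged": rate(rest)}

leads = ([(["/implementation-guide", "/demo"], 1)] * 4
         + [(["/implementation-guide"], 0)] * 6
         + [(["/demo"], 1)] * 2
         + [(["/blog"], 0)] * 8)
print(close_rate_by_content(leads, "/implementation-guide"))
# {'engaged': 0.4, 'not_engaged': 0.2}
```

This is directional rather than causal (guide readers may simply be further along), but a sustained gap of this size is strong evidence the content deserves investment.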

Putting It All Together

Measuring content performance is not a tool problem or a data problem. It’s a clarity problem. You need to know what each piece of content is supposed to do, track whether it does that thing, and review the results regularly enough to act on them. Start by categorising your content by purpose. Assign metrics that match each category. Set up tracking that captures those metrics reliably, with scroll depth events, conversion events, and content grouping as your baseline. Build focused reports that answer specific questions rather than generic dashboards that answer none. Establish baselines before you start optimising. Review on a fixed cadence with a bias toward action. And always tie content metrics back to business outcomes, even if the connection is imperfect. That’s the path from “we have analytics” to “we know what our content is worth.”
