The Short Answer
Attribution modelling is the method you use to assign credit for a conversion (a sale, a lead, a signup) to the marketing touchpoints that influenced it. It matters because without a clear attribution model, you are almost certainly overvaluing some channels and undervaluing others, which means your marketing budget is being misallocated. If you have ever wondered whether your Google Ads spend is actually driving revenue or just getting credit for work your email campaigns already did, that is an attribution problem.
Most mid-market companies we work with are making five- and six-figure annual decisions about where to spend marketing budget based on data that tells them who touched the ball last, not who actually built the play. Attribution modelling gives you a more accurate picture, and a more accurate picture leads to better decisions.
Why the Default View Misleads You
When you look at a standard Google Analytics report and see that organic search drove 40% of your conversions, you are seeing last-click attribution. This is the default model in most analytics tools, and it gives 100% of the credit for each conversion to the final touchpoint before the user converted. It is simple, easy to understand, and dangerously incomplete.
Think about how B2B purchases actually happen. A decision-maker sees a LinkedIn ad for your product on Tuesday. They do not click. On Thursday, they search your brand name and read a case study on your website. The following week, they receive a marketing email, click through, and fill in a contact form. Under last-click attribution, email gets all the credit. LinkedIn gets nothing. Your case study page gets nothing. The reality is that all three touchpoints played a role, but your data shows a story where email is the hero and everything else is invisible.
What we typically find on mid-market sites is that teams have been optimising towards last-click data for years without realising it. They cut spend on channels that look ineffective in last-click reports, then wonder why their pipeline starts to thin out six months later. The channel they cut was doing awareness work that fed the channels they kept. Attribution modelling exists to prevent exactly this kind of mistake.
The Core Attribution Models Explained
There are several standard attribution models, each with a different philosophy about how credit should be distributed. Understanding the differences is not academic. Each model will tell a materially different story about your marketing performance, and the one you choose shapes the decisions you make.
Last-Click Attribution
All credit goes to the final touchpoint before conversion. This is useful for understanding what closes deals but terrible for understanding what starts them. If you only use last-click, you will systematically underinvest in awareness channels. It is the model you are probably already using, whether you chose it or not.
First-Click Attribution
All credit goes to the very first touchpoint that introduced the user to your brand. This flips the bias completely. It overvalues discovery channels and ignores everything that happened between initial awareness and conversion. We rarely recommend this as a primary model, but it is useful as a comparison lens to see which channels are genuinely bringing new audiences to your site.
Linear Attribution
Credit is split equally across every touchpoint in the conversion path. If a user had four interactions before converting, each one gets 25%. This is more balanced than single-touch models, but it treats every interaction as equally important. A casual pageview from a social ad gets the same weight as a detailed product demo that took 20 minutes. That rarely reflects reality.
Time-Decay Attribution
More credit goes to touchpoints that happened closer to the conversion event, with earlier touchpoints receiving progressively less. This model works well for businesses with shorter consideration cycles, where recent interactions genuinely are more influential. For B2B companies with sales cycles of several weeks or months, it can still undervalue the top-of-funnel work that got the prospect into your world in the first place.
Position-Based (U-Shaped) Attribution
Typically, 40% of credit goes to the first touchpoint, 40% to the last, and the remaining 20% is distributed evenly across everything in between. This acknowledges that the introduction and the close are both critical moments. It is the model we most often recommend as a starting point for B2B companies because it respects both ends of the journey without completely ignoring the middle.
Data-Driven Attribution
This model uses machine learning to analyse your actual conversion data and assign credit based on the statistical impact each touchpoint has on conversion probability. Google Analytics 4 uses a version of this as its default model. It is the most sophisticated approach, but it requires a meaningful volume of conversion data to work reliably. If you are getting fewer than 300 to 400 conversions per month, the model may not have enough signal to produce trustworthy results, and you will be better served by one of the rules-based models above.
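The rules-based models above are all simple arithmetic over a touchpoint path. As a minimal sketch (the channel names, days-to-conversion values, and the 7-day time-decay half-life are illustrative assumptions, not settings from any particular tool):

```python
def attribute(path, model, half_life_days=7):
    """Distribute one conversion's credit across a touchpoint path.

    path  -- ordered list of (channel, days_before_conversion) tuples
    model -- 'last_click', 'first_click', 'linear', 'time_decay',
             or 'position_based'
    """
    credit = {}

    def add(channel, share):
        credit[channel] = credit.get(channel, 0.0) + share

    if model == "last_click":
        add(path[-1][0], 1.0)
    elif model == "first_click":
        add(path[0][0], 1.0)
    elif model == "linear":
        for channel, _ in path:
            add(channel, 1.0 / len(path))
    elif model == "time_decay":
        # Weight halves for every `half_life_days` further from conversion.
        weights = [0.5 ** (days / half_life_days) for _, days in path]
        total = sum(weights)
        for (channel, _), w in zip(path, weights):
            add(channel, w / total)
    elif model == "position_based":
        if len(path) == 1:
            add(path[0][0], 1.0)
        elif len(path) == 2:
            add(path[0][0], 0.5)
            add(path[1][0], 0.5)
        else:
            add(path[0][0], 0.4)   # 40% to the first touch
            add(path[-1][0], 0.4)  # 40% to the last touch
            for channel, _ in path[1:-1]:
                add(channel, 0.2 / (len(path) - 2))
    return credit

# The B2B journey from earlier: LinkedIn ad, then organic search, then email.
journey = [("linkedin", 9), ("organic", 7), ("email", 0)]
print(attribute(journey, "last_click"))      # → {'email': 1.0}
print(attribute(journey, "position_based"))  # → {'linkedin': 0.4, 'email': 0.4, 'organic': 0.2}
```

Running the same path through different models makes the bias of each one visible: last-click hands everything to email, while position-based surfaces LinkedIn's role in starting the journey.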

Why This Matters for Budget Decisions
The practical impact of attribution modelling shows up directly in how you allocate marketing spend. Here is a real-world scenario we see regularly. A company runs paid search, organic content, LinkedIn advertising, and email nurture campaigns. Their analytics dashboard, running on last-click, shows paid search driving 55% of conversions and LinkedIn driving 3%. The marketing director proposes cutting LinkedIn spend and doubling down on paid search.
When we help that same company set up position-based attribution and compare the data, the picture shifts dramatically. LinkedIn is often the first touchpoint in 25% to 35% of converting paths. It is doing heavy lifting at the top of the funnel, introducing prospects who later convert through other channels. Paid search is frequently the last click, but many of those clicks are branded searches from people who already know the company because of LinkedIn, content, or email.
Without attribution modelling, you cut the channel that fills the top of your funnel. With it, you understand the role each channel actually plays and can invest accordingly. The difference between these two decisions can represent tens of thousands of pounds in wasted or misallocated spend over a single quarter.
Attribution in Google Analytics 4
Google Analytics 4 (GA4) shifted the default attribution model to data-driven attribution, which was a significant change from the last-click default of Universal Analytics. This is broadly positive, but it comes with some practical complications that mid-market teams need to understand.
First, GA4’s data-driven model only attributes credit to Google channels and direct traffic by default in advertising reports. If you want to see the full cross-channel picture, you need to be working in the standard reports or exploration reports and understand which attribution settings apply where. This catches people out constantly.
Second, the model’s reliability depends on your data volume. If your site converts at low volumes, which is common in B2B where a “conversion” might be a demo request rather than an e-commerce purchase, the model falls back to more basic heuristics. You may think you are getting sophisticated attribution when you are actually getting a simpler model wearing a data-driven label.
Third, GA4 has limited lookback windows. For acquisition events, the default lookback is 30 days. For other conversion events, you can set it to 30, 60, or 90 days. If your sales cycle is six months, a 90-day window still misses the early touchpoints that started the relationship. This is a structural limitation of the tool, not a configuration error, and you need to account for it when interpreting your data.
In our projects, we define measurement requirements during prototyping, which includes specifying which attribution model will be used, what the lookback window should be based on the actual sales cycle, and how the team will interpret the resulting data. Getting this right at setup is far more effective than trying to retrofit it after months of misconfigured data. For a deeper look at how we approach this, our measurement systems guide walks through the full methodology.
Common Mistakes Companies Make With Attribution
After working through attribution setups on dozens of projects, the same mistakes come up repeatedly. Knowing them in advance saves significant time and prevents bad decisions.
Treating One Model as the Truth
No single attribution model is objectively correct. Each one is a lens that highlights certain behaviours and obscures others. The most effective approach is to compare multiple models side by side and look for channels that shift dramatically between them. If a channel looks weak under last-click but strong under first-click, that tells you something important about its role. The comparison is the insight, not any individual number.
Ignoring Offline and Dark Social Touchpoints
Attribution models can only credit what they can track. If a prospect heard about your company on a podcast, had a conversation at a conference, or saw your CEO’s post on LinkedIn without clicking, those touchpoints are invisible to your analytics. This does not mean they did not happen. Many B2B buyers report that their most influential touchpoint was something that never generated a trackable click. Smart companies supplement their attribution data with self-reported attribution (“How did you hear about us?”) on conversion forms. It is imperfect, but it fills a genuine blind spot.
Setting Up Conversions Incorrectly
Attribution is only as good as the conversion events it is modelling. If your “conversion” event fires inconsistently, double-counts, or captures low-intent actions alongside high-intent ones, your attribution data will be unreliable regardless of which model you use. We have audited sites where a single misconfigured event was inflating conversion counts by 40%, making every attribution insight downstream completely misleading. Clean conversion tracking is the prerequisite for meaningful attribution.
Confusing Correlation With Contribution
Just because a touchpoint appears in a converting path does not mean it contributed to the conversion. If your homepage appears in 90% of converting paths, that may simply mean everyone visits the homepage at some point, not that the homepage is your best converter. Attribution models distribute credit mechanically. You still need to apply judgement about whether a touchpoint was genuinely influential or simply present.

How to Choose the Right Model for Your Business
The right attribution model depends on three factors: your sales cycle length, your conversion volume, and what decision you are trying to make.
If your sales cycle is short (under two weeks) and you have high conversion volume, time-decay or data-driven attribution will serve you well. The model has enough data to work with, and recent touchpoints are genuinely more relevant in fast-moving purchase decisions.
If your sales cycle is long (four weeks or more), which is typical of the B2B companies we work with, position-based attribution is usually the most informative starting point. It gives proper credit to the channels that initiate relationships while still recognising what closes them. You can then layer on first-click and last-click comparisons to stress-test your assumptions.
If your conversion volume is low (under 100 per month), data-driven models become unreliable. Stick with rules-based models like position-based or linear, and be transparent with your team that the data is directional rather than precise. A rough but honest model is far more useful than a sophisticated one running on insufficient data.
The decision you are making also matters. If you are evaluating whether to cut a specific channel, compare its performance across multiple models before deciding. If you are optimising campaign creative within a single channel, last-click may be perfectly adequate because you are not making a cross-channel comparison. Match the model to the decision, not the other way around.
Beyond Tool-Based Attribution
It is worth acknowledging that tool-based digital attribution, even at its best, only captures part of the picture. The buyer journey for a £50,000 B2B contract does not happen neatly within a browser. It involves conversations, internal discussions, competitor research on devices you cannot track, and decision-making processes that no pixel or cookie can observe.
This does not make attribution modelling pointless. It makes it necessary but insufficient. You need attribution data to make informed decisions about your digital spend, but you should pair it with other inputs: sales team feedback on where leads say they came from, CRM data that connects marketing touchpoints to closed revenue, and periodic qualitative research with customers about their buying journey.
Our team recommends building what we call a blended attribution view. This combines your digital attribution data from GA4 or your analytics platform with self-reported attribution from forms, CRM-sourced revenue data tied back to original marketing source, and sales team observations about deal influence. No single source gives you the complete picture. Together, they get you much closer to reality than any one signal alone.
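In practice, a blended view can start as nothing more than the three sources laid side by side per channel. A hypothetical sketch (every number here is an invented placeholder, and the channel names are examples):

```python
# Three per-channel signals, each from a different source. All figures invented.
ga4_credit = {"linkedin": 31, "paid_search": 48, "email": 21}       # modelled conversions
self_reported = {"linkedin": 40, "podcast": 12, "paid_search": 9}   # "How did you hear about us?" counts
crm_revenue = {"linkedin": 120_000, "paid_search": 95_000}          # closed revenue by original source, £

channels = sorted(set(ga4_credit) | set(self_reported) | set(crm_revenue))
print(f"{'channel':12s} {'ga4':>5s} {'self-rep':>9s} {'crm £':>9s}")
for ch in channels:
    print(f"{ch:12s} {ga4_credit.get(ch, 0):5d} {self_reported.get(ch, 0):9d} "
          f"{crm_revenue.get(ch, 0):9d}")
```

Note that "podcast" appears only in the self-reported column: that is exactly the dark-social blind spot the blended view exists to expose.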
Practical Steps to Get Started
If you are currently relying on default analytics reports without thinking about attribution, here is a practical sequence for improving your visibility.
Start by auditing your conversion events. Make sure every conversion you are tracking is clearly defined, correctly implemented, and firing only when a genuinely valuable action occurs. Remove duplicates, fix miscounts, and separate high-intent actions (demo requests, contact form submissions) from low-intent ones (newsletter signups, PDF downloads). Attribution modelling on messy conversion data produces messy insights.
Next, compare at least three models side by side. GA4's model comparison tool, in the Advertising section, lets you contrast models directly, though be aware that Google removed first-click, linear, time-decay, and position-based attribution from GA4 in late 2023, leaving data-driven and last-click variants. Run the comparison there with what is available, then rebuild first-click or position-based views from your raw path data if you need them, for example via the BigQuery export. Look for channels where the credit shifts significantly between models. Those shifts are where the interesting strategic questions live.
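If you have access to raw converting paths (from an export or your CRM), the side-by-side comparison is straightforward to build yourself. A hypothetical sketch with invented path data, counting first-click against last-click credit per channel and flagging large shifts:

```python
# Invented example: each list is one converting path, ordered by time.
paths = [
    ["linkedin", "organic", "email"],
    ["linkedin", "paid_search"],
    ["paid_search"],
    ["organic", "email", "paid_search"],
]

first, last = {}, {}
for path in paths:
    first[path[0]] = first.get(path[0], 0) + 1  # first-click credit
    last[path[-1]] = last.get(path[-1], 0) + 1  # last-click credit

for channel in sorted(set(first) | set(last)):
    f, l = first.get(channel, 0), last.get(channel, 0)
    flag = "  <- big shift" if abs(f - l) >= 2 else ""
    print(f"{channel:12s} first-click: {f}  last-click: {l}{flag}")
# In this toy data, linkedin and paid_search show the largest first/last shifts,
# mirroring the budget scenario described earlier in the article.
```

Channels that swing hard between the two columns are the ones doing a different job than your default report suggests.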
Then, add self-reported attribution to your key conversion forms. A simple “How did you hear about us?” dropdown or open text field captures touchpoints that digital tracking misses entirely. Podcast mentions, word-of-mouth referrals, event appearances, and social posts that were seen but never clicked all become visible through this mechanism.
Finally, set a regular cadence for reviewing attribution data. Monthly is usually right for mid-market companies. Reviewing too frequently leads to reactive decisions based on noise. Reviewing too infrequently means problems compound before anyone notices. A monthly review where you look at channel performance across multiple attribution models, compare it to self-reported data, and check it against CRM revenue gives you a genuinely useful picture of marketing effectiveness.
What Good Attribution Practice Actually Looks Like
Companies that do attribution well share a few characteristics. They do not treat attribution as a reporting exercise. They treat it as a decision-making framework. When someone proposes changing channel spend, the team pulls up multi-model comparisons and self-reported data. When a new campaign launches, the measurement plan specifies which attribution model will be used to evaluate it and over what time period.
These companies also accept imperfection. They understand that no attribution model is perfectly accurate, and they do not chase the illusion of a single source of truth. Instead, they use multiple data sources, apply judgement, and make decisions that are directionally correct. That is a massive improvement over the alternative, which is making decisions based on a single default report that nobody set up intentionally.
If you take one thing away from this article, let it be this: the attribution model you use shapes the story your data tells, and the story your data tells shapes the decisions you make about where to invest. Choosing your model deliberately, understanding its limitations, and supplementing it with other data sources is one of the highest-value improvements you can make to your marketing measurement. It does not require expensive tools or complex technology. It requires clarity of thinking and a willingness to look at your data from more than one angle.