Imagine this situation. You work in a large organisation as a digital marketing manager. You observe a sharp increase in GA direct, organic and brand generic web traffic. You want to explain the uptick, but nothing in your digital data accounts for it. However, you do know that your brand launched a TV/AV campaign last week and some new out of home is breaking this week.

You also know that there is a small seasonal factor because this is a month when you expect a modest rise in traffic – but not this much. At the same time the trading team have made some downward adjustments to your product pricing. One large competitor has also launched a new TV and cinema campaign.

You suspect that some or maybe all of these factors have increased your traffic, but which ones, and to what degree?

Enter full funnel attribution:

Full funnel attribution ditches both conventional linear measurement and simple attribution models (first touch, last touch, even weighting etc). Instead, full funnel attribution (sometimes called Multi Touch Attribution) uses statistical techniques to detect explanatory patterns between driver variables (e.g. spend in different channels) and response data.

This channel agnostic analysis is of immense value to marketers. Why? Because it is only by analysing these patterns across the entire funnel that marketers can get an integrated view of how their marketing really works.

Enter statistical relationships:

Statistical modelling lies at the heart of this type of attribution analysis. Analysts use techniques like linear, logistic and multiple regression to identify and quantify the relationships between driver variables, such as spend in different channels, and response variables, usually total sales. Rather than looking at the last touch, these techniques allow marketers to identify the channels and spend weights that drive traffic into the mid and lower funnel. Statistics are agnostic; they reveal the true patterns and relationships within the data. This enables marketers to make much more reliable decisions about optimising media investment.
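As a flavour of what this looks like in practice, here is a minimal sketch of a multiple regression in Python using statsmodels. The spend and sales figures are simulated purely for illustration; they are not taken from any real campaign.

```python
# Minimal sketch: multiple regression of weekly sales on channel spend.
# The spend and sales figures are simulated, not real campaign data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
weeks = 104
tv_spend = rng.uniform(0, 100, weeks)      # £k per week
search_spend = rng.uniform(0, 40, weeks)   # £k per week

# Assumed "true" process so the fit has something to recover.
sales = 5_000 + 80 * tv_spend + 150 * search_spend + rng.normal(0, 500, weeks)

X = sm.add_constant(np.column_stack([tv_spend, search_spend]))
model = sm.OLS(sales, X).fit()

# Each coefficient estimates incremental sales per extra £1k in that channel;
# the constant is the baseline (expected sales with zero spend).
print(model.params)
```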

Enter independent and dependent variables, x and y:

These statistical relationships are called predictive relationships because they explain – and predict – how a dependent variable (y) changes as an independent variable (x) changes. The variable we are predicting is the dependent variable because its movement depends on the movement of the independent variables. For example, we may be interested in understanding how sales rise as advertising spend increases. The dependent variable (here, sales) is always called “y”.

The independent variable (here, advertising spend) is always called “x”. We may have several independent variables in which case they are called x1, x2, x3 etc.

When we quantify the relationship between advertising and sales, we might find:

  • For every £1k we spend on advertising (x1) this channel delivers 100 sales (y)
  • For every £1 we increase our prices (x2), we lose 1,000 sales (y)
  • For every £1k our competitors spend (x3) we lose 50 sales (y).
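Put together, those three relationships form one simple linear equation. The sketch below plugs them in; the baseline of 10,000 sales is a hypothetical figure added only to make the arithmetic concrete.

```python
# Sketch: predicted sales from the illustrative coefficients above.
# base_sales is a hypothetical intercept, not a figure from the article.
def predicted_sales(ad_spend_k, price_increase, competitor_spend_k, base_sales=10_000):
    return (base_sales
            + 100 * ad_spend_k           # +100 sales per £1k of our advertising (x1)
            - 1_000 * price_increase     # -1,000 sales per £1 price rise (x2)
            - 50 * competitor_spend_k)   # -50 sales per £1k of competitor spend (x3)

# e.g. £50k of advertising, a £0.50 price rise, competitors spending £30k
print(predicted_sales(50, 0.5, 30))  # 10,000 + 5,000 - 500 - 1,500 = 13,000
```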

Enter controlling variables:

Control variables take account of other factors in the mix that could be affecting your results. Examples of control variables are price, distribution, seasonality and competitor spend. If you’re not controlling for all the x variables potentially affecting your sales, any findings are likely to be inaccurate. Only when you have statistically controlled for the effect of other variables can you make any statements about ROI for any one channel.
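In practice, controls simply enter the regression as additional x variables. The sketch below assumes weekly data with hypothetical column names (price, competitor spend, a December dummy for seasonality) and simulated values.

```python
# Sketch: adding control variables (price, seasonality, competitor spend)
# alongside media spend. Column names and values are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
weeks = 104
df = pd.DataFrame({
    "tv_spend": rng.uniform(0, 100, weeks),         # £k per week
    "price": rng.uniform(9, 11, weeks),             # £ per unit
    "competitor_spend": rng.uniform(0, 80, weeks),  # £k per week
    "december": (np.arange(weeks) % 52 >= 48).astype(int),  # crude seasonal dummy
})
df["sales"] = (20_000 + 90 * df["tv_spend"] - 1_200 * df["price"]
               - 40 * df["competitor_spend"] + 3_000 * df["december"]
               + rng.normal(0, 800, weeks))

# With price, seasonality and competitor spend held in the model,
# the tv_spend coefficient is estimated "all else being equal".
model = smf.ols("sales ~ tv_spend + price + competitor_spend + december", data=df).fit()
print(model.params)
```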

Enter AdStock:

AdStock concerns the measurement of the delayed effect of advertising. AdStock is one of the most influential yet least understood concepts in advertising. It has a scientific pedigree and has been proven and re-proven in multiple studies. Typically, with TV the decay is around 10% per week. If a campaign generates an advertising awareness of 50%, a 10% weekly decay rate means this 50% ad awareness would take more than 30 weeks to decay to 1%. During this time AdStock contributes to driving sales, many of which are misattributed to other lower funnel channels. Most short term measures do not consider AdStock, but its effect is almost always present and powerful, particularly for larger advertisers.
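A common way to operationalise this is a geometric carryover: each week's adstocked value is that week's activity plus 90% of the previous week's stock (i.e. 10% decay). A minimal sketch, with illustrative spend figures:

```python
# Sketch: geometric AdStock with a 10% weekly decay (90% carried over).
# Spend figures are illustrative.
def adstock(spend, decay=0.10):
    retained = 1.0 - decay
    stock, carry = [], 0.0
    for x in spend:
        carry = x + retained * carry
        stock.append(carry)
    return stock

weekly_spend = [100, 0, 0, 0, 0, 0]  # one burst of activity, then nothing
print([round(s, 1) for s in adstock(weekly_spend)])
# [100.0, 90.0, 81.0, 72.9, 65.6, 59.0] - the effect persists after spend stops

# The awareness example above: 50% awareness decaying 10% a week takes
# roughly 37-38 weeks to fall below 1% (0.5 * 0.9**n < 0.01).
```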

Enter the baseline:

It’s also critical for marketing and media planning to know where your base is. The base is the number of sales you would generate without marketing activity. Technically, it’s where your sales line crosses the y axis when all your x values are zero – the model’s intercept. When we talk about long and short term effects, the short term generally covers effects that are incremental to the base, while long term effects build the base itself.
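This is also how a modelled week's sales can be decomposed: the base plus each channel's contribution. A small sketch with hypothetical coefficients and spend levels:

```python
# Sketch: decomposing a week's sales into base plus channel contributions.
# Coefficients and spend levels are hypothetical.
base = 10_000                           # intercept from a fitted model
coefs = {"tv": 100, "search": 150}      # sales per £1k of spend
spend = {"tv": 50, "search": 20}        # £k spent this week

contributions = {ch: coefs[ch] * spend[ch] for ch in coefs}
total = base + sum(contributions.values())
print(contributions, "base:", base, "total:", total)
# {'tv': 5000, 'search': 3000} base: 10000 total: 18000
```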

How do we know if our model is accurate?

Modelling provides a number of diagnostic tests which indicate how reliable the model and its outputs are. R2 is the best-known indicator of model accuracy, but it is not the only indicator, nor the most reliable.
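For reference, R2 is simply the share of the variance in the response that the model explains. A minimal sketch with made-up actual and fitted values:

```python
# Sketch: R2 as the share of variance in sales explained by the model.
# Actual and fitted values are made up.
import numpy as np

actual = np.array([100, 120, 90, 140, 110], dtype=float)
fitted = np.array([105, 115, 95, 135, 112], dtype=float)

ss_res = np.sum((actual - fitted) ** 2)          # unexplained variation
ss_tot = np.sum((actual - actual.mean()) ** 2)   # total variation
r_squared = 1 - ss_res / ss_tot
print(round(r_squared, 3))  # ~0.93
```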

Enter T tests and P values:

The t test and p value are important diagnostic tests that help us understand how reliable our model is.

Model coefficients are always reported with an error, and the t statistic is the coefficient divided by its standard error. Clearly we want this number to be high because that indicates the error is low, i.e. it can be divided into the coefficient many times. If the t value is low (i.e. the error can’t be divided into the coefficient many times) then the model findings are unreliable.

The p value indicates the probability that the null hypothesis is valid, i.e. that there is no relationship between the variables (e.g. spend and sales). In layman’s terms, it is the probability that our model is wrong. So we want the p value to be low. The popular cut-off for p values is 0.05: anything over that indicates the finding is unreliable.
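Both diagnostics come straight out of a fitted model. A minimal sketch using statsmodels with simulated data:

```python
# Sketch: reading t statistics and p values off a fitted statsmodels OLS.
# Data are simulated for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
spend = rng.uniform(0, 100, 104)                      # £k per week
sales = 5_000 + 80 * spend + rng.normal(0, 500, 104)

model = sm.OLS(sales, sm.add_constant(spend)).fit()
# t = coefficient / standard error; a high t (and a p value below 0.05)
# suggests the spend-sales relationship is unlikely to be noise.
print(model.tvalues)
print(model.pvalues)
```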

Enter actual vs fitted forecasts:

When we have built our model we can compare what it forecasts with what actually happened. This helps us see how well the model predicts the past and indicates how well it might predict the future.
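One simple way to express the comparison is a mean absolute percentage error between actual and fitted sales. A sketch, again with simulated data:

```python
# Sketch: actual vs fitted comparison, summarised as a mean absolute
# percentage error (MAPE). Data are simulated for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
spend = rng.uniform(0, 100, 104)
sales = 5_000 + 80 * spend + rng.normal(0, 500, 104)
model = sm.OLS(sales, sm.add_constant(spend)).fit()

fitted = model.fittedvalues
mape = np.mean(np.abs((sales - fitted) / sales)) * 100
print(f"mean absolute percentage error: {mape:.1f}%")
```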

Actionable outputs

Of course all analysis is relatively unhelpful unless we can provide actionable outputs. So, here’s a list of what we should be looking for to help improve how marketing works in a practical sense:

  1. Our base: This is important – it’s the number of sales we expect to see if we don’t do any marketing. Marketing exists to deliver incremental sales in the short term and to grow the base (sales that are not attributable to any one channel – basically, brand equity).
  2. The impact of price: Price is an important determinant of sales. If our price is too high it will have a negative effect on sales. We must know what that effect is.
  3. The impact of each independent variable on sales: These are the marketing levers we can pull. The more of x we invest, the more our sales (y) will grow.
  4. Diminishing returns: Our model will produce a response curve for each channel. As we invest more, the rate of return will decrease, and it will reach a point where no more sales are generated from increased spend. It’s essential to know where that point is to avoid budget wastage (see the response-curve sketch after this list).
  5. The impact of carryover, sometimes called AdStock: the delayed effect of past advertising on future awareness and sales.
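On point 4, a diminishing-returns curve can be sketched with a simple saturating shape. The ceiling and sensitivity parameters below are purely illustrative:

```python
# Sketch: a diminishing-returns response curve using a simple saturating
# (negative exponential) shape. The ceiling and sensitivity are illustrative.
import numpy as np

max_sales = 20_000   # hypothetical ceiling for this channel
sensitivity = 0.03   # hypothetical shape parameter

def channel_response(spend_k):
    return max_sales * (1 - np.exp(-sensitivity * spend_k))

for spend in (25, 50, 100, 200):
    marginal = channel_response(spend + 1) - channel_response(spend)
    print(f"£{spend}k -> {channel_response(spend):,.0f} sales, "
          f"next £1k adds {marginal:,.0f} more")
```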

With all this insight available it is possible to forecast what might happen when your investments are organised in different ways. This scenario planning allows marketers to explore a range of investment scenarios to identify the plan that is:

  1. Most likely to deliver the volumes of sales they seek
  2. At the price that works for the business
  3. At a cost per sale that is economically viable.
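Finally, a crude scenario-planning sketch: scoring a handful of hypothetical plans on sales volume and cost per sale, using the illustrative coefficients from earlier. All numbers are made up.

```python
# Sketch: crude scenario planning over hypothetical fitted relationships.
# All coefficients, spends and prices are made up for illustration.
def scenario_outcome(tv_k, search_k, price):
    base = 10_000
    sales = base + 100 * tv_k + 150 * search_k - 1_000 * (price - 10)
    cost_per_sale = (tv_k + search_k) * 1_000 / sales
    return sales, cost_per_sale

scenarios = {
    "heavy TV":  (80, 10, 10.0),
    "balanced":  (50, 30, 10.0),
    "price-led": (30, 20, 9.5),
}
for name, (tv, search, price) in scenarios.items():
    volume, cps = scenario_outcome(tv, search, price)
    print(f"{name:10s} sales={volume:,.0f}  cost per sale=£{cps:.2f}")
```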