In our last Retail Leaders Forum, our Insight Director, David Lockwood, shared Tapestry's guidelines on how multi-channel retailers can build a fractional attribution model to gain a deeper understanding of marketing channel performance and allocate budgets more efficiently. Here are some of the key questions that retailers posed in the session.
When does your business have enough data (to carry out attribution modelling)?
One jewellery retailer wanted to know how much incrementality data is needed to be statistically valid, and whether there’s a different approach depending on company size.
David explained that attribution modelling can be used by retailers of almost any size, down to those with a turnover of around £1 million. While smaller businesses probably don't use as many channels, or may not spend enough in some channels to see their impact, the effect of channels with adequate budgets can still be understood. However, attribution struggles to measure the performance of channels where retailers spend less than £1,000 per month, especially channels where suspending activity would be too costly, such as direct mail.
As channels increase in size and number, you can add layers of complexity as needed. A good approach is to start testing the incrementality of the channel with the largest spend, to understand its contribution to your overall marketing mix. Then test the next-largest, and so on. This way, you can learn about your most important data first, while building your model bit by bit.
How often should you update your attribution model?
As a business already running incrementality testing on display ads and social media ads, one homeware supplier asked how often they should refresh their attribution model.
The period of validity depends on the rate at which your marketing mix is changing. The faster it changes, and indeed the faster you believe the market itself is shifting, the more frequently updates are needed. If you aren't fundamentally changing your marketing mix, adding new channels, or introducing entirely new product ranges, then your model may remain robust for up to two years. As a rule of thumb, Tapestry recommends rebuilding your model if your channel spend mix changes by more than 15%. However, David suggests regularly running incrementality tests for validation to keep your model healthy and to evolve it gradually.
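The 15% rule of thumb can be checked with a simple calculation. As a sketch (the source doesn't specify how "mix change" is measured, so this assumes total variation distance: half the sum of absolute differences between each channel's share of spend in two periods; all figures are illustrative):

```python
def mix_change(old_spend, new_spend):
    """Shift in channel spend mix between two periods, as a fraction.

    Computed as half the sum of absolute differences between each
    channel's share of total spend, so 0.15 corresponds to a 15% shift.
    """
    channels = set(old_spend) | set(new_spend)
    old_total = sum(old_spend.values())
    new_total = sum(new_spend.values())
    return 0.5 * sum(
        abs(old_spend.get(c, 0) / old_total - new_spend.get(c, 0) / new_total)
        for c in channels
    )

# Illustrative monthly spend by channel
old = {"paid_search": 40_000, "social": 30_000, "direct_mail": 30_000}
new = {"paid_search": 60_000, "social": 30_000, "direct_mail": 10_000}

shift = mix_change(old, new)        # 0.20, i.e. a 20% shift in the mix
needs_rebuild = shift > 0.15        # Tapestry's 15% rule of thumb
```

Here paid search's share rose from 40% to 60% while direct mail's fell from 30% to 10%, a 20% shift in the mix, so the rule of thumb would suggest a rebuild.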
In addition, an indicator of validity is the ratio of marketing cost to sales. If the proportional spend is constant or decreasing, your model is probably stable, but if it’s increasing, you may need to adjust your attribution model.
How do you incorporate consideration windows?
Selling a product that typically includes a period of contemplation before purchase, a pet food subscription provider wanted to know how Tapestry incorporates longer attribution windows into their modelling.
This is usually fairly easy to achieve: run incrementality tests over a longer period of time. You can then see when customer behaviour in the holdout group changes, and when it returns to normal. That period indicates the consideration window within which you need to examine activity.
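The logic above can be sketched in a few lines. This assumes daily conversion rates for a control group and a holdout group (with the channel suspended), and treats the window as ending on the last day the holdout rate sits meaningfully below control; the 10% relative gap is an illustrative threshold, not Tapestry's:

```python
def consideration_window(control_rate, holdout_rate, threshold=0.10):
    """Estimate the consideration window (in days) from a holdout test.

    The window ends on the last day the holdout group's daily conversion
    rate is more than `threshold` (relative) below the control group's,
    i.e. when behaviour has returned to normal.
    """
    depressed_days = [
        day
        for day, (c, h) in enumerate(zip(control_rate, holdout_rate), start=1)
        if c > 0 and (c - h) / c > threshold
    ]
    return max(depressed_days) if depressed_days else 0

# Illustrative daily conversion rates over a week of testing
control = [0.020, 0.021, 0.020, 0.019, 0.020, 0.021, 0.020]
holdout = [0.015, 0.014, 0.016, 0.017, 0.019, 0.021, 0.020]

window_days = consideration_window(control, holdout)
```

In this example the holdout group's behaviour converges back to the control group's after day four, suggesting a roughly four-day consideration window.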
Is there a relationship between email engagement and lifetime value?
One gardening supplier, investigating content-led emails, wanted to know whether there's a connection between email engagement (not necessarily purchase) and lifetime value.
In one example, David explained that content emails were achieving very high engagement, but low purchase rates. Content emails were removed from a holdout group to test purchase behaviour and intent, but Tapestry also implemented an engagement scoring system. Prospects' engagement was scored based on how recently and how frequently they opened each email, and this was compared with their future value score. Tapestry found a direct correlation: the more engaged a prospect was, the more likely they were to have a longer relationship and spend more with the retailer.
How do you test the incrementality of inserts?
With inserts being the biggest channel for one ready meal supplier, they wanted to know how to do holdout testing on these.
National press inserts can generally be holdout-tested on a regional basis. For example, you can suspend inserts in Liverpool, then measure the response rate compared to a geographical area with a similar demographic and size. If you already have data available for regional metrics, such as revenue per customer, you can also measure the effect on these.
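The regional comparison reduces to a simple uplift calculation. A minimal sketch, with illustrative figures for a test region where inserts were suspended and a demographically similar control region where they kept running:

```python
# Inserts suspended in the test region, running as normal in the control
test_region = {"customers": 48_000, "orders": 1_150}
control_region = {"customers": 50_000, "orders": 1_400}

test_rate = test_region["orders"] / test_region["customers"]
control_rate = control_region["orders"] / control_region["customers"]

# Share of the control region's response rate attributable to inserts
uplift = (control_rate - test_rate) / control_rate
```

Here the suspended region's response rate is about 14% lower than the control's, which would be read as the incremental contribution of inserts (subject to the regional bleed and under-reporting caveats below).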
While the method isn't perfect – responses will bleed between regions, and demand is usually under-reported because insert data isn't captured as precisely as promotional codes, for example – David emphasised that direct-to-consumer data is predictive and predictable enough to understand the uplift.
Building a fractional model for attribution is a complex task, but with the right support in place and a long-term plan for optimisation, you'll start to see predictable results you can act on fairly quickly.