There is an active and polarising debate about whether Bayesian priors should be used in MMM. Broadly speaking, a progressive group of academics and practitioners argues that Bayesian priors (effectively, guideline bounds applied to your model's results) make models more flexible and insightful. On the other side of the debate are the statistical purists, who argue that the classical approach is more reliable. Their case against Bayesian priors is that priors “fix” model outputs to meet the expectations of marketing teams, thereby undermining the very foundations of MMM: unbiased, independent, full-funnel attribution [1,2,3,4].
I often get asked for a POV on this, so I guess a lot of people are asking the same question.
Recap: What are Priors?
In a Bayesian media mix model, priors are simply what you believe the results might be before you look at the data or do any modelling. Think of them as expressing real-world knowledge — like “it’s very unlikely that my sales go down when I spend more on advertising” — in mathematical language. [5].
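In mathematical terms, a prior is combined with the data's likelihood to produce a posterior. A minimal sketch of that mechanic, using a conjugate Normal–Normal update (all numbers here are invented for illustration; a real Bayesian MMM estimates many parameters jointly, typically via MCMC):

```python
import numpy as np

# Prior belief about a channel's ROI before seeing any data:
# ROI ~ Normal(2.0, 0.5^2)
prior_mean, prior_sd = 2.0, 0.5

# Illustrative weekly ROI readings from the data
observations = np.array([1.2, 1.0, 1.4, 0.9, 1.1])
obs_sd = 0.4  # assumed known observation noise

# Conjugate Normal-Normal update: the posterior blends prior and data,
# each weighted by its precision (inverse variance).
prior_prec = 1.0 / prior_sd**2
data_prec = len(observations) / obs_sd**2
post_prec = prior_prec + data_prec
post_mean = (prior_prec * prior_mean + data_prec * observations.mean()) / post_prec
post_sd = post_prec**-0.5

# The posterior mean sits between the data mean and the prior mean:
# the stronger (narrower) the prior, the further the result is pulled
# away from the data.
print(f"posterior ROI: {post_mean:.2f} +/- {post_sd:.2f}")
```

Note how the prior drags the estimate above the data mean of 1.12; tighten `prior_sd` and the pull gets stronger. That pull is exactly the mechanism the rest of this piece is concerned with.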
My view, in summary: priors introduce risks of bias which can undermine model integrity, particularly when the priors are derived from last‑touch, platform‑reported metrics or poorly specified experiments. The core issues are that platform reporting is, by construction, biased toward the last touch and therefore inaccurate, and that experiments are rarely undertaken with the statistical rigour required to make them reliable.
Potential Advantages of Using Priors
Let’s look at the potential advantages of using priors.
- The biggest argument for Bayesian priors is that they reduce uncertainty in an MMM environment: Bayesian MMM doesn’t just give you a single point estimate for the impact of each marketing channel; it provides a probability distribution of possible impacts. This allows you to understand the uncertainty associated with the estimates, leading to more informed decision-making [6].
- Model outputs (e.g. CPA by channel) look “right” and are relatable in the real world: To a large extent, priors pull model outcomes toward values that match the marketing team’s expectations and experience.
- Priors can help when the data available for modelling is limited: Sometimes there isn’t sufficient spend data to accurately estimate coefficients for media channel performance. Spend may be too low, spend in one channel may be drowned out by higher spend in other channels at the same time, or the number of weeks with spend may be very small. In these situations, priors can help by preventing coefficients from collapsing toward zero when the data is weak.
- Priors help when spend variables are highly correlated: Many media campaigns feature multiple channels running at the same time; campaigns often take place over 2–8 week periods, with media channels combined to extend reach or improve delivery efficiency and frequency. This means input variables are highly correlated with each other, which is a problem for regression models: they can’t isolate the effects of correlated channels. Priors help the model find clarity by providing guides to what the channel effects should be.
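The collinearity point can be made concrete with a toy calculation (all figures invented): when one spend series closely tracks another, the coefficient standard errors from (X'X)⁻¹ blow up and the data alone cannot separate the channels.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 52  # one year of weekly observations
tv = rng.uniform(50, 100, n)               # TV spend per week
social = 0.5 * tv + rng.normal(0, 0.5, n)  # social spend tracks TV closely
X = np.column_stack([tv, social])

# OLS coefficient covariance is sigma^2 * (X'X)^-1; with near-duplicate
# columns, (X'X) is close to singular and the variances explode.
sigma = 5.0  # assumed weekly sales noise
cov = sigma**2 * np.linalg.inv(X.T @ X)
se_tv, se_social = np.sqrt(np.diag(cov))
print(f"se(tv)={se_tv:.2f}, se(social)={se_social:.2f}")
```

With plausible per-£ effects in the low single digits, a standard error of the same order as the coefficient itself means the data cannot even fix the sign of the social coefficient. This is the gap priors are brought in to fill, for better or worse.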
Key Risks and Limitations
- Priors are beliefs, not data, so they risk injecting bias into your model: A key issue is that priors are beliefs about media performance, e.g. “our last‑touch Social CPA is always between £10 and £20, so let’s fix that range into our MMM results”. Building such beliefs into the model outcomes is clearly a form of confirmation bias [7].
- Priors often rely on inaccurate last‑touch or platform‑reported ROI data: In practice, most priors come from platform dashboards rather than controlled experiments. The problem is that platforms generally provide last‑touch attribution (sometimes first‑touch, but never full funnel, i.e. accounting for trend, seasonality and marketing mix variables). Last‑touch reporting is therefore inaccurate from the get‑go, as our industry well knows, so why use these results as priors in MMM? Using last‑touch, platform‑sourced priors simply transfers that inaccuracy directly into your marketing mix model, which is ironic given that most MMMs are commissioned to provide an alternative view to last‑touch reporting.
- Priors can pull MMM results toward last‑touch, and away from more statistically reliable findings: One of the big arguments for using priors is that they keep MMM results grounded in reality. But if you are using priors, you are creating an artificial result. If the prior is biased (e.g. towards last‑touch reporting), the model output will also be biased. This can make the MMM appear more aligned with platform reporting, but that alignment is artificial and the results are erroneous.
- Priors can destabilise other coefficients: Let’s go back to high school maths. Equations have to balance on both sides: output (e.g. sales) on the left, inputs (e.g. media spend) on the right. If we fix one or more of the input components on the right, other parts of the equation have to move to accommodate that fix. This means your model will almost certainly inflate some media channels, suppress others (especially upper‑funnel media) and produce inaccurate results.
- Priors reduce transparency: Stakeholders often struggle to understand how much of a coefficient is “data‑driven” versus “prior‑driven,” which can undermine trust.
- Priors can mask genuine performance shifts: If a channel’s true effectiveness changes (e.g. due to creative fatigue, privacy changes or market dynamics), a strong prior can prevent the model from detecting it. Equally, performance might improve dramatically. Say you launch a new social media campaign with a new offer and new creative, and your MMM Social CPA falls from between £10 and £20 to £5: your social campaign has become much more effective. But the priors would likely exclude that result, or at least suggest it is unrealistic.
- Priors do not include adstock or diminishing returns: There is a common belief that priors somehow incorporate adstock or saturation assumptions. They do not. Adstock and diminishing returns are structural modelling choices: they define how media works over time and at different spend levels. They are often subjectively judged, but they can and should be extracted from the MMM dataset itself using a grid‑search loss‑minimisation technique [8]. This can’t be circumvented by experiments, as adstock and diminishing returns are critical parts of advertising evaluation [9]; they can only be determined from long time‑series datasets, not short‑term experiments.
- Priors can come from experiments, but these are notoriously difficult to get right: Priors can be defensible when grounded in rigorous, repeatable experiments. However, this is a technically demanding area. Getting marketing experiments right requires:
- Clean treatment and control regions
- No spillover
- Stable delivery
- Repeatability
- Sufficient spend, time and statistical power
In practice, achieving this level of scientific discipline in marketing experiments is difficult. Geographical regions can be more fluid than they look on a map: we live in a mobile world where people can move from one region to another in very short periods of time [10].
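The grid-search loss-minimisation approach mentioned above can be sketched as follows, on synthetic data where the true decay rate (0.6) is known, so we can check the search recovers it. The spend data, decay grid and noise level are all invented for illustration:

```python
import numpy as np

def adstock(spend, decay):
    """Geometric adstock: each week carries over `decay` of last week's effect."""
    out = np.empty(len(spend))
    carry = 0.0
    for t, s in enumerate(spend):
        carry = s + decay * carry
        out[t] = carry
    return out

# Synthetic ground truth: sales respond to adstocked spend with decay 0.6
rng = np.random.default_rng(0)
spend = rng.uniform(0, 100, 104)  # two years of weekly spend
sales = 3.0 * adstock(spend, 0.6) + rng.normal(0, 20, 104)

# Grid search: choose the decay whose adstocked spend best explains sales
best_decay, best_sse = None, np.inf
for decay in np.arange(0.0, 0.95, 0.05):
    x = adstock(spend, decay)
    beta = (x @ sales) / (x @ x)            # one-variable least squares fit
    sse = np.sum((sales - beta * x) ** 2)   # loss to minimise
    if sse < best_sse:
        best_decay, best_sse = decay, sse

print(f"recovered decay: {best_decay:.2f}")
```

In a real MMM the grid would cover decay and saturation parameters jointly, with the loss evaluated on the full regression, but the principle is the same: the time-series data itself, not a prior, selects the transformation.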
Real‑World Example: Identical Creative, Different Results
There have been cases where a team ran an A/B test using the same creative in both treatment and control. Despite being identical, the experiment produced different lift results for each “creative.” This highlights how algorithmic targeting, user heterogeneity, and data aggregation conspire to confound the magnitude, and even the sign, of ad A/B test results [11]. If identical creatives can produce different results, it shows how fragile and noisy marketing experiments can be, and why they often lack the stability required to serve as robust priors.
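One reason experiments fall short as prior sources is that the power arithmetic is rarely done before the test. A minimal sketch of the standard two-sample sample-size calculation (the sales figures are invented; 1.96 and 0.84 are the usual z-values for 5% two-sided significance and 80% power):

```python
import math

def required_n(baseline_sd, detectable_lift, z_alpha=1.96, z_power=0.84):
    """Observations per arm for a two-sample z-test:
    n = 2 * ((z_alpha + z_power) * sd / lift)^2"""
    return math.ceil(2 * ((z_alpha + z_power) * baseline_sd / detectable_lift) ** 2)

# E.g. weekly sales with sd of 100 units, hoping to detect a 20-unit lift:
n_per_arm = required_n(baseline_sd=100, detectable_lift=20)
print(f"region-weeks needed per arm: {n_per_arm}")
```

That is roughly 400 region-weeks per arm just to detect a lift one-fifth the size of the weekly noise. Many geo tests run with a fraction of that, which is why their lift estimates are too noisy to anchor priors.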
Conclusion
I conclude with these points:
- Avoid using Bayesian priors if you want a high integrity MMM.
- Not using priors may produce more challenging results – but isn’t that what you want? You are building your MMM to get a different perspective on your marketing performance; to identify hidden opportunities and to improve marketing ROIs. Why gloss over that valuable insight?
- This may result in a more challenging conversation with the C-suite – but it’s safer to report a challenging result from a high integrity model than any result from a flawed model.
- If you use priors to shape the answer, then the answer will look better. That doesn’t mean it’s the right answer, insofar as any model can produce the right answer.
- Priors introduce confirmation bias into your model. If you believe the enemy of good modelling is bias, you must question the use of priors in your MMM project. They may be acceptable to some, but not to others.
- If you are going to use priors, you must be very careful about where you source them. There are three main sources – Experience, Platforms and Experiments, but all have flaws and all run the risk of introducing bias into your model.
Implications for Marketers – checklist
- If you are going to use priors you must be certain that they are accurate.
- Be aware that in most marketing contexts the risks of priors being inaccurate are high.
- Don’t rely on last touch data for priors.
- If you use experiments, ensure they are properly specified, and even then, use them with care.
- In cases where you have too little data for a full MMM, be careful about using Bayesian priors to overcome this problem.
- In data-light situations, consider reviewing your data, accepting lower granularity, or looking at a different modelling technique such as regularisation.
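As a sketch of that regularisation route (illustrative data; the penalty strength is arbitrary here and would normally be chosen by cross-validation): ridge regression shrinks unstable coefficients toward zero uniformly, rather than toward anyone’s belief about a specific channel.

```python
import numpy as np

# Two highly correlated spend channels, as in the collinearity discussion above
rng = np.random.default_rng(1)
n = 52
tv = rng.uniform(50, 100, n)
social = 0.5 * tv + rng.normal(0, 0.5, n)
X = np.column_stack([tv, social])
y = 2.0 * tv + 1.0 * social + rng.normal(0, 10, n)  # synthetic sales

# OLS: unpenalised fit, unstable in the near-collinear direction
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Ridge: adds lam * I to X'X, damping the unstable direction
lam = 10.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

print("OLS:  ", np.round(beta_ols, 2))
print("Ridge:", np.round(beta_ridge, 2))
```

Formally, ridge is equivalent to a zero-centred Gaussian prior on all coefficients; the difference from the priors criticised above is that it encodes no channel-specific marketing belief.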
References
1. J. Martin and P. Perez, Frequentists vs Bayesians and Marketing Science, Quantified Nation, July 2024. https://open.substack.com/pub/quantifiednation/p/qn9-frequentists-vs-bayesians-and
2. Hits and Misses of Meridian – A Thorough Deep Dive, Aryma Labs, February 2025. https://arymalabs.substack.com/p/hits-and-misses-of-meridian-a-thorough
3. Duncan Stoddard, Is Bayesian MMM Worth the Faff?, DS Analytics Blog, February 2024. https://dsanalytics.co.uk/thoughts/is-bayesian-mmm-worth-the-faff
4. Two Key Problems That Ail Bayesian MMM, Aryma Labs, April 2024. https://arymalabs.substack.com/p/two-key-problems-that-ails-bayesian
5. Marty Sanchez, What Are Priors in MMM – And Why They’re Difficult to Get Right (But You Need To), Recast Blog, June 2025. https://getrecast.com/what-are-priors-in-mmm-and-why-theyre-difficult-to-get-right-but-you-need-to/
6. Rohit Nair, What Is Bayesian MMM & Why Use It?, Medium, April 2025. https://medium.com/@rohitnair.inft/what-is-bayesian-mmm-why-use-it-942d0193e7eb
7. Cognitive Bias and Data: How Human Psychology Impacts Data Interpretation, Penn LPS Online Features, October 2025. https://lpsonline.sas.upenn.edu/features/cognitive-bias-and-data-how-human-psychology-impacts-data-interpretation
8. Nephade, D., A Predictive Modeling Approach to Multi Objective Marketing Mix Optimization: Balancing Performance, Acquisition, and Efficiency, International Journal on Science and Technology (IJSAT), Volume 16, Issue 1, January–March 2025.
9. Gijsenberg et al., Understanding the Role of Adstock in Advertising Decisions, SSRN, 2011.
10. Tyler Buffington, The Bet Test – Spotting Problems in Bayesian A/B Test Analysis, Eppo Blog, December 2024. https://www.geteppo.com/blog/the-bet-test-problems-in-bayesian-ab-test-analysis
11. Braun and Schwartz, Where A/B Testing Goes Wrong: How Divergent Delivery Affects What Online Experiments Cannot (and Can) Tell You About How Customers Respond to Advertising, Journal of Marketing, August 2024. https://journals.sagepub.com/doi/abs/10.1177/00222429241275886

