Risk-Adjusted ROAS: Why Standard ROAS Misleads Budget Decisions
We analyzed 1,221 MMMs across $1.85B in ad spend. Standard ROAS misranks channels by up to 11x. Here's the formula that fixes it.
Abstract
We analyzed 1,221 Marketing Mix Models across 123 brands, 22 markets, 83 advertising channels, and $1.85B in measured ad spend (2020–2025). The finding: standard ROAS misranks channel performance by up to 11x compared to risk-adjusted measurements. Channels that look efficient under standard ROAS often carry hidden variance that erodes real-world returns. This article introduces the Risk-Adjusted ROAS formula, shows how it reshuffles channel rankings using real data, and provides a step-by-step method to run this analysis on your own portfolio.
Keywords: risk-adjusted ROAS, marketing mix modeling, budget allocation, confidence intervals, incrementality, portfolio management.
Table of Contents
Abstract
Why Standard ROAS Is Insufficient
The Risk-Adjusted ROAS Formula
What the Data Shows: Channel Rankings Before and After Risk Adjustment
A Real Case: How Risk Adjustment Reversed a Budget Decision
Portfolio-Level Implications
How to Run This Analysis With Your Data
Known Limitations
Why Standard ROAS Is Insufficient
Standard ROAS is a point estimate. It takes revenue attributed to a channel, divides by spend, and returns a single ratio. A 4x ROAS means four dollars out for every dollar in.
This is operationally useful. It is also incomplete in a way that leads to material budget misallocation.
The core problem: standard ROAS ignores the variance of that return over time. Two channels can show identical median ROAS while having completely different outcome distributions. One channel delivers 3.5x consistently, week after week. Another channel averages 3.5x but oscillates between 1.2x and 6.8x depending on creative cycle, auction dynamics, and platform algorithm changes.
These are not the same investment. Treating them as equivalent — which is what standard ROAS does — is the marketing equivalent of treating a government bond and a speculative equity as interchangeable because they happen to have the same trailing return.
We see this pattern across our dataset. In the 2026 Media Effectiveness Benchmarks, we showed that some channels with high median ROI drop significantly after risk adjustment because their outcomes vary widely across brands and execution conditions. The rankings change. Budget implications follow.
The Risk-Adjusted ROAS Formula
The formula we use at Cassandra:
Risk-Adjusted ROAS = Median ROAS / (1 + Confidence Interval Width)
Where:
| Component | Definition |
|---|---|
| Median ROAS | The 50th percentile return across model runs or time periods. We use median rather than mean to reduce sensitivity to outliers. |
| Confidence Interval Width | The range between the upper and lower bounds of the 90% credible interval from the Bayesian posterior. Wider interval = more uncertainty = more risk. |
A narrow confidence interval indicates consistent performance across different brands, budgets, creatives, and execution conditions. A wide interval means outcomes vary significantly — which increases planning risk and reduces your ability to forecast revenue reliably.
This is conceptually similar to the Sharpe Ratio in finance (return normalized by volatility), adapted for the structure of MMM outputs where uncertainty is expressed as a Bayesian credible interval rather than a time-series standard deviation.
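As a minimal sketch, the formula can be computed directly from posterior draws (or from weekly ROAS observations if you don't have a posterior). The two sample arrays below are illustrative, not real channel data:

```python
import numpy as np

def risk_adjusted_roas(roas_samples):
    """Risk-Adjusted ROAS = median ROAS / (1 + 90% interval width)."""
    samples = np.asarray(roas_samples, dtype=float)
    median = np.median(samples)
    lo, hi = np.percentile(samples, [5, 95])  # 90% credible/empirical interval
    return median / (1.0 + (hi - lo))

# Two illustrative channels with identical medians but different spread:
steady = [3.3, 3.4, 3.5, 3.5, 3.6, 3.7]
volatile = [1.2, 2.0, 3.5, 3.5, 5.0, 6.8]
```

Both channels report a 3.5x median, but `risk_adjusted_roas(steady)` is far higher than `risk_adjusted_roas(volatile)`, which is exactly the distinction standard ROAS erases.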
What the Data Shows: Channel Rankings Before and After Risk Adjustment
From our analysis of 1,221 MMMs, here are the results for the most commonly measured channels:
Before Risk Adjustment (Median ROAS Only)
| Channel | Clients Measured | Total Spend Analyzed | Median ROAS |
|---|---|---|---|
| Google | 114 | $630M | 4.37x |
| Meta | 108 | $323M | 2.94x |
| Amazon | 6 | $70M | 3.12x |
| TikTok | 56 | $38M | 2.68x |
| TV | 31 | $52M | 2.41x |
Ranking by median ROAS: Google > Amazon > Meta > TikTok > TV.
A marketer looking at this table alone concludes: maximize Google, then Amazon, then Meta. Standard playbook.
After Risk Adjustment
| Channel | Median ROAS | CI Width | Risk-Adjusted ROAS | Rank Change |
|---|---|---|---|---|
| Google | 4.37x | 1.38 | 1.84 | — |
| Meta | 2.94x | 0.78 | 1.65 | ↑ |
| Amazon | 3.12x | 2.91 | 0.80 | ↓↓ |
| TikTok | 2.68x | 1.85 | 0.94 | ↓ |
| TV | 2.41x | 1.52 | 0.96 | ↑ |
Risk-adjusted ranking: Google > Meta > TV > TikTok > Amazon.
Three observations from this data:
1. Meta improves relative to Google. Meta's median ROAS is 33% lower than Google's, but its confidence interval is 43% narrower. The outcome is more predictable. On a risk-adjusted basis, the gap between them shrinks from 1.49x to 1.12x. Meta delivers a more reliable return per dollar than the raw ROAS suggests.

2. Amazon drops from 2nd to last. Amazon's 3.12x median ROAS looks strong, but a confidence interval width of 2.91 means the actual ROAS for any given brand ranges from very high to near zero. The channel is highly execution-dependent and varies enormously across business contexts. A Risk-Adjusted ROAS below 1.0 means that, accounting for uncertainty, the expected return is not significantly better than breakeven.

3. TV outperforms TikTok after adjustment. TV's median ROAS (2.41x) is lower than TikTok's (2.68x), but TV's outcomes are more consistent. This doesn't mean TikTok is a bad channel; it means TikTok requires stronger creative testing, measurement discipline, and operational maturity to reliably extract value. We categorize this distinction using a portfolio framework: Foundation, Growth, and Innovation tiers.
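The re-ranking above can be reproduced in a few lines using the figures reported in the two tables (these are the medians and CI widths from this article, not a general dataset):

```python
# Median ROAS and 90% CI width per channel, as reported in the tables above.
channels = {
    "Google": (4.37, 1.38),
    "Meta":   (2.94, 0.78),
    "Amazon": (3.12, 2.91),
    "TikTok": (2.68, 1.85),
    "TV":     (2.41, 1.52),
}

# Risk-Adjusted ROAS = median / (1 + CI width)
ra = {name: round(median / (1 + width), 2)
      for name, (median, width) in channels.items()}

by_median = sorted(channels, key=lambda c: channels[c][0], reverse=True)
by_risk_adjusted = sorted(ra, key=ra.get, reverse=True)
# by_median:        Google > Amazon > Meta > TikTok > TV
# by_risk_adjusted: Google > Meta > TV > TikTok > Amazon
```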
A Real Case: How Risk Adjustment Reversed a Budget Decision
In Why Attribution Misleads Budget Decisions, we documented a case where we compared Google Search and Google Video for a single brand using both attribution and MMM measurement.
The attribution data showed:
Google Search attributed conversions: 85,897 (70.6% of total)
Google Video attributed conversions: 672 (0.55% of total)
Search CPA (attribution): $5.50
Video CPA (attribution): $1,404
Under attribution, the decision is obvious: Search is 255x more efficient. Shift all budget to Search.
The MMM told a different story:
Google Search incremental conversions: 4,721 (3.86% of total)
Google Video incremental conversions: 7,866 (6.8% of total)
Search CPA (MMM): $452
Video CPA (MMM): $147
The difference: Search attribution overstates contribution by 18x. Video attribution understates contribution by 11x.
We ran geo-experiments to validate. The experimental CPAs confirmed the MMM estimates:
Google Video experimental CPA: $110–$150 (90% CI)
Google Search experimental CPA: $270–$632 (90% CI)
Note the confidence intervals. Video's CI is $40 wide. Search's CI is $362 wide. Even setting aside the median difference, Search carries 9x more uncertainty in its cost-per-conversion estimate.
Risk-Adjusted CPA (lower is better). Because CPA is a cost metric, the uncertainty penalty multiplies the median rather than dividing it, and the CI width is expressed relative to the median so the penalty is unitless:
| Channel | Median CPA | CI Width | Risk-Adj CPA |
|---|---|---|---|
| Video | $130 | $40 (0.31 relative) | $130 × (1 + 0.31) ≈ $170 |
| Search | $451 | $362 (0.80 relative) | $451 × (1 + 0.80) ≈ $812 |
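A sketch of this cost-side variant, using the experimental CIs above (`risk_adjusted_cpa` is an illustrative helper, not a product API):

```python
def risk_adjusted_cpa(ci_low, ci_high):
    """Cost-side variant: lower is better, so the penalty multiplies.

    The median is taken as the CI midpoint, and the CI width is divided
    by the median so the penalty is unitless.
    """
    median = (ci_low + ci_high) / 2
    relative_width = (ci_high - ci_low) / median
    return median * (1 + relative_width)

video = risk_adjusted_cpa(110, 150)   # 170.0
search = risk_adjusted_cpa(270, 632)  # 813.0 (the table rounds 0.803 to 0.80, giving $812)
```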
The brand shifted 25% of budget from Search to Video. Result: +18% incremental conversions from the same total budget.
This is what risk-adjusted measurement makes visible. Without it, the attribution data would have continued directing budget toward the wrong channel.
Portfolio-Level Implications
Individual channel risk-adjustment is step one. The second-order effect is at the portfolio level.
Channels do not operate independently. When Meta's performance drops due to an iOS privacy change, Google Search often picks up the demand. When TV drives awareness, digital channels see improved conversion rates in subsequent weeks. These correlations — positive and negative — determine portfolio-level risk.
The concept is identical to Modern Portfolio Theory: two assets with low correlation provide diversification that reduces total portfolio variance below the weighted average of individual variances.
For marketing, this means:
A portfolio of 3 channels with moderate individual risk but low correlation can have lower total risk than a portfolio concentrated in 1 channel with low individual risk.
This is the foundation of the Marketing Efficient Frontier — finding the combination of channels that maximizes expected return for a given level of portfolio risk, or equivalently, minimizes risk for a given return target.
Risk-Adjusted ROAS is the input that makes this optimization possible. Without it, the frontier is calculated on point estimates and produces suboptimal allocations.
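A toy numerical check of the diversification claim, with hypothetical volatilities and correlations (the numbers here are illustrative, not from our dataset):

```python
import numpy as np

# Hypothetical portfolio: three channels with moderate individual risk
# (ROAS volatility 1.0) but low pairwise correlation (0.1), equal weights.
vols = np.array([1.0, 1.0, 1.0])
corr = np.array([[1.0, 0.1, 0.1],
                 [0.1, 1.0, 0.1],
                 [0.1, 0.1, 1.0]])
weights = np.array([1 / 3, 1 / 3, 1 / 3])

cov = corr * np.outer(vols, vols)                        # covariance matrix
portfolio_vol = float(np.sqrt(weights @ cov @ weights))  # ≈ 0.63

# Concentrating everything in one channel with lower individual risk
# still carries more total risk than the diversified mix:
single_channel_vol = 0.7
```

Despite each channel being riskier on its own (1.0 vs 0.7), the low-correlation portfolio's total volatility comes out below the single channel's, which is the Modern Portfolio Theory effect described above.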
How to Run This Analysis With Your Data
Step 1: Collect granular performance data
You need at least 12 months of weekly data per channel. For each channel-week, record:
Spend
Incremental revenue (from MMM or incrementality tests, not last-click attribution)
Calculated ROAS
If you only have attribution data, the analysis still has value — but be aware that the ROAS inputs themselves carry the attribution biases we documented in our attribution case study.
Step 2: Calculate three metrics per channel
| Metric | Formula | Tool |
|---|---|---|
| Median ROAS | 50th percentile of weekly ROAS values | `=MEDIAN()` in Sheets |
| CI Width | P95 ROAS − P5 ROAS (90% interval) | `=PERCENTILE(range, 0.95) - PERCENTILE(range, 0.05)` |
| Risk-Adjusted ROAS | Median / (1 + CI Width) | Direct calculation |
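Steps 1 and 2 can also be run with pandas instead of Sheets. The `weekly` frame below is illustrative dummy data; in practice it would hold 12+ months of weekly channel-level spend and MMM-measured incremental revenue:

```python
import pandas as pd

# Illustrative dummy data standing in for real weekly channel records.
weekly = pd.DataFrame({
    "channel": ["Search"] * 4 + ["Video"] * 4,
    "spend": [100] * 8,
    "incremental_revenue": [340, 360, 350, 350, 120, 200, 500, 680],
})
weekly["roas"] = weekly["incremental_revenue"] / weekly["spend"]

# The three Step 2 metrics, per channel.
summary = weekly.groupby("channel")["roas"].agg(
    median_roas="median",
    ci_width=lambda r: r.quantile(0.95) - r.quantile(0.05),
)
summary["risk_adjusted_roas"] = summary["median_roas"] / (1 + summary["ci_width"])
# Both channels have a 3.5x median ROAS here, but Search's narrow interval
# gives it a much higher Risk-Adjusted ROAS than Video.
```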
Step 3: Compute maximum drawdown
For each channel, find the largest peak-to-trough decline in ROAS over any consecutive period. A channel with max drawdown >50% carries significant tail risk. If you are allocating >25% of budget to such a channel, you have a concentration risk problem.
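A minimal drawdown sketch (the `weekly_roas` series is illustrative):

```python
def max_drawdown(roas_series):
    """Largest peak-to-trough decline in ROAS, as a fraction of the peak."""
    peak = roas_series[0]
    worst = 0.0
    for r in roas_series:
        peak = max(peak, r)
        worst = max(worst, (peak - r) / peak)
    return worst

# Illustrative weekly ROAS: peak 4.4x, trough 2.0x, a drawdown of about 55%,
# i.e. above the 50% tail-risk threshold described in this step.
weekly_roas = [4.0, 4.4, 3.1, 2.0, 2.8, 3.9]
```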
Step 4: Plot and re-rank
Plot channels on a scatterplot: X-axis = CI Width (risk), Y-axis = Median ROAS (return). Channels in the upper-left quadrant (high return, low risk) are your foundation. Channels in the lower-right are liabilities.
Step 5: Reallocate
Use risk-adjusted rankings to inform budget allocation:
Channels with high Risk-Adjusted ROAS get priority for incremental budget
Channels with low Risk-Adjusted ROAS need non-ROAS justification (brand building, audience development)
Channels with deteriorating Risk-Adjusted ROAS over time signal trouble before the raw number shows it
Known Limitations
Sample composition. Our dataset of 1,221 MMMs skews toward mid-to-large brands with $500K+ monthly ad spend. Results may differ for smaller budgets where channel dynamics are less stable.

Confidence interval as risk proxy. Bayesian credible intervals capture model uncertainty but do not fully account for execution risk (creative quality, team capability, agency management). Two brands running the same channel can have very different outcome distributions based on operational factors not captured in the model.

Temporal stability. Channel risk profiles shift over time. The CI widths reported here reflect 2020–2025 aggregate data. A channel's risk profile in 2026 may be narrower or wider depending on platform changes, privacy regulations, and competitive dynamics.

Not investment advice. This analysis applies marketing science methodology to advertising budget allocation. It should be validated with your own measurement program, including controlled experiments specific to your business context.

We built Cassandra to make this analysis native — every channel estimate includes a full uncertainty distribution, not just a point estimate. If you want to see Risk-Adjusted ROAS for your own channels, book a call.