Which Campaign to Scale When MMM Stops at Channel Level

Your MMM gives channel-level incrementality. Learn how to distribute it to campaign level without holdout tests.


Channel-level incrementality is identifiable from data. Campaign-level incrementality can only be distributed from it, under a transparent set of assumptions.

Key Takeaways

  • MMMs measure incrementality at channel level, not campaign level — statistical power, multicollinearity and campaign churn make campaign-level causal measurement impossible with the same data.

  • You don't need holdout tests for every campaign. Rule B distributes the channel-level incremental revenue to campaigns using spend and attributed ROAS from your platform reports — data you already have.

  • Attributed ROAS carries the relative-performance signal the MMM is blind to. Used within a channel between campaigns of the same type, it reflects creative and audience efficiency differences that spend alone cannot capture.

  • Apply a spend floor before distributing: campaigns below €5K/month or 5% of channel spend should be held aside, not included in the distribution.

  • The 8-week trailing iROAS, not the weekly snapshot, is what drives scale/hold/retire decisions.

The Problem: Channel-Level Measurement, Campaign-Level Decisions

Marketing Mix Models recover incremental revenue at the channel level. Campaign decisions are made at the campaign level. The gap between the two is where most marketing teams operate every week — and where measurement loses contact with the spending decisions it is supposed to inform.

Two prior articles in this series set the constraints for the work below:

  • The Incrementality Multiplier Is Broken — how to model the attribution–incrementality relationship in a more sophisticated way.

  • Marketing Attribution Software Is Lying to You — platform-attributed revenue systematically over-reports incremental impact, with the bias varying by channel and spend level. If you plan budgets on attributed ROAS alone, you run a high risk of making decisions that hurt your growth instead of helping it. A simple solution to this problem exists.

The decision a Head of Performance Marketing has to make on a typical week is at the campaign level: which campaigns inside Meta Prospecting to scale, which to hold, which to retire; which Google Search Brand campaigns are protecting revenue, which are cannibalising organic. Marketing Mix Models do not produce campaign-level coefficients. Attribution platforms produce campaign-level numbers that over-report. This article describes a transparent rule for distributing channel-level incremental revenue down to the campaign — week by week — and the assumption set that has to hold for the rule to be defensible.

Why Campaign-Level Incrementality Cannot Be Measured Causally

A Marketing Mix Model can identify incremental contribution at the channel level. It cannot identify it at the campaign level inside a channel. There are three reasons, and they compound — each one alone is enough to block identification, and in practice all three apply at once.

1. Statistical power: campaign spend is too small to move the needle

An MMM detects incrementality from variation in spend. To call a contribution "real", that contribution must be large enough to rise above day-to-day revenue noise — the model's minimum detectable effect.

At the channel level, this works. A €120K/month channel can move ±20% week to week, which generates enough revenue variation to be statistically detectable.

At the campaign level, a single campaign typically holds 5–15% of channel spend. A 10–20% week-on-week swing on €15K of spend is €1.5–3K of variation — far below the daily revenue noise floor of most DTC businesses. The MMM does not see a signal, because there is no detectable signal to see. The minimum detectable effect at campaign level is structurally larger than the variance the campaign actually produces.
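To make the arithmetic concrete, here is a minimal back-of-envelope check in Python; the revenue noise floor is an assumed, hypothetical figure, not a measured one:

```python
# Back-of-envelope signal-to-noise check for the figures above.
# The revenue noise floor is an assumed, hypothetical value.
channel_spend = 120_000    # EUR/month
campaign_spend = 15_000    # EUR/month, ~12% of the channel
swing = 0.20               # 20% spend variation

channel_signal = channel_spend * swing     # EUR 24,000 of spend movement
campaign_signal = campaign_spend * swing   # EUR 3,000 of spend movement

revenue_noise = 20_000     # assumed noise floor for a mid-size DTC brand

print(f"channel:  {channel_signal / revenue_noise:.2f}x the noise floor")   # 1.20x
print(f"campaign: {campaign_signal / revenue_noise:.2f}x the noise floor")  # 0.15x
```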

2. Multicollinearity: campaign spend signals move together

An MMM separates the impact of one driver from another by exploiting differences in their movement over time. If two drivers move together, the model cannot tell them apart — multicollinearity.

A typical account has 3–5 campaign types per channel and 30–60 individual campaigns. Going from campaign-type variables to individual-campaign variables means the model is asked to disentangle roughly 10–15 times more drivers, most of which respond to the same external triggers: the same Monday-morning budget reallocation, the same seasonal flight, the same BFCM ramp, the same offer launch. Their spend time series move in step. The model has no statistical lever to attribute revenue to "BFCM Creative Push" rather than "Always-On Bestseller" when both campaigns ramped up on the same day for the same offer window.

The number of campaigns the model is asked to separate compounds the problem: more variables, more correlated movement, less identifiability. With enough campaigns the regression simply distributes credit arbitrarily across collinear inputs.
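A minimal simulation illustrates the point. The spend levels, flighting pattern and noise scales below are hypothetical:

```python
import numpy as np

# Two campaigns that ramp on the same weeks (shared flighting), each with a
# small amount of independent noise. Both truly contribute at iROAS 1.3.
rng = np.random.default_rng(0)
weeks = 52
ramp = np.where(np.arange(weeks) % 13 < 4, 2.0, 1.0)  # shared flight pattern

spend_a = 10_000 * ramp * (1 + 0.02 * rng.standard_normal(weeks))
spend_b = 12_000 * ramp * (1 + 0.02 * rng.standard_normal(weeks))
revenue = 1.3 * (spend_a + spend_b) + 5_000 * rng.standard_normal(weeks)

X = np.column_stack([spend_a, spend_b])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)
print(np.corrcoef(spend_a, spend_b)[0, 1])  # ~0.99: the columns move together
print(coef)  # per-campaign coefficients are unstable and can land far from 1.3;
             # only their spend-weighted combination is pinned down by the data
```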

3. Consistency over time: campaigns don't live long enough to be modelled

An MMM needs persistent presence over many weeks to estimate a stable coefficient — a campaign has to actually run long enough to be observed against many different spend levels and many different baseline conditions.

In a real performance team, campaigns are launched, paused, scaled and shut down on rolling 2–6 week cycles. A campaign that ran for three weeks and was retired never accumulates enough observations to be modelled. And even where a coefficient could be fitted on a campaign that has now been turned off, the coefficient describes the past version of a campaign that no longer exists in the account. Using that coefficient to predict the contribution of the next iteration is not a safe assumption — the next creative, audience cut and offer combination may behave very differently.

A model fit on campaigns that come and go produces estimates that are simultaneously under-identified (not enough observations) and not predictive (the unit being measured no longer exists by the time you'd act on the estimate).

What this means

Statistical power, multicollinearity and the lack of consistency over time are independent reasons. Together they mean campaign-level incremental contribution cannot be estimated as a coefficient from the same observational data the channel-level MMM is fit on. Any campaign-level incremental figure has to come from a different construction: distributing the channel-level result downstream under explicit assumptions, with the inputs that are available at campaign level (spend and attributed ROAS) used as proxies for the within-channel relative differences the MMM cannot resolve.

The rule below is that construction.

First Principles: What Affects Campaign Efficiency?

Campaign efficiency, inside a single channel, depends on at least four things:

  1. Spend — the diminishing-returns curve we already characterised at channel level.

  2. Creative — the ad itself; some creatives convert better than others at the same spend.

  3. Seasonality — when the campaign runs.

  4. Offer — the promotion or product mix promoted.

An MMM, by construction, only sees spend (and occasionally crude proxies for seasonality and offer through dummy variables). It is structurally blind to the creative and offer differences within a channel.

But those differences exist, and the platform sees them. If two campaigns in the same channel have similar spend but very different attributed ROAS, the difference is unlikely to be the channel mechanic — it has to be the creative, the offer, or the audience cut. Used within a channel, between campaigns, attributed ROAS carries the signal the MMM is blind to.

This gives us the central move:

Spend carries the diminishing-returns shape (from the MMM).
Attributed ROAS, used within a channel and between campaigns of the same type, carries the relative-performance shape the MMM cannot see.
Combine them, with an explicit weighting rule, to distribute channel-level incrementality down to the campaign.

Both ingredients are imperfect. The combination is honest if and only if you state the assumptions out loud.

The Assumptions, Named

Before any formula, the assumption set the reader must accept (or reject) before using the rule:

| # | Assumption | What it means in plain English | What would prove it wrong |
|---|------------|--------------------------------|---------------------------|
| 1 | Within a single channel-and-campaign-type, the diminishing-returns curve is approximately the same shape for every campaign. | Two prospecting campaigns on Meta saturate against the same audience pool with the same approximate elasticity. | A campaign-level holdout test in which a small campaign inside the channel responds to scaling with very different elasticity than the channel average. |
| 2 | Between campaigns of the same type, attributed ROAS carries genuine relative-performance information — i.e., a campaign with 2x the attributed ROAS of its peers really is more efficient at driving genuinely incremental conversions, not only audience overlap. | The creative that gets credit is also genuinely doing the work, not just sitting in front of the highest-intent users. | A switchback test where the high-attributed-ROAS campaign is paused for a week and channel-level revenue holds. If pausing the "best" campaign doesn't move channel-level revenue, the attribution was just selection. |
| 3 | At very low spend levels, attribution becomes mechanically noisy and stops carrying useful information. | A €500/month campaign that converts a single high-AOV customer reports 30x ROAS. That is not a signal — it's a sample of one. | A €500/month campaign "winning" the distribution is itself the failure mode to guard against — not by tuning the rule, but by setting a minimum spend threshold (we recommend 5% of channel spend or €5K/month, whichever is higher). |
| 4 | The combined rule is a decision aid, not a measurement. | The number it returns is "the most defensible distribution we can build from the data we have," not "the causal incremental ROAS of campaign X." | Nothing — assumption 4 holds by construction. Believing the output is causal is the failure. |

Assumption 4 is the load-bearing one: the rule produces a distribution of channel-level incremental revenue under a stated assumption set, not a causal measurement. Reporting the rule's output without the assumption set strips it of the property that makes it usable.

Why assumption 3 is load-bearing. At low spend, attributed ROAS is dominated by single-conversion noise — a phantom 7.2x at €800/mo is one or two lucky orders, not a signal. Above the €5K spend floor the within-channel ranking becomes stable enough to use.

Two Methodologies for Distributing Incrementality

The distribution rule is applied at every time period t (typically a week, optionally a day). Both the channel-level incremental revenue and the campaign-level inputs are time-varying, so the within-channel split must be recomputed each period — a single static distribution computed once over a long window will misallocate during seasonality shifts, creative-fatigue cycles and offer windows.

Rule A — Distribute by attributed share

The simplest rule splits the channel-level incremental revenue at time t across campaigns in proportion to each campaign's share of attributed revenue at the same t.

$$\mathrm{Incremental}_{i,t} = \mathrm{ChannelIncremental}_{t} \times \frac{\mathrm{AttributedRevenue}_{i,t}}{\sum_{j} \mathrm{AttributedRevenue}_{j,t}}$$

Where:

  • i indexes the campaign inside the channel.

  • t indexes the time period (week or day).

  • The denominator sums over all campaigns j active inside the channel at t.

What Rule A returns: a per-campaign, per-period incremental revenue figure that always reconciles to the channel-level incremental revenue at the same period. The output is a time series of campaign-level incremental contributions, summing exactly to the channel-level incremental at every t.

Where Rule A breaks down: at low spend volumes attributed revenue is dominated by single-conversion noise. A €500-spend campaign that gets credit for one €1,500 high-AOV order reports a 3.0x attributed ROAS for that period. By share-of-attributed, that campaign claims a disproportionate slice of channel-level incremental revenue from a sample of one. The same mechanism systematically overweights long-tail campaigns and underweights scaled campaigns in a way driven by the arithmetic of small numbers, not by creative effectiveness.
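As a minimal sketch (the function name and structure are mine, not a library API), Rule A is a few lines. Running it on the Step 3 figures from the worked example below reproduces that table up to rounding:

```python
def rule_a(channel_incremental: float,
           attributed_revenue: dict[str, float]) -> dict[str, float]:
    """Rule A: split channel-level incremental by attributed-revenue share."""
    total = sum(attributed_revenue.values())
    return {name: channel_incremental * rev / total
            for name, rev in attributed_revenue.items()}

# Figures from Step 3 of the worked example below
attributed = {
    "BFCM Creative Push": 220_800,
    "Always-On Bestseller": 86_400,
    "Lookalike Test": 108_800,
    "Long-Tail Interests": 16_800,
}
print(rule_a(156_000, attributed))
# ~{79,586, 31,142, 39,216, 6,056}; the article's table rounds shares to one
# decimal first, so its figures differ by a few tens of euros
```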

Rule B — Distribute by spend × attributed-ROAS, both compressed

Rule B replaces the single-signal share with a two-signal weight that compresses both inputs.

$$\mathrm{Weight}_{i,t} = \mathrm{Spend}_{i,t}^{\beta} \times \mathrm{AttributedROAS}_{i,t}^{\beta}$$

$$\mathrm{Incremental}_{i,t} = \mathrm{ChannelIncremental}_{t} \times \frac{\mathrm{Weight}_{i,t}}{\sum_{j} \mathrm{Weight}_{j,t}}$$

Recommended default: β = 0.3.

What each term contributes, in plain language:

  • $\mathrm{Spend}_{i,t}^{\beta}$ — larger campaigns absorb more of the channel-level incremental at period t, but the exponent β compresses the relationship so that twice the spend does not earn twice the incremental. This reflects the channel-level diminishing-returns shape established in the prequel article.

  • $\mathrm{AttributedROAS}_{i,t}^{\beta}$ — campaigns that convert efficiently inside the channel earn a larger share, but the same exponent compresses the relationship so that a 2x attributed ROAS lead does not earn 2x more incremental. Attributed ROAS is inflated, and the same compression dampens the over-reporting.

  • Using the same β on both terms enforces a balance between volume and efficiency: neither signal dominates, both are dampened by the same factor.

What Rule B returns: a per-campaign, per-period incremental revenue figure that, like Rule A, reconciles to the channel-level incremental at t. The difference shows up in which campaign is credited: Rule B rewards mid-spend campaigns with strong attributed ROAS (where signal-to-noise is high and the campaign is operationally meaningful) and dampens both very small high-AOV outliers and very large campaigns whose share would otherwise be carried entirely by spend volume.

The recommended default β = 0.3 is the cross-validated median across ~200 channel-level decompositions in the Cassandra book. β is calibrated against channel-level MMM decompositions, not against causal holdout results — it is a defensible default, not a measured constant. A higher β (e.g. 0.5) gives more weight to the existing spread between campaigns; a lower β (e.g. 0.15) pulls the distribution toward equal-share. Neither is uniquely correct; β = 0.3 is the recommended starting point for accounts that have not yet calibrated their own.
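A minimal sketch of Rule B, including the spend floor from assumption 3; the function signature and parameter names are mine, not a library API:

```python
def rule_b(channel_incremental: float,
           campaigns: dict[str, tuple[float, float]],
           beta: float = 0.3,
           floor_abs: float = 5_000,
           floor_share: float = 0.05) -> tuple[dict[str, float], float]:
    """Rule B: weight = spend**beta * attributed_roas**beta.

    `campaigns` maps name -> (monthly spend, attributed ROAS). Campaigns
    below the spend floor have their slice held aside, never redistributed.
    """
    channel_spend = sum(spend for spend, _ in campaigns.values())
    floor = max(floor_abs, floor_share * channel_spend)

    weights = {name: spend**beta * roas**beta
               for name, (spend, roas) in campaigns.items()}
    total_weight = sum(weights.values())

    distributed, held = {}, 0.0
    for name, (spend, _) in campaigns.items():
        share = channel_incremental * weights[name] / total_weight
        if spend < floor:
            held += share   # reported separately, never redistributed
        else:
            distributed[name] = share
    return distributed, held
```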

A Worked Example — Start at the Channel Type

To make this concrete, take a hypothetical DTC brand running Meta Prospecting at €120K/month. Before going anywhere near campaign-level numbers, it is worth pinning down what each measurement system says at the channel-type level for this same period (t = last month).

Step 1 — What each measurement system reports at the channel level

| Measurement system | Reported revenue from Meta Prospecting | Reported ROAS | What it actually measures |
|---|---|---|---|
| Platform attribution (Meta-reported) | €432,800 | 3.61x | Conversions credited to a Meta touchpoint by the platform's attribution window — over-reported because the same user often sees the same conversion through multiple platforms. |
| Marketing Mix Model (channel-level) | €156,000 | 1.30x | The incremental revenue Meta Prospecting genuinely caused, after netting out baseline demand, seasonality and the other channels — calibrated against geo-experiments where available. |
| Gap (over-reporting ratio) | €276,800 | 2.77× | Platform attribution at this scale claims roughly 2.77× the revenue the MMM identifies as incremental. The €276,800 difference is the over-reporting we documented in Marketing Attribution Software Is Lying to You. |

The MMM gives one defensible number for the whole channel: €156,000 incremental. Platform attribution gives a per-campaign breakdown, but at an inflated level. Both are correct at the granularity each was designed for. Neither, on its own, answers the operational question.

Step 2 — The question for the rest of this section

We have €156,000 of incremental revenue earned by Meta Prospecting at t = last month. The channel contains four campaigns:

| Campaign | Spend (€/mo) | Attributed revenue (€) | Attributed ROAS | Notes |
|---|---|---|---|---|
| BFCM Creative Push | 48,000 | 220,800 | 4.60x | High-creative-rotation hero campaign |
| Always-On Bestseller | 36,000 | 86,400 | 2.40x | Steady-state, mature creative |
| Lookalike Test | 32,000 | 108,800 | 3.40x | Mid-spend test of new audience |
| Long-Tail Interests | 4,000 | 16,800 | 4.20x | Small spend, long-tail audiences |
| Channel total | 120,000 | 432,800 | 3.61x | |


The question is: how do we distribute the €156K of channel-level incremental across these four campaigns? The two rules below answer it under different assumption sets.

Step 3 — Rule A applied: distribute by attributed share

| Campaign | Attributed share of channel | Distributed incremental (€) | Distributed iROAS |
|---|---|---|---|
| BFCM Creative Push | 51.0% | 79,560 | 1.66x |
| Always-On Bestseller | 20.0% | 31,200 | 0.87x |
| Lookalike Test | 25.1% | 39,156 | 1.22x |
| Long-Tail Interests | 3.9% | 6,084 | 1.52x |
| Total | 100% | 156,000 | |


Rule A reports Long-Tail Interests at 1.52x distributed iROAS — better than the channel average. The signal is mechanically driven by attribution noise: €4K of spend earning a 4.20x attributed ROAS is one or two lucky high-AOV orders away from a phantom result.

Step 4 — Rule B applied: distribute by spend × attributed-ROAS, both compressed at β = 0.3

| Campaign | Spend^0.3 | Attr ROAS^0.3 | Weight | Nominal share | Distributed incremental (€) | Distributed iROAS |
|---|---|---|---|---|---|---|
| BFCM Creative Push | 25.4 | 1.58 | 40.1 | 33.0% | 51,558 | 1.07x |
| Always-On Bestseller | 23.3 | 1.30 | 30.3 | 25.0% | 38,939 | 1.08x |
| Lookalike Test | 22.5 | 1.44 | 32.5 | 26.7% | 41,712 | 1.30x |
| Long-Tail Interests | 12.0 | 1.54 | 18.5 | 15.3% | (held — fails spend floor) | |
| Total distributed (3 campaigns) | | | | | 132,209 | |
| Held (Long-Tail nominal share) | | | | | 23,791 | |
| Channel total | | | | | 156,000 | |


Implementation note: the columns reconcile. Distributed €132,209 plus held €23,791 equals the €156,000 channel-level incremental. Any Rule B implementation that produces distributed values exceeding the channel total is back-deriving from per-campaign iROAS instead of computing weights first and distributing by weight.

Long-Tail Interests at €4K/month falls below both the absolute floor (€5K) and the relative floor (5% of €120K = €6K) — assumption 3 of the rule applies. Its share is held aside rather than redistributed; redistributing it would silently inflate the other three campaigns' iROAS by absorbing a slice we explicitly said we don't trust.
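Feeding the Step 2 inputs to the rule_b sketch above reproduces this table up to rounding:

```python
campaigns = {
    "BFCM Creative Push": (48_000, 4.6),
    "Always-On Bestseller": (36_000, 2.4),
    "Lookalike Test": (32_000, 3.4),
    "Long-Tail Interests": (4_000, 4.2),
}
distributed, held = rule_b(156_000, campaigns)
print(distributed)  # ~{BFCM: 51.6K, Always-On: 38.9K, Lookalike: 41.7K}
print(held)         # ~23.8K, Long-Tail's nominal share, held aside
# distributed iROAS = distributed / spend -> ~1.07x, 1.08x, 1.30x, as above
```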

Step 5 — Rule A vs Rule B at a glance

The same four campaigns, the same €156,000 channel-level incremental, two different distributions:

| Campaign | Spend | Attr. ROAS | Rule A iROAS | Rule B iROAS | Difference |
|---|---|---|---|---|---|
| BFCM Creative Push | €48K | 4.60x | 1.66x | 1.07x | Rule A overstates the hero by 0.59x — driven by its high attributed share. |
| Always-On Bestseller | €36K | 2.40x | 0.87x | 1.08x | Rule A understates the workhorse by 0.21x — driven by its lower attributed ROAS. |
| Lookalike Test | €32K | 3.40x | 1.22x | 1.30x | Both rules agree directionally. Rule B credits efficiency more cleanly. |
| Long-Tail Interests | €4K | 4.20x | 1.52x | held | Rule A reports a phantom winner. Rule B's spend floor protects against it. |

And the rules themselves, side by side:


| | Rule A — Attributed Share | Rule B — Spend × Attr-ROAS, compressed at β = 0.3 |
|---|---|---|
| What it does | Splits the channel-level incremental in proportion to each campaign's share of attributed revenue inside the channel. | Splits the channel-level incremental by a weight that combines spend volume and attributed efficiency, with both signals compressed by the same exponent so neither dominates. |
| Inputs needed per period | Attributed revenue per campaign. | Spend and attributed ROAS per campaign. |
| Who wins (the bias) | Campaigns with high attributed ROAS at low spend — the small-numbers bias rewards single-conversion noise. | Mid-spend campaigns with strong attributed ROAS — efficient operators with enough volume to be statistically meaningful. |
| Failure mode | A €500/mo campaign with one lucky high-AOV order claims an outsized slice of channel-level incremental. | None when the spend-floor rule is enforced; without the floor, the same small-numbers bias would still apply (smaller, but present). |
| Use when | Quick first-pass cut, low-stakes reporting, accounts with little attribution-noise asymmetry. | Production weekly reallocation calls; whenever the within-channel iROAS ranking will drive a scale/hold decision. |
| Directional output for this account | Says BFCM is your hero campaign and Long-Tail is healthy. Both are misleading. | Says Lookalike Test is the campaign to scale and Long-Tail is too small to trust. Both are defensible. |

Step 6 — The action

| Campaign | Distributed incremental (€) | Distributed iROAS | Action |
|---|---|---|---|
| BFCM Creative Push | 51,558 | 1.07x | Hold — performing slightly below channel average; watch for creative fatigue. |
| Always-On Bestseller | 38,939 | 1.08x | Hold — performing at channel average. |
| Lookalike Test | 41,712 | 1.30x | Scale — outperforming its in-channel peers by roughly 21%. |
| Long-Tail Interests | not distributed | n/a | Hold spend, re-evaluate at 8 weeks. |

Notice what is not in this output: a kill recommendation. Inside a same-type channel, campaigns rarely earn a "kill" call from this rule alone. Kills should come from creative-fatigue signals, audience exhaustion, or cohort-quality drops — not from a distribution rule.

The Monitoring Table

This is the table to put on the wall. Build it, save it daily or weekly, and let it accumulate. The eight-week trailing average is what makes the rule useful operationally — single-week snapshots are too noisy.

| Channel | Campaign | Week | Spend | Attributed ROAS | Distributed Incremental | Distributed iROAS | 8-week trailing iROAS | Spend-floor pass? |
|---|---|---|---|---|---|---|---|---|
| Meta Prospecting | BFCM Creative Push | W17 | 12,200 | 4.50x | 13,090 | 1.07x | 1.06x | Yes |
| Meta Prospecting | Always-On Bestseller | W17 | 9,100 | 2.30x | 9,830 | 1.08x | 1.06x | Yes |
| Meta Prospecting | Lookalike Test | W17 | 7,300 | 3.50x | 9,520 | 1.30x | 1.28x | Yes |
| Meta Prospecting | Long-Tail Interests | W17 | 950 | 4.10x | held | held | held | No |
| Google Search Brand | Brand Core | W17 | 14,500 | 18.30x | 16,240 | 1.12x | 1.10x | Yes |
| Google Search Brand | Brand + Product | W17 | 6,800 | 9.40x | 8,090 | 1.19x | 1.16x | Yes |

The "8-week trailing iROAS" column is what gets quoted on the Friday pacing call. The single-week column is what your team uses to spot a divergence. If a campaign's weekly iROAS departs from its trailing average by more than ~30% for two consecutive weeks, that's a signal that the underlying creative, audience, or offer has shifted — and that the assumptions of the rule may have broken for that campaign. That is your re-investigation trigger.

We are building this table directly into Cassandra dashboards, alongside the channel-level incrementality outputs. If you'd rather build it yourself first, the build is a daily cron job that pulls campaign-spend and attributed-ROAS from your platform reports and joins them to the latest channel-level incremental output from your MMM. The full SQL pattern fits in 80 lines.
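If pandas is closer to hand than SQL, the same join reads as follows; the file names and column names are hypothetical, and the spend-floor/held logic is omitted for brevity:

```python
import pandas as pd

BETA = 0.3  # compression exponent from Rule B

# Hypothetical extracts: campaign-level platform report and MMM output
spend = pd.read_csv("platform_campaign_weekly.csv")  # campaign, channel, week, spend, attributed_roas
mmm = pd.read_csv("mmm_channel_incremental.csv")     # channel, week, channel_incremental

weekly = spend.merge(mmm, on=["channel", "week"], how="inner")
weekly["weight"] = weekly["spend"] ** BETA * weekly["attributed_roas"] ** BETA
weekly["weight_sum"] = weekly.groupby(["channel", "week"])["weight"].transform("sum")
weekly["distributed_incremental"] = (weekly["channel_incremental"]
                                     * weekly["weight"] / weekly["weight_sum"])
weekly["distributed_iroas"] = weekly["distributed_incremental"] / weekly["spend"]
```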

What Would Prove This Rule Wrong

Per assumption 1: a campaign-level holdout test where a small campaign responds to scaling with elasticity that diverges materially from the channel average. We are running a programme of these tests with three of our enterprise customers in 2026, and we will publish results by Q1 2027 regardless of outcome. If the rule survives, we'll publish. If it doesn't, we'll publish the failure and rewrite this article.

Per assumption 2: a switchback test where pausing the highest-distributed-iROAS campaign for one week does not depress channel-level revenue. That would indicate attributed ROAS was capturing audience overlap, not genuine creative effectiveness, and Rule B's weighting is misallocating credit.

Per assumption 3: a small campaign repeatedly winning the iROAS leaderboard, only to revert to the mean once it scales past the spend floor. This is the single most common failure mode in the wild, and the spend-floor rule is what protects you from it.

Per assumption 4: any application of Rule B that treats its output as a causal measurement rather than a distribution conditional on a stated assumption set. The rule produces a defensible split, not a causal estimate. The output is only as defensible as the assumption set it rests on.

How To Use The Output

The rule is intended to support, not replace, the weekly within-channel reallocation conversation. A typical use:

  1. The MMM produces channel-level incremental revenue per period (for example, €156,000 for Meta Prospecting last week).

  2. Rule B distributes that figure across the active campaigns in the channel for the same period, applying the spend-floor rule to small campaigns and reporting their nominal share as held.

  3. The 8-week trailing distributed iROAS becomes the within-channel ranking signal used for scale, hold and retire decisions.

  4. Any decision based on the rule is reported alongside the assumption set used. Where assumption fragility is high (for example, a campaign with attributed ROAS far above its peers but consistent low spend), the rule's output is held aside and the campaign is investigated separately.

The rule is a structured procedure that produces a number from inputs (channel-level incremental from the MMM, campaign-level spend and attributed ROAS from the platform) and a fixed parameter set (β, spend floor) that is held constant within a quarter. It is not a causal measurement. Where the assumptions hold for a given account, the rule's output is the defensible distribution of channel-level incrementality across campaigns in that account; where the assumptions fail, the falsifiability tests above describe how to detect it.

Talk to us about wiring Rule B into your live MMM

This article is part of a series on incrementality measurement. Prior pieces: The Incrementality Multiplier Is Broken and Marketing Attribution Software Is Lying to You: 792-Model Proof.

Methodology references: 792 MMM models, 194 advertisers, 2017–2025 substrate. Distribution rule fitted against the channel-level decompositions of that substrate. The β = 0.3 default is the cross-validated median across the book; client-specific calibration available on request.

Author: Gabriele Franco, Founder & CEO of Cassandra

Frequently Asked Questions

Why can't I measure campaign-level incrementality directly from an MMM?

Three compounding reasons. First, statistical power: a single campaign at 5–15% of channel spend generates too little revenue variation to rise above the model's noise floor. Second, multicollinearity: campaigns inside the same channel respond to the same external triggers simultaneously, so the model cannot separate their individual contributions. Third, consistency: most campaigns run for 2–6 weeks and are then paused or retired — not long enough to estimate a stable coefficient. Any one of these reasons alone blocks causal identification at campaign level.

Does this method work without running holdout tests for every campaign?

Yes. Rule B only requires two inputs per campaign per week: spend and attributed ROAS from your platform reports. The channel-level incremental revenue comes from your existing MMM. No additional holdout tests are needed to apply the distribution rule — the entire method runs on data your team already has access to.

What is the difference between Rule A and Rule B?

Rule A splits channel-level incremental revenue in proportion to each campaign's share of attributed revenue inside the channel. It is simple but biased toward small campaigns with single-conversion noise. Rule B weights campaigns by spend × attributed ROAS, with both signals compressed by the same exponent (β = 0.3), so neither dominates. Rule B is recommended for any reallocation decision that will actually move budget.

What is the β parameter in Rule B and how do I set it?

β is the compression exponent applied to both spend and attributed ROAS in the weighting formula. The recommended default is β = 0.3, which is the cross-validated median across approximately 200 channel-level MMM decompositions in Cassandra's dataset. A higher β (e.g. 0.5) increases the spread between campaigns — campaigns with more spend or higher ROAS earn disproportionately more. A lower β (e.g. 0.15) pulls the distribution toward equal share. Start at 0.3 and adjust only if you have a specific calibration reason.

What is the minimum campaign spend to apply Rule B?

Apply a spend floor of €5,000/month or 5% of total channel spend, whichever is higher. Campaigns below this threshold are held aside — their nominal share is reported separately rather than redistributed. This protects the distribution from single-conversion noise: a €500/month campaign that converts one high-AOV order reports an inflated attributed ROAS that would distort the rule if included.

How often should I recompute the campaign-level incrementality distribution?

Weekly, using the same period as your MMM refresh cycle. Build the monitoring table on a rolling basis and use the 8-week trailing iROAS as the primary signal for scale/hold decisions — single-week snapshots are too noisy to act on alone. If a campaign's weekly iROAS diverges from its 8-week trailing average by more than 30% for two consecutive weeks, treat that as an investigation trigger, not an immediate scaling decision.
