Real eCommerce MMM Case Study: +13% Conversions, Same Ad Spend
How Cassandra MMM reallocated a €50K eCommerce budget across 11 channels—delivering +13% conversions, +42% profit margin, and -7% CAC in 6 weeks.


The Short Version
A B2C eCommerce advertiser was spending €50,000 per month across 11 channels. After four months of manual budget experiments that produced a continuous downward trend in sales, they had no reliable measurement framework to act on. Google Analytics attributed 80% of conversions to "direct"—a non-answer. We ran Marketing Mix Modeling on their historical data, identified the channels with the lowest cost per order, and shifted budget accordingly. Two reallocation cycles later, with zero additional spend:
+13% conversions in month one
+42% profit margin
+22% ROAS
-7% customer acquisition cost (validated via geo incrementality experiment)
This article walks through exactly what we did, how the model's predictions compared to actual results, and how we validated a new channel (Twitter Ads) without risking the full budget.
Gabriele Franco — Founder, Cassandra | Originally presented as a video walkthrough.
Four Months of Declining Conversions and No Reliable Data
The advertiser's agency had been adjusting the budget allocation across their 11 channels every few weeks for four months. Every intervention produced the same outcome: continued decline in sales and revenue.
The root cause was a measurement failure. When the team opened Google Analytics, approximately 80% of conversions were attributed to the "direct" channel. Direct attribution is, by definition, a measurement dead end—it tells you that a user arrived without a tracked referral, nothing more. No channel-level signal, no hypothesis to act on, no basis for a reallocation decision.
Every budget move the agency made was a guess. Four months of guesses produced four months of downward trend.
That is when we stepped in with Cassandra's measurement system.
How Marketing Mix Modeling Diagnosed the Budget
Cassandra's AI model ingested the full history of ad spend and conversion data across all 11 channels. The output was two-dimensional: spend shares (where the budget was actually going) and cost per order (CPO) per channel, isolated from attribution bias.
The diagnosis was immediate:
Meta Ads was absorbing the largest share of budget and showed the highest CPO across all channels. The channel was contributing to conversions—but the advertiser was past the point of diminishing returns, overspending relative to what each additional euro was returning.
Google Search and Google Video were significantly underfunded relative to their CPO performance. These two channels had the lowest costs per incremental order in the dataset.
This was the opposite of what platform attribution had implied. Attribution was directing budget toward the channels that harvested bottom-funnel intent. MMM showed where budget was actually being wasted and where capacity remained untapped.
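Cassandra's model itself is proprietary, but the shape of the diagnosis can be sketched. MMMs typically fit a diminishing-returns curve to each channel's spend history; the marginal cost per order then falls out of the curve's slope. Here is a minimal illustration in Python, with made-up parameters standing in for fitted ones:

```python
import math

def response(spend, a, b):
    # Diminishing-returns response curve: conversions flatten as spend grows.
    # a scales total volume, b sets how quickly the channel saturates.
    return a * math.log1p(spend / b)

def marginal_cpo(spend, a, b, delta=1.0):
    # Cost of the next incremental order at the current spend level.
    lift = response(spend + delta, a, b) - response(spend, a, b)
    return delta / lift

# Hypothetical fitted parameters, NOT the real model's output.
meta = dict(a=400.0, b=5000.0)            # saturates early
google_search = dict(a=600.0, b=20000.0)  # far from saturation

# Overfunded channel: each extra euro buys little.
print(f"Meta marginal CPO at 30k: {marginal_cpo(30000.0, **meta):.2f}")
# Underfunded channel: each extra euro still buys a lot.
print(f"Search marginal CPO at 8k: {marginal_cpo(8000.0, **google_search):.2f}")
```

The diagnosis in the article is exactly this comparison, run across all 11 channels: a high marginal CPO flags an overfunded channel, a low one flags untapped capacity.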
With the diagnosis in place, we asked the model a direct question: How should we distribute this budget over the next two weeks to maximize the number of conversions from the same €50,000?
Recommendations
Meta -20%, Google Search +15%, Google Video +5%
The model's output was specific:
Reduce Meta Ads by 20%
Reallocate +15% to Google Search
Reallocate +5% to Google Video
Cassandra predicted this reallocation would generate +5.5% more conversions over the previous two-week period, with no change in total budget.
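The underlying question (maximize conversions from a fixed budget, given each channel's response curve) is a concave optimization problem. As a rough illustration of how such a recommendation can be derived, here is a greedy "water-filling" allocator with hypothetical response-curve parameters, not the advertiser's fitted curves:

```python
import math

def response(spend, a, b):
    # Same diminishing-returns shape an MMM typically fits per channel.
    return a * math.log1p(spend / b)

def allocate(budget, channels, step=100.0):
    # Greedy water-filling: give each step-sized slice of budget to whichever
    # channel currently offers the most extra conversions for it. With concave
    # response curves this converges on the conversion-maximizing split.
    spend = {name: 0.0 for name in channels}
    remaining = budget
    while remaining >= step:
        best = max(
            channels,
            key=lambda n: response(spend[n] + step, *channels[n])
                        - response(spend[n], *channels[n]),
        )
        spend[best] += step
        remaining -= step
    return spend

# Hypothetical (a, b) curve parameters per channel, for illustration only.
channels = {
    "meta": (400.0, 5000.0),
    "google_search": (600.0, 20000.0),
    "google_video": (250.0, 12000.0),
}
print(allocate(50000.0, channels))
```

Comparing the optimizer's split against current spend shares yields exactly the kind of "cut here, add there" recommendation described above.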
The advertiser was direct about their conditions: if the actual result was close to the prediction, they would continue. If it was not, the engagement would end.
Prediction vs. Result
Predicted lift: +5.5%
Actual lift: +5.75%
After months of declining results, the first two-week period following the reallocation produced positive growth—slightly above model prediction. The downward trend broke.
Adjusting for Seasonality and Channel Efficiency
Channel performance is not static. Efficiency shifts with seasonality, competitive pressure, and audience saturation. Two weeks later, we refreshed the model on updated data and generated a new recommendation:
Decrease Google Search by 12%
Increase Meta Ads back up by 4%
Invest the remaining 8% in Google Discovery
This was a meaningfully different allocation than the first cycle. Google Search efficiency had shifted—the initial underfunding had been corrected, and marginal returns were compressing. Meta Ads showed renewed capacity. Google Discovery emerged as an underexplored channel worth testing at scale.
Prediction vs. Result
Predicted lift: +6.6%
Actual lift: +7.6%
The second cycle compounded on the first. The model's prediction was again exceeded.
Month One: +13% Conversions, +42% Profit Margin, +22% ROAS
Combining both two-week reallocation cycles, the first full month produced:
| Metric | Result |
|---|---|
| Conversions | +13% |
| Profit margin | +42% |
| ROAS | +22% |
| Additional budget | €0 |
Every improvement came from reallocating the existing €50,000 more accurately—not from spending more. The CPO compression on Meta Ads, combined with the efficiency unlocked on Google Search and Google Discovery, drove the profit margin expansion. More conversions at lower average acquisition cost compounded directly into higher ROAS.
At this point, the advertiser wanted to test a channel with no prior spend history: Twitter Ads. This introduced a problem the budget allocator cannot solve alone.
Testing a New Channel: Geo Incrementality Experiment
Marketing Mix Models work from historical expenditure data. A channel with no prior spend has no history to model. Without historical data, any allocation suggestion for that channel is arbitrary—the model has nothing to fit.
To resolve this, Cassandra's incrementality testing tool ran a controlled geographic experiment.
Hypothesis: Does advertising on Twitter Ads reduce customer acquisition cost?
Experiment design:
Target: 5 regions in Italy
Minimum spend: €2,000
Duration: 15 days
Control group: equivalent regions receiving no Twitter Ads
Result: -7% reduction in overall CAC in the test regions.
The hypothesis was validated. Twitter Ads, at the right budget level and targeting, reduced acquisition cost relative to the existing channel mix. This result served two purposes: it gave the advertiser a data-backed go/no-go decision on Twitter Ads, and it provided the baseline expenditure data needed to calibrate the budget allocator for future allocation cycles.
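The readout behind a geo experiment like this is, at its core, a difference-in-differences comparison: the change in CAC in the test regions minus the change in the control regions over the same window. A minimal sketch with invented numbers (not the advertiser's data):

```python
# Illustrative per-region CAC (in euros) before and during the test window.
# These figures are invented; the real experiment used 5 Italian regions.
test_regions = {
    "before": [42.0, 45.0, 40.0, 44.0, 43.0],
    "during": [39.0, 41.0, 38.0, 40.0, 40.0],
}
control_regions = {
    "before": [41.0, 44.0, 43.0],
    "during": [41.5, 44.0, 43.5],
}

def mean(xs):
    return sum(xs) / len(xs)

def did_effect(test, control):
    # Difference-in-differences: the CAC change in test regions, net of
    # whatever change happened anyway in the untouched control regions.
    test_change = mean(test["during"]) - mean(test["before"])
    control_change = mean(control["during"]) - mean(control["before"])
    return test_change - control_change

effect = did_effect(test_regions, control_regions)
baseline = mean(test_regions["before"])
print(f"CAC change attributable to the test channel: {effect / baseline:+.1%}")
```

Netting out the control group is what separates this from a simple before/after comparison: it removes seasonality and market-wide shifts that would otherwise be misread as channel impact.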
For a deeper explanation of how geographic incrementality testing works, see Understanding Geo-Based Incrementality Testing.
What Changed Beyond the Numbers
The measurement failure that produced four months of declining sales was not fixed by spending more—it was fixed by measuring correctly. Once the model had a clear signal on which channels were over- and underfunded relative to their actual incremental contribution, the reallocation was straightforward.
The operational impact extended beyond the results:
30 hours of monthly analysis eliminated. The advertiser's prior workflow required manually pulling platform data, reconciling contradictory attribution numbers, and making allocation decisions under uncertainty. That process was replaced by reviewing model outputs and approving recommendations.
Attribution dependency removed. Budget decisions no longer ran through a measurement system that credited bottom-funnel channels for demand created elsewhere.
New channel validation framework in place. The incrementality testing workflow is now reusable for any future channel the advertiser wants to test.
If your team is making budget decisions based on platform attribution data, you are working with a measurement system that systematically misleads budget allocation. The gap between attributed ROAS and true incremental ROI is not a rounding error—it is a structural feature of how attribution works.
Cassandra replaces attribution with a measurement approach built on MMM and incrementality testing. If you want to see how it applies to your channels and budget, book a 45-minute walkthrough.
Frequently Asked Questions
What is a Marketing Mix Model and how does it differ from attribution? A Marketing Mix Model (MMM) is a statistical framework that estimates the incremental contribution of each marketing channel to total conversions or revenue. Unlike attribution, which tracks user-level click paths and assigns credit to touchpoints, MMM uses regression analysis on aggregate spend and outcome data to isolate each channel's causal impact—including channels attribution cannot track (TV, radio, organic social, offline). The result is a more accurate picture of where budget is actually working.
How does Cassandra identify which channels are overfunded or underfunded? Cassandra's model calculates cost per incremental order for each channel by fitting a diminishing returns curve to historical spend and outcome data. Channels with high CPO relative to their current spend share are candidates for reduction. Channels with low CPO and untapped capacity are candidates for increased investment. The model outputs a specific reallocation that maximizes total conversions for a fixed budget.
How accurate are the conversion lift predictions? In this case study, the first prediction was +5.5% and the actual result was +5.75%. The second prediction was +6.6% and the actual result was +7.6%. Both actual results came in above the prediction. Accuracy improves as the model accumulates more data from the specific account, channels, and seasonal patterns.
What is incrementality testing and when is it required? Incrementality testing measures the causal impact of advertising on a specific audience by comparing a group exposed to ads against a matched control group that is not. It is required when a channel has no prior spend history (as in this case with Twitter Ads) because the budget allocator cannot model what it has never observed. Geo experiments—targeting specific regions for a defined period—are the most practical implementation for most advertisers.
How long does it take to see results? In this case study, the first positive results appeared within two weeks of implementing the first reallocation. Full month-one results (+13% conversions, +42% profit margin) were achieved within approximately four weeks. The incrementality experiment on Twitter Ads ran for 15 days and produced a measurable CAC reduction.
