PRODUCT UPDATES

Introducing Always-On Incrementality

Ongoing Incrementality Measurements that Calibrate Your MMM Automatically

Date: October 31, 2025

Marketing teams require accurate measurement to allocate budget effectively, but traditional approaches create a trade-off between speed and reliability.

Marketing Mix Modeling (MMM) provides comprehensive visibility into cross-channel performance, but it requires ongoing fine-tuning (calibration) through incrementality tests—lift studies, platform experiments, and geo-tests—to maintain accuracy and reduce uncertainty.

Without continuous calibration, an MMM produces measurements with wide confidence intervals, which limit the precision of budget recommendations and reduce stakeholder confidence in the model's outputs.

Why Traditional Measurement Approaches Create Delays

The MMM Challenge:

  • Requires 2-3 years of historical data 

  • Model accuracy also depends on future experiments that have not yet been designed or run

  • Wide confidence intervals reduce precision in budget allocation recommendations

The Incrementality Testing Challenge:

  • Creates opportunity cost during the testing period

  • Requires cross-functional alignment on test design and execution parameters

  • Running tests often requires pausing or modifying active campaigns

  • Proper analysis and validation requires 2-6 weeks, during which company goals may drift out of alignment with what is being measured

The Solution: Extracting Calibration Data from Historical Campaign Activity

Marketing platforms generate “natural experiments” continuously through routine campaign operations—budget changes, creative refreshes, targeting adjustments, and platform outages. 

Each of these events creates measurable variation in spending that can be analyzed for incrementality signals.
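To make "measurable variation" concrete, here is a minimal sketch of how such spend shifts could be flagged in daily, geo-level data. The column names, rolling window, and z-score threshold are illustrative assumptions, not Cassandra's actual detection logic:

```python
# Minimal sketch: flag candidate "natural experiments" as days where a
# channel's spend in a geo deviates sharply from its recent baseline.
# Thresholds and column names are illustrative, not AOI's internals.
import pandas as pd

def flag_spend_shifts(df: pd.DataFrame, window: int = 28, z_thresh: float = 3.0) -> pd.DataFrame:
    """df columns: date, geo, channel, spend (daily grain)."""
    frames = []
    for (geo, channel), g in df.sort_values("date").groupby(["geo", "channel"]):
        g = g.copy()
        baseline = g["spend"].rolling(window, min_periods=window).mean().shift(1)
        spread = g["spend"].rolling(window, min_periods=window).std().shift(1)
        g["z"] = (g["spend"] - baseline) / spread           # deviation from recent baseline
        g["candidate_experiment"] = g["z"].abs() > z_thresh
        frames.append(g)
    return pd.concat(frames)
```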

Cassandra's Always-On Incrementality (AOI) identifies these natural experiments in your historical data and quantifies their incremental impact. 

By connecting your ad platforms (Google Ads, Meta, TikTok) and conversion data (CMS/CRM), the system analyzes historical spending patterns to detect incrementality signals and compare them against attributed performance metrics.

The process runs on your existing data: no new tests required and no campaign modifications needed. It automatically estimates true incremental ROI for digital channels.

This Article Covers:

  1. What Always-On Incrementality is - How natural experiments in campaign data provide calibration signals

  2. How the detection works - The methodology for identifying and measuring incrementality from historical data

  3. Validation results - Performance metrics across 1,200+ campaign scenarios

  4. Impact on MMM accuracy - Quantified improvements in forecasting precision and confidence intervals

  5. Implementation - How to apply AOI to your measurement stack

1. What is Always-On Incrementality?

Always-On Incrementality is an automated system that analyzes historical media spend and conversion data to identify naturally occurring spending variations across geographic regions. These variations—caused by budget shifts, platform issues, campaign scheduling differences, or operational decisions—create conditions similar to controlled geo-experiments.

The system applies causal inference methods to measure the incremental impact of these spending variations, then feeds these measurements into your MMM as calibration inputs. This process updates the model's parameters based on observed incrementality signals from actual campaign operations.
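One standard causal-inference approach in this spirit is a geographic difference-in-differences: compare conversions in geos affected by a spend shift against unaffected geos, before and after the event. The sketch below is a simplified illustration, not Cassandra's actual estimator:

```python
# Illustrative difference-in-differences (DiD) around a detected spend
# shift. Treated geos saw the spend change; control geos did not.
import pandas as pd

def did_lift(conv: pd.DataFrame, treated: list[str], control: list[str],
             event_date: str, pre_days: int = 28, post_days: int = 28) -> float:
    """conv columns: date, geo, conversions (daily grain)."""
    conv = conv.assign(date=pd.to_datetime(conv["date"]))
    event = pd.Timestamp(event_date)
    pre = conv[(conv["date"] >= event - pd.Timedelta(days=pre_days)) & (conv["date"] < event)]
    post = conv[(conv["date"] >= event) & (conv["date"] < event + pd.Timedelta(days=post_days))]

    def mean_daily(df: pd.DataFrame, geos: list[str]) -> float:
        return df[df["geo"].isin(geos)].groupby("date")["conversions"].sum().mean()

    # DiD: (treated post - treated pre) - (control post - control pre)
    return (mean_daily(post, treated) - mean_daily(pre, treated)) - (
        mean_daily(post, control) - mean_daily(pre, control)
    )
```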

How AOI differs from traditional incrementality testing:

Data source: Analyzes historical data that already exists in your ad platforms and analytics systems, rather than requiring prospective test design

Frequency: Runs continuously as new data becomes available, identifying multiple calibration signals over time

Campaign impact: Requires no changes to live campaigns—no budget holds, no geographic exclusions, no creative modifications

Time to results: Produces incrementality measurements as soon as historical data is processed, typically within minutes of platform connection

3. Testing AOI on 1,200 Datasets

We validated AOI using synthetic datasets where the true incremental impact of each channel was known. 

Starting with real client spend patterns, we generated revenue using standard MMM transformations (geometric adstock, saturation curves) and distributed it across 50+ geographies over 2+ years of daily data.
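For reference, those two transformations work roughly as shown below; the parameter values are illustrative, not the ones used in the validation:

```python
# Sketch of the standard MMM transformations used to generate synthetic
# revenue: geometric adstock (carryover) then a Hill saturation curve
# (diminishing returns). Parameters are illustrative.
import numpy as np

def geometric_adstock(spend: np.ndarray, decay: float = 0.6) -> np.ndarray:
    """Each day carries over a fraction `decay` of yesterday's adstocked spend."""
    out = np.zeros(len(spend))
    for t in range(len(spend)):
        out[t] = spend[t] + (decay * out[t - 1] if t > 0 else 0.0)
    return out

def hill_saturation(x: np.ndarray, half_sat: float, slope: float = 1.0) -> np.ndarray:
    """Response flattens as adstocked spend grows past `half_sat`."""
    return x ** slope / (x ** slope + half_sat ** slope)

rng = np.random.default_rng(0)
spend = rng.gamma(shape=2.0, scale=500.0, size=730)               # ~2 years of daily spend
revenue = 1_000.0 * hill_saturation(geometric_adstock(spend), half_sat=2_000.0)
revenue += rng.normal(0.0, 50.0, size=730)                        # observation noise
```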

Performance Results:

Across 1,200 validation runs, AOI identified 10-13 natural experiments per dataset on average.

Three factors determined detection rate:

  • Spend volume: Higher spend produces more detectable experiments. 

  • Active days: More days with active spending increases detection rate.

  • Spend stability: Volatile week-to-week spending reduces detection rate.

AOI performs best on channels with steady baseline spending and occasional variations—the typical pattern in established marketing programs.
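A rough way to see how these three factors interact is to fold them into a single per-channel score. The weights and functional form below are illustrative guesses, not AOI's internals:

```python
# Toy "detectability" heuristic combining spend volume, active days, and
# week-to-week stability. Illustrative only.
import numpy as np
import pandas as pd

def detectability(spend: pd.Series) -> float:
    """spend: daily spend for one channel, indexed by a DatetimeIndex."""
    active = float(spend.gt(0).mean())                        # share of active days
    volume = float(np.log1p(spend.sum()))                     # diminishing weight on raw volume
    weekly_volatility = spend.resample("W").sum().pct_change().std()
    stability = 1.0 / (1.0 + float(weekly_volatility))        # volatile spend lowers the score
    return volume * active * stability
```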

4. Results: A/B Testing MMMs With & Without AOI

We trained multiple MMM configurations on two datasets (US market with 51 states, EU market with 26 countries) using Bayesian frameworks in Cassandra.

Each configuration ran 10 times to measure out-of-sample prediction accuracy, parameter stability, and contribution accuracy.

Channel Contribution Accuracy

Channel contribution refers to how much incremental business outcome each marketing channel (e.g., TV, Meta ads, Google Search, Influencers, Display, OOH) is estimated to have caused. We measure the accuracy of the contribution distribution estimated by the MMM against the ground-truth distribution.
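As a concrete (and deliberately simple) version of that comparison, one could score the gap between estimated and true contribution shares. The post does not specify the exact error formula, so the metric below is illustrative:

```python
# Toy contribution-error metric: mean absolute gap between the model's
# channel-contribution shares and the ground-truth shares.
import numpy as np

def contribution_error(estimated: dict[str, float], truth: dict[str, float]) -> float:
    channels = sorted(truth)
    est = np.array([estimated[c] for c in channels])
    tru = np.array([truth[c] for c in channels])
    est_share, tru_share = est / est.sum(), tru / tru.sum()  # normalize to shares
    return float(np.abs(est_share - tru_share).mean())

# Example: model slightly over-credits TV and under-credits Search.
err = contribution_error(
    {"tv": 120.0, "meta": 95.0, "search": 55.0},   # MMM estimates
    {"tv": 100.0, "meta": 100.0, "search": 70.0},  # known ground truth
)
```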

  1. Average reduction in contribution error: 9.1%

  2. Top-performing configurations achieved over 30% error reduction.

More accurate channel contribution estimates increase precision in budget recommendations.

Parameter Stability Across Rolling Time Windows


This test measures whether a model produces consistent ROI estimates when trained on different time periods. Stable models show similar channel ROIs regardless of which historical window is used for training; unstable models produce varying ROI estimates across different training periods.
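A minimal version of this check: refit the model on several rolling windows and look at the spread of each channel's ROI estimate. `fit_mmm` below is a hypothetical stand-in for the training routine, not a Cassandra API:

```python
# Sketch of a rolling-window stability check. A lower coefficient of
# variation per channel means more stable ROI estimates.
import numpy as np
import pandas as pd

def roi_stability(data: pd.DataFrame, fit_mmm, window_days: int = 365,
                  step_days: int = 30) -> dict[str, float]:
    """fit_mmm(window) is assumed to return {channel: roi} for that window."""
    dates = pd.to_datetime(data["date"])
    t, end = dates.min(), dates.max() - pd.Timedelta(days=window_days)
    rois: dict[str, list[float]] = {}
    while t <= end:
        window = data[(dates >= t) & (dates < t + pd.Timedelta(days=window_days))]
        for channel, roi in fit_mmm(window).items():
            rois.setdefault(channel, []).append(roi)
        t += pd.Timedelta(days=step_days)
    return {c: float(np.std(v) / np.mean(v)) for c, v in rois.items()}
```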

Average improvement: +19.3%, with top configurations reaching +42.47%. This means your month-over-month incremental ROI estimates won't swing wildly based on which time window you analyze.

Out-of-Sample Prediction Accuracy

Improvement range: +0.23% to +0.57%

Improved forecasting accuracy increases confidence in budget scenario planning, allowing more reliable "what-if" analysis for proposed allocations.
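For clarity, "out-of-sample" means the model is scored on a holdout period it never saw during training. The post does not state the exact metric; MAPE is one common choice:

```python
# Minimal holdout accuracy check: train on the past, score on a held-out
# recent period. MAPE shown here as one common forecast-error metric.
import numpy as np

def holdout_mape(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean absolute percentage error over the holdout period."""
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)
```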

Limitations

Fewer natural experiments are detected on lower-spend channels due to reduced spending volume, yet AOI still improved MMM performance across all channel types tested.

5. How to Implement AOI

AOI is currently available in beta to selected clients.

Implementation Process

  1. Connect data sources: Link ad platforms (Google Ads, Meta, TikTok) and conversion tracking systems (CMS/CRM)

  2. Analysis runs automatically: AOI processes historical data to identify natural experiments

  3. Review results: Access a report comparing incremental effects against attributed performance

Initial Output

  • Identified natural experiments across channels and geographies

  • Measured marginal incremental ROAS for detected events meeting statistical thresholds

  • Calibration parameters formatted for MMM integration
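For illustration, a single calibration record might carry fields like the ones below. This is a hypothetical shape, not Cassandra's actual schema:

```python
# Hypothetical AOI calibration record (illustrative fields only).
calibration_record = {
    "channel": "meta",
    "geos": ["US-CA", "US-TX"],                   # regions involved in the natural experiment
    "event_window": ["2025-03-01", "2025-03-28"],
    "incremental_roas": 1.8,                      # point estimate from the causal analysis
    "ci_90": [1.2, 2.4],                          # 90% interval around the estimate
}
# In a Bayesian MMM, a record like this would typically tighten the prior
# on that channel's ROI toward the measured incremental value.
```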


Ongoing Operation

As new data becomes available, AOI:

  • Identifies newly occurring spending variations

  • Measures their incremental impact

  • Updates MMM calibration inputs automatically

  • Refines model confidence intervals as additional signals accumulate

Optimal Conditions for AOI

AOI performs best with:

  • Geographic-level spend data spanning multiple regions

  • Channels with regular baseline spending (low week-to-week volatility)

  • Portfolio of channels with varied spend levels

  • Minimum 12-18 months of historical data

Conclusion

Always-On Incrementality changes the trade-off in MMM calibration. Rather than choosing between running prospective geo-experiments or operating with uncalibrated models, organizations can extract calibration signals from historical campaign data.

The validation results demonstrate measurable improvements: 9.1% reduction in contribution error, 19.3% improvement in parameter stability across time windows, and consistent gains in out-of-sample forecasting accuracy—without requiring campaign modifications or multi-week testing periods.

For organizations where the operational cost or timeline of traditional incrementality testing has been prohibitive, AOI provides an alternative calibration approach using existing data infrastructure.

AOI is currently available in beta for selected clients.
