
Why Email Gets Underweighted in Media Mix Models, and What Smart Banners Do About It

Email doesn’t lose MMM budget fights because it underperforms. It loses because its signal format is wrong for how models work. Smart banners create the block-level revenue data that finally gives email attribution parity with paid media.

Robert Haydock
CEO, Zembula

Three out of four marketers say their measurement approaches, including attribution, incrementality, and media mix modeling, are not delivering the speed, accuracy, or trust they need. That finding comes from the IAB and BWG Global State of Data 2026 report, and it should bother every CMO who has watched their email channel get systematically underweighted in budget planning. Email’s problem in the MMM conversation isn’t performance. Smart banners running on the Zembula platform averaged 13.6% click-to-conversion in Q4 2025, compared to a 2.5% email baseline, with top abandoned cart variants hitting 27.9% CTC. The problem is that this performance is invisible to the model that allocates the budget.

This post is about a specific structural flaw in how email data reaches media mix models, why that flaw is different from anything wrong with email itself, and how block-level attribution from smart banners creates the signal format that gives email attribution parity with paid channels. If you run an MMM and email keeps showing up as a low-contribution channel despite strong actual results, this is the mechanism behind that disconnect.

The Budget Room Problem: Why Email Keeps Losing Arguments It Should Win

Marketing budgets fell 15% year over year from 2023 to 2024, primarily because marketing could not prove ROI to Finance (Gartner CMO Spend Survey, cited in LiftLab analysis). That budget pressure hit owned channels harder than paid channels, because paid channels have a built-in spend-variation signal that MMM can detect. Email doesn’t. The result: even when email is technically included in the model, its coefficient estimate comes out low.

Here’s the irony. Average ecommerce ROAS fell to 2.87 in 2025, down across 13 of 14 industries (Upcounting). Meta CPMs are up 20% year-over-year (Triple Whale). Ecommerce CAC has climbed 40-60% since 2023, with Shopify merchants seeing $274 to $318 per acquisition YoY (Shopify GCR). Meanwhile, email as a performance channel is producing returns that make paid media look expensive by comparison. Zembula’s platform delivered 15x aggregate ROAS across active customers over a trailing 30-day window in April 2026. That is a platform-wide statistic, not a cherry-picked account. But if your MMM doesn’t see it, the budget goes elsewhere.

As Tommy Albrecht, Head of Performance at Funnel.io, puts it: “A common pitfall is allocating budgets based on conversion performance. The result is people over-indexing to bottom-of-funnel channels. When teams use MMM and MTA, it becomes easier to see that it makes sense to move budget to channels that don’t show conversions directly, but are beneficial.” The catch is that email does show conversions directly. The model just can’t read them.

How MMM Works, and Why Email’s Signal Format Is the Wrong Shape

MMM finds statistical correlation between spend variation and revenue variation, week over week, across channels. That’s the core mechanic. Scale your Meta budget 30% in week 12 and revenue lifts in week 12. Cut Google Shopping spend 20% in week 15 and revenue dips. The model captures these correlations and estimates how much each channel contributes.

Now consider email. Your ESP charges a flat per-subscriber fee. Whether you send one campaign or twelve in a given week, the cost is essentially the same. Whether that campaign features a hand-coded hero image or a personalized smart banner pulling real-time abandoned cart data, the model sees no spending change. Tuesday costs the same as Monday. This week costs the same as last week.

MMM interprets “no spend variation” as “low contribution.” That’s not a bug in the model. It’s working exactly as designed. The model has no lever to correlate with revenue because there is no lever. Paid media creates clear spend-revenue signal pairs. Email creates a flat line.
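To make the flat-line problem concrete, here is a minimal sketch with entirely synthetic weekly data (the channel names, dollar amounts, and coefficients are illustrative, not real figures): paid channels whose spend genuinely varies show strong spend-revenue correlation, while a flat ESP fee shows essentially none, even though email is driving real revenue in the simulation.

```python
import numpy as np

# Illustrative only: synthetic weekly data, not real channel figures.
rng = np.random.default_rng(0)
weeks = 104

# Paid channels: spend genuinely varies week over week.
meta_spend = 50_000 + 15_000 * rng.standard_normal(weeks)
google_spend = 30_000 + 8_000 * rng.standard_normal(weeks)

# Email: flat per-subscriber ESP fee, essentially constant every week.
email_spend = 4_000 + rng.normal(0, 5, weeks)

# Ground truth: email drives real revenue (40x its cost here), but its
# contribution is a constant, not a weekly signal the model can detect.
revenue = (2.5 * meta_spend + 3.0 * google_spend + 40.0 * email_spend
           + 20_000 * rng.standard_normal(weeks))

for name, spend in [("meta", meta_spend), ("google", google_spend),
                    ("email", email_spend)]:
    r = np.corrcoef(spend, revenue)[0, 1]
    print(f"{name:6s} spend-revenue correlation: {r:+.2f}")
```

Any correlation-based model reading these three series will assign email the smallest coefficient, regardless of how much revenue the channel actually produced.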

This is worth sitting with for a moment. Over half of US marketers now use MMM, according to Kantar survey data cited by eMarketer in December 2024 (53.5%, the highest adoption figure in the modern MMM era). That means the majority of budget-setting conversations are now running through a model that structurally cannot see email’s contribution, even when email is listed as an input channel. Simply adding email as a line item does not fix this. The data type is wrong.

The Signal Mismatch in Plain Numbers: What MMM Needs vs. What Email Produces

MMM needs two things from a channel to estimate its contribution: a variable that changes week over week (usually spend) and a business outcome that moves with it. For paid search, the variable is ad spend. For TV, it’s GRPs or spot count. For email, the standard inputs are send volume, open rate, and click rate. All three have two fatal problems as MMM inputs.

First, none of them represent a spend variable. Send volume barely changes week to week for most programs. Open rates and click rates are aggregated percentages that don’t map to dollars. The model has nothing to correlate against revenue.

Second, these metrics aggregate individual-recipient signals into channel-level averages. A 22% open rate tells you nothing about which subscribers saw which content or what those subscribers bought. The variance structure MMM needs to isolate email’s contribution from seasonality and promotional overlap is destroyed in the aggregation.
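A toy example of the variance that aggregation destroys (all cohort names and dollar figures below are hypothetical): two weeks with identical sends and identical total opens can carry very different revenue, depending on which behavioral cohorts did the opening.

```python
# Hypothetical cohort-level engagement for two weeks with the SAME sends
# and the SAME aggregate open count.
week_a = {"cart_openers": 2_000, "newsletter_openers": 20_000}
week_b = {"cart_openers": 8_000, "newsletter_openers": 14_000}

# Illustrative revenue per opener by cohort (cart intent converts far better).
rev_per_opener = {"cart_openers": 4.70, "newsletter_openers": 0.25}

for label, wk in [("A", week_a), ("B", week_b)]:
    opens = sum(wk.values())
    revenue = sum(count * rev_per_opener[cohort] for cohort, count in wk.items())
    print(f"week {label}: {opens:,} opens, ${revenue:,.2f} block revenue")
# Identical open rate in both weeks; revenue differs by ~2.9x. An aggregate
# open-rate input hands the model the first number and discards the second.
```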

The result: email appears as a near-constant input in a model designed to detect variation. The model penalizes it. Not because email underperforms, but because it generates the wrong data shape. Meanwhile, the actual performance gap between email content types is enormous.

Paid social averages roughly 1% CTC. The email baseline sits at 2.5%. Zembula smart banners average 13.6% CTC. The top-performing variant combination (cart + coupon + countdown) hits 27.9%. That performance gap is real. But if you’re feeding your MMM an aggregated email open rate, none of it is visible to the model.

The Block-Level Fix: How Smart Banners Create the Weekly Revenue Signal MMM Can Read

The fix is not a better model. It’s a better data type. Specifically: block-level RPM as a weekly time series.

RPM is revenue per 1,000 impressions, measured at the individual module level. When a smart banner renders inside an email, Zembula measures the revenue attributed to that specific block, for that specific variant, for that specific subscriber cohort. Abandoned cart smart banners on the platform range from $135 to $470 RPM depending on the variant combination and subscriber segment (Q4 2025 Benchmark data).
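The metric itself is simple arithmetic; a two-line sketch (the $940 and 2,000-impression inputs are hypothetical, chosen to land inside the benchmark range above):

```python
def rpm(block_revenue: float, impressions: int) -> float:
    """Revenue per 1,000 rendered impressions for a single email block."""
    return block_revenue / impressions * 1000

# e.g. a cart-banner variant attributed $940 across 2,000 renders:
print(rpm(940.0, 2_000))  # -> 470.0
```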

Here’s why this matters for MMM: RPM varies week over week based on content decisions. Which use cases ran that week. Which variant won the experiment. How many subscribers had behavioral signals (abandoned carts, browse history, loyalty tier changes) that triggered personalized blocks. These decisions create the kind of variation signal that MMM was designed to detect. It’s the same structural principle as ad spend variation, except the variable is content effectiveness rather than dollars deployed.

Zembula’s Campaign Decision Engine routes each subscriber to the highest-value use case at open time, creating subscriber-cohort-level revenue variation that can be segmented into MMM inputs by use case category. Smart Kickers add a second block-level data stream for cross-block signal verification. The attribution framework produces 7-day click-based revenue attribution at block level and variant level, generating the weekly RPM variation the model needs.

Feed this into your MMM as a time series input instead of (or alongside) aggregate email metrics, and you give the model something it can actually use. The email coefficient changes because the signal format changed.
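A sketch of that input pipeline, assuming a simple attribution log (the column names, week labels, and figures below are illustrative, not the Zembula schema): block-level rows aggregate into a wide weekly RPM table shaped exactly like the paid-spend columns an MMM already ingests.

```python
import pandas as pd

# Hypothetical block-level attribution log: one row per (week, block).
events = pd.DataFrame({
    "week":        ["2026-W14", "2026-W14", "2026-W15", "2026-W15"],
    "block":       ["abandoned_cart", "loyalty", "abandoned_cart", "loyalty"],
    "revenue":     [9_400.0, 2_100.0, 12_300.0, 1_800.0],
    "impressions": [20_000, 15_000, 26_000, 15_000],
})

# Roll up to weekly totals per block type, then compute RPM.
weekly = (events.groupby(["week", "block"], as_index=False)
                .agg(revenue=("revenue", "sum"),
                     impressions=("impressions", "sum")))
weekly["rpm"] = weekly["revenue"] / weekly["impressions"] * 1000

# Wide weekly format: one RPM column per block type, ready to sit
# alongside the paid-spend columns the model already reads.
mmm_input = weekly.pivot(index="week", columns="block", values="rpm")
print(mmm_input.round(1))
```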

The Collapsed-Pixel Holdout: Email’s Incrementality Proof That Paid Media Cannot Replicate

Block-level RPM fixes the MMM input problem. But there’s a second tool that makes the email coefficient defensible at the board level: the collapsed-pixel holdout.

This is a person-locked longitudinal A/B test. Audience A sees the smart banner content. Audience B receives the same email, but the smart banner module renders as a 1×1 pixel (invisible, no content shown). Same subscriber population, same sends, same everything else in the email. The only difference is whether the personalized block was present. Compare transactions across the two groups, and you get a clean causal lift estimate.
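The arithmetic of that comparison is a standard two-proportion lift estimate; a sketch with hypothetical arm sizes and conversion counts (the `holdout_lift` function name and all numbers are illustrative):

```python
import math

def holdout_lift(conv_a, n_a, conv_b, n_b, z=1.96):
    """Relative lift of the banner arm (A) over the collapsed-pixel arm (B),
    plus a normal-approximation 95% CI on the conversion-rate difference."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    diff = p_a - p_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return diff / p_b, (diff - z * se, diff + z * se)

# Hypothetical arms: 50,000 subscribers each, person-locked for the window.
lift, ci = holdout_lift(conv_a=1_360, n_a=50_000, conv_b=1_000, n_b=50_000)
print(f"relative lift: {lift:.1%}, rate-difference 95% CI: "
      f"({ci[0]:.4f}, {ci[1]:.4f})")
```

If the confidence interval on the rate difference excludes zero, the lift is statistically defensible rather than a point estimate Finance has to take on faith.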

This test produces something no paid media channel can replicate by design. You cannot A/B test “ad shown vs. no ad in the same placement for the same user” on Meta or Google. Ad platforms don’t let you serve a control group that sees the placement without the ad. The collapsed-pixel holdout is structurally unique to in-email block-level content.

For the MMM conversation, this holdout result serves as a Bayesian prior that calibrates the email coefficient. LiftLab’s analysis of MMM vs. incrementality testing confirms that “observational models systematically overestimate credit to channels that capture existing demand rather than create it.” The collapsed-pixel holdout inverts that problem for email: instead of the model underestimating email because it can’t see the signal, you provide direct causal evidence of the lift. The coefficient estimate becomes defensible because it’s anchored to an experiment, not just a correlation. Holdout testing is worth understanding in detail if you’re building this case.
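One simple way to operationalize "holdout as prior" is a precision-weighted (normal-normal conjugate) combination of the holdout lift and the MMM's own noisy estimate; a sketch under that assumption, with hypothetical means and standard errors:

```python
def calibrate_coefficient(mmm_mean, mmm_se, holdout_mean, holdout_se):
    """Precision-weighted (normal-normal conjugate) combination: treat the
    holdout lift as an informative prior on the MMM's email coefficient."""
    w_mmm, w_hold = 1 / mmm_se**2, 1 / holdout_se**2
    mean = (w_mmm * mmm_mean + w_hold * holdout_mean) / (w_mmm + w_hold)
    se = (w_mmm + w_hold) ** -0.5
    return mean, se

# Hypothetical: the MMM alone reads email lift as ~2% with huge uncertainty;
# the holdout measured ~36% lift with a tight interval. The experiment,
# being far more precise, dominates the blended estimate.
mean, se = calibrate_coefficient(mmm_mean=0.02, mmm_se=0.20,
                                 holdout_mean=0.36, holdout_se=0.05)
print(f"calibrated email coefficient: {mean:.3f} +/- {se:.3f}")
```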

From Underweighted to Defensible: The Budget Conversation That Becomes Possible

When you combine block-level RPM time series with collapsed-pixel holdout results, the budget conversation shifts. You’re no longer arguing that email “should get more credit.” You’re presenting the CMO and CFO with two data artifacts they can evaluate directly.

First: a weekly time series that correlates content variation with revenue outcomes, in the exact format the MMM already ingests for paid channels. Second: a controlled experiment proving causal lift from personalized email content, with confidence intervals the finance team can interrogate.

Nielsen research, cited in LiftLab’s analysis, finds that 85% of marketers say they can measure holistic ROI, but only 32% actually do. That 53-point confidence gap is exactly what smart banners close at the email channel level. The RPM time series gives the model readable signal. The holdout gives Finance causal proof. Together, they move email from “underweighted because invisible” to “defensible because instrumented.”

The practical implication for budget reallocation is significant. With ecommerce CAC at $318 and climbing, every dollar moved from declining-ROAS paid channels toward an owned channel producing 15x aggregate ROAS is a dollar that works harder. But that reallocation only happens when the model says it should. And the model only says it should when the signal format is right.

Key Takeaways

  • Email’s MMM problem is signal format, not performance. ESPs charge flat per-subscriber fees, producing zero spend variation. MMM interprets no variation as low contribution, regardless of actual conversion rates.
  • Standard email metrics are wrong for MMM. Open rate, click rate, and send volume are aggregated averages that destroy the variance structure MMM needs. They don’t map to spend, and they don’t vary meaningfully week over week.
  • Smart banners produce block-level RPM as a weekly time series. Revenue per 1,000 impressions, by module type and subscriber cohort, creates the content-variation signal that correlates with revenue outcomes the way ad spend variation does for paid channels.
  • The collapsed-pixel holdout is structurally unique to email. Smart banner shown vs. 1×1 pixel hidden, same subscriber, same sends. This produces causal lift proof that paid media cannot replicate. It serves as the Bayesian prior that makes the email coefficient in an MMM defensible.
  • The fix is the data type, not the model. You don’t need a different MMM. You need to feed your existing model block-level RPM instead of (or alongside) aggregate email metrics. The email coefficient changes because the signal changes.
  • Attribution parity is achievable now. 53.5% of US marketers use MMM. The audience for this argument is the majority. Smart banners produce the data artifact that turns email from a model blind spot into a measurable, defensible channel in the budget room.

Robert Haydock co-founded Zembula with the mission of giving retail performance marketers measurement through image personalization so they can grow revenue from owned channels.

Grow your business and total sales

Book a Demo