
Smart Banners and the CEO P&L Case for Autonomous Email: What Every Maturity Level Earns

Every published email maturity model measures the wrong thing. Here’s the P&L case for smart banners, block-level attribution, and the autonomous email program your CFO can actually defend.

Robert Haydock
CEO, Zembula

If you asked most email marketers how mature their program is, they’d point to the number of automation flows running, the ESP features activated, or whether their templates render on mobile. That’s the framework Salesforce, Litmus, and every major ESP has published for years. And it misses the point entirely. None of those models measure what the program actually earns. None track smart banner performance, block-level RPM, or revenue per subscriber as the advancement criterion. The result is a P&L gap that grows every quarter without anyone noticing.

I run a company that works with ecommerce and retail brands on email personalization at the content-block level. What I see in the benchmark data, every day, is a structural divide between programs that measure at the content-block level and programs that don’t. The programs that deploy smart banners with real attribution produce numbers that look like performance marketing. The programs that don’t produce numbers that look like hope. This post is the investment argument for moving from one to the other.

Why Published Maturity Models Measure the Wrong Thing

The Salesforce State of Marketing report, Litmus’s five-level framework, and most ESP maturity assessments advance brands based on channel adoption: how many automations are live, whether mobile-optimized templates are deployed, whether AI features are activated. A brand can score “Leader” on every one of these scorecards while running 95% of its broadcast volume with zero content-level personalization. That’s the problem.

As David Swift wrote in his analysis of autonomous email ROI: “Organisations that measure email performance with shallow metrics will consistently undervalue the contribution of autonomous hyper-personalisation” (Medium, February 2025). Open rates, click rates, even revenue attributed at the campaign level tell you almost nothing about which piece of creative inside the email drove the conversion. You know how many people drove past the billboard. You don’t know which part of it worked.

Zembula’s email maturity model is the only published framework that advances based on economic output, specifically whether the program generates enough block-level RPM and click-to-conversion (CTC) data to make a defensible investment case to a CFO.

Level 1: The True Cost of Opacity (and What Smart Banners Solve First)

At Level 1, your email program has a compounding cost problem that doesn’t appear on any existing maturity scorecard. Real revenue per subscriber fell approximately 35% between 2018 and 2024, while send frequency rose 63%. The program got louder. It didn’t get more productive.

Meanwhile, the paid media side of the house is getting more expensive, not less. Average ecommerce ROAS fell to 2.87 in 2025, down across 13 of 14 industries (Upcounting). Meta CPMs climbed 20% year over year. Google CPCs rose nearly 13% (Search Engine Land). Ecommerce CAC is up 40 to 60% since 2023. And Apple’s App Tracking Transparency (ATT) on iOS means only 40 to 60% of ad-driven conversions are even visible to the platforms.

Email sits on top of a first-party identity graph that ad platforms would kill for. In fact, they already use it: Meta Custom Audiences, Google Customer Match, CDPs like Segment and LiveRamp, lookalike modeling, all of it seeded from email audiences. The ad industry runs on email data. The question is whether your own email program is capturing value from that same data, or handing it to someone else.

At Level 1, the answer is: you’re handing it away. And the fix starts with smart banners. They’re the first dial that produces block-level RPM and CTC data, the first thing that converts an opaque channel into something a performance marketing team can actually tune.

Level 2: When Smart Banners Make the Program Visible and Defensible

The Level 1 to Level 2 transition is the most important in the model. It takes 12 weeks to reach a clear understanding of Smart Banner and Smart Kicker performance, requires zero daily workflow change for your existing email team, and produces the first block-level number the performance team has ever had. Smart Banners and Smart Kickers drop into existing email templates without restructuring a single campaign. What changes is measurement, not production.

At Level 2, variant-level attribution kicks in: you’re measuring revenue performance at the individual content variant level, not just at the email or campaign level. Once you have block-level revenue attribution — RPM per module, CTC per variant — the program becomes visible to a CFO for the first time. Not “email drove $X in revenue” (the opaque billboard number), but “this abandoned cart smart banner produced $Y in attributed revenue at Z click-to-conversion rate against this subscriber segment.” That’s a performance marketing conversation. That’s what gets budget reallocated.
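The two numbers this section leans on reduce to simple arithmetic. A minimal sketch of block-level attribution math, using hypothetical figures (the impression, click, conversion, and revenue counts below are illustrative, not benchmark data):

```python
def rpm(revenue: float, impressions: int) -> float:
    """Revenue per mille: revenue per 1,000 renders of one content block."""
    return revenue / impressions * 1000

def ctc(conversions: int, clicks: int) -> float:
    """Click-to-conversion rate for one content variant."""
    return conversions / clicks

# Hypothetical abandoned-cart smart banner across one broadcast send:
impressions, clicks, conversions, revenue = 250_000, 6_000, 480, 38_400.0

print(f"RPM: ${rpm(revenue, impressions):.2f}")  # $153.60 per 1,000 renders
print(f"CTC: {ctc(conversions, clicks):.1%}")    # 8.0% of clicks convert
```

That pair of numbers, per module and per variant, is what turns "email drove $X" into a unit-economics conversation.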

McKinsey’s personalization research confirms the economic logic: “Fast-growing companies generate 40% more revenue from personalization than their slower-growing counterparts. Personalization most often drives 5 to 15 percent revenue lift” (McKinsey). At Level 2, you start seeing that lift in real numbers, measured at the block and variant level.

This is where the email-as-performance-channel thesis gets concrete. Email has structurally better economics than paid media: owned audience, first-party identity, privacy-durable measurement. Level 2 is where you prove it with data your board can read.

Level 3: Cross-Block Systems — When Blocks Work Together (or Against Each Other)

Level 2 tells you which individual block or variant is performing. Level 3 answers a harder question: how do the blocks inside a single email interact as a system?

This is the stage where cross-block dynamics become visible. Does a loyalty-point smart banner in the header cannibalize the abandoned-cart product grid below it, or amplify it? When a promotional kicker at the bottom echoes the same offer as the hero, does click-through rise or does attention split? At Level 3, you have enough block-level data flowing across enough sends to answer these questions empirically, not hypothetically.

The practical stakes are high. A smart banner driving strong RPM on its own might be suppressing performance of the content below it. A Smart Kicker might be rescuing opens that would otherwise produce zero clicks. Without cross-block measurement, you optimize each module in isolation and miss the system-level revenue picture. Level 3 is where the email stops being a stack of independent blocks and starts behaving as an integrated performance unit — where the relationship between modules becomes a lever the team can actually pull.
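The cannibalization question above can be tested with a simple split: compare a module's attributed revenue on sends where a second module is present against sends where it isn't. A sketch under assumed send-level logs (the module names and revenue figures are invented for illustration):

```python
# Hypothetical send-level records: which modules rendered together,
# and the revenue attributed to each. All values are illustrative.
sends = [
    {"modules": {"loyalty_banner", "cart_grid"}, "revenue": {"cart_grid": 90.0}},
    {"modules": {"cart_grid"},                   "revenue": {"cart_grid": 140.0}},
    {"modules": {"loyalty_banner", "cart_grid"}, "revenue": {"cart_grid": 70.0}},
    {"modules": {"cart_grid"},                   "revenue": {"cart_grid": 120.0}},
]

def avg_revenue(block: str, companion: str, present: bool) -> float:
    """Average revenue for `block` on sends where `companion` is/isn't rendered."""
    rows = [s["revenue"].get(block, 0.0)
            for s in sends if (companion in s["modules"]) == present]
    return sum(rows) / len(rows)

with_banner = avg_revenue("cart_grid", "loyalty_banner", True)      # (90+70)/2
without_banner = avg_revenue("cart_grid", "loyalty_banner", False)  # (140+120)/2
print(f"cart_grid avg: {with_banner:.0f} with banner, {without_banner:.0f} without")
```

In this toy data the grid earns less when the banner is present, the cannibalization signature; the reverse gap would indicate amplification. A real analysis would control for segment and send context, but the comparison structure is the same.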

This is also where the template library starts to pay compounding dividends. Every new module added to the system doesn’t just add its own RPM; it changes the performance equation for every module around it. The data from cross-block interactions feeds back into the decisioning logic, making the next send smarter than the last.

Levels 4 and 5: From Multi-Send Coordination to Autonomous Operations

Level 3 optimizes within a single email. Level 4 zooms out to optimize across sends.

At Level 4, the system coordinates subject lines and hero titles across multiple sends to shape how a brand appears in the inbox over time. This matters more than most teams realize, because consumers search their inboxes. When a subscriber searches for your brand name — or a product category, or a sale — what they see is an aggregated list of your subject lines. Those subject lines aren’t just open-rate drivers for individual sends; they’re a searchable storefront that persists for weeks.

The revenue impact is concrete: over 10% of email revenue comes from messages opened more than 7 days after they were sent. That means subject-line strategy isn’t just about today’s open rate. It’s about whether your email is findable and compelling when a subscriber comes back to their inbox a week from now, searching for the deal they vaguely remember seeing. At Level 4, the coordination of subject lines and hero titles across a send calendar becomes a deliberate revenue lever — not an afterthought.

Zembula’s Campaign Decision Engine becomes the infrastructure backbone at this stage. Open-time decisioning means the content a subscriber sees is selected at the moment of open, not at the moment of send. The system chooses from 100+ behavioral use cases across abandoned cart, loyalty, browse abandonment, and offer management. And because over 10% of revenue comes from delayed opens, the content that renders a week after send can be dramatically different — and more relevant — than what would have been static at send time.
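Open-time decisioning can be pictured as a selection function evaluated at the moment of render rather than at send. The sketch below is illustrative only, the signal names and rules are assumptions for exposition, not Zembula's actual API or decision logic:

```python
from datetime import datetime, timedelta

def choose_block(signals: dict, opened_at: datetime, sent_at: datetime) -> str:
    """Pick the content block at open time from whatever signals are freshest.
    Signal names and precedence here are hypothetical."""
    age = opened_at - sent_at
    if signals.get("cart_items"):                      # live cart still has items
        return "abandoned_cart_banner"
    if age > timedelta(days=7) and signals.get("sale_live"):
        return "current_offer_banner"                  # send-time offer may have expired
    if signals.get("loyalty_points", 0) >= 500:
        return "loyalty_banner"
    return "default_hero"

sent = datetime(2025, 3, 1)
late_open = datetime(2025, 3, 10)                      # opened 9 days after send
print(choose_block({"sale_live": True}, late_open, sent))  # current_offer_banner
```

The point of the delayed-open branch: the same email, opened nine days later, renders the offer that is live now rather than the one that was live at send.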

At Level 5, the operating model shifts from a production team to a three-team structure: Editorial (narrative and guardrails), Data (template library and signal logic), and Performance Marketing (block-level dials). Production workload drops because the system handles content selection. The team focuses on strategy, creative direction, and optimization.

Prophet’s July 2025 analysis supports the velocity argument: organizations can leap from inconsistent content to scalable personalization in a single quarter, and IDC estimates GenAI will increase marketing productivity 40%+ by 2029 (Prophet). Level 5 autonomous email marketing is achievable within a single planning cycle, not a multi-year roadmap.

The Compounding Infrastructure Math: Template Library, Data Flywheel, and Revenue Per Subscriber Over 36 Months

Here’s where the CEO math gets interesting. David Swift’s modeling for an ecommerce brand sending 5M emails per month found that autonomous ML personalization at a conservative 18% RPE improvement delivers £4.56M in annual revenue delta. Over 36 months with compounding model improvement, the cumulative advantage exceeds £15M before accounting for staffing cost reductions of £1.2M+ (Swift, Medium).
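Swift's headline figures can be roughly reconstructed on the back of an envelope. The baseline revenue per email and the year-over-year improvement rate below are my assumptions, chosen to reproduce his published deltas, not numbers he states:

```python
# Back-of-envelope reconstruction of Swift's model (assumptions flagged).
EMAILS_PER_YEAR = 5_000_000 * 12   # 5M sends/month, per Swift
BASELINE_RPE = 0.4222              # £ per email -- implied, assumed, not stated
UPLIFT = 0.18                      # Swift's conservative RPE improvement

annual_delta = EMAILS_PER_YEAR * BASELINE_RPE * UPLIFT
print(f"Year-1 delta: £{annual_delta/1e6:.2f}M")        # ≈ £4.56M

# "Compounding model improvement": assume the uplift itself grows ~10%/year.
IMPROVEMENT = 0.10                 # illustrative assumption
cumulative = sum(annual_delta * (1 + IMPROVEMENT) ** y for y in range(3))
print(f"36-month cumulative: £{cumulative/1e6:.1f}M")   # > £15M
```

An 18% uplift on roughly £0.42 per email across 60M annual sends lands at £4.56M; three years of that delta with modest model improvement clears £15M, before the staffing savings.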

The template library is the mechanism that makes this compound. Every new smart banner template, every new product grid, every new hero variant extends what the system can show. BCG’s June 2025 research on AI ROI in finance found that top performers follow a “string-of-pearls” approach, linking connected use cases so infrastructure investment stretches further (BCG). That’s the exact pattern: Smart Banners first, then product grids, then hero modules, each building on the same infrastructure.

SPI Research’s 2025 Professional Services Maturity Benchmark validates this across industries: firms at Level 5 averaged 739% higher revenue growth than Level 1 organizations (Kantata/SPI Research). The proof that maturity models anchored to economic output produce measurable P&L gaps is not theoretical. It’s empirical.

The RPM metric is what makes this visible to a CFO. Revenue per mille (revenue per thousand impressions of a content block) is the email-native equivalent of the metric every paid media buyer already understands. When a CFO sees an email block generating strong RPM, that’s a language they already speak from the ad team’s reporting.

The CEO Decision: Investment Structure, De-Risk Timeline, and the 12-Week Structured Test That Produces the First Real Number

Sebastian Stange and his co-authors at BCG’s Center for CFO Excellence put it directly: “Teams generating strong ROI [from AI] are making different choices. They focus on value from the start, not on learning for learning’s sake” (BCG, June 2025).

The Level 1 to Level 2 transition follows that principle. You don’t start everywhere. You start where value is fastest: smart banners in broadcast email, the 95% of your broadcast volume that currently carries no content-level personalization. The structured test takes 12 weeks. It requires no daily workflow change. And it produces the first block-level CTC and RPM data the performance team has ever seen.

Twelve weeks is not a long time horizon when you consider the alternative: another year of rising send volume, declining revenue per subscriber, and zero visibility into what’s working inside the email. The 12-week window produces the first real number — a block-level RPM you can compare against every other marketing channel’s unit economics. That number either justifies the next phase of investment or it doesn’t. Either way, you’re making decisions from data instead of assumption.

Key Takeaways

  • Every published email maturity model measures technology adoption, not economic output. That means most “Leader” programs still run 95% of broadcast volume without content-level personalization and have no idea they’re sitting on a structural revenue decline.
  • Smart banners are the Level 2 entry point because they produce the first block-level RPM and CTC data — including variant-level attribution — without requiring any workflow change from your existing email team. The transition takes 12 weeks to reach a clear performance understanding.
  • Real revenue per subscriber has declined ~35% since 2018 while send frequency rose 63%. Level 1 has a compounding cost that doesn’t appear in any existing maturity scorecard.
  • Level 3 is about cross-block systems: how smart banners, product grids, hero modules, and kickers interact inside a single email. Optimizing blocks in isolation misses the system-level revenue picture.
  • Level 4 coordinates subject lines and hero titles across multiple sends. Over 10% of email revenue comes from messages opened 7+ days after send. Your subject lines are a searchable storefront — and most programs don’t manage them that way.
  • Email has structurally better unit economics than paid ads: owned audience, first-party identity, privacy-durable measurement. Average ecommerce ROAS fell to 2.87 in 2025. Mature email programs running block-level personalization operate at fundamentally different return multiples.
  • The three-team operating model at Level 5 (Editorial, Data, Performance Marketing) replaces production-first email work with strategy-first optimization. Every new template extends system reach. The asset appreciates, it doesn’t depreciate.
  • The compounding infrastructure math is real. Conservative modeling shows £15M+ cumulative advantage over 36 months for a 5M-send-per-month program. The BCG “string of pearls” approach (start with smart banners, add modules sequentially) is the de-risk path.
  • The CEO argument is not “personalization is good.” It’s that Level 1 has a compounding cost invisible to current scorecards, and the investment to reveal it takes 12 weeks and produces the first real number.

Robert Haydock co-founded Zembula with the mission to give retail performance marketers measurement through image personalization so they can grow revenue from owned channels.
