CIMM Paper Argues Programmatic Ad Tech Systematically Undervalues Media Quality

A new paper from the Coalition for Innovative Media Measurement challenges the ad industry's fixation on short-term outcomes, arguing that programmatic infrastructure ignores measurable differences in media quality. The paper proposes probabilistic metrics like attention scoring and contextual relevance as correctives.

By Jamie Okonkwo · 8 min read

The Coalition for Innovative Media Measurement released a paper at CIMM East in New York in early April that takes direct aim at one of programmatic advertising's foundational assumptions: that an impression is an impression. The paper, covered by AdExchanger, argues that programmatic ad tech systematically overlooks differences in media quality — and that the industry's fixation on tying media value to short-term outcomes has made the problem worse.

The argument is not new in principle. Media buyers have long known that a pre-roll ad on a premium streaming service performs differently than a banner buried in a mobile game. What CIMM's paper does differently is provide a structured framework for measuring those differences and embedding them into programmatic decisioning — moving the conversation from intuition to infrastructure.

The Core Problem: Programmatic Treats All Impressions as Commodities

Programmatic ad tech was built to optimize for audience targeting and cost efficiency. The bid request contains audience signals, device type, and format — but it largely ignores the quality of the media environment in which the ad appears. A CPM is a CPM, whether the ad runs during primetime on a top-tier streaming platform or at 3 AM on a low-engagement FAST channel.

As MediaPost reported, the CIMM paper highlights a specific example: major streaming platforms carry high price tags even if ads are served in the middle of the night when audiences are less receptive. The paper asks why advertisers should accept that 40% of their streaming budgets may be spent on overnight inventory where attention and engagement are fundamentally different from primetime placements.

This is not merely a pricing complaint. It is a measurement failure. If the systems used to plan, buy, and evaluate media cannot distinguish between high-quality and low-quality impressions, then every downstream metric — brand lift, attribution, ROAS — is built on a flawed foundation.

The Framework: Separating Media, Creative, and Audience Quality

The paper proposes a three-part quality framework that separates media quality from creative quality and audience quality. This distinction matters because programmatic has historically collapsed all three into a single metric: did the impression convert?

  • Media quality encompasses the environment — content adjacency, time of day, platform reputation, ad load, share of screen, and whether the viewer is actively engaged with the content.
  • Creative quality covers the ad itself — format, length, messaging clarity, and relevance to the viewer.
  • Audience quality addresses whether the right person saw the ad — traditional targeting effectiveness.

CIMM argues that programmatic infrastructure optimizes heavily for audience quality (through data segments and identity resolution) and somewhat for creative quality (through DCO and A/B testing), but almost entirely ignores media quality. The result is that two impressions reaching the same audience with the same creative can produce wildly different outcomes depending on where and when they run — and the buying system has no way to account for that difference at bid time.
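The separation above can be made concrete with a small sketch. This is purely illustrative (the field names, weights, and 0-to-1 scales are assumptions, not anything CIMM's paper specifies), but it shows how a composite bid-time score changes once media quality stops being weighted at zero:

```python
from dataclasses import dataclass

# Hypothetical schema: CIMM's paper defines the framework conceptually,
# not as a data structure. Scores here are normalized to 0-1.
@dataclass
class ImpressionQuality:
    audience_match: float   # did the impression reach the target segment?
    creative_fit: float     # format, length, and message relevance
    media_quality: float    # environment: adjacency, daypart, ad load

    def composite(self, weights=(0.4, 0.3, 0.3)) -> float:
        """Weighted blend of the three dimensions. Today's programmatic
        stacks effectively set the media_quality weight near zero."""
        wa, wc, wm = weights
        return (wa * self.audience_match
                + wc * self.creative_fit
                + wm * self.media_quality)

# Same audience, same creative, different environments:
prime = ImpressionQuality(audience_match=0.9, creative_fit=0.8, media_quality=0.85)
overnight = ImpressionQuality(audience_match=0.9, creative_fit=0.8, media_quality=0.25)
print(round(prime.composite(), 3), round(overnight.composite(), 3))
```

With media quality weighted in, the two impressions score 0.855 and 0.675 despite being identical on audience and creative, which is exactly the difference the paper says bidding systems cannot currently see.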

Probabilistic Over Deterministic: Attention as a Quality Signal

Perhaps the paper's most consequential recommendation is the shift from deterministic to probabilistic measurement for media quality. Rather than binary signals — viewable or not, completed or not — the paper advocates for non-binary, relative, and probabilistic metrics that assess quality on a spectrum.

Attention scoring is the primary example. Companies like Adelaide, TVision, and Lumen have built models that predict how much attention an impression is likely to receive based on contextual signals. These predictions are inherently probabilistic — they estimate likelihood, not certainty. But CIMM argues that probabilistic estimates of attention are more useful than deterministic completion rates because they capture the quality dimension that completion metrics miss.
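One way to see why a probabilistic signal beats a binary gate is to fold it into the effective price. The sketch below computes a cost per thousand attended impressions; the CPMs and attention probabilities are invented for illustration, and real vendor models are proprietary:

```python
# Invented numbers for illustration; real attention probabilities come
# from vendor models (Adelaide, TVision, Lumen) and are not public.

def attention_adjusted_cpm(cpm: float, p_attention: float) -> float:
    """Cost per thousand *attended* impressions. A binary viewability
    check would treat both placements below as identical; a probabilistic
    attention estimate prices them very differently."""
    return cpm / p_attention

# Two placements with the same sticker CPM and 100% viewability:
primetime = attention_adjusted_cpm(cpm=30.0, p_attention=0.60)
overnight = attention_adjusted_cpm(cpm=30.0, p_attention=0.15)
print(primetime, overnight)  # 50.0 200.0
```

On a viewability basis the two buys are indistinguishable; on an attention-adjusted basis the overnight placement costs four times as much per attended impression, which is the spread the paper argues bidding systems should be able to see.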

This builds directly on CIMM and the IAB's Attention Measurement Playbook released in November 2025, which established standardized definitions and validation requirements for attention metrics. The new paper takes the next step by arguing these metrics should be integrated into programmatic bidding infrastructure — not just used for post-campaign analysis.

Contextual Relevance as a Measurable Quality Dimension

Beyond attention, the paper highlights contextual relevance as an underutilized quality signal. The argument is straightforward: an ad for a travel brand running alongside travel content is more likely to drive outcomes than the same ad running alongside unrelated content, even if both impressions target the same audience segment.

CIMM's 2026 priorities include an empirical study on the effectiveness of contextual advertising in CTV — a signal that the organization intends to back this recommendation with data. If the study demonstrates measurable outcome differences based on content alignment, it would strengthen the case for incorporating contextual signals into bid evaluation alongside audience and attention data.

Why This Matters Now

The timing of this paper is not accidental. Several industry dynamics make the media quality argument more urgent than it was a year ago.

First, CTV inventory is expanding rapidly, but quality varies enormously. The rise of FAST channels, ad-supported tiers from premium platforms, and programmatic CTV buying means advertisers are accessing more streaming inventory than ever — but the quality spread between the best and worst of that inventory is wider than in linear TV, where a limited number of networks maintained relatively consistent quality standards.

Second, the IAB's State of Data 2026 report found that 75% of buy-side leaders say core measurement approaches underperform on rigor, timeliness, and trust. If measurement is already under scrutiny, ignoring a major quality dimension makes the credibility problem worse.

Third, attention metrics have moved from experimental to operational. With xpln.ai integrated into Index Exchange for pre-bid attention targeting and Adelaide pursuing MRC accreditation, the infrastructure to act on attention-based quality signals already exists. CIMM's paper provides the intellectual framework to justify deploying it more broadly.

What This Means for Measurement Teams

Audit your quality blind spots. If your programmatic buying evaluates audience and creative but not media quality, you are making bid decisions with incomplete information. Ask your DSP and SSP partners what media quality signals — attention scores, contextual relevance, time-of-day performance — are available in their platforms.

Question overnight and low-engagement inventory. The paper's critique of overnight streaming spend is specific and actionable. Pull daypart reports for your CTV campaigns and compare performance across time slots. If a significant portion of impressions runs when attention is lowest, that is a media quality issue, not a targeting issue.
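A daypart pull of this kind is straightforward even without a BI tool. The sketch below runs on an invented toy log (the field names and daypart boundaries are assumptions; a real delivery report would come from your DSP):

```python
from collections import defaultdict

# Toy impression log with invented fields; substitute your DSP's
# delivery report export here.
impressions = [
    {"hour": 20, "completed": True,  "converted": True},
    {"hour": 21, "completed": True,  "converted": False},
    {"hour": 3,  "completed": True,  "converted": False},
    {"hour": 2,  "completed": False, "converted": False},
]

def daypart(hour: int) -> str:
    """Assumed boundary: midnight to 6 AM counts as overnight."""
    return "overnight" if 0 <= hour < 6 else "daytime"

counts = defaultdict(lambda: {"imps": 0, "conv": 0})
for imp in impressions:
    bucket = counts[daypart(imp["hour"])]
    bucket["imps"] += 1
    bucket["conv"] += int(imp["converted"])

for part, c in counts.items():
    share = c["imps"] / len(impressions)
    print(f"{part}: {share:.0%} of impressions, {c['conv']} conversions")
```

If the overnight bucket carries a large share of impressions but few conversions, that is the media quality gap the paper describes, surfaced from data most teams already have.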

Test probabilistic quality metrics. If you are not already incorporating attention scoring or contextual relevance into your media evaluation, start with a controlled test. Compare campaign performance between attention-optimized and standard buys to quantify the quality gap in your specific category.
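A controlled test of this kind reduces to a simple lift comparison between the two cells. The numbers below are placeholders; the point is the shape of the calculation, not the values:

```python
# Placeholder cell results; plug in your own test and control numbers.

def conversion_rate(conversions: int, impressions: int) -> float:
    return conversions / impressions

def relative_lift(test_rate: float, control_rate: float) -> float:
    """How much better the attention-optimized cell performed,
    relative to the standard buy."""
    return (test_rate - control_rate) / control_rate

standard = conversion_rate(conversions=180, impressions=100_000)
optimized = conversion_rate(conversions=243, impressions=100_000)
print(f"lift: {relative_lift(optimized, standard):.0%}")
```

For a trustworthy read, hold audience targeting and creative constant across cells so that any lift can be attributed to the media quality dimension alone, which is the isolation the CIMM framework is designed to enable.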

Connect quality to incrementality. The ultimate test of media quality is whether higher-quality impressions drive more incremental outcomes. If you are running incrementality tests, segment results by media quality indicators to determine whether the quality framework CIMM proposes actually predicts outcome differences in your data.

The ad industry built programmatic to make buying efficient. CIMM's paper argues that efficiency without quality assessment is a false economy — you might be buying the right audience in the wrong environment and measuring the outcome without understanding the cause. For measurement teams, the paper is a reminder that what you measure shapes what you optimize, and if quality is not in the measurement framework, it will never be in the bid.