Jessica’s weekly performance report lands in her inbox right on time. On the surface, it looks good: paid search is humming, social is efficient and remarketing is still on top for ROAS. The agency’s reports show clear wins, the dashboards are clean and the recommendations seem obvious.
But Jessica, senior marketing manager at a national retail brand, hesitates. She knows something the report doesn’t: her audience doesn’t move in a straight line.
- They’re mobile-first, screen-hopping, content-absorbing and attention-fractured.
- They’re discovering her brand through podcasts, CTV and mobile games — spaces where ad impact is real but measurement is fuzzy.
What should she optimize? The tidy data? Or the complex, messier consumer truth?
When the highest performers are just the best-measured channels
There’s a reason remarketing, paid search and lower-funnel social ads often look like top performers. They appear in the ad platform reports with strong last-click conversions, clean ROAS and well-defined audience targeting. But what about:
- The CTV campaign that made someone search for the brand later?
- The mobile game placement that sparked awareness?
- The podcast sponsorship that built trust before the user even hit the website?
Those channels don’t show up in standard reporting. And they certainly don’t get credit in a last-click model.
What Jessica sees is classic: the channels that are easiest to measure get the most credit. But they’re not always doing the most work.
Dig deeper: How smarter measurement can fix marketing’s performance trap
The role of the ad server: The (often overlooked) source of truth
Adding to the complexity? The ad server. Jessica’s media runs across multiple platforms — search, social, DSPs — but her media team uses a third-party ad server to stitch together impressions, clicks and conversions. It’s supposed to be the neutral source of truth. But it doesn’t always align with platform-reported data.
Why? Platforms tend to self-attribute. The ad server, meanwhile, is more conservative. It tracks unassisted conversions using a stricter attribution window and favors last-click attribution. This helps avoid double-counting but also tends to downplay upper-funnel impact, especially when view-through behavior is involved.
When Jessica compares ad platform performance with the ad server’s report, the numbers don’t match. And neither of them includes what her internal CRM and analytics platform is showing. Welcome to the data conflict triangle: ad platform vs. ad server vs. internal data.
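To see why those numbers rarely line up, here’s a deliberately simplified sketch (hypothetical data and rules, not any specific vendor’s logic). Each platform claims any conversion it touched within its own lookback window; the ad server credits each conversion exactly once, to the last click.

```python
from datetime import datetime, timedelta

# Hypothetical touchpoints leading to one conversion.
# Each tuple: (platform, event_type, timestamp)
touchpoints = [
    ("ctv_dsp", "impression", datetime(2024, 5, 8, 21, 0)),
    ("social", "click", datetime(2024, 5, 9, 13, 0)),
    ("search", "click", datetime(2024, 5, 10, 11, 30)),
]
conversion_time = datetime(2024, 5, 10, 12, 0)

# Platform logic (simplified): self-attribute any conversion
# the platform touched inside its own lookback window.
platform_windows = {
    "ctv_dsp": timedelta(days=30),
    "social": timedelta(days=7),
    "search": timedelta(days=30),
}
platform_reported = {
    p: sum(1 for plat, _, ts in touchpoints
           if plat == p and conversion_time - ts <= platform_windows[p])
    for p in platform_windows
}

# Ad server logic (simplified): one conversion, credited once,
# to the last click before it.
last_click = max((t for t in touchpoints if t[1] == "click"),
                 key=lambda t: t[2])
ad_server_reported = {last_click[0]: 1}

print(platform_reported)   # {'ctv_dsp': 1, 'social': 1, 'search': 1} -> sums to 3
print(ad_server_reported)  # {'search': 1} -> sums to 1
```

Three platforms each report the same conversion, so their totals sum to three; the ad server reports one. Add the CRM’s own count and you have three honest systems telling three different stories.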
View-through conversions: A messy middle ground
Jessica knows she can’t ignore view-through conversions — especially in channels like display, video and CTV, where clicks are rare, but impressions drive brand recall. But not all view-throughs are created equal.
- Assisted view-through conversions suggest the ad played a role but wasn’t the final touch.
- Unassisted (or last-view) conversions credit the last impression before the conversion.
View-throughs can dramatically change which channels look like top performers depending on how attribution is set up. The ad platform might show CTV driving conversions via view-through. But her ad server might not agree — or might show those users converting later via search or direct traffic.
What’s real? It’s not about right or wrong. It’s about knowing what kind of influence you’re measuring — and making sure your optimization decisions reflect that nuance.
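To make that distinction concrete, here’s a toy classifier (the rules and labels are illustrative assumptions; real ad servers layer lookback windows, frequency rules and dedup logic on top of this):

```python
def classify_view_through(events):
    """Label the view-through role in a converting journey.

    events: chronological list of (event_type, channel) tuples,
    where event_type is "impression" or "click", ending at the
    moment of conversion. Simplified, hypothetical rules.
    """
    if not events:
        return "no touch recorded"
    last_type, last_channel = events[-1]
    if last_type == "click":
        # A click closed the journey, so any earlier impressions
        # count as assisted view-throughs for their channels.
        assists = sorted({ch for t, ch in events[:-1] if t == "impression"})
        return f"last-click: {last_channel}; assisted view-through: {assists}"
    # No click at all: the final impression takes unassisted
    # (last-view) credit.
    return f"unassisted view-through: {last_channel}"

# CTV impression, then a search click: CTV assists, search converts.
print(classify_view_through([("impression", "ctv"), ("click", "search")]))
# Display impression and no click anywhere: display gets last-view credit.
print(classify_view_through([("impression", "display")]))
```

Same journey, very different report lines, depending on which rule your stack applies. That’s the nuance Jessica has to keep in view before reallocating budget.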
The cross-device conundrum
And then there’s the wildcard: cross-device tracking. Jessica’s audience doesn’t just move across channels — they move across screens. A user might:
- Watch a CTV ad at night.
- Search the brand on their phone the next morning.
- Click a remarketing ad on desktop at lunch.
But those touchpoints look disconnected unless you’ve got strong identity resolution (and most brands don’t have perfect stitching across devices).
Guess who gets the credit? The last one: the desktop click. The problem isn’t that last-click doesn’t matter; it’s that everything before it mattered too — you just didn’t see it.
Mobile tracking issues, app-to-web journeys and privacy limitations (think iOS and SKAdNetwork) break attribution even further. Upper-funnel mobile impact is often ignored.
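A toy example of what stitching changes (all IDs and events here are invented): grouped by device, the journey looks like three unrelated users, and only the desktop remarketing click converts. Grouped by a resolved person ID, the full CTV-to-search-to-remarketing path appears.

```python
# Hypothetical event log: each device carries its own anonymous ID,
# and person_id is what an identity graph would resolve them to.
events = [
    {"device_id": "ctv-9f2", "person_id": "hh_1",
     "channel": "ctv", "type": "impression"},
    {"device_id": "phone-a41", "person_id": "hh_1",
     "channel": "branded_search", "type": "click"},
    {"device_id": "desktop-77c", "person_id": "hh_1",
     "channel": "remarketing", "type": "click", "converted": True},
]

def journeys_by(events, key):
    """Group touchpoints into journeys using `key` as the user ID."""
    journeys = {}
    for e in events:
        journeys.setdefault(e[key], []).append(e["channel"])
    return journeys

# Without stitching: three "users", and only the last click converts.
print(journeys_by(events, "device_id"))
# {'ctv-9f2': ['ctv'], 'phone-a41': ['branded_search'], 'desktop-77c': ['remarketing']}

# With resolved identity: one person, one three-touch journey.
print(journeys_by(events, "person_id"))
# {'hh_1': ['ctv', 'branded_search', 'remarketing']}
```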
Dig deeper: The real reason marketing measurement keeps failing
What should you optimize then?
This is where great marketers rise above good reporting. They shift their focus from “what performed?” to “what influenced?” Here’s how.
1. Look at attribution in layers
Compare multiple attribution models: last-click, first-touch and multi-touch. See how performance shifts. Channels like podcasts, CTV and gaming may look weak in last-click but strong in assist roles. If your CTV and podcast efforts show lift in first-touch or assist models, that’s a signal, not a failure.
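Here’s a minimal sketch of that layered comparison, with a simple linear split standing in for multi-touch (the journeys are invented; production MTA models are far more sophisticated):

```python
from collections import defaultdict

# Hypothetical converting journeys, each an ordered list of channels.
journeys = [
    ["podcast", "branded_search", "remarketing"],
    ["ctv", "social", "paid_search"],
    ["paid_search"],
    ["ctv", "branded_search"],
]

def attribute(journeys, model):
    """Credit one conversion per journey under the given model."""
    credit = defaultdict(float)
    for path in journeys:
        if model == "last_click":
            credit[path[-1]] += 1.0
        elif model == "first_touch":
            credit[path[0]] += 1.0
        elif model == "linear":  # simplest multi-touch: equal split
            for channel in path:
                credit[channel] += 1.0 / len(path)
    return dict(credit)

for model in ("last_click", "first_touch", "linear"):
    print(model, attribute(journeys, model))
# last_click:  CTV and podcast get nothing.
# first_touch: CTV leads and podcast appears.
# linear:      credit spreads across the whole path.
```

The spread between models is the point: in this toy data, CTV earns zero last-click credit but leads first-touch. That gap is the signal.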
2. Use the ad server as a check, not the bible
Ad servers help clean up inflated platform data, but they’re not flawless. Use them to balance reporting and to see neutral cross-platform behavior, especially when trying to untangle overlapping conversions.
3. Isolate impact with holdouts or geo tests
Run holdout tests if you’re unsure about a channel’s value — say, CTV or podcasts. Pause the campaign in a region or segment. If other channels dip as well, guess what? That “underperforming” channel was pulling more weight than you thought.
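A back-of-the-envelope version of that read, with invented numbers (a real geo test would use matched markets, longer windows and significance checks):

```python
# Hypothetical weekly conversions before and during a CTV pause
# in the holdout regions. Control regions keep CTV running.
baseline     = {"holdout": 1200, "control": 1150}
during_pause = {"holdout": 980,  "control": 1165}

# What the holdout regions "should" have done with no CTV effect:
# scale their baseline by the control regions' drift over the period.
control_drift = during_pause["control"] / baseline["control"]
expected = baseline["holdout"] * control_drift

lift = (expected - during_pause["holdout"]) / expected
print(f"expected without CTV: {expected:.0f}")        # ~1216
print(f"observed:             {during_pause['holdout']}")
print(f"implied CTV contribution: {lift:.1%}")        # ~19.4%
```

If the holdout lands well below what the control regions predict, the paused channel was doing invisible work.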
4. Look for proxy signals where direct tracking fails
In cross-device or mobile-heavy journeys, use proxy indicators:
- Branded search volume.
- Direct traffic spikes.
- Social mentions or TikTok engagement.
- Post-exposure surveys.
Even imperfect signals can guide better decisions than click-only data.
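One rough way to pressure-test a proxy signal (numbers invented; correlation isn’t causation, which is what the holdout test above is for): line up weekly branded-search volume against upper-funnel spend and check how tightly they move together.

```python
import statistics

# Hypothetical weekly series: combined CTV + podcast spend ($k)
# and branded-search query volume over the same eight weeks.
upper_funnel_spend = [10, 12, 15, 8, 20, 22, 18, 9]
branded_searches = [900, 980, 1100, 820, 1350, 1400, 1250, 860]

def pearson_r(xs, ys):
    """Plain Pearson correlation between two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"spend vs. branded search: r = {pearson_r(upper_funnel_spend, branded_searches):.2f}")
# In practice, also test lagged versions of the spend series:
# awareness channels often show up in search a week or two later.
```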
Dig deeper: The smarter approach to marketing measurement
Bottom line: Trust the data, but trust your audience more
You need data. You need structure. But your dashboard is not your strategy. Because when channels don’t perform in reports — but your audience is there — it’s not a media problem. It’s a measurement one.
- Optimize what works, not just what’s visible.
- Ask hard questions before shifting spend.
- And never forget: the customer journey doesn’t care about your attribution model.