
Full-Funnel Growth Engineering: How To Stop Lying To Yourself About What's Working

Most attribution is theater. A real growth engineering practice rebuilds the measurement layer from first principles and treats it as infrastructure.

If you ask the average growth team what is working, they will pull up a dashboard. The dashboard will tell them Facebook is up forty percent week over week, Google is flat, and TikTok is the new cheap channel. The dashboard is wrong. Not directionally wrong — structurally wrong.

Last-click attribution overcounts the platforms that show ads after the customer was already going to buy. Platform-reported numbers come from the platform that is paid more when the number is higher. View-through windows are arbitrary. Cross-device, cross-browser, cross-app stitching has been broken since Apple shipped App Tracking Transparency. The walled gardens send you back aggregated, modeled, sometimes lagged numbers that they themselves admit are estimates. And then a marketing team takes those numbers, averages them, and decides where to put two million dollars next month.

This is not measurement. This is theater dressed as measurement. Every operator I work with eventually realizes their dashboards have been lying for years, and the budget decisions made off those numbers were a coin flip with worse-than-coin-flip downside. Rebuilding the measurement layer is the highest-leverage thing a growth team can do.

Why dashboards lie

Three structural problems make most growth dashboards untrustworthy.

First, the post-iOS 14 privacy changes broke the ID graph. The deterministic device identifiers that the entire ad tech stack was built on are gone for a meaningful slice of traffic. The platforms backfilled with modeled conversions, which means a Facebook-reported conversion is now partly a guess. The guess is calibrated against ground truth in aggregate, but it is not reliable per-campaign or per-creative.

Second, third-party data has effectively died. Third-party cookies are deprecated or aggressively expired. Cross-site tracking is broken in most browsers by default. The data that powered ten years of attribution models is no longer being collected. The dashboards still display the old views, but they are running on increasingly thin ground truth.

Third, the walled gardens have a structural conflict of interest. Each platform wants credit for every conversion it can plausibly claim. Add up the conversions reported by Google, Meta, TikTok, and your other channels for a typical week, and you will often see two to three times more conversions than you actually had. Everyone is double-counting. Everyone is incentivized to.

What a real measurement architecture looks like

A working measurement layer rests on four pieces. None of them are optional.

Server-side event collection. Stop relying exclusively on client-side pixels. Capture every meaningful event server-side — orders, signups, qualified leads, downstream behavior — and replay them out to the ad platforms via the Conversions API, the GA4 Measurement Protocol, and server-side tags. Server-side data is your source of truth. Client-side data is what you send to the platforms when you have no choice. This single change typically improves match rates by twenty to forty percent and gives you data the platforms cannot lose for you.
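
As a sketch of the replay pattern, here is roughly what sending a server-side purchase event to Meta's Conversions API looks like in Python. The pixel ID, access token, and order payload are placeholders, and the exact field names should be checked against Meta's current CAPI documentation; the point is that the event originates from your own order system, carries an event ID for deduplication against the pixel, and hashes identifiers before they leave your infrastructure.

```python
import hashlib
import time
import requests  # assumes the requests library is installed

PIXEL_ID = "YOUR_PIXEL_ID"          # placeholder
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder
CAPI_URL = f"https://graph.facebook.com/v19.0/{PIXEL_ID}/events"

def hash_email(email: str) -> str:
    """Meta expects user identifiers normalized and SHA-256 hashed."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

def replay_purchase(order: dict) -> None:
    """Send one server-side purchase event to the Conversions API.

    `order` is assumed to come from your own order system, e.g.
    {"email": "...", "value": 49.0, "currency": "USD", "order_id": "123"}.
    """
    payload = {
        "data": [{
            "event_name": "Purchase",
            "event_time": int(time.time()),
            "action_source": "website",
            "event_id": order["order_id"],   # lets Meta dedupe against the client-side pixel
            "user_data": {"em": [hash_email(order["email"])]},
            "custom_data": {"value": order["value"], "currency": order["currency"]},
        }]
    }
    resp = requests.post(CAPI_URL, json=payload, params={"access_token": ACCESS_TOKEN})
    resp.raise_for_status()
```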

Identity resolution. Build a persistent customer identifier that you control. Email-based hashed IDs, logged-in user IDs, first-party cookies with long horizons. Stitch sessions to identities at every touchpoint where the customer self-identifies. The goal is not to track people across the open web — that ship has sailed. The goal is to know your own customer journey on your own surfaces. If a customer signed up six weeks ago and converted today, your system should know it was the same person.
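
A minimal sketch of the stitching step, assuming your event stream carries an anonymous session ID plus whatever known identifiers appear when a customer logs in or submits an email. A union-find over co-occurring identifiers is one common way to collapse them into a single customer key; the field names here are illustrative, not a prescribed schema.

```python
class IdentityGraph:
    """Minimal union-find over identifiers (anonymous IDs, user IDs, hashed emails)."""

    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def build_customer_keys(events):
    """events: iterable of dicts like
    {"anonymous_id": "anon-123", "user_id": "u-9", "hashed_email": None}.
    Any identifiers that co-occur on one event are merged into one customer key.
    """
    graph = IdentityGraph()
    for e in events:
        ids = [v for v in (e.get("anonymous_id"), e.get("user_id"), e.get("hashed_email")) if v]
        if not ids:
            continue
        graph.find(ids[0])                  # register even unlinked identifiers
        for other in ids[1:]:
            graph.union(ids[0], other)
    # Map every identifier seen to its canonical customer key.
    return {identifier: graph.find(identifier) for identifier in list(graph.parent)}
```

Rerun nightly over the full event history and persisted in the warehouse, this mapping is what lets the six-weeks-ago signup and today's conversion resolve to the same person.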

MMM as the truth source. Marketing Mix Modeling is the only attribution method that works at the channel level when the ID graph is broken. You take total spend per channel per week, total conversions, control variables (seasonality, promotions, holidays, brand search volume, weather if relevant), and you fit a model that estimates each channel's contribution to total outcomes. It is messy. It is statistically demanding. The numbers it produces are estimates with confidence intervals, not deterministic credits. But it is the right answer when deterministic credit is impossible. Open-source MMM tooling (Robyn, LightweightMMM, Meridian) has matured to the point that any serious team can run it. Calibrated MMM is the closest thing to truth most growth teams will ever have.
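
To make the shape of the model concrete, here is a deliberately toy sketch: geometric adstock per channel, then a non-negative regression of weekly conversions on adstocked spend plus an intercept for baseline demand. Real MMMs add saturation curves, seasonality and promotion controls, priors, and calibration, which is exactly what Robyn, LightweightMMM, and Meridian exist to handle; the fixed decay rates and the data layout below are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import nnls  # non-negative least squares

def adstock(spend: np.ndarray, decay: float) -> np.ndarray:
    """Geometric carryover: this week's effect includes a decayed tail of past spend."""
    out = np.zeros(len(spend))
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + decay * carry
        out[t] = carry
    return out

def fit_toy_mmm(spend_by_channel: dict, conversions: np.ndarray, decays: dict):
    """spend_by_channel: {"meta": weekly spend array, "google": ...}.
    decays: per-channel carryover rates (in practice estimated, not fixed).
    Returns fitted coefficients and estimated weekly contribution per channel.
    """
    channels = list(spend_by_channel)
    X = np.column_stack([adstock(np.asarray(spend_by_channel[c], dtype=float), decays[c])
                         for c in channels])
    X = np.column_stack([X, np.ones(len(conversions))])   # intercept = baseline demand
    coefs, _ = nnls(X, np.asarray(conversions, dtype=float))
    contribution = {c: coefs[i] * X[:, i] for i, c in enumerate(channels)}
    return coefs, contribution
```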

Incrementality testing as the audit. MMM gives you the ongoing read. Geo holdouts, ghost ads, and PSA tests give you the periodic ground-truth audit. Pick a market, turn off a channel for two to four weeks, measure what happens to your overall conversion rate, and compare it to the MMM's prediction. If they agree, you trust the MMM. If they disagree, you fix the MMM. Incrementality testing is the calibration layer that keeps the model honest.
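
A rough sketch of the audit arithmetic, assuming a simple scaled-control counterfactual: use the pre-period relationship between the holdout geo and comparable control geos to estimate what the holdout would have done, measure the shortfall while the channel was off, and compare it to the contribution the MMM claims for that channel in that geo. The 25 percent agreement tolerance below is an arbitrary placeholder, not a standard.

```python
import numpy as np

def holdout_lift(holdout_pre, holdout_test, control_pre, control_test):
    """holdout_pre/test: weekly conversions in the geo where the channel was paused.
    control_pre/test: weekly conversions in comparable geos where it kept running."""
    scale = np.sum(holdout_pre) / np.sum(control_pre)      # pre-period relationship
    expected = np.asarray(control_test, dtype=float) * scale  # counterfactual for the holdout
    lift = np.sum(holdout_test) - np.sum(expected)         # negative if pausing the channel hurt
    return lift, expected

def compare_to_mmm(observed_lift, mmm_predicted_contribution, tolerance=0.25):
    """Flag disagreement when the test and the model differ by more than `tolerance`."""
    # Pausing the channel should cost roughly what the MMM says it contributes.
    gap = abs(-observed_lift - mmm_predicted_contribution) / max(mmm_predicted_contribution, 1e-9)
    return "consistent" if gap <= tolerance else "recalibrate the MMM"
```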

The four numbers that actually matter

Most growth dashboards have fifty metrics. You need four. Everything else is a distraction or a leading indicator of one of these.

CAC payback period. How long does it take a cohort of acquired customers to generate enough contribution margin to repay the cost of acquiring them? Twelve months is a reasonable benchmark for many subscription businesses. Six is excellent. Twenty-four is a problem. CAC payback ties unit economics to cash flow in a way LTV/CAC alone does not.
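
The calculation itself is just a cumulative sum against the cohort's acquisition cost; a minimal sketch, with illustrative numbers:

```python
def cac_payback_months(cac, monthly_contribution_margin):
    """Return the first month (1-indexed) where cumulative contribution margin
    from a customer in the cohort covers the acquisition cost, or None if it never does."""
    cumulative = 0.0
    for month, margin in enumerate(monthly_contribution_margin, start=1):
        cumulative += margin
        if cumulative >= cac:
            return month
    return None

# Illustrative numbers only: $120 CAC, $15/month contribution margin per customer.
print(cac_payback_months(120.0, [15.0] * 24))   # -> 8
```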

Contribution margin per customer. Revenue minus variable costs (cost of goods, payment processing, fulfillment, customer support attributable to that customer). This is the actual money the customer generates for you, not the topline. Most growth teams optimize toward revenue and discover later that revenue and contribution margin moved in opposite directions.

LTV/CAC by cohort. Aggregate LTV/CAC is a lie because it is dominated by survivor bias and old customers. By cohort, you can see whether the system is improving or degrading over time. If your March 2024 cohort has a 4x LTV/CAC and your March 2025 cohort has a 2.5x LTV/CAC, the channels are getting worse, the customers are getting worse, or both. Aggregate numbers hide the trend.
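
A small pandas sketch of the by-cohort view, with made-up numbers that mirror the example above; the column names are assumptions about what your warehouse exposes, not a required schema:

```python
import pandas as pd

# Illustrative cohort-level extract; in practice this comes out of your warehouse.
cohorts = pd.DataFrame({
    "cohort_month":        ["2024-03", "2024-09", "2025-03"],
    "customers":           [1000, 1100, 1200],
    "total_cac":           [100_000, 121_000, 144_000],
    "contribution_margin": [400_000, 363_000, 360_000],  # to date
})

cohorts["cac_per_customer"] = cohorts["total_cac"] / cohorts["customers"]
cohorts["ltv_to_date"] = cohorts["contribution_margin"] / cohorts["customers"]
cohorts["ltv_cac"] = cohorts["ltv_to_date"] / cohorts["cac_per_customer"]

print(cohorts[["cohort_month", "ltv_cac"]])
# A falling ltv_cac down the cohort_month column is exactly the degradation
# that an aggregate LTV/CAC number hides.
```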

Marginal CAC at the next dollar. The most underused number. The next dollar of spend at any given channel is more expensive than the average dollar already spent. Average CAC tells you what already happened. Marginal CAC tells you what happens if you increase the budget. Channels with low average CAC and high marginal CAC are saturated; spending more is wasted. Channels with high average CAC but flat marginal CAC have room to scale. The growth teams that scale efficiently make the marginal calculation explicitly. The teams that skip it pour money into channels that have stopped responding.
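
One way to make the marginal number explicit is to fit a diminishing-returns curve to a channel's historical weekly spend and conversions, then take its derivative at the current spend level. The power-law form below is an assumption (any concave response curve works); marginal CAC is the inverse of the marginal conversions per dollar.

```python
import numpy as np

def fit_power_response(spend: np.ndarray, conversions: np.ndarray):
    """Fit conversions ≈ a * spend**b by least squares in log space.
    b < 1 means diminishing returns; the functional form is an assumption."""
    mask = (spend > 0) & (conversions > 0)
    b, log_a = np.polyfit(np.log(spend[mask]), np.log(conversions[mask]), deg=1)
    return np.exp(log_a), b

def average_cac(a: float, b: float, current_spend: float) -> float:
    return current_spend / (a * current_spend ** b)

def marginal_cac(a: float, b: float, current_spend: float) -> float:
    """Cost of the next conversion = 1 / (d conversions / d spend) at current spend."""
    marginal_conversions_per_dollar = a * b * current_spend ** (b - 1)
    return 1.0 / marginal_conversions_per_dollar
```

A convenient property of the power form: marginal CAC equals average CAC divided by the exponent b, so a channel fitted at b = 0.6 is already paying roughly 1.7 times its average CAC for the next conversion.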

You cannot allocate budget intelligently if your numbers are reported instead of calculated. Calculate them yourself, with your own data, and stop trusting the platform self-report.

How to run an experimentation program that doesn't lie

Most A/B testing programs are sophisticated forms of overfitting. The problems are familiar. Underpowered tests. Peeking at results before the test concludes. Testing too many variants at once. Not pre-registering the metric. Calling tests "winners" based on noise. I have seen teams run a hundred tests a quarter, declare an eighty percent win rate, and ship effectively zero compounding improvement.

Three rules clean this up. First, every test gets a sample size calculation up front, based on the smallest effect size that would justify shipping the change. If the test is not powered to detect that effect, it does not run. Second, the metric is pre-registered and frozen before the test starts. No fishing for which segment moves. Third, no peeking. The test runs to completion, even if the early read looks great. Sequential tests with proper sequential statistics are fine; ad-hoc early stopping is not.
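
The up-front power calculation in the first rule is a few lines. Here is a sketch for a conversion-rate test using the standard two-proportion approximation; the baseline rate and minimum detectable lift are placeholders you replace with the smallest effect that would actually justify shipping.

```python
from scipy.stats import norm

def sample_size_per_arm(baseline_rate: float, min_detectable_lift: float,
                        alpha: float = 0.05, power: float = 0.8) -> int:
    """Visitors needed in each arm to detect an absolute lift of
    `min_detectable_lift` over `baseline_rate` with a two-sided test."""
    p1 = baseline_rate
    p2 = baseline_rate + min_detectable_lift
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return int(round(n))

# Illustrative: 4% baseline conversion, smallest lift worth shipping is +0.5pp.
print(sample_size_per_arm(0.04, 0.005))   # -> roughly 25,500 visitors per arm
```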

Run fewer, larger, better-powered tests. Ship the ones that win. Hold the line on the ones that draw.

The one report I make every operator build

One page. Updated weekly. Three sections.

Top: total spend, total conversions, blended CAC, contribution margin, by week, last twelve weeks.

Middle: MMM-attributed contribution by channel, with confidence intervals, plus the latest incrementality test result for each major channel and how it compared to the MMM's prediction.

Bottom: cohort table — last twelve cohorts, by month, showing CAC, contribution margin to date, and projected LTV/CAC at twelve months.
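
As a sketch of how the page might be assembled, assuming the three inputs already exist as warehouse extracts (the file paths, column names, and the Excel output are all placeholders; any BI surface works):

```python
import pandas as pd
# Requires pandas plus a parquet engine (pyarrow) and openpyxl for the Excel output.

weekly = pd.read_parquet("warehouse/weekly_spend_conversions.parquet")
mmm = pd.read_parquet("warehouse/mmm_contribution.parquet")
tests = pd.read_parquet("warehouse/incrementality_results.parquet")
cohorts = pd.read_parquet("warehouse/cohort_economics.parquet")

top = (weekly.sort_values("week").tail(12)
       .assign(blended_cac=lambda d: d["spend"] / d["conversions"]))

middle = mmm.merge(tests, on="channel", how="left")[
    ["channel", "contribution", "ci_low", "ci_high", "test_lift", "mmm_predicted"]]

bottom = (cohorts.sort_values("cohort_month").tail(12)
          [["cohort_month", "cac", "contribution_margin_to_date", "projected_ltv_cac_12m"]])

with pd.ExcelWriter("weekly_growth_report.xlsx") as xw:
    top.to_excel(xw, sheet_name="spend_and_cac", index=False)
    middle.to_excel(xw, sheet_name="mmm_vs_tests", index=False)
    bottom.to_excel(xw, sheet_name="cohorts", index=False)
```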

That report is the single source of truth for budget allocation, channel decisions, and cohort health. Everything else is supplementary. If the report cannot be produced reliably every week, the measurement infrastructure is not yet good enough and that is the next thing to fix.

Growth engineering is not about clever creative or hot channel arbitrage. Both of those help, but neither compounds. What compounds is the measurement infrastructure. Teams that get measurement right make better budget decisions every week than teams that don't, and the gap widens with every quarter. Teams that get measurement wrong run on platform self-reports, ship A/B tests that don't replicate, and eventually wake up to discover their unit economics quietly broke six months ago.

Build the measurement layer first. Build it like infrastructure. Then everything downstream gets easier.

What infrastructure looks like in practice

Concretely, here is the stack I build for any operator who is starting from a typical setup of pixels, GA, and platform reports. A first-party event capture layer running server-side, sending events to your own warehouse and to the platforms via the Conversions API. An identity resolution table maintained nightly that stitches sessions, devices, and known customer IDs into a single customer key. A modeled conversion table that combines deterministic conversions with platform-modeled conversions, flagged so you always know which is which. An MMM pipeline that reruns weekly on twelve to twenty-four months of history, producing channel-level contribution and budget recommendations with confidence intervals. An incrementality testing calendar that runs at least one geo or holdout test per quarter against each major channel.
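
For the modeled conversion table specifically, the flag is the important part; a small sketch, with placeholder extract paths and column names:

```python
import pandas as pd

# Placeholder extracts: your own server-side conversions and the platforms' modeled ones.
deterministic = pd.read_parquet("warehouse/server_side_conversions.parquet")
platform_modeled = pd.read_parquet("warehouse/platform_reported_conversions.parquet")

deterministic["source"] = "deterministic"          # observed on your own surfaces
platform_modeled["source"] = "platform_modeled"    # the platform's estimate, not ground truth

modeled_conversion_table = pd.concat([deterministic, platform_modeled], ignore_index=True)

# Any downstream query can now separate what you measured from what was guessed.
print(modeled_conversion_table.groupby(["channel", "source"]).size())
```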

That is roughly four to six months of work for a focused team. It is the foundation everything else stands on. Skip it and every downstream decision is built on data that cannot be trusted, no matter how clever the dashboards above it.

The teams I have worked with that did this work consistently outperform their peers on marginal CAC and contribution margin. The teams that didn’t kept pouring money into channels the platforms told them were working, and were genuinely surprised when growth stalled. The dashboards never told them. The dashboards were the problem.

Ajit Samuel is a New York City-based founder and operator. He architects, ships, and operates production AI, agentic systems, real-time data platforms, advertising technology, and growth infrastructure. ajitsamuel.com.