The Cost of Methodological Lock-In
By Gabriel Mohanna · 3 minute read
About the Author
Gabriel is Head of AI Strategy at ScanmarQED, bringing deep expertise across the full marketing measurement stack — MMM, multi-touch attribution, and incrementality experiments. He has spent his career translating complex models into decisions that CMOs and growth teams can actually act on, bridging the gap between data science rigour and business clarity. As the founder of MMM Labs — a SaaS platform built on leading open source frameworks like Robyn, PyMC-Marketing, and Meridian — he has been through the full arc of building and scaling a measurement product, which ScanmarQED acquired to accelerate its MMM offering. At ScanmarQED, Gabriel leads AI strategy company-wide — defining how AI transforms product, marketing, sales, and operations, and moving the organization from AI-curious to AI-native.
Most MMM practitioners don’t think of methodological lock-in as a risk — they think of it as a decision they already made. That distinction matters more than it might seem.
The decision that doesn’t feel like a decision
When a team selects an MMM package for a stakeholder project, it rarely feels like a high-stakes moment. There’s a project to deliver, a timeline to hit, and a methodology the team already knows well. The choice gets made quickly, pragmatically, and usually sensibly given the information available at the time.
What follows is gradual and largely invisible. Familiarity deepens, processes form around the tool, and stakeholders develop expectations based on its outputs. New team members are onboarded into its logic. Meanwhile, the field moves on; new engines emerge with stronger priors, better diagnostics, or more appropriate assumptions for a particular stakeholder context. But switching feels increasingly disruptive relative to the effort of staying put. Over time, the package stops being a choice that could be revisited and starts functioning as infrastructure, load-bearing in ways that only become apparent when something forces a change.
This is how methodological lock-in works. It is not the result of a bad decision. It is the natural consequence of competence accumulating in one place.
Two costs, only one of which is obvious
The operational cost of switching packages is real and well understood. Teams need retraining, workflows need rebuilding, and methodology changes require careful explanation to stakeholders who have built confidence in a particular approach. These costs are significant, but they are at least visible and therefore manageable.
The second cost is less visible and more consequential. When the toolkit defines what can be tested, it quietly constrains what conclusions are reachable. A team that has worked with one engine for several years does not just have operational investment in it; it has built intuition, developed benchmarks, and constructed stakeholder narratives around its specific outputs. Switching engines does not simply change the tool; it destabilises the interpretive framework the entire practice has been built on.
That is not a software problem. It is a methodological one, and it compounds quietly over time.
Open source is not the problem. Access is.
Some of the most rigorous MMM work being done today is happening in open source. Frameworks like Robyn, Meridian, and PyMC-Marketing are continuously developed and refined by thousands of practitioners and PhD researchers pushing the boundaries of what marketing measurement can do. They are transparent, peer-reviewed in practice, and methodologically serious in a way that proprietary black-box platforms rarely are.
The problem is not the quality of the code. It is the barrier to using it. These frameworks require immense technical expertise to implement correctly: specialist coding skills to set up the models, configure priors, interpret diagnostics, and maintain everything over time. That concentrates access in a small number of people, which in turn concentrates analytical capability, limits scalability across markets or stakeholder portfolios, and creates the conditions for a different kind of lock-in: not to a vendor, but to an individual.
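To make that barrier concrete, here is a minimal sketch of what hand-rolling even a toy Bayesian MMM involves, written in plain PyMC with simulated data and hypothetical priors rather than any framework's actual API. Production frameworks layer trend, seasonality, controls, validation, and calibration on top of this, which is exactly where the specialist expertise concentrates.

```python
# A toy Bayesian MMM in plain PyMC. Illustrative only: data, priors, and
# functional forms are hypothetical, not any framework's implementation.
import numpy as np
import pymc as pm
import pytensor.tensor as pt

rng = np.random.default_rng(7)
n_weeks, n_channels, max_lag = 104, 3, 8
spend = rng.gamma(2.0, 1.0, size=(n_weeks, n_channels))  # weekly media spend
sales = rng.normal(100.0, 10.0, size=n_weeks)            # placeholder outcome

# Precompute lagged spend so adstock weights can be learned in-model:
# lagged[t, l, c] = spend at week t - l for channel c (zero before week 0).
lagged = np.zeros((n_weeks, max_lag, n_channels))
for lag in range(max_lag):
    lagged[lag:, lag, :] = spend[: n_weeks - lag, :]

with pm.Model() as mmm:
    # Carryover: geometric adstock with a learned decay rate per channel.
    decay = pm.Beta("decay", alpha=2, beta=2, shape=n_channels)
    lag_weights = decay[None, :] ** np.arange(max_lag)[:, None]   # (lag, channel)
    adstocked = pt.sum(lagged * lag_weights[None, :, :], axis=1)  # (week, channel)

    # Diminishing returns: a simple exponential saturation per channel.
    lam = pm.Gamma("lam", alpha=3, beta=1, shape=n_channels)
    saturated = 1 - pt.exp(-lam[None, :] * adstocked)

    # Linear response plus noise; real models add trend, seasonality, controls.
    beta = pm.HalfNormal("beta", sigma=10, shape=n_channels)
    intercept = pm.Normal("intercept", mu=100, sigma=20)
    sigma = pm.HalfNormal("sigma", sigma=10)
    pm.Normal("y", mu=intercept + pt.dot(saturated, beta),
              sigma=sigma, observed=sales)

    idata = pm.sample(1000, tune=1000)  # then: check r_hat, ESS, posteriors...
```

Every step above is a judgment call that someone has to make and defend, and the sampling diagnostics at the end still need an expert to interpret.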
Choosing between open-source frameworks is also still a substantive methodological commitment. Robyn, Meridian, and PyMC-Marketing embed meaningfully distinct assumptions around priors, carryover effects, and saturation curves. Picking one and building around it reintroduces the same lock-in dynamic, even with best-in-class code underneath.
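To see what "meaningfully distinct assumptions" looks like in practice, the sketch below contrasts two carryover forms and two saturation forms in plain NumPy. The function names and parameter values are illustrative rather than any engine's exact specification; the point is that the same spend series yields different response curves depending on which functional forms a framework bakes in.

```python
# Two carryover and two saturation parameterizations, side by side.
# Representative shapes only; each framework's exact forms and defaults differ.
import numpy as np

def geometric_adstock(x, decay, max_lag=8):
    """Carryover where effect decays by a constant rate each period."""
    weights = decay ** np.arange(max_lag)
    return np.convolve(x, weights)[: len(x)]

def weibull_adstock(x, shape, scale, max_lag=8):
    """Carryover with a flexible decay curve (Weibull-type survival weights)."""
    lags = np.arange(max_lag)
    weights = np.exp(-((lags / scale) ** shape))
    return np.convolve(x, weights)[: len(x)]

def hill_saturation(x, half_sat, slope):
    """Diminishing returns with a tunable half-saturation point (Hill curve)."""
    return x**slope / (half_sat**slope + x**slope)

def logistic_saturation(x, lam):
    """Diminishing returns that saturate toward 1 at rate lam."""
    return (1 - np.exp(-lam * x)) / (1 + np.exp(-lam * x))

spend = np.array([10.0, 0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 0.0])
# Same spend, different embedded assumptions -> different response curves.
print(hill_saturation(geometric_adstock(spend, decay=0.6), half_sat=5, slope=2))
print(logistic_saturation(weibull_adstock(spend, shape=2.0, scale=3.0), lam=0.3))
```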
The goal, then, is not to avoid open source. It is to make open source accessible at scale, without sacrificing the rigour that makes it worth using in the first place.
A better question to be asking
The standard framing for methodology selection is: evaluate the available options, choose the best one, and build on it. The question practitioners ask is “which package should we use?” and the goal is to answer it well once.
A more useful question is: how do we retain the ability to compare? Methodology selection should be treated as an ongoing empirical question rather than a settled infrastructure decision. The teams that will be best positioned in three years are not necessarily those who identified the superior package in 2024. They're the ones who built practices flexible enough to test, compare, and adapt as the field continues to evolve. That requires infrastructure designed for comparison, not just execution: one that makes the rigour of open source available to every marketer on the team, not just the specialists who can write the code.
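As a sketch of what comparison-ready infrastructure could look like, the snippet below defines one minimal interface that any engine adapter could implement, so engines are benchmarked on identical data. The MMMEngine protocol, the adapter names, and the holdout metric are all hypothetical illustrations, not MMM Labs' or any framework's actual API.

```python
# A thin common interface so MMM engines can be swapped and benchmarked
# on the same data. Hypothetical sketch; names and metric are illustrative.
from typing import Protocol
import numpy as np

class MMMEngine(Protocol):
    name: str
    def fit(self, spend: np.ndarray, sales: np.ndarray) -> None: ...
    def predict(self, spend: np.ndarray) -> np.ndarray: ...

def compare_engines(engines: list[MMMEngine],
                    spend: np.ndarray,
                    sales: np.ndarray,
                    holdout: int = 13) -> dict[str, float]:
    """Fit every engine on the same training window, score the same holdout."""
    train_x, test_x = spend[:-holdout], spend[-holdout:]
    train_y, test_y = sales[:-holdout], sales[-holdout:]
    scores = {}
    for engine in engines:
        engine.fit(train_x, train_y)
        pred = engine.predict(test_x)
        scores[engine.name] = float(np.sqrt(np.mean((pred - test_y) ** 2)))  # RMSE
    return scores

# Hypothetical usage, once each framework has a thin adapter:
#   scores = compare_engines([RobynAdapter(), MeridianAdapter(), PyMCAdapter()],
#                            spend_matrix, sales_series)
#   best = min(scores, key=scores.get)
```

Wrapping each framework in a small adapter that satisfies an interface like this turns the methodology choice into a recurring measurement rather than a one-off commitment.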
Methodological lock-in is not the cost of making a poor decision. It is the cost of making a good one — and then never questioning it again.
See it in practice
MMM Labs was built on exactly this principle: trusted, transparent open-source engines, delivered through a clickable interface that every marketer can use — no specialist coding required. Run multiple engines side by side on the same data, compare outputs objectively, and select or ensemble the best fit for each business question, without rebuilding workflows or retraining teams every time the methodology evolves.
In our recent webinar, Marketing Mix Modeling is going Multi-Engine — Introducing MMM Labs, we walk through how this works in practice — including a live demonstration of multiple engines running on the same dataset.