AI ROI Is an Operating Model Problem

Microsoft’s 2026 Work Trend Index found that organizational factors account for roughly two-thirds of the variance in AI performance outcomes. Your AI ROI problem is an operating model problem.

Key Takeaways

  • Microsoft's 20,000-user study found organizational factors account for 67% of AI performance variance. Individual factors, including skill and training, account for 32%.
  • Organizations extracting measurable AI value redesigned how they work, not what they bought. Fewer than one in ten respondents in McKinsey's independent 2025 survey qualify as high performers.
  • The first move isn't an AI strategy document. It's selecting one workflow where the operating model is the visible bottleneck and redesigning decision rights, data flows, and measurement before scaling.
  • Reality check: operating model redesign is slower, harder to fund, and harder to present on a quarterly dashboard than a platform purchase. That's why most organizations keep buying tools instead.

The Belief That Won’t Die

Every conversation about AI underperformance follows the same script. The technology isn’t advanced enough. Teams don’t have the right skills. Vendors oversold the platform. These three explanations absorb nearly all the budget, attention, and executive frustration aimed at closing the AI ROI gap.

They also account for roughly a third of the actual problem.

What 20,000 AI Users Revealed

Microsoft’s 2026 Work Trend Index surveyed 20,000 AI users across 10 countries and measured which factors are associated with AI performance outcomes (1. Microsoft, 2026). Organizational factors, including how decisions get made, how workflows are designed, and how goals get communicated, account for approximately 67% of the variance. Individual factors, including skill, training, and personal adoption habits, account for 32%.

The study reports these as statistical associations, not proven causal relationships. But the magnitude is hard to dismiss. The organizational environment around the AI user matters roughly twice as much as the user’s own capability.

The same study found that 31% of AI users are misaligned, actively using AI tools in ways disconnected from their organization’s goals (1. Microsoft, 2026). These aren’t reluctant adopters or undertrained staff. They’re people using the technology without organizational direction on what to use it for. The implementation worked. The organization around it never changed to support it.

The Constraint You’re Not Addressing

I’ve argued before that fixing AI performance starts with operationalizing marketing itself. Microsoft’s data quantifies why. And the pattern holds beyond their study. McKinsey’s 2025 State of AI survey found organizations extracting value are three times more likely to have redesigned workflows and three times more likely to have senior leaders demonstrating direct ownership of AI initiatives (3. Singla et al., 2025). Fewer than one in ten of McKinsey’s respondents qualified as high performers (3. Singla et al., 2025).

The technology works. And the organization extracts a fraction of the value because its operating model was designed for a world where humans performed every step. Decision rights, data governance, measurement contracts, coordination protocols: all built for human-only execution.

Kathy Katz, VP of Revenue Operations at Clari, identified the mechanism: “The measurement infrastructure needed to prove AI ROI is the same infrastructure needed to run a clean GTM operation” (2. Omoregie, 2026). She’s describing the compounding nature of organizational capability. Every operating model improvement you make for AI generates returns that extend beyond AI.

Clean data governance supports better attribution. Clear decision rights reduce coordination overhead across every initiative, not just AI-driven ones. The capability you build compounds across everything the organization touches. Platforms depreciate the day you sign the contract. The organizational capacity you build around them doesn’t.

Andrea Hornaday, VP of Marketing Operations at Bain & Company, diagnosed it directly: the AI performance gap “is a systems problem” (2. Omoregie, 2026). How the organization coordinates around its technology determines whether AI performs or stalls.

The distinction between integration and orchestration explains why. Most organizations have integrated their AI tools. The systems exchange data, the APIs connect, the dashboards consolidate. Far fewer have orchestrated them, designing the operating model so that AI-generated outputs flow through decision frameworks, accountability structures, and measurement systems that make the outputs actionable. Integration is a technology achievement. Orchestration is an operating model achievement. The ROI lives in the second one.
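
To make the distinction concrete, here is a minimal sketch in Python. Every name in it (the DraftOutput type, the confidence threshold, the role names) is hypothetical, invented for illustration rather than drawn from any platform or from the sources cited here. Integration just moves the output; orchestration runs the same output through explicit decision rights and a measurement contract first.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch only: DraftOutput, the threshold, and the role
# names are invented for illustration, not taken from any real platform.

@dataclass
class DraftOutput:
    workflow: str       # which workflow produced the artifact
    content: str        # the AI-generated artifact itself
    confidence: float   # model-reported confidence, 0.0 to 1.0

def integrate(output: DraftOutput, destination: list) -> None:
    """Integration: the output moves between systems. Nothing else happens."""
    destination.append(output)  # data flows; nobody decides, nothing is measured

APPROVAL_THRESHOLD = 0.8  # assumed policy, owned by a named role

def orchestrate(output: DraftOutput, destination: list, audit_log: list) -> bool:
    """Orchestration: the same output passes through explicit decision
    rights and a measurement contract before it moves anywhere."""
    approved = output.confidence >= APPROVAL_THRESHOLD
    if approved:
        destination.append(output)  # forward automatically
    # Below threshold, the draft is held for human review rather than shipped.
    audit_log.append({
        "workflow": output.workflow,
        "approved": approved,
        "approver": "content_lead",  # a named role, never "whoever's available"
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return approved

# Usage: the same draft, two very different operating models.
published, log = [], []
draft = DraftOutput("outbound-email", "Subject: Your Q3 renewal...", 0.72)
integrate(draft, published)         # shipped: unreviewed and unmeasured
orchestrate(draft, published, log)  # held for review, and the log records why
```

With a 0.72-confidence draft and a 0.8 threshold, integrate() ships the output unreviewed while orchestrate() holds it for human review and records why. That recorded "why" is the measurement infrastructure Katz describes.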

The CMO’s First Move

The temptation is to respond with an enterprise-wide AI strategy document. Resist it. Operating model redesign at scale is a multi-year transformation, and with fewer than one in ten organizations qualifying as high performers, most aren't ready for that scope.

Start with one workflow. Pick the workflow where AI coordination currently depends on human handoffs between disconnected tools. The one where someone manually moves an AI-generated output from one system to another, applies judgment, routes it forward, and tracks the outcome in a spreadsheet. That workflow is where your operating model constraint is most visible and most fixable.

Redesign three things for that single workflow (a sketch of how to make them explicit follows the list):

Decision rights. Who approves what the AI produces? Who overrides it? Who’s accountable when the output is wrong? If the answer to any of these is “it depends” or “whoever’s available,” the operating model is the bottleneck.

Data access. Does the AI agent have access to the context it needs, or does a human assemble information from three systems before the agent can do its work? Every manual data assembly step is an operating model failure masquerading as a technology limitation.

Measurement. Define the business outcome this workflow exists to produce before the AI touches it. Adoption rate and time saved measure technology activity, not business value. Build the measurement infrastructure first. Per Katz’s insight, you get both the proof mechanism and the operational backbone (2. Omoregie, 2026).
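
One way to force all three redesigns into a single reviewable artifact is a workflow contract. The sketch below is hypothetical: the field names, roles, and metric are invented for illustration, not a prescribed schema from any of the sources cited here.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: field names, roles, and the metric are
# invented for illustration, not a schema from the cited sources.

@dataclass
class WorkflowContract:
    name: str
    # Decision rights: named roles, never "it depends"
    approver: str                 # who approves the AI output
    override: str                 # who can override it
    accountable: str              # who owns a wrong output
    # Data access: every system the agent reads directly; anything
    # missing here implies a manual assembly step somewhere
    data_sources: list = field(default_factory=list)
    # Measurement: the business outcome, defined before the AI runs
    outcome_metric: str = ""
    baseline: float = 0.0         # pre-AI value of that metric

def validate(contract: WorkflowContract) -> list:
    """Flag the 'it depends' gaps before the workflow goes live."""
    gaps = []
    for role in ("approver", "override", "accountable"):
        if getattr(contract, role).strip().lower() in ("", "it depends", "tbd"):
            gaps.append(f"decision right unassigned: {role}")
    if not contract.data_sources:
        gaps.append("no direct data access; expect manual assembly steps")
    if not contract.outcome_metric:
        gaps.append("no business outcome defined before deployment")
    return gaps

# Usage: a draft contract for one workflow.
draft = WorkflowContract(
    name="outbound-email-personalization",
    approver="demand_gen_lead",
    override="vp_marketing",
    accountable="demand_gen_lead",
    data_sources=["crm", "product_usage_db"],
    outcome_metric="qualified_meetings_per_week",
    baseline=12.0,
)
print(validate(draft))  # an empty list means no gaps flagged
```

Nothing about this particular schema matters. What matters is that "it depends" cannot survive a required field: an unassigned decision right, a missing data source, or an outcome metric defined after deployment all surface as explicit gaps.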

One workflow. Three redesigns. The capability you build transfers to the next workflow, and the one after that. The organizational muscle compounds in ways a platform purchase never will.

About the Author

Gene De Libero, Founder, Digital Mindshare LLC

Gene De Libero has spent more than thirty years in marketing technology — as buyer, seller, builder, and advisor. He is the architect of the Marketing Technology Transformation® Framework, sponsor of How Marketing Technology Works®, and Principal Consultant at Digital Mindshare LLC, a New York consultancy serving CMOs whose stacks have stopped paying for themselves. He believes most martech investments fail not because the technology is wrong, but because the organization was never built to use it. He fixes that.

Frequently Asked Questions

What does 'operating model' mean in the context of AI ROI?

Operating model covers how decisions get made, how work flows between people and systems, how success gets measured, and how data moves through the organization. When these structures don’t support AI, the technology performs below its capability regardless of how advanced the platform is or how skilled the team using it.

Why doesn't better AI training solve the performance gap?

Training addresses individual capability, which Microsoft’s data associates with roughly a third of AI performance variance. The remaining two-thirds sits in organizational structures. A well-trained team operating inside a poorly designed workflow still produces mediocre outcomes because the constraint isn’t skill. It’s the system around the skill.

Where should a CMO start with operating model redesign for AI?

Pick one workflow where AI coordination currently depends on human handoffs between disconnected tools. Redesign decision rights, data access, and success metrics for that single workflow before expanding. Start with measurement infrastructure, because the systems needed to prove AI value are the same systems needed to run clean operations.

References
  1. Microsoft. (2026). 2026 Work Trend Index Annual Report. Microsoft WorkLab. https://www.microsoft.com/en-us/worklab/work-trend-index/agents-human-agency-and-the-opportunity-for-every-organization
  2. Omoregie, A. (2026). AI Adoption in Marketing Operations: How Does Your Team Actually Stack Up? MarketingOps.com. https://marketingops.com/ai-adoption-in-marketing-operations-how-does-your-team-actually-stack-up/
  3. Singla, A., Sukharevsky, A., Hall, B., Yee, L., & Chui, M. (2025). The State of AI in 2025: Agents, Innovation, and Transformation. McKinsey & Company. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai