Senior martech leaders are choosing foundations over agentic DXP features because those features redistribute operational complexity rather than eliminating it. Research from the vendors themselves confirms that AI saves time but fails to restore the strategic and creative bandwidth it promised.
Key Takeaways
- Nearly half of martech leaders report that vendor AI agents fail to meet the business performance those agents were sold to deliver.
- AI capabilities save task time for marketers, but the recovered hours flow into coordination work, not strategy or creative thinking.
- Agentic DXP features shift governance and integration burdens onto buyer teams without reducing total coordination load.
- Marketing platform replacement rates dropped sharply in 2025 because practitioners prioritize stability over feature-driven switching.
- Evaluating agent readiness starts with governance infrastructure and team capacity, not vendor roadmaps.
The Award-Winning Contradiction
The DXP and headless CMS market spent the last six months shipping agentic AI features that redistribute operational complexity rather than eliminating it. Vendors celebrate the capability. Practitioners absorb the overhead. The pattern is complexity redistribution: AI automation shifts governance, monitoring, and coordination burdens onto buyer teams without reducing total workload. It runs through every major vendor category. And the most telling evidence comes from the vendors themselves.
In late April 2026, Optimizely won Gold at the CMSWire IMPACT Awards for Digital Experience and Journey Optimization. The recognition centered on Opal, Optimizely’s suite of AI agents built to automate content workflows and deliver the productivity gains that headline vendor keynotes. Days later, the same company published research that told a different story.
The study, titled The Passion-Pressure Paradox and conducted with Heinz Marketing, surveyed 227 B2B marketing professionals. The respondent pool was senior: 72% held manager-level positions or higher, and 83% had more than 8 years of experience. These were the operators running the stack, not junior marketers overwhelmed by new tools.
The findings were uncomfortable for anyone selling AI-powered productivity. Among respondents, 61% said AI saves them time on specific tasks. But only 36% reported that the saved time translated into more space for strategic or creative work (1. Optimizely, 2026). The gap between saving time and reclaiming it is where the vendor narrative breaks down. Time saved on tasks didn’t flow back to strategy and creativity for nearly two-thirds of respondents. A separate finding in the same study suggests where it went instead.
Tara Corey, Optimizely’s SVP of Marketing, framed it directly in the press release: “If teams are only using AI to increase their output, they’re just accelerating the chaos” (1. Optimizely, 2026).
That sentence lands differently when you consider who wrote it. A vendor whose business depends on selling AI-powered productivity tools published research showing those tools don’t deliver the outcome they’re marketed to deliver. The vendor’s own executive named the failure mode in the press release announcing the findings.
A vendor that wins industry recognition for AI capabilities while simultaneously documenting that those capabilities miss what practitioners need most: that contradiction captures where the DXP market sits in mid-2026.
The pattern isn’t confined to one company. It runs through the landscape, from enterprise suites to API-first platforms to smaller vendors aggressively repositioning around agentic capabilities. Cosmic, formerly known as a developer-focused headless CMS, has pivoted its entire product narrative toward AI agents and agent marketplaces. The language has shifted across the category. “Content platform” is becoming “agentic content platform,” and the feature race is accelerating even as the evidence mounts that buyers aren’t keeping pace.
The gap is wider than one vendor, and the evidence for it comes from both sides of the table.
The Expectation Gap Is Structural
The Optimizely research captures one vendor’s customers. The broader industry picture is worse.
Gartner’s 2025 survey of 413 marketing technology leaders, conducted between June and August 2025, mapped the gap across the landscape. Of those leaders, 89% with AI agent initiatives expected significant business benefits from those initiatives. Yet 45% reported that the AI agent their vendor provides doesn’t deliver on promised business performance (2. Gartner, 2025). Nearly half the market is reporting failure against expectations.
The adoption rate makes the failure rate more consequential, not less. According to the same survey cycle, 81% of martech leaders are piloting or actively using vendor-offered AI agents (5. Walker, 2026). Even with pilots in that count, engagement at that level means the technology has moved past the early-adopter stage. And broad engagement with a 45% failure-to-deliver rate means the problem isn’t early-stage growing pains. It’s a structural mismatch between what the tools do and what the organizations need them to do.
The reasons behind the underperformance are consistent across the survey. Half of respondents reported that their organizations lack the technical and data stack readiness required for AI agent deployment. That readiness gap is a two-front problem: vendors overpromising and organizations under-preparing. There's a widening skills gap around integrating, orchestrating, and monitoring agents. And most governance policies are being written after issues surface, not before deployment (2. Gartner, 2025).
Think about what that means in practice. An enterprise marketing team buys an AI agent embedded in their DXP. The agent can generate content, trigger campaigns, optimize personalization. The vendor demo looked clean. But the team’s data infrastructure has inconsistent field-level hygiene. Identity resolution is partial. The CDP isn’t fully deployed. The agent inherits every one of those flaws and amplifies them at machine speed.
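The amplification problem can be made concrete with a pre-deployment readiness check: before an agent acts on customer records, measure how many records would feed it bad inputs. This is a minimal illustrative sketch, not any vendor's actual API; the field names and sample data are invented for the example.

```python
# Hypothetical readiness check: count how many records an agent would
# act on with missing or broken required fields. Field names are
# illustrative, not from any real DXP schema.

REQUIRED_FIELDS = ["email", "segment", "consent_status"]

def record_is_clean(record: dict) -> bool:
    """A record is usable only if every required field is present and non-empty."""
    return all(record.get(f) for f in REQUIRED_FIELDS)

def readiness_report(records: list[dict]) -> dict:
    clean = sum(record_is_clean(r) for r in records)
    total = len(records)
    return {
        "total": total,
        "clean": clean,
        "clean_rate": clean / total if total else 0.0,
    }

sample = [
    {"email": "a@example.com", "segment": "smb", "consent_status": "opted_in"},
    {"email": "", "segment": "ent", "consent_status": "opted_in"},          # missing email
    {"email": "c@example.com", "segment": None, "consent_status": "opted_in"},  # no segment
]
report = readiness_report(sample)
# An agent pointed at this pool would act on flawed records 2 times out of 3,
# and at machine speed it would do so thousands of times before anyone noticed.
```

The point of running a check like this before deployment, rather than after, is that the agent inherits whatever the check would have caught.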
Where the Value Actually Lives
Tony Byrne, founder and CEO of Real Story Group, an independent analyst firm that has evaluated enterprise content and martech platforms for over 25 years, offered a sharper read on why the capability gap persists. In CMSWire’s 2025 year-end assessment, Byrne described agentic AI layers as potentially “interchangeable services” rather than deeply embedded competitive differentiators (4. CMSWire, 2025). If the AI agent layer is portable, then the advantage doesn’t live in the agent. It lives in the data, governance, and content architecture underneath.
That framing inverts the vendor pitch. Vendors sell the agent as the value. Byrne’s argument is that the agent is a commodity sitting on top of foundations most organizations haven’t properly built. The organizations paying premium pricing for agentic AI features are paying for the wrong layer of the stack.
Where the Saved Time Goes
The Optimizely research offers a window into the mechanism behind the expectation gap. Among the surveyed marketers, 37.9% described their role as primarily focused on coordination rather than on the strategic or creative work they were hired to do (1. Optimizely, 2026).
That number deserves a closer look. These are professionals with 8+ years of experience in manager-or-above positions. They aren’t new to complexity. Whether AI adoption shifted more work toward coordination or simply made a pre-existing coordination burden more visible, the outcome is the same: the promise that AI would free senior practitioners for higher-order work hasn’t materialized for a significant share of the workforce.
Consider what happens when AI enters a typical content workflow. The agent drafts the email. Someone still needs to review it for brand compliance, factual accuracy, and tone. The agent generates 5 content variants for an A/B test. Someone still chooses, edits, and approves each one. The agent triggers a campaign workflow. Someone still monitors whether the agent executed it correctly, whether the targeting criteria held, and whether the personalization rules fired without errors.
The task shrank. The oversight around the task expanded. And the oversight work is harder to see, harder to measure, and harder to cut because it involves judgment calls that can’t be delegated back to the same agent that created the need for oversight in the first place.
What Optimizely’s research captures is a workforce that gained 30 minutes on a task and lost it to 30 minutes of new coordination. Corey’s phrase, “accelerating the chaos,” isn’t hyperbole. It’s what happens when tools get faster but the organizational structure around them stays the same. The reporting lines haven’t changed. The approval chains haven’t shortened. The number of stakeholders who need to sign off on a campaign hasn’t decreased. AI automated the last mile of content production and left everything upstream of it untouched.
The bottleneck is organizational: the approval chains, the stakeholder alignment, the governance overhead that existed before AI arrived and persists after it. Making individual tasks faster doesn’t touch that layer. No AI agent, no matter how capable, fixes an organizational design problem by accelerating production.
Complexity Redistribution Is the Real Story
Beneath the survey data sits a structural pattern that doesn’t have a clean label in vendor marketing materials: complexity redistribution.
Every AI agent that automates a task creates a secondary task surface. The original work gets faster. But the governance around that work lands on someone’s desk. Approvals, monitoring, error intervention, model training, prompt maintenance. In most organizations, that someone is the marketing operations team, the marketing technologist, or a marketing leader who already runs a full portfolio of responsibilities.
The vendor’s release notes celebrate the new capability. The practitioner’s project board shows the same ticket count, sometimes higher, because the tickets changed categories. Content creation tickets drop. Agent monitoring tickets appear. Campaign execution tickets decrease. Prompt-engineering and quality-assurance tickets replace them. The total volume of work requiring human judgment hasn’t decreased. It has shifted.
The Governance Deficit
Then there’s the governance layer that most organizations haven’t built. AI agents that act autonomously inside a DXP make decisions about content, targeting, timing, and personalization. When those decisions go wrong, the brand absorbs the impact. Silent failures, where an agent makes a suboptimal decision and nobody catches it for days because the monitoring infrastructure doesn’t exist, are a growing concern among operations leaders. Half of the organizations in Gartner’s survey admitted they lack the infrastructure to handle this reality (2. Gartner, 2025).
New vendor capabilities that perform well in demos create real operational demands once deployed. Agent observability means knowing what the agent did and why. Prompt governance means controlling and versioning the instructions agents follow. Output auditing means verifying that agent-generated content meets brand and compliance standards. Escalation workflows define when and how a human takes over. None of these categories of work existed in most marketing teams 18 months ago. None of them appeared in the vendor’s TCO calculator.
The mistake in the vendor narrative is the assumption that automation equals simplification. Automation shifts where the complexity lives. When the shift is planned and the receiving team has capacity, governance, and observability tools, the outcome is positive. Organizations that have invested in data quality, clear taxonomy, solid identity resolution, and operational discipline will extract real value from agentic AI. When the shift is unplanned, the outcome is more work wearing a different label, performed by a team that didn’t budget for it and doesn’t yet have the skills to manage it.
The honest framing: agentic AI capabilities are a force multiplier. They multiply whatever you already have. If you have clean data, clear governance, and capable teams, agents multiply the value. If you have fragmented data, missing governance, and overextended teams, agents multiply the mess.
The Replacement Slowdown Tells the Real Story
If the survey data reveals the internal experience, the market data reveals the external signal. The 2025 MarTech Replacement Survey, published by MarTech in March 2026, showed the sharpest declines in platform replacement activity in years (3. MarTech, 2026).
Marketing automation replacements fell from 31.1% in 2024 to 19.4% in 2025, a drop of nearly 12 percentage points that ended a 5-year run as the most-replaced application category. CRM and email platforms followed the same trajectory with double-digit declines. The pullback was broad, spanning nearly every major category except analytics and business intelligence, which held steady.
Cost had overtaken features as the primary driver of replacement decisions the prior year. But the 2025 data tells a more specific story than budget pressure alone. Organizations that needed to replace their platforms already did in 2024’s spike. Those that remain are choosing to optimize what they have rather than chase the next vendor’s promise (3. MarTech, 2026).
That’s a behavioral signal that vendor roadmap presentations should take seriously. When practitioners stop replacing platforms despite vendors shipping more capabilities than ever, something has broken in the value proposition. Buyers aren’t persuaded that new features will solve the problems they’re experiencing. They’ve tried replacing their way to performance. It didn’t work. Now they’re trying something different: making the existing stack deliver on the investment already made.
The replacement slowdown also reflects fatigue with the switching cost itself. Every platform migration carries a hidden tax: data migration, integration rebuilds, team retraining, workflow redesign, and 6 to 12 months of reduced operational velocity while the new system reaches the performance level of the old one. The "free year" migration offers vendors extend make that hidden cost concrete: the incentive has to be roughly that large to offset the tax. Practitioners who have been through 2 or 3 migration cycles in the last decade aren't eager for another. Especially when the new platform's headline feature is an AI agent layer they've already watched underdeliver on the current platform.
The purchasing behavior contradicts the vendor narrative directly. Vendors depend on replacement cycles for new ARR. Practitioners are signaling, through what they buy and what they don’t, that new features aren’t worth the switching cost when the current stack delivers below its potential and the AI capabilities being layered on top haven’t demonstrated clear returns.
Why Vendors Can’t See What Practitioners See
DXP vendors aren’t ignoring practitioners out of malice. They’re operating inside an incentive structure that rewards capability expansion and penalizes restraint.
Public cloud and SaaS revenue models depend on feature velocity. New AI capabilities justify renewals, expand seat pricing, and provide material for analyst briefings. A vendor that ships 4 agentic AI features per quarter has a story for Gartner, for the board, and for the sales team. A vendor that ships governance tooling and reduces the coordination burden on existing customers has a harder story to tell, even though the second vendor is solving the problem practitioners care about most.
The result is a structural mismatch. Vendor success metrics like feature adoption, agent deployment, and expansion revenue measure different outcomes than practitioner success metrics like business impact, reduced coordination overhead, and time-to-value on existing investments. The two sets have drifted apart. Both sides can be correct in their own frame: the vendor shipped an agent that works as designed, and the practitioner can’t extract value from it because the governance and data infrastructure weren’t in place.
Byrne and the Real Story Group team have been tracking this divergence across vendor categories. RSG’s 2025 and 2026 advisory publications, including pieces on agent AI promise versus reality and AI feature redundancy, document a pattern where vendor AI releases overlap, compete with each other inside the same stack, and create redundancy the buyer pays for twice: once in licensing and again in integration effort.
In enterprise vendor evaluations, Byrne routinely includes two questions that cut through the positioning: “How do I turn off your AI?” and “How do I inject my own?” (6. BizTechReports, 2025). A vendor that can answer both questions treats AI as a configurable layer the buyer controls. A vendor that can’t answer them is selling a lock-in mechanism disguised as innovation.
The analyst community itself is divided on where this leads. Gartner and Forrester continue to emphasize agentic AI as a transformative investment. Byrne and RSG treat it as a “conservative” evolution, useful in specific applications, dangerous as a wholesale strategy. That disagreement is useful for buyers. It means the consensus is still being formed, which means buyers have room to define their own evaluation criteria rather than inheriting the framework an analyst or a vendor chose for them.
What Evaluation Should Look Like Instead
None of this means agentic AI is worthless. The technology works. The capability is real. Agents that automate content assembly, personalize experiences in real time, and optimize campaign performance within well-governed environments produce measurable gains.
The problem is the sequence. Vendors ship the capability. Practitioners adopt it. The organizational infrastructure to support it arrives third, if it arrives at all.
Senior martech leaders evaluating agentic DXP features in 2026 should invert the vendor’s sequence. Before asking “what can this agent do?”, ask three questions about what your organization has already built.
Does Your Team Have Governance Infrastructure?
Approval workflows, brand guardrails, escalation paths, and monitoring protocols aren’t afterthoughts. They’re prerequisites. An AI agent operating inside your DXP without governance infrastructure is making brand decisions nobody approved and nobody is watching. Deploying an agent without governance is deploying risk on a schedule.
Does It Reduce or Redistribute?
If the answer is “it automates content creation but adds monitoring, prompt management, and quality assurance tasks,” the net effect on your team’s capacity may be zero or negative. Track total task volume and total coordination hours, not individual task-category improvements. Vendors will show you how fast the agent works. Your team needs to know how much oversight it demands.
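The "reduce or redistribute" test reduces to simple arithmetic once both sides of the ledger are tracked. A sketch with illustrative numbers (not figures from the surveys cited above):

```python
# Minimal sketch of the net-capacity test: compare hours saved on tasks
# against new oversight hours the agent creates. Numbers are illustrative.

def net_hours_change(saved_task_hours: float, new_oversight_hours: float) -> float:
    """Positive = real capacity gained; zero or negative = pure redistribution."""
    return saved_task_hours - new_oversight_hours

saved = 10.0      # the agent drafts emails 10 hours/week faster...
oversight = 9.0   # ...but review, prompt maintenance, and monitoring add 9
net = net_hours_change(saved, oversight)  # a 1-hour gain, not a 10-hour one
```

The vendor's demo measures only the first number. The second number only shows up if the buying team insists on counting it.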
Can You See, Audit, and Override?
An agent that operates inside your DXP and touches content, personalization, or customer targeting must be transparent. If you can’t see what it decided, why it decided it, and reverse the decision when it’s wrong, you’ve outsourced judgment without accountability.
The trade-off in this approach is speed. Organizations that pause to build governance, clean their data, and upskill their teams will watch competitors move first. That’s real. But competitors deploying agents into ungoverned environments are accumulating operational debt: monitoring gaps, brand-risk exposure, team skill deficits, and integration fragility that compounds quarter over quarter. That debt is invisible until it’s expensive.
The organizations that build the foundation first won’t move fastest. They’ll move furthest. And when the vendor calls next quarter with the next wave of agentic capabilities, those organizations will know what questions to ask, what readiness looks like, and whether the agent is solving their problem or creating a new one that wears a better label.
References
1. Optimizely. (2026, April 29). New Optimizely research suggests AI’s not helping marketers regain their creativity [Press release]. https://www.optimizely.com/company/press/passion-pressure-paradox/
2. Gartner. (2025, October 29). Survey of 413 marketing technology leaders on AI agent initiatives. As reported in Communications Today. https://www.communicationstoday.co.in/45-of-martech-leaders-say-vendor-ai-agents-short-of-business-expectations/
3. MarTech. (2026, March). 2025 MarTech replacement survey. Third Door Media. https://martech.org/wp-content/uploads/2026/03/MT_replacement_survey_2025_031126.pdf
4. Mooney, M. (2025, December). AI entered the digital experience stack in 2025. Reality set the terms. CMSWire. https://www.cmswire.com/digital-experience/ai-entered-the-digital-experience-stack-in-2025-reality-followed/
5. Walker, T. (2026, May). Most AI agents fail without data and governance maturity. MarTech. https://martech.org/most-ai-agents-fail-without-data-and-governance-maturity/
6. BizTechReports. (2025, October 28). AI forces a rethink of the marketing technology stack - Real Story Group. https://www.biztechreports.com/news-archive/2025/10/24/ai-forces-a-rethink-of-the-marketing-technology-stack-nbsp-real-story-group-october-28th-2025
