The principle that technology decisions don’t eliminate complexity but redistribute it, shifting the burden from one part of the organization, workflow, or architecture to another.
Every technology vendor pitches simplification. Fewer steps, cleaner interfaces, automated workflows. What they’re usually describing is complexity redistribution: the complexity didn’t disappear; it moved to a place the buyer can’t see during the demo.
A marketing cloud simplifies the user experience by bundling tools into a single interface. The complexity moves to the data layer, where multiple acquired products share data through internal integrations of varying quality. A composable architecture simplifies the presentation layer by letting teams assemble experiences from modular components. The complexity moves to the integration layer, where someone has to connect those components, manage API versioning, and ensure data consistency across services.
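A minimal sketch of where that integration-layer complexity ends up, assuming a TypeScript codebase and two hypothetical services (a catalog API still on v1 and a pricing API already on v2): the presentation component stays trivial, while the glue code absorbs version handling and data reconciliation.

```typescript
// Hypothetical glue code. The component that renders a product card is simple;
// this layer, which someone now has to own, is where the complexity moved.

interface ProductCard {
  id: string;
  name: string;
  price: number | null; // null when pricing disagrees with catalog or is unavailable
}

// Assumed response shapes for the two modular services (illustrative only).
interface CatalogV1Response { product_id: string; product_name: string }
interface PricingV2Response { sku: string; amount_cents: number }

async function fetchProductCard(id: string): Promise<ProductCard> {
  // API versioning: the catalog team still ships v1, pricing already moved to v2.
  const [catalogRes, pricingRes] = await Promise.all([
    fetch(`https://catalog.example.com/v1/products/${id}`),
    fetch(`https://pricing.example.com/v2/prices/${id}`),
  ]);

  if (!catalogRes.ok) {
    throw new Error(`catalog v1 failed for ${id}: ${catalogRes.status}`);
  }
  const catalog = (await catalogRes.json()) as CatalogV1Response;

  // Data consistency: the services key the same product differently, and
  // pricing may not know about it yet.
  let price: number | null = null;
  if (pricingRes.ok) {
    const pricing = (await pricingRes.json()) as PricingV2Response;
    if (pricing.sku === catalog.product_id) {
      price = pricing.amount_cents / 100;
    }
  }

  return { id: catalog.product_id, name: catalog.product_name, price };
}
```

The code that calls fetchProductCard is a few lines; the reconciliation rules, failure modes, and version migrations it hides are the redistributed complexity.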
AI is the most visible current example. AI-powered tools reduce operational complexity by automating content generation, audience segmentation, and campaign optimization. The complexity redistributes to governance (who reviews AI outputs?), data management (what context feeds the model?), and risk assessment (what happens when the model is wrong?). The marketer’s daily work gets simpler. The organization’s management challenge gets harder.
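A sketch of what that redistributed work can look like, assuming a hypothetical generateCopy model client and an internal review gate: the marketer’s step shrinks to one call, while the organization inherits the review criteria, the context policy, and the fallback path around it.

```typescript
// Hypothetical governance wrapper around an AI content call. Generation is one
// line; the surrounding review, context, and fallback logic is the complexity
// that moved out of the marketer's workflow and onto the organization.

interface DraftReview {
  draft: string;
  status: "approved" | "needs_human_review";
  reasons: string[];
}

// Assumed stand-in for whatever model client the team actually uses.
async function generateCopy(brief: string, approvedFacts: string[]): Promise<string> {
  // Placeholder: a real implementation would call the model with the brief and
  // only the approved facts as context.
  return `Draft based on: ${brief} (${approvedFacts.length} approved facts)`;
}

const BANNED_CLAIMS = ["guaranteed results", "risk-free"]; // illustrative policy list

async function draftCampaignEmail(brief: string, approvedFacts: string[]): Promise<DraftReview> {
  // Data management question: what context feeds the model?
  const draft = await generateCopy(brief, approvedFacts);

  // Governance question: who reviews AI outputs, and against what criteria?
  const reasons: string[] = [];
  for (const claim of BANNED_CLAIMS) {
    if (draft.toLowerCase().includes(claim)) {
      reasons.push(`contains disallowed claim: "${claim}"`);
    }
  }
  if (draft.length > 2000) {
    reasons.push("exceeds channel length limit");
  }

  // Risk question: what happens when the model is wrong?
  // Here, anything flagged routes to a human instead of shipping automatically.
  return {
    draft,
    status: reasons.length === 0 ? "approved" : "needs_human_review",
    reasons,
  };
}
```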
The principle doesn’t argue against adopting new technology. It argues for asking a question that most evaluation processes skip: where does the complexity go? If the answer is “it goes to a team that’s already at capacity” or “it goes to a layer we don’t currently govern,” the simplification is temporary. The complexity will resurface as integration failures, governance gaps, or operational bottlenecks in a part of the organization that wasn’t prepared to absorb it.
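One way to make the question concrete during evaluation, sketched here as a hypothetical record a buying team might fill in per shortlisted tool: naming the destination and its capacity forces the answer the demo omits.

```typescript
// Hypothetical evaluation record: "where does the complexity go?" captured as
// fields to fill in before signing, not discover afterward.

interface ComplexityAssessment {
  tool: string;
  complexityRemovedFrom: string;     // e.g. "campaign ops daily workflow"
  complexityMovedTo: string;         // e.g. "data engineering integration layer"
  receivingTeamAtCapacity: boolean;  // if true, expect bottlenecks rather than savings
  layerCurrentlyGoverned: boolean;   // if false, expect governance gaps
}

function simplificationIsDurable(a: ComplexityAssessment): boolean {
  // The section's own test: simplification is temporary when the complexity lands
  // on a team that is already full or in a layer no one currently governs.
  return !a.receivingTeamAtCapacity && a.layerCurrentlyGoverned;
}
```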
Treating complexity as a conserved quantity (it can be moved but not destroyed) produces better technology decisions than treating it as something the right purchase can eliminate.