just a tourist

The AI Reorg Playbook

In late February 2026, Jack Dorsey cut nearly half of Block's workforce — roughly 4,000 people out of 10,000. Wall Street cheered. The stock surged 22% in after-hours trading, its best single day since 2022.

Five weeks later, Dorsey and Sequoia's Roelof Botha published an essay called From Hierarchy to Intelligence that reframed the cuts not as cost reduction but as a thesis about organizational design. The argument: hierarchies exist because humans were the only coordination mechanism available. AI changes that. Middle management is, at its core, an information-routing protocol, and that protocol is about to be replaced. Or so the idea goes.

It's the most radical organizational experiment in tech right now. But Block isn't running it in isolation. Across the industry, companies have been testing the same hypothesis at different intensities — and the results are starting to come in.

The Thesis

The essay traces a surprisingly compelling historical arc. Roman legions invented span of control. The Prussian General Staff invented the professional middle manager. Daniel McCallum drew the first org chart for the Erie Railroad in the 1850s. Every experiment in flatness — Zappos' holacracy, Valve's free-form structure, Spotify's squads — reverted to hierarchy at scale.

The claim: this isn't a failure of will. It's a constraint of the medium. Humans can only carry so much context, so you need layers of humans to aggregate and route information. AI removes that constraint.

Block's new architecture replaces the management layer with four computational layers: atomic capabilities (financial primitives with no UIs), a world model (company state + per-customer models from transaction data), an intelligence layer (composing capabilities into solutions), and interfaces (Square, Cash App, etc.). Three roles remain: individual contributors, DRIs (90-day problem owners), and player-coaches.

There's a genuinely elegant insight about roadmaps:

When the intelligence layer tries to compose a solution and can't because the capability doesn't exist, that failure signal IS the future roadmap.

The system discovers its own gaps. No more PM-driven feature prioritization. It's a beautiful theory. But theories are cheap. What does the data say?
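The gap-discovery loop the essay describes is concrete enough to sketch. A minimal, hypothetical illustration (the capability names and function shape are invented here, not Block's actual API): the intelligence layer tries to compose registered capabilities into a solution, and every capability it reaches for but can't find gets logged as a roadmap signal.

```python
# Hypothetical sketch of "failure signal IS the roadmap".
# Capability names are invented for illustration.
from collections import Counter

capabilities = {"issue_card", "score_risk", "move_funds"}  # what exists today
roadmap_signals: Counter = Counter()                       # missing capability -> demand count

def compose(required: list[str]) -> bool:
    """Try to compose a solution; log any missing capabilities as roadmap signals."""
    missing = [c for c in required if c not in capabilities]
    for gap in missing:
        roadmap_signals[gap] += 1
    return not missing

compose(["score_risk", "move_funds"])       # succeeds: all capabilities exist
compose(["issue_card", "underwrite_loan"])  # fails: gap logged
compose(["underwrite_loan", "score_risk"])  # same gap again, more demand

# The most-demanded gaps *are* the prioritized roadmap.
print(roadmap_signals.most_common())  # [('underwrite_loan', 2)]
```

The design choice worth noting: prioritization falls out of observed demand rather than a PM's judgment, which is exactly the substitution the essay is claiming.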

The Klarna Experiment: A Canonical Case Study

Klarna is the closest thing to a controlled experiment. In 2024, the company replaced ~700 customer service agents with an OpenAI-powered chatbot. The initial numbers looked spectacular: 2.3 million conversations per month, resolution time from 11 minutes to under 2, projected savings of $40M annually. Overall headcount dropped from 7,400 to ~3,000.

Then quality collapsed. Customer satisfaction dropped on complex interactions. Escalation loops, where the bot hands off to a human who doesn't exist anymore, became a recurring complaint. CEO Sebastian Siemiatkowski publicly acknowledged they "went too far."

By 2025, Klarna was quietly rehiring humans. The company settled into a hybrid model: AI handles tier-1 volume (~60-80% of queries), humans handle complexity. Revenue still grew 38% YoY in the U.S., and the IPO valued the company at $19.65B. But the narrative shifted from "AI replaced everyone" to "AI augments the remaining team."

The pattern is notable: spectacular task-level metrics, followed by system-level quality degradation, followed by partial reversal.

The Scorecard

Klarna isn't an outlier. The data across companies is remarkably consistent:

Forrester's 2026 Predictions report surveyed companies that made AI-attributed layoffs: 55% said they regretted the decision.

Gartner predicts half of AI-driven layoffs will be quietly reversed by 2027. Forrester's analyst J.P. Gownder put it bluntly: "When you ask CEOs who announced AI replacement whether they have a mature AI system in place — 9 out of 10 times the answer is no."

Meanwhile, PwC's 2026 CEO Survey found that 56% of CEOs say they've gotten "nothing out of" their AI investments. Only 12% report AI both grew revenue and cut costs. BCG and McKinsey estimate only 5-6% of companies qualify as AI "high performers" with measurable EBIT impact.

The failure mode isn't subtle. Commonwealth Bank of Australia reversed its decision to cut 45 customer service roles after its AI voice-bot failed publicly and required a formal apology. Air Canada's chatbot invented refund policies the company was legally forced to honor. Qualtrics data shows AI customer service fails at four times the rate of other AI applications.

The Productivity Paradox Returns

The macro picture is equally sobering. An NBER study surveying 6,000 CEOs across the U.S., U.K., Germany, and Australia found the vast majority see little operational impact from AI. Task-level gains are real — lab studies consistently show 14-55% improvements on isolated tasks — but they aren't translating to organizational or GDP-level productivity.

The pattern has a name: the Solow paradox, updated for AI. In 1987, Robert Solow observed that "you can see the computer age everywhere but in the productivity statistics." Nearly four decades later, the same gap persists. U.S. productivity grew ~2.7% in 2025 (vs. a 1.4% decade average), but attribution to AI is contested, and research suggests a minimum 3-year lag between adoption and measurable effects. (I explored this gap in more detail in The Hidden Advantage: Why AI Gains Stay Private.)

The explanation may be structural. AI excels at high-volume routine tasks. It degrades on judgment calls, novel situations, and cross-domain coordination — exactly the work that middle managers do. The efficiency gains at the task level get eaten by coordination failures at the system level. This is essentially Amdahl's Law applied to organizations: if AI accelerates 50% of a workflow by 10x but the other 50% requires human judgment, the theoretical maximum speedup is 1.8x, not 10x. The non-automatable fraction dominates.
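The arithmetic above is easy to verify. A minimal check (the 10x and 50% figures are the illustrative numbers from the paragraph, not measured values):

```python
def amdahl_speedup(accelerated_fraction: float, factor: float) -> float:
    """Overall speedup when only part of a workflow is accelerated.

    accelerated_fraction: share of the workflow AI speeds up (0..1)
    factor: how much faster that share runs (e.g. 10 for 10x)
    """
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / factor)

# Half the workflow at 10x: the ceiling is ~1.8x, not 10x.
print(round(amdahl_speedup(0.5, 10), 2))   # 1.82
# Even with an infinite speedup on that half, the ceiling is 2x.
print(round(amdahl_speedup(0.5, 1e9), 2))  # 2.0
```

The second line is the punchline: no amount of task-level acceleration lifts the ceiling set by the non-automatable fraction.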

The AI-Washing Question

There's a more cynical reading of the trend. TechCrunch coined the term "AI-washing layoffs" in February 2026, documenting companies using the AI transformation narrative as cover for financially motivated cuts. The numbers track: AI was cited in 55,000 job cuts in 2025 (4.5% of the total). By March 2026, AI was the number-one cited reason, at 25% of all cuts.

Block fits uncomfortably well. Dorsey's March 2025 layoff memo explicitly stated the cuts were not about replacing people with AI. Eleven months later, AI was the entire thesis. Then, weeks after cutting 4,000 people, Block quietly rehired some — including at least one termination attributed to a clerical error.

Darden Business School asked the question directly: "Is AI the strategy or the scapegoat?"

Where This Actually Works

The pattern across all the data points toward a narrow but real sweet spot. AI works when:

  1. The task is high-volume and routine (Klarna's tier-1 CS, IBM's HR processing)
  2. The human-in-the-loop is preserved for exceptions (Klarna's hybrid model)
  3. The rollout is incremental and reversible (Shopify's hiring freeze vs. Block's mass restructuring)
  4. The metric is task-level productivity, not organizational transformation
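Conditions 1 and 2 together describe a triage pattern rather than a replacement. A minimal sketch of the hybrid shape (the threshold and function names are hypothetical, not Klarna's actual system):

```python
def route(query_confidence: float, is_routine: bool, threshold: float = 0.8) -> str:
    """Send high-confidence routine queries to the bot; everything else to a human.

    The critical property is the fallback: a human must still exist to
    escalate to, or the loop Klarna hit (bot -> nonexistent agent) reappears.
    """
    if is_routine and query_confidence >= threshold:
        return "bot"
    return "human"

print(route(0.95, True))    # bot: tier-1 volume
print(route(0.95, False))   # human: complex, even when the model is confident
print(route(0.40, True))    # human: routine but uncertain
```

The second case is the one mass-replacement designs get wrong: complexity routes to a human regardless of model confidence, which is only possible if the humans weren't all cut.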

It fails when companies treat AI as a wholesale replacement for human judgment, coordination, and institutional knowledge. The productivity gains are real at the task level — but they come with hidden costs and don't scale the way the headline numbers suggest. The 55% regret rate from Forrester is striking precisely because these aren't companies that tried AI and found it lacking — they're companies that tried to replace humans with AI and discovered the hard way that the two perform different functions.

Block's Bet

Which brings us back to Dorsey. Block isn't doing incremental augmentation. It's eliminating the management layer entirely and betting the organizational architecture on a "world model" that, by its own admission, is "in early stages" and will "likely break before it works." Per-employee productivity must more than double to hit 2026 guidance.

The most telling detail might be the co-author. Roelof Botha isn't just Sequoia's managing partner — he sits on Block's board of directors. The co-publication signals that Sequoia sees this not as a Block experiment but as a portfolio-wide playbook. They recorded an accompanying podcast: "Every Company Can Now Be a Mini-AGI."

If the Forrester data is predictive, Block has a 55% chance of quietly walking this back within 18 months. If Dorsey is right — if the world model works, if the intelligence layer composes, if DRIs can replace managers — it's the template for the next decade of corporate design.

Either way, the question is the right one: is hierarchy a choice or a constraint?

The data so far says: it's a constraint. But the constraint hasn't been lifted yet.


Links: From Hierarchy to Intelligence (Sequoia) | Klarna: Now It's Rehiring (Reworked) | The AI Layoff Trap (HR Executive/Forrester) | AI Productivity Paradox (Fortune/NBER) | AI-Washing Layoffs (TechCrunch) | Is AI the Strategy or the Scapegoat? (Darden/UVA) | Block Shares Soar (CNBC)

#ai #block #klarna #management #organizational-design #productivity-paradox