The Hidden Advantage: Why AI Gains Stay Private
If large language models are as transformative as claimed, where's the disruption?
The question isn't rhetorical. ChatGPT reached 100 million users faster than any technology in history. Enterprise AI adoption jumped from 55% to 78% in a single year. Private investment in generative AI hit $33.9 billion in 2024, 8.5 times 2022 levels. Yet labor productivity across OECD countries grew just 0.6% in 2023, with 2024 estimates around 0.4%.
Something doesn't add up. Or rather, something does, but not in the way the productivity statistics suggest.
The Numbers That Don't Match
Start with individual productivity gains. They're substantial and well-documented:
MIT Study (453 professionals): ChatGPT reduced task completion time by 40% and improved output quality by 18%. Weaker performers benefited most.
St. Louis Fed Survey: Workers using generative AI report saving 5.4% of work hours on average, about 2.2 hours per week for a full-time employee.
PwC AI Jobs Barometer: Sectors with higher AI exposure show 3x higher revenue-per-employee growth since 2022.
Scientific Publishing: Researchers flagged as using LLMs posted 30-50% more papers than non-users, with the biggest gains among non-native English speakers (43-89% more papers from Asian institutions).
These aren't marginal effects. A 40% time reduction is transformative. A 50% increase in research output would reshape entire fields.
Yet when economists look at aggregate productivity, the signal largely vanishes. The St. Louis Fed calculates that all those individual time savings translate to just 1.1% aggregate productivity gain. The gap between 40% task-level improvement and 1.1% economy-wide impact demands explanation.
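The dilution from task-level to economy-wide numbers is easy to sketch. Only the 40% (MIT) and 5.4% / 1.1% (St. Louis Fed) figures come from the studies above; the task share and adoption rate below are illustrative assumptions chosen to show how the arithmetic compresses a large individual gain, not measured values.

```python
# How a 40% task-level speedup shrinks to ~1% aggregate productivity.
# task share and adoption rate are illustrative assumptions.

task_speedup = 0.40        # faster on AI-suitable tasks (MIT study)
ai_task_share = 0.135      # assumed: fraction of a user's work AI touches

# Per-user hours saved: only the AI-suitable slice of the job speeds up.
per_user_saving = ai_task_share * task_speedup           # = 0.054
print(f"per-user saving: {per_user_saving:.1%}")         # 5.4% (Fed survey)

adoption = 0.20            # assumed: share of all workers using AI
aggregate = per_user_saving * adoption
print(f"aggregate gain:  {aggregate:.2%}")               # ~1.1% (Fed estimate)
```

Two multiplications are enough to turn a transformative task-level number into statistical noise.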
The Finance Model: Alpha Decay
In quantitative finance, there's a well-understood phenomenon called "alpha decay." A profitable trading strategy works until it doesn't. The pattern is consistent:
- Someone discovers an edge (an arbitrage, a pattern, a predictive signal)
- They exploit it privately, generating returns
- Eventually, the strategy leaks or is reverse-engineered
- Competition crowds in, eliminating the edge
- Only then does it get published in academic papers
The timeline is brutal. High-frequency strategies last days to weeks. Momentum-based algorithms survive 3-6 months. Even longer-term approaches rarely stay effective beyond 12-18 months. Once published, strategies decay almost immediately, arbitraged away in hours or minutes as dozens of participants rush to execute the same signal.
This creates a simple rule: profitable strategies are kept secret; only strategies that no longer work get published.
The implication? Academic finance literature is largely a graveyard of dead edges. The live ones stay private.
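The decay timelines can be pictured with a toy exponential model. The half-lives below are illustrative stand-ins loosely matching the ranges quoted above, not values fitted to real trading data.

```python
# Toy alpha-decay model: once a strategy leaks, its excess return
# halves every `half_life_days`. Half-lives are illustrative guesses.

def remaining_edge(days_since_leak: float, half_life_days: float) -> float:
    """Fraction of the original excess return still exploitable."""
    return 0.5 ** (days_since_leak / half_life_days)

for label, half_life in [("HFT signal", 5),
                         ("momentum algo", 60),
                         ("published strategy", 0.05)]:   # ~1 hour
    left = remaining_edge(30, half_life)
    print(f"{label:20s} edge left after 30 days: {left:.1%}")
```

A published strategy's edge is effectively zero within a day, while an unpublished momentum signal still retains most of its value a month in: secrecy is the asset.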
The AI Parallel
Now consider AI productivity gains through the same lens.
If you discover that Claude can write your quarterly reports in 20 minutes instead of 4 hours, what do you do? The rational response isn't to tell your employer. It's to:
- Maintain the appearance of working 4 hours
- Use the saved time for other work, leisure, or side projects
- Enjoy a personal productivity surplus without organizational visibility
This isn't speculation. The data shows exactly this pattern:
Hidden AI use is rampant: Between 32% and 50% of workers report using AI without telling their employers. The top reason? "Secret advantage over peers" (36%). Fear of job loss runs second (30%).
The competence penalty: Researchers showed 1,026 engineers identical code and varied only whether it was described as AI-assisted. When reviewers believed AI was involved, they rated the engineer's competence 9% lower, despite identical work. Women and older workers faced even steeper penalties.
Employers aren't helping: 71% of company leaders say they'd hire less experienced candidates with AI skills over more experienced ones without. Yet nearly half of executives aren't investing in AI tools or training. Workers rationally conclude: learn AI privately, use it secretly, keep the edge.
The information asymmetry is structural. Workers with AI skills have every incentive to consume the productivity gains themselves rather than make them organizationally visible.
Where the Gains Actually Go
If AI productivity gains aren't showing up in aggregate statistics, where do they go?
On-the-job leisure: If workers complete tasks faster without employers' knowledge, they take the time savings as informal breaks. Output stays constant; effort decreases.
Quality ceiling effects: Workers use AI to hit "good enough" faster, not to produce more. A consultant might use Claude to draft a client memo in 30 minutes instead of 3 hours, then spend the remaining time on email, not on producing additional memos.
Unmeasured output: Much AI-assisted work goes into areas that don't show up in traditional productivity metrics: personal projects, learning, side businesses, or simply better work-life balance.
Redistributed rather than created: Some AI gains come at others' expense. If your AI-polished resume beats competitors, you've redistributed opportunity, not created it.
A striking data point: researchers found that only 3-7% of AI productivity gains translate into higher wages. Workers aren't being paid for their increased output. This suggests employers either don't see the gains or can't capture them.
The Paradox of Publication
The scientists-and-LLMs data illustrates the paradox perfectly.
Researchers using AI tools are publishing 30-50% more papers. But here's the catch: studies show that the more "complex" the AI-generated writing, the less likely the paper is to deliver real scientific value. Good writing masks weak ideas.
In other words: AI helps researchers produce more, but the "more" may be padding rather than progress. The individual researcher benefits (more publications, stronger CV). The field pays the cost (more noise, harder peer review).
This is the finance pattern again. Individual gains are real and substantial. But they're captured privately rather than contributing to collective productivity. When everyone publishes more, the signal-to-noise ratio drops and the advantage dissipates.
The Contradictory Evidence
This pattern (real individual gains that don't translate to collective improvement) appears even more starkly when researchers try to measure AI productivity directly. The results are bewildering: studies reach wildly different conclusions depending on who they measure and what they measure.
AI Coding Productivity: Randomized Controlled Trials
| Study | Participants | Task Type | Result | Stat. Sig. |
|---|---|---|---|---|
| GitHub Copilot (Peng et al.) | 95 freelancers | HTTP server (JS) | +55.8% faster | Yes (p<0.05) |
| Google Internal | 96 Google engineers | Internal codebase | +21% faster | No (p=0.086) |
| C++ Runtime (Siddiq et al.) | 32 developers | Performance optimization | -29%/-15% slower | Yes (p<0.001) |
| METR Open Source | 16 OSS maintainers | Real issues, familiar code | -19% slower | Yes (CI: 1-39%) |
The pattern is revealing. The largest gains (+55.8%) came from freelancers recruited via Upwork (about 50% earning under $10,000/year) working on a standardized JavaScript task. The worst outcomes (-19% to -29%) came from experienced developers working on familiar codebases or performance-critical code.
The METR study is particularly instructive. Before starting, developers predicted AI would save 24% of their time. Economists predicted 39% faster; ML researchers predicted 38% faster. After completing the tasks, developers still estimated they'd been 20% faster with AI. The actual measurement: 19% slower. Everyone was wrong, and in the same direction.
Why the perception gap? The study found developers spent 9% of their time reviewing AI output, 8% prompting, and 4% waiting on the model. For experts who already know what to type, this overhead exceeds any benefit. AI helps most when you don't already know what you're doing.
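A back-of-envelope model makes the overhead concrete. Treat review (9%), prompting (8%), and waiting (4%) as a fixed 21% share of the AI-assisted session and ask how much faster the remaining work must go just to break even. The framing below is a simplification for illustration, not the METR study's own analysis.

```python
# If AI makes the actual work a fraction `s` faster, but AI mechanics
# consume a fixed `overhead` share of the session, total session time
# relative to a no-AI baseline of 1.0 is T = (1 - s) / (1 - overhead).

overhead = 0.09 + 0.08 + 0.04    # 21% of the session (METR breakdown)

def session_time(work_speedup: float) -> float:
    return (1 - work_speedup) / (1 - overhead)

print(f"{session_time(0.21):.2f}")   # 1.00 -> break-even needs ~21% speedup
print(f"{session_time(0.06):.2f}")   # 1.19 -> the observed 19% slowdown
```

Under this model, an expert who gains only a few percent on the work itself ends up net slower, which is consistent with the measured 19% slowdown.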
The Google study hints at a middle ground: experienced professionals at a top tech company showed a 21% speed improvement, but the effect wasn't statistically significant, suggesting high variance in outcomes even within a relatively homogeneous population.
This echoes the innovator's dilemma. Experts have spent years optimizing their workflows: muscle memory, mental shortcuts, deep familiarity with their tools. AI doesn't slot into these workflows; it disrupts them. The overhead of learning to prompt effectively, reviewing AI output, and integrating suggestions into existing patterns may exceed any time saved. Novices, unburdened by established habits, can build AI-native workflows from scratch. There's something to the Zen concept of "beginner's mind" here: the expert's knowledge becomes a liability when the tools change fundamentally. The novice, knowing nothing, has nothing to unlearn.
But experts who do adapt report transformative results. Salvatore Sanfilippo (creator of Redis) describes building a 700-line C library for BERT embeddings in five minutes, matching PyTorch's output at comparable speed. Tasks that would take weeks now take hours. His key insight: "writing code is no longer needed for the most part." The value shifted to problem conceptualization and communicating intent to the AI. This requires abandoning the identity of "person who writes code" for "person who directs code generation", a psychological shift many experts resist. Qualitative research confirms this pattern: experienced developers who succeed with AI agents do so by "controlling agent behavior leveraging their expertise", maintaining oversight rather than fully delegating.
The contradiction resolves itself once you accept that AI productivity isn't a single numberâit's a distribution. Some people gain enormously, others lose time, and the aggregate washes out to noise. The gains that do exist flow to individuals who can capture them privately, not to organizations or economies that might measure them.
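That distributional story is easy to simulate. The mixture below (a few big winners, a large middle near zero, a tail of experts losing time) uses entirely hypothetical weights and effect sizes; the point is only that a mean near zero can hide large individual gains.

```python
import random

random.seed(42)

# Hypothetical mixture: 10% power users with big wins, 60% near zero,
# 30% experts losing time to overhead. All weights and ranges invented
# for illustration; only the shape of the argument matters.

def productivity_gain() -> float:
    r = random.random()
    if r < 0.10:
        return random.uniform(0.30, 0.60)    # power users
    elif r < 0.70:
        return random.uniform(-0.05, 0.05)   # most workers
    else:
        return random.uniform(-0.30, 0.00)   # experts fighting the tool

gains = [productivity_gain() for _ in range(100_000)]
mean = sum(gains) / len(gains)
p95 = sorted(gains)[int(0.95 * len(gains))]

print(f"mean gain:       {mean:+.1%}")   # washes out near zero
print(f"95th percentile: {p95:+.1%}")    # yet the top tail gains a lot
```

An economist measuring the mean sees nothing; the power users in the top tail see a transformation.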
Why This Matters
The AI-as-hidden-advantage model has implications:
Measurement will lag reality: If workers capture gains privately, traditional productivity statistics will systematically undercount AI's impact. The economy may be transforming in ways that don't show up in BLS releases.
Inequality will compound: Workers who figure out AI first gain an edge in output, in job security, and in career advancement. This advantage compounds before it diffuses. The "AI divide" may widen before it closes.
Organizational adoption matters more than tool availability: The 5% of companies seeing real AI value aren't using better tools than the 95% who aren't. They're structured to capture gains organizationally rather than letting them dissipate into individual time savings.
The disruption may come suddenly: When enough individual advantages accumulate, or when organizations finally restructure to capture AI gains, the aggregate statistics could shift rapidly. The "J-curve" of productivity (declining initially as firms invest, then surging as complements fall into place) may be steeper than expected.
The Information Asymmetry Economy
We're living through an experiment in information asymmetry. Powerful tools are freely available. The knowledge of how to use them effectively is not evenly distributed. And the incentive structure rewards keeping that knowledge private.
In finance, this dynamic is well-understood and regulated (however imperfectly). Insider trading is illegal precisely because information asymmetry is corrosive to markets.
In the labor market, no such constraints exist. The worker who masters AI-assisted workflows gains an edge over colleagues who haven't. The rational move is to exploit that edge quietly, not to democratize it.
The gap will likely widen. Workers whose skills and temperament align with AI-augmented work (comfortable with iteration, good at prompting, willing to review and refine machine output) will pull further ahead. Those whose expertise lies in domains AI handles poorly, or who resist adapting established workflows, risk falling behind. This isn't about intelligence or effort; it's about compatibility with a new mode of working, and the willingness to adapt to it.
This isn't a criticism; it's a description. Given the incentives, workers are behaving rationally. The question is whether organizations and institutions will adapt to capture more of the gains collectively, or whether AI productivity will remain largely a private benefit, invisible in the statistics but very real to those who've figured it out.
The disruption is here. It's just not evenly distributed, and it's not showing up where we're looking.
Data sources: ChatGPT productivity study (MIT/Science) | Generative AI productivity (St. Louis Fed) | Productivity statistics (BLS) | Secret AI use at work (Fortune) | AI competence penalty (arXiv) | Copilot productivity (arXiv) | METR developer study (arXiv) | Scientific publishing and LLMs (arXiv) | Expert AI adoption (arXiv)