Bad Money: When AI Floods the Market for Ideas
In 1558, Queen Elizabeth I's financial advisor Sir Thomas Gresham observed a curious phenomenon: when debased coins circulated alongside pure silver ones, people hoarded the good coins and spent the bad ones. The debased currency drove the valuable currency out of circulation. This principle, later named Gresham's Law, has been restated across centuries: bad money drives out good.
Nearly five centuries later, the same dynamic is playing out in information markets. Except now it's not coins. It's papers, articles, and ideas.
The Year of Slop
"Slop" was Merriam-Webster's Word of the Year for 2025. The dictionary defines it as "low-quality content, especially that produced using AI." The term captures something precise: content that technically functions but lacks the intention, effort, and genuine thought that gives writing its value.
The numbers are striking. Over 52% of newly published English-language articles online are now AI-generated. Scientists using LLMs are posting roughly 33% more papers than before. A study of NeurIPS submissions found a 55% increase in objective errors between 2021 and 2025. Most alarmingly, GPTZero's analysis of 4,000 NeurIPS papers found AI-hallucinated citations in at least 53 that had passed peer review. Fabricated references, pointing to papers that don't exist, embedded in research that cleared the field's most prestigious gatekeepers.
The Gate Closes
In October 2025, arXiv, the preprint server that hosts much of physics and computer science research, changed its rules. Authors submitting survey or position papers to the computer science category must now document that the work has passed peer review elsewhere before it can be posted. Without evidence of prior publication, submissions are likely to be rejected.
The rationale was blunt: an "unmanageable influx." arXiv was receiving hundreds of these submissions monthly. With LLMs, such papers have become "relatively easy to churn out on demand." The arXiv team noted, somewhat pointedly, that review articles and position papers were "never listed as part of the accepted content types" anyway.
This isn't a ban on AI-assisted research. It's a response to volume. When the cost of producing a superficially plausible survey paper drops to near zero, the number of such papers explodes. Peer review systems, designed for a world where writing required effort, buckle under the load.
The Market for Lemons
In 1970, economist George Akerlof published a paper that would eventually win him the Nobel Prize. "The Market for Lemons" described what happens when buyers can't distinguish quality. Akerlof explicitly invoked Gresham: "The 'bad' cars tend to drive out the good (in much the same way that bad money drives out good)... Gresham's Law has made a modified reappearance."
But Akerlof identified a crucial difference. Under Gresham's Law, both parties can tell good money from bad; the problem is that legal tender laws force them to accept debased coins at face value. In the used car market, only sellers know whether they're selling a lemon. This information asymmetry changes everything. Buyers, unable to verify quality, assume they're getting average quality and pay accordingly. Sellers of genuinely good cars, unable to get fair prices, leave the market. What remains converges toward low quality.
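To see how fast the logic unravels, consider a toy version of Akerlof's argument (the specific numbers are illustrative, not taken from his paper). Suppose half the cars on the market are good and half are lemons. Buyers value a good car at $3,000 and a lemon at $1,000, and owners of good cars won't sell for less than $2,500. If buyers can't tell the two apart, the most they will rationally offer is the expected value:

0.5 × $3,000 + 0.5 × $1,000 = $2,000

That offer falls below what good-car owners will accept, so they withdraw. Only lemons remain, buyers anticipate this, and the price they are willing to pay drops again. Substitute "carefully researched survey paper" for "good car" and the mechanism carries over directly.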
The AI slop problem combines both dynamics. As with Gresham's coins, some participants can tell the difference, at least some of the time: experts recognize the telltale patterns, the generic phrasings, the confident wrongness. But as in Akerlof's used car market, most buyers (readers, reviewers, editors) cannot reliably distinguish quality. They begin discounting everything. Genuine authors face a choice: invest substantial effort for the same diminished recognition, or join the slop economy. Meanwhile, peer reviewers, who according to Nature surveys are themselves using AI for over 50% of their reviews, struggle to maintain standards. The gatekeepers are as overwhelmed as the gates.
Why Detection Fails
The obvious solution is detection. Build tools to identify AI-generated text. The problem is that detection has fundamental limits.
Stanford researchers found that leading AI detection tools misidentify writing by non-native English speakers as machine-generated. In a field where international collaboration is the norm, false positives create their own crisis of trust. Meanwhile, as language models improve, their output converges on human patterns. The arms race between generation and detection has a structural asymmetry: a detector has to be right nearly every time to be trusted, while a generator only needs to slip past it some of the time.
Watermarking has been proposed. Some researchers have experimented with embedding verifiable markers in AI output. But adoption isn't universal, and techniques exist to remove or obscure watermarks. The MIT Press survey on LLM-generated text detection summarizes the situation: methods that work well on benchmark datasets fail in real-world settings where AI and human text are mixed and edited.
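For intuition about how a statistical watermark can work in principle, here is a deliberately simplified sketch, loosely modeled on published "green list" schemes. Nothing below reflects any specific vendor's implementation; the hashing rule and the 0.5 baseline are illustrative assumptions. The idea is that a watermarking generator quietly biases its word choices toward a pseudo-random "green" subset of the vocabulary, seeded by the preceding word, so a detector can later count how often the text lands in that subset:

import hashlib

def green_fraction(tokens):
    # Fraction of tokens that fall in the "green list" seeded by their predecessor.
    # A watermarking generator would bias sampling toward green tokens, so
    # watermarked text scores well above 0.5; ordinary text hovers near 0.5.
    if len(tokens) < 2:
        return 0.0
    green = 0
    for prev, curr in zip(tokens, tokens[1:]):
        seed = hashlib.sha256(prev.encode()).digest()  # the partition depends on the previous token
        if hashlib.sha256(seed + curr.encode()).digest()[0] % 2 == 0:  # toy rule: "green" if the hash byte is even
            green += 1
    return green / (len(tokens) - 1)

print(green_fraction("the model produced this sentence entirely on its own".split()))

The same sketch also shows why the approach is fragile: paraphrase a few words, interleave human sentences, or run the text through a second model, and the green-token count drifts back toward chance, which is exactly the mixed-and-edited setting the survey flags.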
Where Does the Good Go?
Gresham observed that when bad money drove out good, the good money didn't disappear. People hoarded it. The same pattern is emerging in information markets. Quality content retreats behind paywalls, into invite-only communities, or stays in private circulation. The open internet becomes increasingly synthetic. Knowledge stratifies: those who can pay for access, or who know where to look, get the genuine article. Everyone else gets slop.
Richard Sever, co-founder of bioRxiv and medRxiv, warned of a potential "existential crisis" if the signal-to-noise ratio collapses. The threat isn't that AI will produce work equal to human effort. It's that the flood of low-effort content makes finding and valuing high-effort content prohibitively expensive.
The Irony
AI was supposed to democratize knowledge creation. Lower the barriers. Let anyone write, research, summarize, synthesize. In a narrow sense, it has. But Gresham's Law operates regardless of intent. When producing plausible text costs nothing, plausible text floods the market. When verifying quality costs more than producing content, verification breaks down.
The institutions that once maintained trust (peer review, editorial curation, reputation systems) evolved in an environment where production was costly enough to serve as a natural filter. That environment no longer exists. What new institutions might restore the ability to distinguish signal from noise remains an open question. But the economic logic is clear. Bad money drives out good. Unless we find ways to make quality visible again, the market for ideas will go the way of the market for debased coins.
Links: arXiv Changes Rules After AI Spam (404 Media) | arXiv Banning CS Reviews (Nature) | Merriam-Webster Word of the Year: Slop (PBS) | NeurIPS Papers Had Hallucinated Citations (Fortune) | AI Slop Went Mainstream (Euronews)