Mystery solved: Anthropic reveals changes to Claude's harnesses and operating instructions likely caused degradation

For several weeks, a growing chorus of developers and AI power users claimed that Anthropic's flagship models were losing their edge. Users across GitHub, X, and Reddit reported a phenomenon they described as "AI shrinkflation": a perceived degradation in which Claude seemed less capable of sustained reasoning, more prone to hallucinations, and increasingly wasteful with tokens. Critics pointed to a
Source: VentureBeat



