‘Tokenmaxxing’ is making developers less productive than they think
Summary (EN)
TechCrunch reported on April 17 that a growing body of developer-analytics data is challenging the assumption that more AI usage translates into higher engineering productivity. The article focuses on a practice informally described as “tokenmaxxing,” in which large token budgets become a proxy for AI intensity inside software teams. It argues that token consumption is an input metric rather than an output metric, and that emerging evidence shows many teams are generating more code without preserving equivalent long-term value. Drawing on multiple analytics vendors, the report says code acceptance rates for AI-generated output can initially look high, in the 80% to 90% range, but a significant share of that code is later revised or removed. Waydev, which works with more than 10,000 engineers across 50 customers, said this can push true retained acceptance down to 10% to 30% of generated code. The article also cites GitClear, which reported in January that regular AI users showed 9.4 times higher code churn than non-AI users, and Faros AI, which found code churn rose 861% under high AI adoption. Jellyfish data on 7,548 engineers in the first quarter of 2026 suggested that the largest token budgets produced the most pull requests, but delivered only about double the throughput at roughly ten times the token cost. The piece stands out as a data-backed, counterintuitive view of AI coding adoption rather than a standard product or funding story.
Summary (ZH)
TechCrunch reported on April 17 that a growing body of data from developer-analytics platforms is challenging a popular assumption: that the more AI is used, the higher engineering productivity becomes. The article labels this phenomenon “tokenmaxxing,” meaning that companies internally treat consumable token budgets as a symbol of AI usage intensity and productivity. The report notes that tokens are fundamentally an input metric, not an output metric; the latest evidence shows that while many teams generate more code with AI assistance, the long-term retained value has not grown proportionally. Drawing on data from several developer-productivity analytics firms, the article says AI-generated code initially appears to have an acceptance rate of 80% to 90%, but a large share of that code is subsequently modified, rewritten, or deleted by engineers. Waydev, whose data covers 50 customers and more than 10,000 engineers, indicates that the share of AI-generated code that is truly retained long-term may be only 10% to 30%. The report also cites GitClear research finding that regular AI users exhibit 9.4 times the code churn of non-AI users, and a Faros AI report showing that code churn rose 861% in high-AI-adoption environments. Jellyfish's analysis of 7,548 engineers in the first quarter of 2026 found that those with the highest token budgets did submit more pull requests, but achieved only about 2x throughput while consuming roughly 10x the token cost. This makes the article a rare, data-backed, counter-consensus observation within the past 24 hours, focused not on a product launch but on redefining how AI coding productivity should be measured.
Source: https://techcrunch.com/2026/04/17/tokenmaxxing-is-making-developers-less-productive-than-they-think/
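The gap the article describes between input metrics (token spend, initial acceptance) and output metrics (retained code, throughput per cost) can be restated as simple arithmetic. The sketch below is hypothetical and is not any vendor's real API; the numbers only echo figures reported in the piece (85% initial acceptance, ~20% retained code, 2x pull requests at 10x token cost):

```python
# Back-of-the-envelope restatement of the article's figures.
# All names and numbers here are illustrative, not a real analytics API.

def retained_share(initial_acceptance: float, later_loss: float) -> float:
    """Fraction of AI-generated code that is accepted AND survives later revision.

    initial_acceptance: share accepted at review time (article: 0.80-0.90)
    later_loss: share of that accepted code later rewritten or removed
    """
    return initial_acceptance * (1.0 - later_loss)


def per_token_efficiency(throughput_multiple: float, cost_multiple: float) -> float:
    """Throughput gained per unit of extra token spend, relative to a baseline budget."""
    return throughput_multiple / cost_multiple


# Waydev-style figures: 85% initial acceptance, but if roughly three quarters
# of that accepted code is later revised away, only ~20% of generated code
# is truly retained -- inside the 10-30% range the article reports.
print(round(retained_share(0.85, 0.76), 3))  # 0.204

# Jellyfish-style figures: ~2x pull requests at ~10x token cost means each
# marginal token buys about 5x less throughput than a modest budget does.
print(per_token_efficiency(2.0, 10.0))  # 0.2
```

The point of the sketch is that a high initial acceptance rate and a low retained share are entirely compatible, which is why the article argues token consumption alone cannot stand in for productivity.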