‘Tokenmaxxing’ is making developers less productive than they think

There’s an old saw in management: what you measure matters, and you typically get more of whatever you’re measuring.

Software engineers have debated productivity metrics for decades, starting with lines of code. But as the recent generation of AI coding agents delivers more code than ever, what their managers ought to be measuring is less clear.

Enormous token budgets — essentially, the amount of AI processing power a developer is authorized to consume — have become a badge of honor among Silicon Valley developers, but that’s a very weird way to think about productivity. Measuring an input to the process makes little sense when you presumably care more about the output. It might make sense if you’re trying to encourage more AI adoption (or sell tokens), but not if you’re trying to become more efficient.

Consider the evidence from a new class of companies operating in the “developer productivity insight” space. They’re finding that developers using tools like Claude Code, Cursor, and Codex generate far more accepted code than they did before. But they also find that engineers have to return to revise that accepted code far more often than before, undercutting claims of increased productivity.

Alex Circei, the CEO and founder of Waydev, is building an intelligence layer to track these dynamics; his firm works with 50 different customers that employ more than 10,000 software engineers. (Circei has contributed to TechCrunch in the past, but this reporter had never met him before.)

He says that engineering managers are seeing code acceptance rates of 80% to 90% — meaning the share of AI-generated code that developers approve and keep — but they’re missing the churn that happens when engineers have to revise that code in the following weeks, which drives the real-world acceptance rate down to between 10% and 30% of generated code.
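The gap between those two figures can be made concrete with a little arithmetic. The sketch below is an illustration, not Waydev’s actual methodology, and the function name and line counts are hypothetical; it simply shows how subtracting code that is later rewritten shrinks a headline acceptance rate into the range the article describes.

```python
def churn_adjusted_acceptance(generated_lines, accepted_lines, revised_lines):
    """Contrast the headline acceptance rate with a churn-adjusted one.

    generated_lines: lines of code the AI agent produced
    accepted_lines:  lines the developer approved and kept at review time
    revised_lines:   accepted lines later rewritten in follow-up weeks
    """
    # Headline metric: share of generated code accepted at review time.
    headline = accepted_lines / generated_lines
    # Adjusted metric: only count accepted code that survived later churn.
    surviving = accepted_lines - revised_lines
    return headline, surviving / generated_lines

# Hypothetical numbers chosen to match the article's ranges:
headline, adjusted = churn_adjusted_acceptance(
    generated_lines=1000, accepted_lines=850, revised_lines=600
)
print(headline)  # 0.85  (the 80%-90% figure managers see)
print(adjusted)  # 0.25  (the 10%-30% real-world figure)
```

The point of the contrast is that both numbers divide by the same input (generated code), so the only difference is whether revisions made after the initial review are counted against the tool.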

The rise of AI coding tools led Waydev, founded in 2017 to provide developer analytics, to rework its platform over the last six months. Now the company is releasing new tools that track the metadata generated by AI agents, offering analytics on the quality and cost of their code to give engineering managers more insight into both AI adoption and efficacy.


