AI codegen is growing up. So are the price tags.

Over the last couple of days, I noticed something that feels small on the surface but is actually part of a much bigger shift.

OpenAI introduced a new, higher-priced Codex-focused tier. At the same time, I could clearly see that the practical 5-hour limit on the lower tier had become noticeably tighter. Even light documentation work burned through it faster than I was used to. I have seen similar trends at other AI tools as well, for example the Google codegen rate-limit reductions.

I do not think this is wrong.

AI codegen is a real productivity tool now. Not a toy, not a novelty, not just a nice demo for conference slides. It saves time. It helps with repetitive work. It helps with drafting, reviewing, checking, restructuring, and sometimes even with finding problems humans would otherwise overlook for too long. In other words: it does useful work. Useful work usually does not stay cheap forever.

So yes, I expect pricing and segmentation to increase.

That is not scandalous. Frankly, I would be more surprised if the current market stayed this generous for another five years. The “free lunch” phase was never going to last forever. In fact, I suspect even today’s higher-priced plans are still subsidized relative to the value they create for many users.

At the same time, this is not just a story about price.

The deeper change is that AI-assisted software development is beginning to split into classes.

For quite a while, AI codegen had a surprisingly leveling effect. Small teams could move faster. Independent developers could do more. Smaller projects could get leverage that used to be available mainly to organizations with larger engineering budgets. That was one of the most interesting things about the whole wave.

Now the market is starting to look more like, well, the software industry.

If you have budget, the answer is simple: upgrade, absorb the cost, move on. If you do not, things become more complicated. You can still benefit, but with more friction, more limits, and more interruptions in the middle of useful work. That may sound minor. It is not. Friction accumulates.

And there is another problem that worries me more in the longer run.

A lot of important software is not fashionable.

Projects like rsyslog are widely deployed. They sit quietly inside real systems. They matter for operations, reliability, and security. Large parts of the Internet depend on software of that kind. But these projects are usually not glamorous. They do not create social buzz. They do not get huge star counts just because they exist. They are not the kind of thing people show off in “look what I built this weekend” posts.

In short: they are important, but not sexy.

That creates a visibility problem. AI labs will naturally notice what is noisy, trendy, consumer-visible, or attached to large commercial ecosystems. Quiet infrastructure software is often none of these things. So there is a real risk that attention, access, early programs, and special support flow more easily toward what is visible than toward what is important.

That is not malice. It is just how radar works. Big shiny things show up first.

But if that pattern hardens, it becomes a real issue. The software ecosystem may end up accelerating the already visible parts even more, while under-recognized but widely deployed infrastructure is left to make do with whatever remains broadly available.

That would be a mistake.

I do think model serving will get cheaper over time. Better hardware, better inference stacks, better model design, better routing: all of that should help. But cheaper operation does not automatically mean broader access to the best tools. Access is becoming a product decision, not just a compute-cost question.

And that is where I think the real debate should be.

The question is not whether AI codegen should cost money. Of course it should.

The question is whether the best AI development tools will remain broadly reachable, including for smaller but important software projects, or whether they will gradually move behind layers of price, selection, visibility, and vendor preference.

I do not have a dramatic conclusion here. This is not the point where I declare doom, throw my keyboard out the window, and move into a cabin in the forest to write C code in peace. Tempting as that may occasionally be.

But I do think we are entering a new phase.

AI codegen is becoming real engineering infrastructure. That means better tools, more serious usage, more serious revenue expectations, and more segmentation. None of that is surprising. Some of it is healthy. Still, we should pay attention to who gets accelerated and who gets slowed down as this market matures.

That will matter far beyond individual subscriptions.