The Subsidy Era Is Over: A Reality Check on AI-Powered Dev Tool Pricing

A reality check on AI-powered dev tool pricing, shrinking subsidies, workload-based costs, and predictable pricing for code review tooling.

Most of us have been using AI tools of one kind or another: Claude, GitHub Copilot, Google Antigravity, Gemini, or Cursor. These tools launched with very generous pricing, so it felt like you could get a lot done for very little; many people got real work done on the free tier alone. But that seems to be changing. In this article, I will walk through a few of these tightening limits.

The Subsidy Era Is Ending

This discussion was prompted by a post by Kamil Krauspe, a VP of Engineering and Managing Director at a reputed AI company. His conclusion, backed by some interesting evidence, is that while on the surface nothing appears to have changed, many of these providers have quietly reduced the amount of AI usage included per plan. In essence, the subsidy is ending.

He points to evidence across providers:

  • Claude has reduced subsidies.
  • Copilot has reduced subsidies.
  • OpenAI has reduced subsidies for Codex.
  • Google has tightened limits on Antigravity.

There is also a nuance with Claude. Recently there was news that Claude had increased, or even doubled, its limits. What actually happened is that the five-hour limit went up while the weekly and monthly limits stayed the same. So the overall volume has not increased; only the short-window limit has.

One reason for this, according to the argument, is the extreme-user profile. In a typical SaaS product, someone who pays $10 might, in the worst case, consume $100 worth of value, a 10x multiple. With agents, that multiple can reportedly reach 100,000x. Cost also has multiple layers:

  • Model cost
  • Tool builder cost
  • User behavior cost

All in all, because the subsidies are ending, it is important to make cost-conscious decisions. The core point is that all the major providers are tightening their plans, step by step.

Think in Workloads, Not Models

The practical conclusion is that companies and individuals have to think in terms of workloads, not models. Earlier, the mental model was simple: there is a task, you complete it, and you pay a fixed amount. Now pricing is becoming far more granular, often down to the token level.

That means the workload matters far more. For example, you might be doing:

  • Autocomplete
  • A single question or query
  • Interactive file editing
  • An autonomous agent run
  • Multi-agent orchestration

The token usage profiles and costs differ enormously across these workloads: a single task might use fewer than 1,000 tokens and cost under $0.01, or it might burn through hundreds of millions of tokens and cost $500 or more.
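
To make that spread concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption (the per-token price, the tokens-per-task figures), not a measured provider price; the point is only the orders-of-magnitude gap between workloads.

```python
# Back-of-the-envelope cost comparison across workload types.
# All figures below are illustrative assumptions, not real provider prices.

PRICE_PER_MILLION_TOKENS = 5.00  # hypothetical blended $/1M tokens

workloads = {
    "autocomplete": 500,                      # tokens per task (assumed)
    "single question": 5_000,
    "interactive file editing": 100_000,
    "autonomous agent run": 20_000_000,
    "multi-agent orchestration": 100_000_000,
}

for name, tokens in workloads.items():
    cost = tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS
    print(f"{name:28s} ~{tokens:>12,} tokens -> ${cost:,.4f}")
```

The exact figures do not matter; what matters is that the same per-token price yields task costs spanning five orders of magnitude, which is why the workload, not the model, dominates the bill.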

This is a massive difference. For example, a Claude subscriber may pay $200 a month while heavy use consumes up to $4,400 of compute, with the average perhaps around $1,000. So who is paying for that 5x average, or 22x worst-case, gap? Right now, companies are absorbing it, but how long can they keep doing that?

Pricing Models in AI Code Review

In my own area, AI code review, where I am building a product, what I am seeing is that many tools still follow the older, seat-based pricing model.

For example:

  • CodeRabbit charges $30 per user per month. For a startup with 10 engineers, that is $300 per month, or $3,600 a year, on the basic plan.
  • The premium plan may be double that: $60 per user, roughly $7,200 a year for the same team.
  • CodeAnt and Greptile charge along similar seat-based lines.
  • Usage-based pricing, as with Claude's API, is unbounded: the bill could be any number, and you have little control over the final monthly amount.

With pure usage pricing, a single review might cost $15 or $25. At 10 reviews a month, that is already $150 to $250 for one user. So what we see in the market is a mix of unbounded usage pricing and seat-based pricing, as the sketch below illustrates.
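
To see how differently the two models behave for the same team, here is a small sketch. The review volume and per-review cost are assumptions drawn from the ranges above, not quotes from any vendor.

```python
# Seat-based vs pure usage-based monthly cost for a 10-engineer team.
# Numbers are the illustrative figures from the text, not vendor quotes.

engineers = 10
seat_price = 30          # $/user/month (seat-based plan)
cost_per_review = 20     # assumed midpoint of the $15-$25 range
reviews_per_engineer = 10

seat_total = engineers * seat_price
usage_total = engineers * reviews_per_engineer * cost_per_review

print(f"Seat-based:  ${seat_total}/month (fixed, even if seats go unused)")
print(f"Usage-based: ${usage_total}/month (and unbounded if volume grows)")
```

Seat-based pricing gives you a fixed bill but charges for idle seats; usage-based pricing charges only for work done but puts no ceiling on the bill.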

The Approach Behind git-lrc and LiveReview

We have built git-lrc and LiveReview, and we have taken a different approach: usage-based pricing merged with capped, slab-based pricing. The reason is simple: we want usage anchoring and slab-level pricing at the same time, so you get predictability in the end.

What does that mean?

  • We charge by how many lines of code you scan.
  • We do not care how many engineers you have.
  • We do not charge per token.

So if you are a startup with 10 engineers, you can start at just $32 per month, as long as you scan no more than 100,000 lines. The slabs are straightforward (a small pricing sketch follows the list):

  • $32 for 100,000 LOC
  • $64 for 200,000 LOC
  • $128 for 400,000 LOC
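
As an illustration of how slab selection works, here is a minimal sketch: you pay for the smallest slab that covers the lines you scanned that month. This is not the actual billing code, and the behavior beyond the listed slabs is just an assumed continuation of the doubling pattern.

```python
# Illustrative slab-based pricing: pay for the smallest slab covering usage.
# Slabs mirror the published tiers; this is a sketch, not real billing code.

SLABS = [          # (monthly LOC limit, price in $)
    (100_000, 32),
    (200_000, 64),
    (400_000, 128),
]

def monthly_price(loc_scanned: int) -> int:
    """Return the price of the smallest slab that covers the scanned LOC."""
    for limit, price in SLABS:
        if loc_scanned <= limit:
            return price
    # Beyond the listed slabs: assume the doubling pattern continues.
    limit, price = SLABS[-1]
    while loc_scanned > limit:
        limit, price = limit * 2, price * 2
    return price

print(monthly_price(80_000))    # -> 32
print(monthly_price(250_000))   # -> 128
print(monthly_price(900_000))   # -> 512 (assumed extrapolation)
```

The result is a step function of usage: within a slab the bill is flat and predictable, yet it still tracks how much you actually scan.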

This is what I mean by slab-based pricing. We are not pricing arbitrarily; we are trying to charge in a fair, transparent, and predictable way. In my opinion, none of the other solutions on the market, whether Claude, CodeRabbit, or Greptile, offers this kind of sensible pricing.

I think this is a major strength of our model: predictability plus usage anchoring. People do not want the unbounded charges of pure token-based pricing, and they do not want seat-based pricing where a team of 10 engineers pays for 10 seats even though several of them barely use the tool. That is paying for unused capacity, and it is not a good idea.

So this is the pricing model we have arrived at: slab-based pricing tied to usage. It gives you predictability, simplicity, and value for money; your spend maps to actual usage, with nothing wasted. I hope this subsidy discussion gives you a useful reality check, and I hope you check out the tool we have built.

A Quick Introduction to git-lrc

I want to take two minutes of your time to introduce git-lrc. git-lrc is a free micro AI code review tool that runs on git commit as you develop software with agents.

As you know, AI can write a lot of code, but your team still owns the outcome. With AI, code generation is not the problem; consequence management is. Who has to answer for outages, security incidents, broken promises, and customer complaints? It is still the engineer, the engineering team, and engineering management. You may delegate execution to AI a little bit, but you cannot delegate responsibility.

git-lrc provides micro AI code reviews on git commit. This improves production stability, security, and performance while reducing bugs, latency, and cost, and it does all this while demanding very little engineering effort.

A Quick Demo

Here is a quick demo of git-lrc:

  • You work in your Git repository as usual.
  • You run git status and see there are pending changes.
  • You trigger a review with git commit.
  • The review updates appear in real time.
  • You quickly get a summary of what the change is about.
  • The summary is presented in a clean web UI viewer.
  • Issues are categorized by severity, such as warning and critical.
  • You can also see cost bugs, security bugs, and other issue types.

You can go to GitHub and read the entire source at github.com/HexmosTech/git-lrc.

For teams, the pricing is extremely affordable. You can get started at $32 per month and scan 100,000 lines of code, with no headcount-based pricing and no per-engineer charges. The pricing is extremely simple.

That $32 per month covers any number of users and unlimited team members, and it includes all the features: reviews, PR threads, GitHub, GitLab, and Bitbucket integration, AI credits, micro reviews, the VS Code extension, and more. Everything is included.

So for $32, this is one of the best value-for-money options on the market for code review, and you can move up a tier whenever you need to. Go to github.com/HexmosTech/git-lrc and see it for yourself.