PhilippGille an hour ago

Is this not just about extra credits? So what's included in the subscription doesn't change - just extra credits are now token based instead of message based? (For Plus/Pro)

  • nba456_ 6 minutes ago

    God every single title I read about AI on this site ends up being a straight up lie.

  • raincole 20 minutes ago

    Yes.

    > This format replaces average per-message estimates with a direct mapping between token usage and credits.

    It's to replace the opaque, per-message calculation, not the subscription plan.

    • liuliu 10 minutes ago

      It does feel like it also impacts the usage meter for subscription plans?

      • raincole 7 minutes ago

        Usage meter has always been completely opaque anyway. They could (and probably did) shrink the limit whenever they like.

mrweasel 7 minutes ago

Why not just attach a real dollar amount, rather than using "credits"?

Well, I know why. I just wanted to be snarky. It's just that trying to hide the actual price is getting a bit old. Just tell me that generating this much code will cost me $10.

  • hmry 4 minutes ago

    Pay 100 Gold or 15 Gems to generate this feature

Skunkleton an hour ago

The title is misleading and not in the article. This change is for business/enterprise accounts. Also, these are still credit based. The change is that credits now operate on tokens like the API rather than on messages as they used to.

  • petcat an hour ago

    > Customers on existing Plus, Pro and Enterprise/Edu plans should continue to use the legacy rate card. We’ll migrate you to the new rates in the upcoming weeks.

  • ccmcarey 35 minutes ago

    Nope, they buried the lede a bit, but this is coming for _all_ users, even Pro/Plus subscription plans. So you get ChatGPT Pro/Plus benefits, and then effectively $20/$200 in credits for Codex.

__mharrison__ an hour ago

For the past month, I've been claiming that $20/mo codex is the best deal in AI.

Now I'm going to have to find the new best deal.

  • scosman 8 minutes ago

    Check out z.ai coder plan. The $27/mo plan is roughly the same usage as the 20x $200 Claude plan. I have both and Claude is a little better, but GLM 5.1 is much better value.

  • piyh 25 minutes ago

    Already paying for Google photo storage, AI pro for an extra $7 is a steal with anti-gravity.

    • matt_heimer 18 minutes ago

      That's only good for the web-based UI. If you want Gemini API access, which is what this article is about, then you must go the AI Studio route, where pricing is API-usage based. It does have a free usage tier, and new signups can get $300 in free credits for the paid tier, so I think it's still a good deal, just not as good as using the subscriptions would be.

    • purrcat259 17 minutes ago

      Good luck sticking within limits, I have been burning up my baseline limits insanely fast within a few prompts, a marked change from a few weeks ago.

      There's a few complaints online about the same happening to multiple users.

      Otherwise anti-gravity has been great.

  • verdverm 35 minutes ago

    We are exiting a hype cycle, well into the adoption curve. Subscriptions were never going to last.

    My next step is going to be evaluating open and local models to see if they are sufficiently close to par with frontier models.

    My hope is that the end of seat based pricing comes with this tech cycle. I was looking for a document signing provider that doesn't charge a monthly fee; I only need a few docs a year.

    • alifeinbinary 6 minutes ago

      I'm developing software in this area right now, so I try a lot of the new models. They're not even close for coding tasks. It basically comes down to 26b parameters vs 1T parameters, quantisation, and smaller context sizes; there's no comparison. However, for agentic work, tool calling, and text summarisation, local LLMs can be quite capable. Workloads that run as background tasks, where you're not concerned about TTFB, cold starts, tok/s, etc., are where local AI is useful.

      If you have an M processor then I would recommend that you ditch Ollama because it performs slowly. We get double or triple tok/s using omlx or vmlx, respectively, but vmlx doesn't have extensive support for some models like gpt-oss.

    • __mharrison__ 12 minutes ago

      I recently experimented creating a Python library from scratch with Codex. After I was done, I took the PRD and Task list that was generated and fed them to opencode with Qwen 3.5 running locally.

      Opencode was able to create the library as well. It just took about 2x longer.

      • selectodude 8 minutes ago

        Which version of Qwen 3.5 did you use?

        • verdverm 7 minutes ago

          which quant as well

fabian2k 23 minutes ago

Is this something that is likely to also change the way Github Copilot bills? Right now the billing is message-based, not token-based. And OpenAI and Microsoft are rather opaquely intertwined in the AI space.

  • phainopepla2 9 minutes ago

    Hard to say, but GitHub Copilot also allows access to Anthropic, Google and Grok models, so I don't know that a change from a single provider would necessarily change how they bill

Rastonbury an hour ago

So Anthropic bundled CC with Claude.ai because OAI bundled ChatGPT with Codex, and now OAI is unbundling; an IPO must be around the corner. The writing is also on the wall for CC usage-based subscriptions now that its main competitor effectively got rid of them. How are the Chinese models looking?

m-hodges an hour ago

The days of subsidized access are rapidly coming to an end.

  • _fizz_buzz_ 9 minutes ago

    Although I have to say I am sometimes surprised how much people burn through their usage. I was briefly on a Claude Max plan and then switched to a pro plan and still almost never hit my limit.

  • nojito an hour ago

    So many folks are just burning tokens just to burn them.

    The infrastructure build out just can't keep up with it.

  • LtWorf an hour ago

    Good!

    • thejazzman an hour ago

      It’s kind of a rug pull to effectively raise the price like 10x. I can’t afford to finish some of my projects with this change

      • SoftTalker 4 minutes ago

        Sounds like saying my plan to get rich buying up $10 bills for $1 hit kind of a rug pull in that people aren't selling them for that price anymore.

      • nearbuy 10 minutes ago

        If my math is right, assuming a mix of around 70% cached tokens, 20% input tokens, and 10% output tokens, it breaks even with the old pricing at around 130k tokens per message, or about 13k output tokens per message.

        With the hidden reasoning tokens and tool calls, I have no idea how many tokens I typically use per message. I would guess maybe a quarter of that, which would make the new pricing cheaper.
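
        The break-even arithmetic above can be sketched as a small calculation. All per-token rates below are hypothetical placeholders for illustration, not OpenAI's actual rate card; only the 70/20/10 mix comes from the estimate above.

```python
# Toy break-even sketch: at what per-message token count does token-based
# pricing cost the same as a flat per-message price?
# NOTE: the rates used here are made-up placeholders, not real prices.

def blended_rate(cached, inp, out, mix=(0.7, 0.2, 0.1)):
    """Credits per token, given per-token rates and a (cached, input, output) mix."""
    return mix[0] * cached + mix[1] * inp + mix[2] * out

def break_even_tokens(old_credits_per_message, cached, inp, out):
    """Tokens per message at which token pricing equals the old per-message price."""
    return old_credits_per_message / blended_rate(cached, inp, out)

# Hypothetical rates (credits per token): cached input, uncached input, output.
cached, inp, out = 0.1e-6, 1.0e-6, 4.0e-6
tokens = break_even_tokens(1.0, cached, inp, out)  # old price: 1 credit/message
print(round(tokens))
```

        Messages that average fewer tokens than the break-even point get cheaper under token pricing; longer ones get more expensive.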

      • bloppe 4 minutes ago

        I don't think you can call it a rug pull when everybody saw it coming from miles away

      • JesseTG 44 minutes ago

        Is writing it by hand the old-fashioned way not on the table?

        • dmd 24 minutes ago

          It's really not. As a one-person IT department, I'm now able to build things in hours or days that previously would have taken me weeks or even months (and thus didn't get done). Things people have wanted for years that I never had the time for, I can now say "yes" to.

          • bornfreddy 13 minutes ago

            Then I would say they judged the situation correctly when they decided to raise prices.

            That said: competition will soon kick in.

        • thejazzman 26 minutes ago

          Absolutely not. I took on some things that would normally take 5-10 people and many months.

          Some people turn out slop. I was really excited to try and make some impressive shit. My whole life has been dedicated to trying to embody what Apple preached in the early days.

          I knew this was coming, but I thought I had a little more time to try and get them over the finish line, ya know?

          Maintenance by hand might be achievable, but it’s extremely hard when you’ve built something really big.

          I’ve only got so much savings left to live on.

          I’m not saying anyone owes me anything, but we all need to pivot, and I’m a lot less sure my pivot is going to work out now.

          • SlinkyOnStairs 12 minutes ago

            > I took on some things that would normally take 5-10 people and many months.

            Based on what, exactly?

            It's very easy to claim some software would've taken you months to make, but this is ridiculous. Estimating project duration is well known to be impossible in this field. A few years ago you'd get laughed out of the room for making such predictions.

            > I’ve only got so much savings left to live on.

            Respectfully, what are you doing here?

            Yeah sure, the Apple dream. But supposing AI did in fact make you this legendary 100x developer, it would do the same for everyone else, including those with significantly more resources. You'd still be run out of the market by those with bigger budgets or more marketing, and end up penniless all the same.

            I would strongly recommend you not put all your proverbial eggs in this basket.

        • DecoySalamander 29 minutes ago

          Not really. In many scenarios that would mean spending 50x the time or hiring a team.

      • SecretDreams an hour ago

        That is okay.

        Ultimately, we need to know the true cost of this technology to evaluate how effectively or ineffectively it can displace the workforce that existed before it.

        • techgnosis 19 minutes ago

          Agreed, this has to happen and the sooner the better.

      • GaggiX an hour ago

        There are plenty of good models on Openrouter that are very cheap, maybe it's time to experiment with alternatives.

        • sfmike 38 minutes ago

          what are some of them?

          • GaggiX 21 minutes ago

            MiniMax M2.7, MiMo-V2-Pro, GLM-5, GLM5-turbo, Kimi K2.5, DeepSeek V3.2, Step 3.5 Flash (this last one is particularly cheap while still being powerful).

AstroBen 32 minutes ago

Things must be bad if they're doing this before their IPO

  • rvnx 4 minutes ago

    Billions of USD in debt, a business model bleeding cash with no profit in prospect, a high-competition environment, a sub-par product, free-to-use offline models taking off, potential regulatory issues, some investor commitments pulling out... tricky.

    But let's not cry for the founders, they managed to get away with tons of money. The problem is for the fools holding the bag.

supliminal 27 minutes ago

Any takes on how Codex compares to Claude? I mostly use it to run ahead, document, investigate and prep the actual implementation for Claude.

Gemini burned me too many times but maybe the situation has improved since.

anuramat 20 minutes ago

from what they wrote, they're just changing how they measure the usage; might even be a good thing if you manage your context right:

> This format replaces average per-message estimates for your plan with a direct mapping between token usage and credits. It is most useful when you want a clearer view of how input, cached input, and output affect credit consumption.
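
A rough sketch of why "managing your context right" matters under that mapping: cached input is typically billed far cheaper than uncached input, so a stable, reusable prompt prefix burns fewer credits than a rewritten one. The rates below are hypothetical placeholders, not OpenAI's actual rate card.

```python
# Compare credit cost of one turn when the conversation prefix hits the
# cache versus when it has been edited and must be re-sent uncached.
# NOTE: all rates are made-up placeholders for illustration.

RATES = {"cached_input": 0.1e-6, "input": 1.0e-6, "output": 4.0e-6}  # credits/token

def turn_cost(cached_in, uncached_in, out):
    """Credits consumed by one turn, broken down by token class."""
    return (cached_in * RATES["cached_input"]
            + uncached_in * RATES["input"]
            + out * RATES["output"])

# Same 50k-token history, same 1k-token reply:
stable_prefix = turn_cost(cached_in=50_000, uncached_in=500, out=1_000)  # cache hit
rewritten_ctx = turn_cost(cached_in=0, uncached_in=50_500, out=1_000)    # cache miss
print(f"{stable_prefix:.4f} vs {rewritten_ctx:.4f} credits")
```

Under these placeholder rates, the cache-miss turn costs several times more than the cache-hit turn, which is the sense in which per-token billing rewards careful context management.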

adamtaylor_13 an hour ago

Sounds like a death knell to me.

If I recall correctly, Ed Zitron noted in a recent article that one of the horsemen of his AI-pocalypse would be price hikes from providers.

  • supliminal 29 minutes ago

    Every time an Ed Zitron article is posted on HN, it is met with a torrent of vitriol and personal attacks. The articles are okay, if overly wordy, but I don’t see how the subject matter elicits that strong of a response.

    At any rate, this observation is not unique to Ed; lots of people have reached the same conclusion that the math doesn’t add up from a business profitability perspective.

  • cududa 29 minutes ago

    That guy has his own form of AI psychosis

  • hn_throwaway_99 33 minutes ago

    Literally every VC funded consumer product has switched from a "growth at all costs" phase to a "Now we hike prices, make money, and generally enshittify" phase, and tons of those companies are still around (e.g. Uber), so I'm not sure why anyone thinks it would be much different for AI.

    • cyanydeez 30 minutes ago

      yes, but how many succeed without any kind of moat or having destroyed the existing companies?

      I'm still running local LLMs and finding perfectly acceptable code gen.

convexly 31 minutes ago

This pricing only really makes sense if users can predict their usage; if not, people who use this heavily are just going to be hamstrung and start rationing their usage.

adi_kurian 33 minutes ago

Makes sense. Right now the subscriptions are like Uber as I remember it in NYC in 2014.

alkonaut 37 minutes ago

Not only do I not keep up with the tech itself, I don’t even keep up with how to pay for it.

kvanbeek 34 minutes ago

So migrate to gemini now?

  • matt_heimer 22 minutes ago

    Not if you are just looking to avoid API-based pricing. For development, I find that the Gemini IDE plugins with good free usage aren't great; the Gemini plug-in under IntelliJ is often broken, etc. The best experience is with other tools like Cline, where you have to use a developer account that is already API-usage based.

    But Gemini's API usage also has a free tier, and if that doesn't work for you (they train on your data), new signups get several hundred dollars in free credits that expire after 90 days. Three months of free access is a pretty good deal.

jamesu 28 minutes ago

The current pricing model (for Plus) feels deliberately confusing to me; I can never really tell if I'm nearing any kind of limit with my account, since nothing really seems to tell me.

gigatexal 7 minutes ago

good. just like the Claude model. getting the pricing to be in line with costs is the only way this remains sustainable.

SilverElfin an hour ago

Does this mean there’s no such thing as a “subscription” to ChatGPT for businesses? I thought they offered businesses a subscription with some amount of built in quota previously, including for the side products like codex and sora.

  • afrisch an hour ago

    There are still subscriptions that give access to both ChatGPT and Codex, but with a much smaller usage quota than before the change (which came at the same time as the end of the 2x promo). I couldn't find the equivalent in terms of credit for the usage included with these $20/25 seats...

rdli 22 minutes ago

[dead]