> I realized I looked at this more from the angle of a hobbyist paying for these coding tools, someone doing little side projects rather than someone in a production setting. I did this because I see a lot of people signing up for $100/mo or $200/mo coding subscriptions for personal projects when they likely don't need to.
Are people really doing that?
If that's you, know that you can get a LONG way on the $20/month plans from OpenAI and Anthropic. The OpenAI one in particular is a great deal, because Codex usage is charged against your quota at a much lower rate than Claude's.
The time to cough up $100 or $200/month is when you've exhausted your $20/month quota and you are frustrated at getting cut off. At that point you should be able to make a responsible decision by yourself.
I'm not cheap, just ahead of the curve. With the collapse in inference costs, everything will be this cheap eventually.
I'll basically do
$ man tool | <how do I do this with the tool>
or even
$ cat source | <find the flags and give me some documentation on how to use this>
Things I used to do intensively I now do lazily.
I've even made an IEITYuan/Yuan-embedding-2.0-en database of my manpages with Chroma, so I can just ask my local documentation how to do something conceptually: I get the relevant man pages back, inject them into a local Qwen context window using my mansnip LLM preprocessor, forward the prompt, and get real, usable results.
In practice it's this:
$ what-man "some obscure question about nfs"
...chug chug chug (about 5 seconds)...
<answer with citations back to the doc pages>
Essentially I'm not asking the models to think, just do NLP and process text. They can do that really reliably.
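For anyone who wants to roll their own version, the whole pipeline fits in a short Python script. This is only a sketch of the idea, not the exact setup above: it uses Chroma's default embedding function instead of Yuan-embedding-2.0-en, skips the mansnip preprocessing step, and assumes the chromadb and requests packages plus a local Ollama server with a Qwen model pulled (model name and paths are placeholders).

    # Sketch only: index man pages into Chroma, then answer questions from them.
    import glob, subprocess, requests, chromadb

    client = chromadb.PersistentClient(path="manpages.db")
    col = client.get_or_create_collection("manpages")

    # One-time indexing: render each man page to plain text and store it.
    for path in glob.glob("/usr/share/man/man1/*"):
        name = path.split("/")[-1].split(".")[0]
        text = subprocess.run(["man", name], capture_output=True, text=True).stdout
        if text:
            col.add(ids=[name], documents=[text])

    def what_man(question, n_results=3):
        # Retrieve the closest man pages and hand them to a local model.
        hits = col.query(query_texts=[question], n_results=n_results)
        context = "\n\n".join(hits["documents"][0])
        prompt = f"Answer using only these man pages:\n{context}\n\nQuestion: {question}"
        r = requests.post("http://localhost:11434/api/generate",
                          json={"model": "qwen2.5", "prompt": prompt, "stream": False})
        # Append the page names used, as a cheap stand-in for citations.
        return r.json()["response"] + "\n\nSources: " + ", ".join(hits["ids"][0])

    print(what_man("some obscure question about nfs"))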
It helps combat a frequent tendency for documentation authors to bury the most common and useful flags deep in the documentation and lead with those that were most challenging or interesting to program instead.
I understand the inclination; it's just not all that helpful for me.
> or even
> $ cat source | <find the flags and give me some documentation on how to use this>
Could you please elaborate on this? Do I get this right that you can set up your command line so that you can pipe something to a command that sends this something, together with a question, to an LLM? Or did you just mean that metaphorically? Sorry if this is a stupid question.
Is your RAG manpages thing on GitHub somewhere? I was thinking about doing something like that (it's high on my to-do list, but I haven't actually done anything with LLMs yet).
My tool can read stdin, send it to an LLM, and do a couple nice things with the reply. Not exactly RAG, but most man pages fit into the context window so it's okay.
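For the commenter above who asked how the piping works, the shape of such a tool is roughly this. It's a guess at the setup, not the actual script, and it assumes the requests package plus a local Ollama endpoint as the backend (swap in whatever API and model you actually use):

    # ask.py (hypothetical name): read stdin, append a question, send both to an LLM.
    # Usage:  man tool | python ask.py "how do I do this with the tool"
    import sys, requests

    question = " ".join(sys.argv[1:])
    piped = sys.stdin.read()  # e.g. the output of `man tool` or `cat source`

    r = requests.post("http://localhost:11434/api/generate",
                      json={"model": "qwen2.5",
                            "prompt": f"{piped}\n\n{question}",
                            "stream": False})
    print(r.json()["response"])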
This is the extent to which I use any LLM - they're really good at looking up just about anything in natural language, and most of the time even the first hit, without reprompting, is a pretty decent answer. I used to have to sort through things to get there, so there's definitely an upside to LLMs in this regard.
The limits for the $20/month plan can be reached in 10-20 minutes when having it do directed exploration of large codebases. It's also easy to blow right through the quota if you're not managing context well (waiting until it fills up and then auto-compacting, or even using /compact frequently instead of /clear, or the equivalent in other tools).
For most of my work I only need the LLM to perform a structured search of the codebase or to refactor something faster than I can type, so the $20/month plan is fine for me.
But for someone trying to get the LLM to write code for them, I could see the $20/month plans being exhausted very quickly. My experience with trying “vibecoding” style app development, even with highly detailed design documents and even providing test case expected output, has felt like lighting tokens on fire at a phenomenal rate. If I don’t interrupt every couple of commands and point out some mistake or wrong direction it can spin seemingly for hours trying to deal with one little problem after another. This is less obvious when doing something basic like a simple React app, but becomes extremely obvious once you deviate from material that’s represented a lot in training materials.
Not for Codex. Not even for Gemini/Antigravity! I am truly shocked by how much mileage I can get out of them. I recently bought the $200/mo OpenAI subscription but could barely use 10% of it. Now for over a month, I use codex for at least 2 hrs every day and have yet to reach the quota.
With Gemini/Antigravity, there’s the added benefit of switching to Claude Code Opus 4.5 once you hit your Gemini quota, and Google is waaaay more generous than Claude. I can use Opus alone for the entire coding session. It is bonkers.
So having subscribed to all three at their lowest subscriptions (for $60/mo) I get the best of each one and never run out of quota. I’ve also got a couple of open-source model subscriptions but I’ve barely had the chance to use them since Codex and Gemini got so good (and generous).
The fact that OpenAI is only spending 30% of their revenue on servers and inference despite being so generous is just mind boggling to me. I think the good times are likely going to last.
My advice: get the Gemini + Codex lowest-tier subscriptions. Add some credits to your Codex subscription in case you hit the quota and can't wait. You'll never spend over $100, even if you're building complex apps like I am.
> I recently bought the $200/mo OpenAI subscription but could barely use 10% of it
This entire comment is confusing. Why are you buying the $200/month plan if you’re only using 10% of it?
I rotate providers. My comment above applies to all of them. It really depends on the work you’re doing and the codebase. There are tasks where I can get decent results and barely make the usage bar move. There are other tasks where I’ve seen the usage bar jump over 20% for the session before I get any usable responses back. It really depends.
I got it to try Atlas, their agentic browser, before it was open to Plus users. I convinced myself that I could use the additional capacity to multi-task and push through hard core problems without worrying about quota limits.
For context, this was a few months ago when GPT 5 was new and I was used to constantly hitting o3 limits. It was an experiment to see if the higher plan could pay for itself. It most certainly can but I realized that I just don’t need it. My workflow has evolved into switching between different agents on the same project. So now I have much less of a need for any one.
To use up the Pro tier plan you must close the loop so to speak - so that Codex knows how to test the quality of its output and incrementally inch toward its goals. This can be harder or easier depending on your project.
You should also queue up many "continue ur work" type messages.
I’m actively doing that for a fun side project - systematically rewriting SQLite in Rust. The goal is to preserve 100% compatibility, quirks and all. First I got it to run the native test harness, and now it’s basically doing TDD by itself. Have to say, with regular check-ins, it works quite well.
Note: I’m using the $20 plan for this! With codex-5.2-medium most of the time (previously codex-5.1-max-medium). For my work projects, Gemini 3 and Antigravity Claude Opus 4.5 are doing the heavy lifting at the moment, which frees up codex :) I usually have it running constantly in a second tab.
The only way I can now justify Pro is if I am developing multiple parallel projects with codex alone. But that isn’t the case for me. I am happier having a mix of agents to work with.
That is a good use-case as well and would definitely require a codex Pro subscription.
I've been doing something like this with the basic Gemini subscription using Antigravity. I end up hitting the Gemini 3 Pro High quota many times but then I can still use Claude Opus 4.5 on it!
I like Pro also for better access to 5.2 Pro, which is indispensable for some problems and for producing specs/code samples. I use https://gitingest.com to package whole repos into its context.
Not the same poster, but apparently they tried the $200/mo subscription, but after seeing they don't need it, they "subscribed to all three at their lowest subscriptions (for $60/mo)" instead.
> I rotate providers. My comment above applies to all of them. It really depends on the work you’re doing and the codebase. There are tasks where I can get decent results and barely make the usage bar move. There are other tasks where I’ve seen the usage bar jump over 20% for the session before I get any usable responses back. It really depends.
Ah, I missed this part. Yes, this is basically what I would recommend today as well. Buy a couple of different frontier model provider basic subscriptions. See which works better on what problems. For me, I use them all. For someone else it might be codex alone. Ymmv but totally worth exploring!
My first try at LLM coding was with Claude: I got back confusing results for a hello-world++ type test, ran out of credits in a couple of hours, and asked for a refund the same day. I'm slowly teaching myself prompt engineering on qwen3-coder. It goes in circles much like Claude did, but at least it's doing that at the cost of electricity at the wall; I already had a GPU.
That has not been my experience with sonnet, and even so it is largely remedied by having better AI docs caching the results of that investigation for future use.
Yes, we are doing that. These tools help make my personal projects come to life, and the money is well worth it. I can hit Claude Code limits within an hour, and there's no way I'm giving OpenAI my money.
As a third option, I've found I can do a few hours a day on the $20/mo Google plan. I don't think Gemini is quite as good as Claude for my uses, but it's good enough and you get a lot of tokens for your $20. Make sure to enable the Gemini 3 preview in gemini-cli though (not enabled by default).
Huge caveat: For the $20/mo subscription Google hasn't made clear if they train on your data. Anthropic and OAI on the other hand either clearly state they don't train on paid usage or offer very straightforward opt-outs.
> What is the privacy policy for using Gemini Code Assist or Gemini CLI if I’ve subscribed to Google AI Pro or Ultra?
> To learn more about your privacy policy and terms of service governed by your subscription, visit Gemini Code Assist: Terms of Service and Privacy Policies.
The last page only links to generic Google policies. If they didn't train on it, they could've easily said so, which they've done in other cases - e.g. for Google AI Studio and the Gemini CLI they clearly say "If you use a billed API key we don't train, else we train". Yet for the Pro and Ultra subscriptions they don't say anything.
This also tracks with the fact that they enormously cripple the Gemini app if you turn off "apps activity" even for paying users.
If any Googlers read this, and you don't train on paying Pro/Ultra, you need to state this clearly somewhere as you've done with other products. Until then the assumption should be that you do train on it.
I have no idea at all whether the GCP "Service Specific Terms" [1] apply to Gemini CLI, but they do apply to Gemini used via GitHub Copilot [2] (the $10/mo plan is good value for money and definitely doesn't use your data for training), and they state:
Service Terms
17. Training Restriction. Google will not use Customer Data to train or fine-tune any AI/ML models without Customer's prior permission or instruction.
Thanks for those links. GitHub Copilot looks like a good deal at $10/mo for a range of models.
I originally thought they only supported the previous generation models i.e. Claude Opus 4.1 and Gemini 2.5 Pro based on the copy on their pricing page [1] but clicking through [2] shows that they support far more models.
Yes, it's a great deal especially because you get access to such a wide range of models, including some free ones, and they only rate limit for a couple minutes at a time, not 5 hours. And if you go over the monthly limit you can just buy more at $0.04 a request instead of needing to switch to a higher plan. The big downside is the 128k context windows.
Lately Copilot has been getting access to new frontier models the same day they release elsewhere. That wasn't the case a few months ago (e.g. GPT 5.1). But annoyingly you have to explicitly enable each new model.
Yeah, GitHub of course has proper enterprise agreements with all the model providers it offers, and they include a no-training clause. The $10/mo plan is probably the best value for money out there currently, along with Codex at $20/mo (if you can live with GPT's speed).
That's good to know, thanks. In my case nearly 100% of my code ends up public on GitHub, so I assume everyone's code models are training on it anyway. But would be worth considering if I had proprietary codebases.
My thoughts exactly. The $100 Claude subscription is the sweet spot for me. I signed up for the $20 at first and got irritated constantly hitting access limits. Then I bought the $200 subscription but never even hit 1/4 of my allocation. So the $100 would be perfect.
Me. Currently using Claude Max for personal coding projects. I've been on Claude's $20 plan and would run out of tokens. I don't want to give my money to OpenAI. So far these projects have not returned their value back to me, but I am viewing it as an investment in learning best practices with these coding tools.
Me too. I couldn't build an app that I hope to publish with the $20 plan. The cost will either be recouped once it's live, or it's truly sunk and I'll move on...
> If that's you, know that you can get a LONG way on the $20/month plans from OpenAI and Anthropic.
> The time to cough up $100 or $200/month is when you've exhausted your $20/month quota and you are frustrated at getting cut off. At that point you should be able to make a responsible decision by yourself.
These are the same people, by and large. What I have seen is users who purely vibe code everything, run into the limits of the $20/mo plans, and pay up for the more expensive ones. Essentially they're paying money instead of learning to code (and sometimes instead of saving time, since it's not always faster to vibe code than to do it yourself).
I've been a software developer for 25 years, have spent 30-ish years in the industry, and have been programming my whole life. I worked at Google for 10 of those years. I work in C++ and Rust. I know how to write code.
I don't pay $100 to "vibe code" and "learn to program" or "avoid learning to program."
I pay $100 so I can get my personal (open source) projects done faster and more completely without having to hire people with money I don't have.
Came here to write something similar (minus the part about working at Google, of course) and saw your comment already reflecting my views.
Yes, it's worth spending $200/month on Claude to bring my personal project ideas to life with better quality and finish.
Because we want to support open source? Even if you're an independence maximalist, you still pay other people in your life to do things for you at some point. If you've got the money and the desire but not the time, why does that not seem reasonable to you?
Frankly I almost consider it a duty to use these agents -- which have harvested en masse from open source software (including GPL!) without permission -- to produce open source / free software.
I'm talking about the general trend, not the exceptions. How much of the code do you manually write with the 100 dollar subscription? Vibe coding is a descriptive, not a prescriptive, label.
I review all of it, but hand write little of it. It's bizarre how I've ended up here, but yep.
That said, I wouldn't / don't trust it with something from scratch, I only trust it to do that because I built -- by hand -- a decent foundation for it to start from.
Sure, you're like me, you're not a vibe coder by the actual definition then. Still, the general trend I see is that a lot of actual vibe coders do try to get their product working, code quality be damned. Personally, same as you, I stopped vibe coding and actually started writing a lot of architecture and code myself first then allowing the LLM to fill in the features so to speak.
The issue is that your claim was that if you are using up tokens you are probably vibe coding.
But I've not found that to be true at all. My actually engineered processes, where I care the most, are where I push tokens the hardest, mostly because I'm using LLMs in many places in the SDLC.
When I'm vibing it's just a single agent sort of puttering along. It uses far fewer tokens.
> The issue is that your claim was that if you are using up tokens you are probably vibe coding.
I said "by and large" ie generally speaking. As I mentioned before, the exception does not invalidate the trend. I assume HN is more heavily weighted towards non-vibe-coders using up tokens like me and you but again, that's the exception to what I see online elsewhere.
Programming has always been about levels of abstraction, and the people who see LLM-generated code as “cheating” are the same people who argued that you can’t write good code with a compiler. Luddites, who will time-and-time again be proven wrong by the passage of time.
If this is the new way code is written then they are arguably learning how to code. Jury is still out though, but I think you are being a bit dismissive.
I wouldn't change definitions like that just because the technology changed, I'm talking about the ability to analyze control flow and logic, not necessarily put code on the screen. What I've seen from most vibe coders is that they don't fully understand what's going on. And I include myself, I tried it for a few months and the code was such garbage after a while that I scrapped it and redid it myself.
Absolutely not. They're not writing code or performing most of the work that programmers do, therefore they're not [working as] programmers. Their work ends up producing code, but they're not coders any more than my manager is.
A "vibecoder" is to a programmer what a script kiddie is to a hacker.
What I find perplexing is the very respectable people who pay for those subscriptions to produce clearly sub-par work that I'm sure they wouldn't have produced themselves.
And when pressed on “this doesn't make sense, are you sure this works?” they ask the model to answer, it gets it wrong, and they leave it at that.
Claude's $20 plan should be renamed to "trial". Try Opus and you will reach your limit in 10 minutes. With Sonnet, if you aren't clearing the context very often, you'll hit it within a few hours. I'm sympathetic to developers who are using this as their only AI subscription because while I was working on a challenging bug yesterday I reached the limit before it had even diagnosed the problem and had to switch to another coding agent to take over. I understand you can't expect much from a $20 subscription, but the next jump up costing $80 is demotivating.
Session limit that resets after 5 hours timed from the first message you sent. Most people I’ve seen report between 1 to 2 hours of dev time using Opus 4.5 on the Pro plan before hitting it unless you’re feeding in huge files and doing a bad job of managing your context.
Yeah it’s really not too bad but it does get frustrating when you hit the session limit in the middle of something. I also add $20 of extra usage so I can finish up the work in progress cleanly and have Opus create some notes so we can resume when the session renews. Gotta be careful with extra usage though because you can easily use it up if the context is getting full so it’s best to try to work in small independent chunks and clear the context after each. It’s more work but helps both with usage and Opus performs better when you aren’t pushing the context window to the max.
I half agree, but it should be called "Hobbyist", since that's what it's good for. 10 minutes is hyperbolic; I average 1h30m even when using plan mode first and front-loading the context with dev diaries, git history, milestone documents, and important excerpts from previous conversations. Something tells me your modules might be too big and need refactoring. That said, it's a pain having to wait hours between sessions and jump when the window opens to make sure I stay on schedule and can get three in a day, but that works OK for hobby projects since I can do other things in between. I would agree that if you're using it for work you absolutely need Max, so that should be what's called the Pro plan, but what can you do? They chose the names, so now we just need to add disclaimers.
I actually get more mileage out of Claude using a GitHub Copilot subscription. The regular Claude Pro will give me an hour, or up to 90 minutes max, before it reaches the cap. The GitHub version has a monthly limit for the Claude requests (100 "premium requests") which I find much easier to manage. I was about to switch to the Max plan, but this setup (both Claude Pro and GitHub Copilot, costing $30 a month together) was just enough for my needs. With a bonus that I can try some of the other model offerings as well.
In practice, how does switching between Claude and GitHub Copilot work?
1. Do you start off using the Claude Code CLI, then when you hit limits, you switch to the GitHub Copilot CLI to finish whatever it is you are working on?
2. Or, you spend most of your time inside VSCode so the model switching happens inside an IDE?
3. Or, you are more of a strict browser-only user, like antirez :)?
I always start in the Claude CLI. Once I hit the token limit, I can do two things: either use Copilot Claude to finish the job, or pick up something completely different, and let the other task wait until the token limit resets. Most importantly, I'm never blocked waiting for the cap.
Good to hear that’s working. When I was using copilot before Opus 4.5 came out I found it didn’t perform as well as Claude Code but maybe it works better now with 4.5 and the latest improvements to VSCode. I’ll have to try it again.
The only thing that matters is whether or not you are getting your money's worth. If Claude is worth $100 or $200 per month to you, it's an easy decision to pay; otherwise stick with $20 or nothing.
Short answer is yes. Not only is it more token-friendly and potentially lower latency, it also prevents weird context issues like forgetting Rules, compacting your conversation and missing relevant details, etc.
To me, it doesn't matter how cheap OpenAI Codex is, because that tool just burns up tokens trying to switch to the wrong version of Node using nvm on my machine. It spirals in a loop and never makes progress for me, no matter how explicitly or verbosely I prompt.
On the other hand, Claude has been nothing but productive for me.
I’m also confused why you don’t assume people have the intelligence to only upgrade when needed. Isn’t that what we’re all doing? Why would you assume people would immediately sign up for the most expensive plan that they don’t need? I already assumed everyone starts on the lowest plan and quickly runs into session limits and then upgrades.
Also coaching people on which paid plan to sign up for kinda has nothing to do with running a local model, which is what this article is about
I spent about 45 mins trying to get both Claude and ChatGPT to help get Codex running on my machine (WSL2) and on a Linux NUC, they couldn't help me get it working so I gave up and went back to Claude.
Because somewhere inside its little non-deterministic brain, the phrase "switch to node version xxx" was the most probable response to the previous context.
From my personal experience it's around 50:50 between Claude and Codex. Some people strongly prefer one over the other. I couldn't figure out yet why.
I just can't accept how slow codex is, and that you can't really use it interactively because of that. I prefer to just watch Claude code work and stop it once I don't like the direction it's taking.
From my point of view, you're either choosing between instruction following or more creative solutions.
Codex models tend to be extremely good at following instructions, to the point that they won't do any additional work unless you ask for it. GPT-5.1 and GPT-5.2, on the other hand, are a little more creative.
Models from Anthropic, by contrast, are a lot more loosey-goosey with instructions, and you need to keep an eye on them much more often.
I'm using models from both providers interchangeably all the time, depending on the task at hand. No real preference as to whether one is better than the other; they're just specialized for different things.
Bit the bullet this week and paid for a month of Claude and a month of ChatGPT Plus. Claude seems to have much lower token limits, both aggregate and rate-limited, and GPT-5.2 isn't a bad model at all. $20 for Claude is not enough even for a hobby project (after one day!); OpenAI looks like it might be.
I feel like a lot of the criticism the GPT-5.x models receive only applies to specific use cases. I prefer these models over Anthropic's because they are less creative and less likely to take liberties when interpreting my prompts.
Sonnet 4.5 is great for vibe coding. You can give it a relatively vague prompt and it will take the initiative to interpret it in a reasonable way. This is good for non-programmers who just want to give the model a vague idea and end up with a working, sensible product.
But I usually do not want that. I do not want the model to take liberties and be creative; I want the model to do precisely what I tell it and nothing more. In my experience, the GPT-5.x models are a better fit for that way of working.
When you look at how capable Claude is vs. the salary of even a fresh graduate, combined with how expensive your own time is, even the maximum plan is a super good deal.
And as a hobbyist the time to sign up for the $20/month plan is after you've spent $20 on tokens at least a couple times.
YMMV based on the kinds of side projects you do, but it's definitely been cheaper for me in the long run to pay by token, and the flexibility it offers is great.
I haven't tried agentic coding, as I haven't set it up in a container yet and I'm not going to YOLO my system (doing stuff via chat and a utility to copy and paste directories and files got me pretty far over the last year and a half).
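(In case it's useful to anyone, that kind of copy-and-paste utility only needs a few lines. This is a rough sketch with made-up filtering rules, not the actual tool; pipe the output into pbcopy, xclip, or whatever clipboard command you have and paste it into the chat.)

    # dump_dir.py (sketch): print every file under a directory with a header,
    # so the whole thing can be pasted into a chat window in one go.
    # Usage:  python dump_dir.py src/ | pbcopy
    import sys, pathlib

    root = pathlib.Path(sys.argv[1] if len(sys.argv) > 1 else ".")
    for path in sorted(root.rglob("*")):
        if path.is_file() and ".git" not in path.parts:
            print(f"===== {path} =====")
            try:
                print(path.read_text())
            except (UnicodeDecodeError, OSError):
                print("[skipped: not readable as text]")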
It helps that Codex is so much slower than Anthropic's models: a 4.5-hour Codex session might as well be a 2-hour Claude Code one. I use both extensively, FWIW.
It really depends. When building a lot of new features, it happens quite fast. With some attention to context length I was often able to go for over an hour on the $20 Claude plan.
If you're doing mostly smaller changes, you can go all day with the $20 Claude plan without hitting the limits, especially if you need to thoroughly review the AI's changes for correctness instead of relying on automated tests.
I find that I use it on isolated changes where Claude doesn't really need to access a ton of files to figure out what to do, and I can easily use it without hitting limits. The only time I hit the 4-5 hour limit is when I'm going nuts on a prototype idea and vibe coding absolutely everything, and usually when I hit the limit I'm pretty mentally spent anyway, so I use it as a sign to go do something else. I suppose everyone has different styles and different codebases, but for me it's easy enough to stay under the limit that it's hard to justify $100 or $200 a month.
Codex $20 is a good deal, but they have nothing in between $20 and $200.
The $20 Anthropic plan is only enough to whet my appetite; I can't finish anything.
I pay for $100 Anthropic plan, and keep a $20 Codex plan in my back pocket for getting it to do additional review and analysis overtop of what Opus cooks up.
And I have a few small $ of misc credits in DeepSeek and Kimi K2 AI services mainly to try them out, and for tasks that aren't as complicated, and for writing my own agent tools.
Indeed, I would consider switching to Codex completely if a) they had a $50 or $100 membership and b) they really worked on improving the CLI tool a lot more. It's about 4-6 months behind Claude Code.
> The time to cough up $100 or $200/month is when you've exhausted your $20/month quota and you are frustrated at getting cut off. At that point you should be able to make a responsible decision by yourself.
leo dicaprio snapping gif
These kinds of articles should focus on the use case, because mileage may vary depending on the maturity of the idea, testing, and a host of other factors.
If the app, service, or whatever is unproven, that's a sunk cost on a MacBook vs. 4 weeks to validate an idea, which is a pretty long time.
I've been using VS Code Copilot Pro for a few months and never really had any issue; once you hit the limit for one model, you generally still have a bunch more models to choose from. Unless I were vibe coding massive amounts of code without looking at or testing it, it's hard to imagine running out of all the available Pro models.
Oh wow, you're absolutely correct. In my head I recall this being different. I think I've confused myself with either the time I was trialling Antigravity, or the system they had earlier this year where you would get notifications that you'd used up a given model, at least for a limited time. I feel like the latter was a thing, but you've now made me question my memory, so I wouldn't swear by it.
Time is my limiting factor, especially on personal projects. To me, this makes any multiplying effect valuable.
When I consider it against my other hobbies, $100 is pretty reasonable for a month of supply. That being said, I wouldn’t do it every month. Just the months I need it.
This, provided you don't mind hopping around a lot: five $20/month accounts will typically get you way more tokens. Also, good free models show up on OpenRouter from time to time.
If you're a hobbyist doing a side project, I'd start with Google and use Antigravity, then only move to OpenAI when the project gets too complex for Gemini to handle.