Hacker News: codeclimber's comments

Creator of the website here. It’s a practical snapshot of what seemed to work with Claude Code this summer, arranged by stage and linked back to the originals. Most of it should translate across agentic tools; a handful are Claude Code–specific. Corrections and additions appreciated.


My datapoint (Claude Code): on the paid Pro tier I’d hit the cap after ~1–2 hours of real coding. I upgraded to Max and haven’t hit limits since, even with multiple terminals running in parallel (lots of refactoring/tests).


Community-maintained comparison of AI coding tools with actual free access to frontier models (GPT-5, Claude Sonnet 4, Gemini 2.5 Pro, Qwen3-Coder-480B). Only includes models scoring >60% on SWE-bench Verified.

Turns out Qwen Code gives 2,000 requests/day to Qwen3-Coder-480B while Rovo Dev CLI gives 5M tokens/day of Claude Sonnet 4 during beta, among others.

The catch: every tool counts usage differently (requests vs. tokens vs. credits), and the number of requests needed to solve a given coding problem varies significantly from tool to tool, which makes direct comparison hard.
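One way to compare is to normalize each quota into rough coding hours. A minimal sketch, where the per-hour consumption rates (`REQ_PER_HOUR`, `TOK_PER_HOUR`) are my own assumptions, not vendor figures:

```python
# Assumed consumption rates for one hour of agentic coding (guesses, not
# vendor figures -- adjust from your own usage logs).
REQ_PER_HOUR = 10        # agent requests per coding hour
TOK_PER_HOUR = 500_000   # tokens per coding hour

def hours_per_day(quota: float, unit: str) -> float:
    """Convert a daily quota into estimated coding hours per day."""
    if unit == "requests":
        return quota / REQ_PER_HOUR
    if unit == "tokens":
        return quota / TOK_PER_HOUR
    raise ValueError(f"unknown unit: {unit}")

# Figures quoted above:
print(hours_per_day(2_000, "requests"))    # Qwen Code: 2,000 req/day -> 200.0
print(hours_per_day(5_000_000, "tokens"))  # Rovo Dev CLI: 5M tok/day -> 10.0
```

The wildly different results for the two tools show exactly why the comparison is hard: the answer is dominated by the assumed rates, which vary by workflow.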

Quick survey: How many actual coding hours do you get from your coding tool? (Free or paid, rough numbers fine)


I get about an hour per day with Gemini 2.5 Pro for free in Gemini CLI, then it downgrades to the Flash model.


Nice write-up, especially the point about mixing different models for different stages of coding. I’ve been tracking which IDE/CLI tools give free or semi-free access to pro-grade LLMs (e.g., GPT-5, Claude Code, Gemini 2.5 Pro) and how generous their quotas are. Ended up putting them side by side so it’s easier to compare hours, limits, and gotchas: https://github.com/inmve/free-ai-coding


So far I have seen GPT-5 models only in Cursor (free for paying users). Any others?


Could you share the hardware specs you use to run Devstral?


Gemini CLI (free tier) [a] - less than 1 h/day
Claude Code (Pro) [a] - 2 h/day
Cursor (Pro) [b] - 2 weeks/month


Trying to track which agentic tools (both IDEs and CLIs) give free access to pro-grade LLMs (Claude Sonnet/Opus 4, Gemini 2.5 Pro, Grok 4, OpenAI o3, etc). There may be gaps or mistakes; feedback welcome.


I haven’t personally run into the fallback to Flash (probably because I mostly use Claude Code now). Curious if others using Gemini CLI are seeing the same: does the switch to Flash happen after less than an hour of coding with Gemini 2.5 Pro? Going to dig a bit and see if this is a common pattern.


Yes, it does switch in less than an hour.


Thanks, added Warp to the list with the following data:

Up to 150 AI requests/month → about 15 coding hours/month (assuming 10 req/hour) with frontier models. The Free plan explicitly supports frontier models like Claude Sonnet 4, OpenAI o3, and Gemini 2.5 Pro.

That puts Warp at #2 on the list, right below Gemini CLI.


I believe Warp can count a single agent call as more than one request. Worth testing how many requests a typical call actually consumes in practice.

