Hacker News | goosejuice's comments

What you're saying is conceptually true for subscription services in general, but that's not why they are making this change. There's a 5 hour limit and a weekly limit. Those are hard token limits. Everyone on a plan pays for the maximum set of tokens in that plan. The limits manage capacity. The solution to that isn't a change of ToS; it's adjusting the limits.

In other words this is about Anthropic subsidizing their own tools to keep people on their platform. OpenClaw is just a good cover story for that. You can maximize plans just as easily w/ /loop. I do it all the time on max 20x. The agent consuming those tokens is irrelevant.

For what it's worth I don't use OpenClaw and don't intend to, but I do use claude -p all the time.


You aren't paying to be using that limit all of the time.

You are paying to be using that limit some of the time. There are 5 hour windows when you are sleeping and can't use it. There are weekend limits.

Theoretically you can max out every 5 hour window, but they lose money on that.

It's structured so users can have bursts of unlimited usage while averaging ~15% of the theoretical max cap, and serving that level of usage still costs less than what the subscription brings in for that user.

An OpenClaw user can use 6, 7, 8 times what a human subscriber is using.


I've met people that fill a box of sushi to take home at the end of their “all you can eat” session because “they paid for it”. Shrug.

Yes and the staff will tell them to stop that or charge them extra for it.

Yes and they will hide their sushi-grabbing because somewhere deep inside they know it's not part of the deal, while at the same time still strongly feeling that they have indeed paid for it.

Ah, to be human!


I'd argue they hide their takeaway because of what the GP comment said — not because of anything innate, but because a staff member will not let them.

I grew up in an Asian household of six. We definitely took food home at AYCE places. My parents definitely knew it wasn't OK, but they felt like they were gaming the system (like a dubious life hack of sorts) and saving money, so they were actually quite proud of it, bragging to friends how much they were able to get.

To be human indeed!


I assume it's not unusual for thieves to brag about their scores.

Ah to be inhuman!

The cooperative and competitive sides of our soul fighting it out in a single situation

Really

At least on a personal Max account, I can't max every window. There is also a weekly limit. If I max every window, I run out of tokens halfway through the week.

> Theoretically you can max out every 5 hour window, but they lose money on that.

No, there is a weekly limit as well. Maxing out a single 5h window uses ~10% of the weekly limit
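Taking that ~10% figure at face value (it's a commenter's estimate, not an official number), the arithmetic lines up with running dry midweek:

```python
# Rough arithmetic on the limits described in this thread (the ~10% figure
# is the commenter's estimate, not an official Anthropic number).
HOURS_PER_WEEK = 7 * 24                  # 168
WINDOW_HOURS = 5
windows_per_week = HOURS_PER_WEEK / WINDOW_HOURS    # ~33.6 rolling windows

share_per_maxed_window = 0.10            # one maxed window ~= 10% of the week
windows_until_empty = 1 / share_per_maxed_window    # 10 windows
hours_until_empty = windows_until_empty * WINDOW_HOURS

print(round(windows_per_week, 1))   # 33.6
print(hours_until_empty)            # 50.0 hours of back-to-back maxing
```

Ten maxed windows is about 50 active hours, so with sleep and gaps in between you'd be out of tokens a few days into the week — far short of the ~33 windows the week theoretically contains.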


I fill my week limit in a few days :(

Do you process XLS or similar data? I've seen that using any files other than code files eats through the limit much faster.

> You are paying to be using that limit some of the time. There are 5 hour windows when you are sleeping and can't use it. There are weekend limits.

They could easily structure their limits to enforce that kind of pattern fairly on both human and automated users. They could e.g. force a cooldown period between your daily activity bursts, by decreeing that continued heavy use on a 24h basis would count exponentially more towards your limit. That would be transparent and force the claws to lighten their load below that of a typical human user. We're talking about a company that's worth hundreds of billions of dollars and targeting highly sophisticated enterprise users, not consumers; it's just not credible that they'd be technically unable to set that up.
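The scheme described above could look something like this sketch, where tokens consumed while your trailing-24h usage is already above a "typical human" baseline count exponentially more against the weekly budget. Every name and constant here is invented for illustration; this is not anything Anthropic has described:

```python
# Sketch of an exponential-cooldown quota: sustained round-the-clock use is
# charged growth**(excess over a human baseline), so always-on agents burn
# their weekly budget far faster than bursty humans. All constants invented.
import time
from collections import deque

class WeightedQuota:
    def __init__(self, weekly_budget, baseline_24h, growth=2.0):
        self.weekly_budget = weekly_budget  # weighted-token budget per week
        self.baseline_24h = baseline_24h    # "typical human" tokens per 24h
        self.growth = growth                # steepness of the sustained-use penalty
        self.events = deque()               # (timestamp, raw_tokens)
        self.weighted_spent = 0.0

    def _trailing_24h(self, now):
        # Drop events older than 24 hours, then sum what's left.
        while self.events and now - self.events[0][0] > 24 * 3600:
            self.events.popleft()
        return sum(tokens for _, tokens in self.events)

    def charge(self, tokens, now=None):
        now = time.time() if now is None else now
        load = self._trailing_24h(now) / self.baseline_24h
        # At or below the baseline, tokens cost face value.
        multiplier = self.growth ** max(0.0, load - 1.0)
        self.events.append((now, tokens))
        self.weighted_spent += tokens * multiplier
        return self.weighted_spent <= self.weekly_budget
```

A bursty human whose trailing-24h load resets overnight pays roughly face value; an always-on agent running at 3x the baseline pays 4x per token under these constants, so it throttles itself long before the human does — transparently, with no ToS clause needed.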


Or alternatively just pay based on what you use? I.e. $/tokens.

They offer that, API pricing. Or "extra usage" which they're saying you can still use for OpenClaw. It's really expensive!

Train a generation to min/max stats, then put them in a time-boxed limit and explain to them why “this is normal”.

The issue is, and always will be, competing views on what these services are for. Most see them as augments of their normal everyday workflow. Others see them as the tool that allows their creativity to flow as fast as their thoughts do. The problem is that the service is more than capable of catering to both, but the creative vibe commander will hit those limits far faster. Simply telling them to “take a break” is akin to those video game screen nags that developers were forced to put into games to remind people to pee.


I think maybe you are not familiar with what /loop and the Claude cron tools do.

https://code.claude.com/docs/en/scheduled-tasks


I think maybe you still don’t understand that not everyone will max out their usage, regardless of the methods available.

I need a hypothetical use case for things like this; I don't get how so many people have so much desire for features like this.

https://martinfowler.com/articles/harness-engineering.html it's being talked about everywhere.

If you manage developers or product folk, do you allow them to work when you're not looking over their shoulder? All developers can be managers/team leads now. You plan, you delegate, you review.

You're welcome to not do this, and surely that's appropriate in quite a few areas of work, but many of us are doing it because we can get more work done than if we were micromanaging every line of code change. For startups, where a bit of quality can suffer in favor of finding market fit, this is huge.


Deduplicating/validating/processing incoming bug reports?

Every morning it summarizes a bunch of stuff for me, suggests PRs to review and emails to reply to, freshly clones any new repos, pulls all the others, presents me with suggested approaches to my PRs of the day, and gives me a list of the Slack mentions that look most urgent.

This is just the morning ones, and saves shitloads of time of clicking around from tool to tool, freeing up time for the thinking and deciding.


Wow, you should probably ask it to write a script for 90+% of that instead. Sounds like a huge waste of electricity.

I don't believe that Anthropic loses money when heavy users consume the max amount of tokens.

Do you have any proof of that statement?


If we do the same work via the subscription and via the API, the subscription is way cheaper. So by that comparison, yes, they lose money.

None of them are making any money yet. They all lose.

> Theoretically you can max out every 5 hour window, but they lose money on that.

This typically results in a ban for TOS violations after a few windows in a row on a claude subscription


I have maxed out my 5 hour limits and my weekly limits fairly regularly, when I did a bunch of editing work on long-form writing next to having CC run a few coding tasks over the xmas holidays - I only slept a few hours at night and timed those roughly (by chance) with my limits.

I got neither a warning nor a ban nor anything - and that was with the double token amount during those days.

So I don't see human usage being something they ban for TOS violation, like you describe. But as always YMMV.


Was this on a new (few months) or significantly older account?

I feel like Anthropic is going down a bad path here with billing things this way. Especially as local LLMs continue to develop so fast.

I downgraded from my $200 a month plan to my $20 plan and hit limits constantly. I try to use the API access I purchased separately, and it doesn't work with Claude Code (something about the 1 million context requiring extra usage), so I have to use Continue instead. Then I get instantly rate limited when it's trying to read 1-2 files.

It just sucks. This whole landscape is still emerging, but if this is what it's like now, pre enshittification, when these companies have shitloads of money - it's going to be so much worse when they start to tighten the screws.

Right now my own incentive is to stop being dependent on Claude for as much as I can as quickly as I can.


This is how free drink refills, airplane tickets, Internet service, unlimited data plans, insurance, flat rate shipping, monthly transit passes, Netflix, Apple Music, gym memberships, museum memberships, car wash plans, amusement park passes, all you can eat buffets, news subscriptions, and many more work.

Either you get a flat rate fee based on certain allowed usage patterns or everyone has to be billed à la carte.


This is a different case - those all have limitations based on human behavior (it's not necessary or possible to constantly be washing your car the entire month when you pay for unlimited washes) - that doesn't exist here. The types of plans available should reflect that reality. If gyms faced a situation where people would go and spend 18 hours working out every day for a month, they would probably change how they billed things.

Your comparisons are all also "unlimited" situations to Claude's very much limited situation. You can't buy a plan for Claude that is marketed as being unlimited. They're already selling people metered usage. They're just also adding restrictions on top of that.


They sell metered usage with the implied expectation that most won't use it fully. Power users and users of stuff like OpenClaw don't match that idea.

So they further restricted the metered caps, which were only ever offered on the assumption that few would reach them.

Simple as that.


>Power users and users of stuff like OpenClaw don't match that idea.

Then they should figure out how to structure an offering that accommodates this type of usage, not just blanket ban it


> Then they should figure out how to structure an offering that accommodates this type of usage

They did, didn't they? You can pay the non-plan rate.

> not just blanket ban it

They didn't do that. The email specifically tells you how to use Openclaw with Anthropic. There is no "blanket ban".


Why "should" they? There's no reason they would especially when their competitor now owns OpenClaw.

Because a big part of Anthropic's story is that they build based on how people actually use AI. Power users aren't just annoying edge cases, they're signal. Throttling them and calling it done is inconsistent with that.

> Power users aren't just annoying edge cases, they're signal.

You got that right; in this case they are signalling that AI token providers are not going to be able to run at a profit anytime soon.

Not sure if that helps or hurts your argument, though.


> Power users aren't just annoying edge cases, they're signal.

Not all power users. Some re-invent the wheel and/or do things inefficiently, and in most cases there's no business incentive to adapt the service to fit the usage patterns of those users, or of other users that deviate from the norm in regards to resource usage.


They build based on how _people_ use AI.

Sorry to tell you, but generally any company's "story" is all marketing and PR. If it interferes with making money, which it does in this case, that company will not hesitate to leave it behind.

Oh, the billion dollar VC-backed pre-IPO company's story was this? Omg, and they somehow are not delivering up to your standards? Damn, they better get their act together lest people like you whine on Twitter about them losing their way

> Why "should" they?

Because it is clear that there is a market demand for it.


There is also a clear market demand for $10 bills sold for $5, but I don't see you tapping into that opportunity!

They did: just use the metered API.

Isn’t that just usage based charges?

Don’t cry while you’re ruining it for everyone.

They did figure out how to structure an offering that accommodates that type of usage: pay for your tokens.

"Unlimited" has always been a lie. There is no free lunch. There are always limits.

I've had to unwind "unlimited" within startups that oversold. I've been bit by ISPs, storage providers, music streamers, fuckin _Ubers_, now AI subscription services, that all dealt in "unlimited". None of them delivered in the long run.

I'd be mad at Anthropic if it weren't for the fact that experience lets me see this sort of thing coming from a mile away. There are a lot of folks, even on HN, who haven't been around as long. I understand the outrage. I've been there. But these computers cost money to run, and companies don't operate at a loss in the fullness of time.

Once you know that unlimited trends towards limited, the real question is whether we're equipped as a society to deal with the fact that the capital-L Labor input to the economic equation is about to be replaced with a Capital input for which only a handful of companies have a non-zero value.


On a 1.5Mbps link, you could theoretically download 500GB per month. A huge amount, but I believe it was often genuinely allowed, because their uplinks could cope with it. Unlimited could genuinely be unlimited.

But now you might get things like “unlimited” 1Gbps… which reverts to 10Mbps (1% speed) or worse after 3.6TB (eight hours). And so your new theoretical maximum is about 6.8TB per month rather than 330TB.
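Sanity-checking those numbers (assuming a 30-day month and decimal units; the 3.6TB cap and 10Mbps throttle are the figures from the comment above):

```python
# Verify the "unlimited" bandwidth arithmetic from the comment above.
SECONDS = 30 * 24 * 3600                  # 2,592,000 s in a 30-day month

legacy = 1.5e6 / 8 * SECONDS / 1e9        # 1.5 Mbps all month -> ~486 GB

fast = 1e9 / 8                            # 1 Gbps in bytes/s
cap_bytes = 3.6e12                        # throttled after 3.6 TB
full_speed_secs = cap_bytes / fast        # 28,800 s = 8 hours at full speed
throttled = 10e6 / 8 * (SECONDS - full_speed_secs)  # rest of month at 10 Mbps
total = (cap_bytes + throttled) / 1e12    # ~6.8 TB actually attainable

unthrottled = fast * SECONDS / 1e12       # ~324 TB if never throttled
print(round(legacy), round(total, 1), round(unthrottled))  # 486 6.8 324
```

So the throttle quietly converts a nominal ~324 TB/month pipe into a ~6.8 TB/month one, while the old 1.5 Mbps line really could deliver its full ~486 GB.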


You can both know that "unlimited" means "limited" and also be pissed that they market it as such and try to conceal the actual limits.

>If gyms faced a situation where people would go and spend 18 hours working out every day for a month, they would probably change how they billed things.

Not the best example. The upkeep cost of a gym is pretty flat regardless of how much people use the facilities. Two people can't use a single machine at the same time and make it wear out twice as fast. The price of memberships is not correlated to usage; it's inversely correlated to the number of memberships sold.


That two people can't use a machine at the same time is exactly the issue. If you have 50 machines and 200 customers, all of whom want to be in the gym 18 hours per day, that's quickly going to lead to cancelled subscriptions. Now you need more space and machines or some other way to balance things.

>Two people can't use a single machine at the same time and make it wear out twice as fast

The machine doesn't care about the number of people using it. If it's constantly being used, it will wear out faster. You are conflating "we price based on expected under-utilization" with "costs don't scale with usage." Those are different things.

The inverse correlation you talk about isn't relevant here - People buy gym memberships intending to go, feel good about the intention, and then don't follow through. The business model is built on that gap. That's pretty specific to fitness and a handful of similar industries where aspiration drives purchase.

Anthropic doesn't sell based on a "golly gee I hope people dont use this" gap - they sell compute. Different business.


> Anthropic doesn't sell based on a "golly gee I hope people dont use this" gap - they sell compute. Different business.

There is nothing anywhere hinting at that.

They don’t sell compute. They sell a subscription for LLM token budgets that they hope people don’t use because the compute is vastly more expensive than what they charge or what users are ever willing to pay.

Especially with enterprise subscription plans the idea is for customers to never utilize anywhere close to their limits.


>If it's constantly being used, it will wear out faster.

Yeah, but there's an absolute limit to that, beyond which the cost doesn't keep increasing. Beyond that point, the QoS goes down (queues).

>You are conflating "we price based on expected under-utilization" with "costs don't scale with usage."

I'm not conflating anything, I'm responding to what you said:

>If gyms faced a situation where people would go and spend 18 hours working out every day for a month, they would probably change how they billed things.

Why would a gym need to change how they bill things if all their customers were aiming for maximal utilization, when their costs would barely see any change? I doubt your typical gym operates on razor-thin margins.


Gym costs absolutely scale with usage. Equipment wears faster under heavier use. Cleaning and maintenance staff hours scale with how much the facility is used. Consumables like towels, soap, and chalk go faster. HVAC runs harder. The reason gyms can offer flat-rate pricing is that they bet on under-utilization, not that costs are flat.

Setting that aside, even if we accept your argument that gym costs barely scale with usage, then that makes gyms a bad comparison case for Anthropic, whose costs directly scale with usage. You can't use the gym model to defend Anthropic's pricing decisions if the two cost structures are nothing alike.

I'm arguing that both gyms and Anthropic have costs that scale with usage, but the gym business model assumes a large margin of under-utilization and a hard cap on what a "power user" can do - and I think neither of those extremes applies to Anthropic's situation. Under-utilizers aren't paying for AI; they have a free tier. There's also a natural ceiling on how much any one person can use a gym. There's no equivalent constraint on API usage.


> The reason gyms can offer flat-rate pricing is that they bet on under-utilization, not that costs are flat.

Yes. In fact I remember hearing about a gym which offered a flat-rate pricing model but explicitly excluded certain professions from partaking in it. I remember the deal excluded police, bouncers, models, actors and air stewardesses. They had a separate, more costly tier for these people. (And I think I heard about it from the indignation the deal had caused online.)


> Under-utilizers aren't paying for AI they have a free tier.

Sure they do. Free tiers suck. I may not always need to use AI, but when I need it, I don't want to immediately get hit by stupidly low quotas and rate limits, or get anything but SOTA models.


>You can't use the gym model to defend Anthropic's pricing decisions if the two cost structures are nothing alike.

Am I? I think you read something into my comments that I didn't write.


À la carte is honest; overprovisioning just slows progress by preventing demand from creating pressure to innovate proper solutions.

The commons? Tragic.

> I feel like Anthropic is going down a bad path here with billing things this way.

What do you expect them to do? You are looking at a business currently running at a loss, and complaining about their billing even though this is not a price-rise?

Unrelated, is it still possible to use $10k/m worth of tokens on their $200/plan?


They seem to know what they’re doing. Anthropic entered 2025 with a run rate of $1 billion; the run rate for March 2026 is estimated at $19 billion.

Internal projections show the company reaching cash-flow break-even in 2028, after stopping cash burn in 2027.

They’ve already implemented several of the features that put OpenClaw on the map.


> Anthropic entered 2025 with a run rate of $1 billion; the run rate for March 2026 is estimated at $19 billion.

I don't know what that means in this context.

> Internal projections show the company reaching cash-flow break-even in 2028, after stopping cash burn in 2027.

What does that have to do with them implementing restrictions on their plans because they are currently running at a loss?

Okay, let's say their internal projections[1] are accurate: were those made before or after OpenClaw released? Maybe their projections were made on the assumption that people would stop using $10k/m worth of tokens on a $200/m plan? Or that the users doing that would only be doing code? Or that the plan users won't be running requests at a rate of 5/minute, every minute of every hour of every day?

--------------------------------

[1] Where did you find those projections? I'm skeptical, at their current prices and current plans, that a break-even at any point in the future is possible unless they shut off or severely scale down training. Running at a per-unit loss means that the more you sell, the larger your loss - increasing your sales increases your loss.


If you can do less for the same price, that is in effect a price increase.

>Especially as local LLM continues to develop so fast.

I'm sorry is there anything even close to sonnet, much less opus, that can be run on a 4080? Or 64gb of ram, even slowly?


You can run SOTA local MoE models very slowly by streaming the weights in from a fast PCIe 5 SSD. Kimi 2.5 (generally considered in the ballpark of current sonnet, not opus of course) has been measured as 2 tok/s on Apple M5 hardware, which is the best-case performance unless you have niche HEDT hardware with lots of PCIe lanes to attach storage to and figure out how to use that amount of parallel transfer throughput.

Look for the current crop of local Mixture of Experts models, where it seems like they've made inroads on the O(n^2) context attention cost problem. Several folks have mentioned Qwen, but there's many more of that ilk. Several of them actually score really high on benchmarks. But when I mess with one of them locally by hand myself, (I have a 3090), it feels a bit like last year's Sonnet. They don't quite make the leaps of understanding you get from Opus.

* Weird thing of the day: https://huggingface.co/Jackrong/Qwen3.5-27B-Claude-4.6-Opus-...


Qwen 3.5, Gemma 4

You can use the API with CC, you just need to log out and log in, selecting API usage.

> You are paying to be using that limit some of the time.

This makes zero sense. I'm paying to use that limit all of the time. If that's too much for Anthropic, they are free to lower the limits or increase the price. Claiming otherwise would be false advertising.


> Theoretically you can max out every 5 hour window, but they lose money on that.

Then it's not priced correctly. As I said, you can do all of this without OpenClaw. Claude Code ships with everything you need to maximize the limits.


It is priced incorrectly, but that is intentional. You can't create a tiered paid plan for the whole world that fits everyone. You can't create nuanced extra plans to satisfy all the outliers. It's a bet to keep the customers while still having a good margin. Think of e-commerce: returns are a big struggle for any large company because they are unpredictable and subject to abuse, and shipping fees are just a sophisticated guess to cover that cost. Not a subscription, but the same mechanics. The only thing to criticize here is whether it's a good thing to make everything a subscription and disguise the real cost.

>You can't create a tiered paid plan for the whole world that fits everyone.

I mean, you can. Electricity is already sold that way. Subscribers with uncharacteristic usage spikes don't get blackouts, they get a slightly larger bill, and perhaps get moved up a tier.
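A sketch of what electricity-style tiered token billing could look like. The tier boundaries, rates, and base fee are invented for illustration; the point is that a usage spike raises the bill rather than triggering a cutoff:

```python
# Electricity-style tiered billing: usage inside each tier is billed at that
# tier's rate. All boundaries, rates, and the base fee are invented figures.
TIERS = [
    (10_000_000, 0.0),    # first 10M tokens/month included in the base fee
    (50_000_000, 2.0),    # next 40M at $2 per million tokens
    (float("inf"), 5.0),  # everything beyond at $5 per million tokens
]

def monthly_bill(tokens, base_fee=20.0):
    bill, prev_cap = base_fee, 0
    for cap, rate_per_million in TIERS:
        in_tier = max(0, min(tokens, cap) - prev_cap)  # tokens billed at this rate
        bill += in_tier / 1_000_000 * rate_per_million
        prev_cap = cap
    return bill

print(monthly_bill(5_000_000))    # 20.0  -> light user pays the flat fee
print(monthly_bill(30_000_000))   # 60.0  -> a spike costs more, no blackout
print(monthly_bill(200_000_000))  # 850.0 -> heavy automation pays its way
```

Under a scheme like this there's no need for a ToS carve-out: the always-on agent simply lands in the top tier and funds the capacity it consumes.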


Very valid. My comment was fixated on the fact that big tech is addicted to subscriptions for everything. It's common to provide generic subscription plans for the masses and supply "call us" custom plans for specific (usually corporate) needs. If Anthropic doesn't provide that, or vibe coders are too cheap to use it, then those are issues, but the subscription model is itself valid. It is certainly misleading to a degree, but we stopped complaining about this a while ago.

It's pretty stupid because, as others in this thread have pointed out, it's already not a flat plan. Even from their side it makes zero sense to bill things this way rather than based on usage. It's not like a VPS, where your VM shares the hardware, which consumes electricity more or less regardless of what you use the machine for.

Those yottabytes of VRAM are also consuming electricity constantly.

The difference being that an LLM request is not an operating system. Since they're compartmentalized and ephemeral, you can very easily distribute requests among your available hardware so that you can switch off machines during periods of low activity.

Your capital costs for buying those machines don't go away.

That's a problem that already exists in power generation and delivery, and it's already been solved. Bills are sums of fixed terms and variable terms.

Custom payment schemes are late-stage profit generation. They require hordes of salespeople or an AI that can actually do math.

It's just how hyperscaling works. You are not wrong, but in the wrong timeline.


I'm not talking about custom, negotiated service contracts, I'm talking about simply charging people for what they use.

But that would mean using (a special Claude Code version of) the API; as it stands now, I have tried the current API for fun and I hit $200 well within an hour. So if they charged for real use, no one would use it, as there are competitors with less harsh limits who still offer tiered plans. If those all go away, then I will be running open models on vast.ai or similar, as those are now viable (I've been testing with GLM 5 and it's great for coding). So tiered subscriptions cannot go away, as that would end those companies fast.

No, it is priced correctly.

Just because outliers can be money-losing doesn’t mean you should raise the price for everyone.


> Just because outliers can be money-losing doesn’t mean you should raise the price for everyone.

If they are losing money then it's not priced correctly. That's what I responded to.

Yes, subscriptions work as you say. Plenty of people under utilize subscriptions from prime, to credit cards, to netflix. But if they lost money overall, they too would raise prices. Because that's how economics works. Shortage of capacity, high demand, raise prices until equilibrium.

There's other knobs beyond ToS. They just didn't choose those options.


> If they are losing money then it's not priced correctly.

Just a few years ago this was the standard business model for startups: attract VC money, offer plans at a loss, capture a huge market, boil the frog with incremental price increases to become profitable.

Companies like Uber wouldn't have been anywhere near as successful if they had been forced to make a profit from day one.


I think "offer unlimited but TOS ban behaviors that would cost too much to support" is actually a very normal way that things work instead of "raise prices until equilibrium is reached", including in credit cards. Credit cards do simply ban people they think are "rewards churning" based on a completely subjective TOS policy for example.

Raising prices is a bad strategy if you have a smaller base that costs enormously larger than the rest. "A million users that cost $1 and one user that costs $10 million, charge everyone $10 equilibrium", you're screwing over almost all of your users. The $20/month sub price is basically just not trying to capture the openclaw users, it doesn't make sense that all of the vanilla Claude users should subsidize them (and in fact it wouldn't even work because they will just go to Gemini or ChatGPT if your cheapest paid plan was very expensive to try to subsidize the other users)


Yes, they chose the knob of ToS, because that was the way to price it correctly.

The market will determine if it was the correct choice. I don't think it's an obviously bad choice on their part.

Yes, and they are in control of Claude Code, so they are fine with that. If it causes problems they can tweak it. If OpenClaw causes problems they can’t.

Efficient token use will be the new code/vim golf.

Whether it's human token use, or future OpenClaws


I've mentioned before that we should have a look at Telegraph/telegram speak. There was a HUGE industry in word efficiency at that time. There are hundreds of books.

I even think an LLM trained to communicate using telegram style might even be faster and way cheaper.


Reminds me of the Terminus agent/harness on the terminal-bench coding benchmark - they just send keystrokes to a tmux session. They score pretty well.

https://www.tbench.ai/news/terminus


Why use many word when few do trick?

> I've mentioned before that we should have a look at Telegraph/telegram speak.

.- -. -.. / .. --..-- / ..-. --- .-. / --- -. . --..-- / .-- . .-.. -.-. --- -- . / --- ..- .-. / -. . .-- / - . .-.. . --. .-. .- -- -....- -... .- ... . -.. / --- ...- . .-. .-.. --- .-. -.. ...


It’s the new cloud cost vector, where cutting 2K from context on a busy service saves $xxxxx.

Terse.


Like "Token Usage Consulting" companies popping up now? :-D

No org doing real work cares about token use costs.

This mainly just affects hobbyists.


Token use cost can easily get as large as dev salaries. Even real businesses care about that.

Tell me you are not using Anthropic without telling me. Bursts of unlimited usage was never the case. And I bet their infrastructure doesn’t like bursts as much as more spread out activity.

> You aren't paying to be using that limit all of the time.

The erosion of the norm of things doing what they advertise rather than being weasel-worded BS is particularly unfortunate, and leads to claims like this.


You can write automated MCP tools that run within Claude Code, and they could theoretically generate as high a load as any other automated/3rd-party agent. You can also do loops that burn tokens incredibly fast. This is allowed with no caveats (I use MCPs basically to test what I'd like to try with the API...). So this explanation just seems a lil hollow.

Yes, but very few people are actually doing that compared to OpenClaw. If everyone else was doing that, they'd be cracking down on it too.

When you can’t enforce everything at once, you go where the most acute problems are. I imagine when your MCP avenue of abuse catches on—like this other category of harnesses did—to such a scale as to become a problem impacting us folk trying to go about our business… when that’s where the problems shift, I imagine (and hope) Anthropic will crack down on that vector too. To keep the service usable for us ordinary meatbags.

I’m glad they give us the leeway to experiment, and I’m also glad they weed the garden from time to time. To switch metaphors, I’m deeply frustrated when my very modest, commuter-grade use gets run off the figurative highway by figurative hot-rodders. It’s been extra-529y this week, and it’s about time they reined it in a little.

You’re always welcome to pay-as-you-go for as many tokens as you’d like to burn on their infrastructure… or to compute against any of the wide array of ever-improving open models on commodity compute providers…


>>when your MCP avenue of abuse catches on

That's an interesting way of phrasing it - so is there a way to use the quota that's not 'abuse'? MCP/Claude Code seems to be what they want you to use - are loops or Ralph abuse as well?


It's not difficult at all to burn through your weekly limit just writing code.

While you can write an automated tool to consume all their tokens, I strongly suspect most users, like myself, are not doing that. So even if Anthropic loses money on a power user, they profit overall and keep public sentiment high by not alienating users with restrictions. It's an optimization problem: making a profit off the average user while staying cheap enough to attract customers, even if that means some users cost more than they pay.

More users spinning up OpenClaw means that balance starts to shift towards more users maxing their tokens, thus the average increases, so I think their explanation makes sense still.


>>So even if Anthropic loses money on a power user, they profit overall and keep public sentiment high by not alienating users with restrictions

So they profit overall if I use all my tokens either way? Again, I understand usage limits - I just don't understand why some usage is 'good' and some 'bad' if I'm using the same amount either way.

>>More users spinning up OpenClaw

I'm pretty sure that's a small percentage of overall users, and probably skewed towards the very people that would be recommending/implementing your model for work/businesses. Seems like that would be the group you are encouraging/cultivating?


My company has several MCPs that are very token intensive, but it seems that with Claude Code, usage is throttled even before hitting limits. I don't have any proof, but often when using intensive MCPs, Claude Code will just stall for 10+ minutes.

I wonder if anyone else has experienced this?


Anthropic is much more concerned about what people are ACTUALLY doing than what they could, in theory, be doing.

How can an OpenClaw user use 6 times what a human subscriber is using when I'm four hours into the week and 15% of my weekly limit is already used up, just by coding? OpenClaw can't use 600% of my weekly limits.

>How can an OpenClaw user use 6 times what a human subscriber is using when I'm four hours into the week and 15% of my weekly limit is already used up, just by coding?

Perhaps because your Claude agent usage is not representative of the average user, and closer to the average OpenClaw user levels...


Not sure what tier you're on.

Basically: spin-up in the morning eats a lot of tokens because the cache is cold. This has actually gotten worse now that Opus supports a 1M-token context.

So: compact before closing up for the night (reduces the size of the cache that needs to be spun up); and the default cache life is 5 minutes, so keep a heartbeat running when you step away from the keyboard to keep the cache warm.

Also, things like web-research eat context like crazy. Keep those separate, and ask for an md report with the key findings to feed into your main.

This is not an exhaustive list and it's potentially subtly wrong sometimes. But it's a good band-aid.
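The heartbeat trick above can be sketched in a few lines. The 5-minute TTL comes from the default cache life mentioned above; the 30-second safety margin and the idea of a `ping_due` check are my own assumptions, not anything Anthropic documents:

```python
# Sketch of heartbeat timing to keep a prompt cache warm.
# CACHE_TTL_S reflects the default 5-minute cache life discussed above;
# SAFETY_MARGIN_S is a made-up buffer so the keep-alive lands before expiry.
CACHE_TTL_S = 5 * 60
SAFETY_MARGIN_S = 30

def ping_due(last_activity_s: float, now_s: float) -> bool:
    """True once enough idle time has passed that a trivial keep-alive
    request is needed to stop the cache from going cold."""
    return (now_s - last_activity_s) >= (CACHE_TTL_S - SAFETY_MARGIN_S)

# Idle for 4 minutes: cache still warm, no ping needed yet.
assert not ping_due(0, 4 * 60)
# Idle for 4.5 minutes: inside the safety margin, send a keep-alive.
assert ping_due(0, 4 * 60 + 30)
```

The actual keep-alive would be whatever cheap request your setup supports; the point is just the timing.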

https://news.ycombinator.com/item?id=47616297

Know what's funny? OpenClaw might actually burn fewer tokens than a naive Claude Code user, if configured correctly. %-/


I'm on the $100 tier, but I don't use OpenClaw. My point is it can't use more than 100% of my limit, so "6-8x more" is only possible if you use 15% of your subscription normally.

Right. I was editing to add more info. Possibly you already know the usage tricks I list above. The world is still very messy and not much is documented properly.

And I'm skeptical of the 6x-8x claim myself. They'd have to explain that in more detail.
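Back-of-the-envelope, the only way a "6-8x" multiple can work is relative to the *average* subscriber, not the cap (the 15% figure here is purely illustrative):

```python
# Nobody can exceed 1x the hard cap, so "6-8x" must mean
# "6-8x what the average subscriber actually burns".
avg_human_utilization = 0.15   # hypothetical: a typical user burns ~15% of the cap
openclaw_utilization = 1.00    # a 24/7 agent pins the cap

multiple = openclaw_utilization / avg_human_utilization
print(round(multiple, 1))  # ~6.7x the average user, while never exceeding 1x the cap
```

So the claim is internally consistent only under an assumption about average utilization that Anthropic hasn't published.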


Oh actually the cache trick is neat, I hadn't considered it, thanks!

Man, I run 3-5 sessions an evening for 5-6 hours, and longer on weekends and feel like I'm barely using what I paid for. I've only hit five hour limits a small number of times. Genuinely baffled when I hear people blow through tokens apparently several times faster than me. Are you going out of your way to design complex subagent workflows or something? I just let claude code use subagents when it wants to but don't give it any extra direction to use them.

Without data, this is just a bunk excuse to defend the walled garden practices.

With data, it's an engineering target.

They could just 429 badly behaved clients.


They already 429 everyone! That's the crazy thing. They already have strict limits that we all keep hitting regularly.

You guys are arguing about the reality of a subscription, but Anthropic still resides in the cuckoo made-up world of growth at all costs backed up by unfathomable investments. They're not acting rationally by trying to present a good product with reasonable backend fundamentals. They're just trying to keep the money loss within what they have set aside for the quarter. OpenClaw was not planned for, and thus must be fought.

There are multiple reasons why this makes sense for Anthropic

- The intention of subscriptions, as anywhere, is a combination of trying to promote brand loyalty, and the gym membership model of getting people to pay for resources that many will never use. As the parent noted, people maxxing out their allowed usage, for whatever reason, are not the most profitable customers, and in this case probably not profitable at all

- OpenClaw is now owned by a competitor, OpenAI, and Anthropic are trying to compete in this space

https://www.semafor.com/article/04/03/2026/anthropic-eyes-it...

- Anthropic are capacity constrained, having sensibly chosen to err on the side of safety (not going bankrupt), and are now trying to do the best they can to manage that.

Presumably they might be acting differently if they had capacity to spare, but even then helping a competitor to build market share in a potentially lucrative segment doesn't make strategic sense.

I do wonder about the wisdom of Anthropic promoting usage-maxxing development patterns such as running a dozen agents in parallel ... maybe not the wisest thing to do when capacity constrained! It would make more sense to promote usage at night with low priority "batch jobs" rather than encourage people to increase usage during period of maximum demand.


> Everyone on a plan pays for the max set of tokens in that plan.

From Anthropic's perspective, everyone pays to be in bins with a given max.

And to everyone's benefit, there is a wide distribution of actual use. Most people pay for the convenience of knowing they have a max if they need it, not so they always use it.

So Anthropic does something nice, and drops the price for everyone. They kick back some of the (actual/potential) savings to their customers.

But if everyone automates the use of all their tokens Anthropic must either raise prices for everyone (which is terribly unfair for most users, who are not banging the ceiling every single time), or separate the continuous ceiling thumpers into another bin.

That's economics. Service/cost assumptions change, something has to give.

And of the two choices, they chose the one that is fair to everyone. As opposed to the one that is unfair (in different directions) to everyone.


Yes, that's mostly what I'm saying, but it forgets the important part:

From the email: > but these tools put an outsized strain on our systems. Capacity is a resource we manage carefully and we need to prioritize our customers using our core products

OpenClaw doesn't put an outsized strain on their systems any more than Anthropic's own tools do. They just happen to have more demand than they can serve, and they benefit more when people use their own tools. They just aren't saying that explicitly.

It has nothing to do with fairness or being nice.


If this was a gym subscription, it would be an equivalent of some people going to the gym, and some people sending their android to the gym every day, for the whole day, and using as much equipment as the gym policy allows.

It would be like some people sending the gym's competitor's android to the gym instead of the android the gym provides. Said gym also doesn't have enough equipment for everyone's gym appointed android despite being more expensive. Said gym doesn't want to admit this, nor does it want to raise prices on an already more expensive subscription. Said gym doesn't want competitor's android to gain marketshare. Said gym blames competitor's android for using up gym equipment despite gym's own android being capable of using as much equipment.

> using as much equipment as the gym policy allows.

which said customer paid for. And now they want to back out of it because users actually did what they assumed users wouldn't.

I say they ought to be punished by consumer competition laws - they need to uphold the terms of the subscription as understood by the customer at the time of the sign up.


> there is a wide distribution of actual use

except when people start using OpenClaw, and the distribution narrows (to that of a power user).

I hate companies that oversell capacity but hide it behind the expected usage distribution. Same goes for internet bandwidth from ISPs (or download limits - rarer these days, but they exist).

Or airplane seats. Or electricity.


> I hate companies that oversell capacity but hide it behind the expected usage distribution.

Except they charge you less because of the distribution. Competition for customers doesn't evaporate.


The trade-off is that if you set your usage limits so that you can handle the case where everyone is saturating their limit at all times, then (1) the usage limits would be too small and (2) you're optimizing for a usage pattern that doesn't exist and (3) you're severely underprovisioning, which is worse for everyone.

Instead, you can prioritize people "earnestly" bursting to the usage limits, like the users who are actually sitting at their computer using the service over someone's server saturating the limit 24/7.

The goal is to have different tiers for manual users vs automated/programmatic tools. Not just Anthropic, this is how we design systems in general.
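As a toy illustration of why provisioning for saturation is so different from provisioning for bursty interactive use (every number here is made up, and the normal approximation is just a convenient sketch):

```python
import math

def peak_concurrency(n_users: int, p_active: float) -> int:
    """Approximate capacity needed so concurrent demand almost always fits,
    using a normal approximation to the binomial at the ~99.9th percentile."""
    mean = n_users * p_active
    std = math.sqrt(n_users * p_active * (1 - p_active))
    z = 3.09  # ~99.9th percentile of the standard normal
    return math.ceil(mean + z * std)

n = 100_000
interactive = peak_concurrency(n, 0.05)  # humans active ~5% of the time: ~5,200 slots
saturating = peak_concurrency(n, 1.0)    # 24/7 agents: all 100,000 slots, always
print(interactive, saturating)
```

Statistical multiplexing is the whole reason subscriptions can be cheaper than API pricing; automated 24/7 loops collapse the distribution it depends on.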


Well, earnest here just means using Claude Code directly or the Claude app. Both of which just happen to support using tokens while you sleep!

Defining earnest (placeholder word btw) is the hard part of the trade-off, though.

When your least automated, most interactive users are competing for capacity with fully-automated tools, let's say, you're forced to draw some sort of boundary between these groups.

OpenClaw is a self-directed, automated loop that sits on a server. It's wowing its owner by shitposting on moltbook and doing any number of crazy things; you can find stories online that amount to "omg I can't believe my self-directed claude loop spent all day doing this crazy thing haha."

On the other end of the spectrum is someone using Claude.app's interface.

And then in the middle, you can imagine "claude -p" inside a CI tool that was still invoked downstream of a user's action. Still quite different from the claude loop.


Claude code has /loop. Claude app has scheduled tasks. The leaked source has a proactive mode.

I'm sorry, but this framing just doesn't make sense.


Even with those tools, the usage of Claude Code with all of them turned on is going to trend much lower than OpenClaw usage. Everyone that I've seen with OpenClaw will intentionally waste tokens just to make sure they hit the cap, even if they're doing useless stuff with it. And it can be going 24/7, every minute constantly, while the intended purpose with scheduled tasks is to use them at a set rate but not nearly constantly.

Definitely. They will see less usage. That's good for them because they have infra scaling issues that they don't care to admit explicitly. Their competitor will also get less telemetry (if they enable it). It's a win win.

I don’t really follow what you’re saying. You mention the 5 hour limit. Is your expectation that they have enough capacity so that everyone can hit their 5 hour limit all the time? Or you are proposing that’s how they limit capacity for a subscription?

Do you have an example of how this is how they have advertised or sold the plan? I don’t recall ever seeing any advertisement that their plan is simply pre paying for tokens.


This is what I've been wondering about for a while now. I have the 20x plan as well, which I thought would allow me to try some API coding - but you get zero API usage.

As you said, I would imagine where the token usage comes from is irrelevant - you are generating the same load whether you do it from claude code or some other agent. So it seems like the rules are more to do with encouraging claude code usage, rather than claude model usage.


Claude code is still getting used by these agents. They banned the mimicry a while ago and said claude -p was fine.

OpenClaw just happens to also get telemetry, of probably higher value, out of the same tokens. It also happens to be owned by their competitor.

edit: I'm wrong, OpenClaw surprisingly doesn't collect telemetry. Good for them.


Are we banned from using `claude -p` now?

You’re missing something. I’m pretty sure it’s not only about the cost. Anthropic literally doesn’t have enough compute. They have to balance the load between enterprise customers and end users with subscription. If you consider they don’t have infinite compute (ie at their scale there is a limit to how much is available in a given region) and something is causing subscription users to increase usage significantly they do have to find a way to balance.

At least that’s my read. I don’t believe it is nefarious


Look, guys, I use AI to help me re-write shit, but for HN comments?

(Maybe I'm just being paranoid here).


Exactly your point. Anthropic is subsidizing their own tools to keep people on their platform. What's wrong with that?

These agents (Claude Code/Cowork/claude.ai) are priced separately from raw model tokens, and Anthropic wants to discount usage of its own products.

The subscription they sell is a package of these products, not tokens. They never sell token subscriptions, so why do we need to equate tokens with the subscription? Fundamentally, they never meant to sell token usage in that subscription, similar to any other SaaS company that sells API usage separately from its products.


> What's wrong with that?

Nothing beyond fumbling the PR around it.


If they bundled together these two radically different usage patterns, either the service would become more expensive or the limits would become a lot tighter, in both cases making Claude Code far less attractive to professional users.

OpenAI does this btw, it is why I still have that sub.

OpenClaw is a mass-market project that's doing something in the background 24/7.

I haven't even heard of claude -p before your comment.

OpenClaw is for sure not just a good cover story. Or rather, it's the public face of the broader issue of automated tool workflows.

I don't think they are bothered too much about other frontends who do the same as claude code.


-p gets penalized; it's not worth using.

It’s a shame they do all this sketchy stuff. I switched to Codex; I’ve had enough of their BS.


> The agent consuming those tokens is irrelevant.

This is so wrong.

The subscription is to Claude (the app, Claude code, etc) not the API.

Anthropic subsidizes Claude code because they collect a ton of super useful telemetry and logs so they can improve… Claude code.

Wanting to pay for a subscription to Claude and treat it like an API discount is like going to an all you can eat buffet and asking them to bring unlimited quantities of raw ingredients to you so you can cook at home. Ok, not a perfect analogy, but you get the idea.


> Anthropic subsidizes Claude code because they collect a ton of super useful telemetry and logs so they can improve… Claude code.

You just paraphrased my argument


You are still misunderstanding.

If you max out your token limits, you are costing Anthropic more than you are paying them. They only expect a small percentage of their users to do this, but OpenClaw changed the dynamic.

Anthropic knows that they will lose more users by lowering limits than they will by blocking OpenClaw, because OpenClaw users will overwhelmingly switch to API pricing, while chatbot users will leave for competitors with higher limits.

They are a business. They hope to become profitable. This was the correct move.


How many tokens does the $20/month buy me? I want to know what those hard token limits are but they refuse to tell me. I'm pretty sure they've reduced those limits the last week but they won't admit it. It feels like a scammy pricing model.

I agree, I think consumers appreciate transparency.

To some degree, sure. But is it about the number of tokens you can max out?

I’m pretty happy knowing that it supports my development workflow for a week. Recent features like the Code Desktop built-in browser, Cowork with Claude in Chrome, and remote control matter to me way more than the number of tokens. But that’s me.

Depends on their targeted ICP also, which they are free to define. Is it those users maxing out tokens for the buck? I have the feeling there’s even better alternatives on the market right now.


> I’m pretty happy knowing that it supports my development workflow for a week

For many it doesn't. It's opaque, it changes, and they bury the news in fucking twitter. https://x.com/trq212/status/2037254607001559305

There's a lot to love about Anthropic. But man do they suck at PR.


Oh no, man fell in love with corporation

Exactly.

Subscriptions are crazy subsidized.

So you can’t use OpenClaw, OpenCode, etc., because they take you outside their applications/lock-in and undermine their ability to easily monetize in the future.


OpenAI allows you to use your sub with any of these tools.

First, OpenClaw is OpenAI. [1]

Second, OpenAI is burning UNIMAGINABLE sums of money. Three days ago they raised $122 billion [2], the largest funding round in history. By comparison, Anthropic has emphasized a more capital efficient approach, with a ~30% burn rate. [3]

[1] https://x.com/sama/status/2023150230905159801

[2] https://openai.com/index/accelerating-the-next-phase-ai/

[3] https://www.wsj.com/tech/ai/openai-anthropic-profitability-e...


yes and then still subsidise subscriptions by an order of magnitude

it's obvious they will tighten everything and raise prices for years to come


AI can use gdb.

How do you achieve this behavior? Sorry, I haven't done research on it, so the answer might be super easy, but I'm curious what your solution is.

I haven’t done it specifically, but it shouldn’t be much different from other tool calling.

No-one started as a vim master.

Your arguments here are valid, for a particular kind of person who values a particular kind of workflow.

Some of us would rather use vi than vscode. If you take away the plugin ecosystem, the core value is still there.


Just pin the plugin or don't use it.

Not a choice if you need a specific new feature or a certain fix.

The entire software development world would be much simpler if nobody needed new features, bugs and CVEs didn't exist, or "just pin the version" always worked.


Neovim's API isn't (yet) fully stable. So updating Neovim could also break a plugin.

https://45press.com/ would be my guess.

If this is referring to the US, yes indeed. We're great at the whole free market, fiduciary duty to shareholder bit. We're terrible at using law to manage the negative externalities.

I have bad news, I don't think we're that great at the free market either.

There's a distinct difference between food retail and food service. This kind of regulation will harm the latter; it does not belong there. Are we going to weigh every pizza, every omelette, every side of fries too? We don't need to sterilize every single part of our food culture.

Anyone who has spent even a short amount of time in the food service business will be familiar with shrink. The average bar is probably seeing more than 15% shrinkage. The short pours are probably not offsetting that loss. Margins are thin.

Solutions for the neurotic drinker this website appeals to:

- order a can or bottle
- buy retail and stay home
- go to a self-pour joint and pay by volume. Bonus: you don't have to talk to anyone.

Otherwise put away the scale and talk to the bartender. Chances are you come away with plenty of free beer. Most small taprooms will help you find a beer you like by giving you free beer. If you're obsessing over getting what you paid for in food service, you're missing out on the true value of that industry.

Let's not harass our bartenders, who have a hell of a tough job, with scales. I spent years behind the case of a cut-to-order cheese shop. There's a time and place for scales. This is not it.


Bars are dying and are on thin margins so they have to do short pours, but if I just talk to a bartender, he'll give me plenty of free beer?

Yes, generally food service operates on thin margins. A neighborhood brewery probably won't be profitable for the first few years, then if successful might stabilize around a 15% net profit margin.

If you go to a beer bar or a tap room, a large part of the role of a bartender is helping you find beer that you like. Successful bars and bartenders thrive from repeat customers. Community is important. This is very obvious if you actually sit down at bars and talk to the people behind them.


Wait until they discover cocktails.

Bickering over a few dollars when you're paying a premium price for the experience of going out is rather silly. There's better uses of your time, like enjoying yourself.

If you're not familiar with the mess that was nips: https://carolinas.eater.com/2015/10/16/9553903/mini-bottles-...


I'm sure the same individuals will calculate tip to the cent as well.

As someone who worked in specialty foods for years, you get what you get. Flaws and all. That's part of the charm of this industry. This is especially true for small craft breweries. If you insist on accuracy then ask for a can/bottle list. If you insist on consistency then buy macro.


The extreme factions of the right are a very small portion of the electorate. They generally don't decide elections beyond the primaries and generally turn out in favor of the right regardless.

Dems lean more on moderates/independents. Trump won because he persuaded that group, particularly the young men.

https://www.thirdway.org/memo/why-republicans-can-win-with-t...


25-33% of the electorate is no small fraction. There's a group of people who have been consistently supportive of this government's policies since 2016. Take any policy survey, and the fraction that supports the right-wing side of action always amounts to a consistent 25-33% of the votes.

You're going to have to define extreme right with those percentages. You think 25-33% of the electorate is extreme right?

Definitely MAGA, even if not violently far right.
