I feel they should leave an opening for claws that use the Claude Code SDK (like nanoclaw), because they still operate on behalf of the main user. The same rate limits could apply as for CC, so why not?
Or maybe even let us use Haiku only with claws?
But if this becomes a hassle, I won’t mind giving my $200/month to OpenAI instead.
Because for whatever reason, they seem to have way higher quotas even on the $20 plan.
Time for nanoclaw to add an adapter to work with other SDKs.
>Why bother with a drawing tool when you can literally mockup with real components and react etc.
I think that there's still value for the canvas (I'm a UX designer). I like seeing changes in the vector tool first, and then pushing out any changes to a JSON file where it can be used by the AI tool. That being said, this is just my preferred way of working, so somebody else may not even want to use Figma.
I totally get that. But will it soon be your primary mode of output, or just another way to try ideas? Like how we sketch on paper to get the creative juices flowing.
I take 4-5 trips a year and I didn't hesitate for a second to pay for Flighty, because this was one of those "man, these guys deserve to be rewarded for the amazing job they've done" moments.
I have had at least 2-3 situations where Flighty gave me information before the airport did, and I ended up being the guy informing a few fellow passengers of the status of our flight before the airline did.
They've chosen a niche, have executed extremely well, and I'm happy to throw $50/year at them to say thank you for an excellent product that does everything I want.
My ONLY complaint is that during a flight, Flighty's Live Activity (or something) uses up a TON of battery. It seems unlike them to overlook such a thing when the rest of the app has such polish and attention to detail (form- and function-wise).
It's slower than text mode. Basically, you can print anything as long as you can convert it to a monochrome bitmap before sending. But thermal printers have another mode that prints extremely fast if your data is textual with rudimentary formatting, like order slips.
I believe the speed also depends on how many activated dots there are per line in the image, as thermal print heads often have a limit to the number of elements that can be activated at once.
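For the curious, the bitmap mode described above boils down to packing one bit per dot and prefixing a raster command header. A minimal sketch, assuming an Epson ESC/POS-compatible printer (the `GS v 0` raster command) and a hypothetical 16x2 test pattern:

```python
# Sketch: packing a monochrome bitmap into an ESC/POS "GS v 0" raster
# command. Assumes an Epson-compatible thermal printer; the test
# pattern below is made up for illustration.

def pack_raster(pixels, width, height):
    """Pack 0/1 pixels (row-major, 1 = black dot) into a GS v 0
    payload: command header, then one bit per dot, MSB first."""
    bytes_per_row = (width + 7) // 8
    data = bytearray()
    for y in range(height):
        for bx in range(bytes_per_row):
            b = 0
            for bit in range(8):
                x = bx * 8 + bit
                if x < width and pixels[y * width + x]:
                    b |= 0x80 >> bit  # leftmost dot = most significant bit
            data.append(b)
    header = bytes([0x1D, 0x76, 0x30, 0x00,            # GS v 0, normal mode
                    bytes_per_row & 0xFF, bytes_per_row >> 8,
                    height & 0xFF, height >> 8])
    return header + bytes(data)

# 16x2 checkerboard-ish test pattern: row of 1,0,... then row of 0,1,...
pixels = ([1, 0] * 8) + ([0, 1] * 8)
cmd = pack_raster(pixels, 16, 2)
# You'd then write cmd to the printer's serial/USB endpoint.
```

This is also where the per-line dot limit bites: every set bit is a heating element fired on that pass, so denser rows print slower.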
I say this as someone who uses many Apple products, but still can't justify buying this. (I do have AirPods but have wanted headphones so I don't have to stick something into my ears)
If you try to understand this stuff outside the context of fashion, you'll go around in circles (as I did).
If you see this through the lens of "people will pay anything to signal various things to others" and "you can charge whatever the market will bear" then it all adds up.
Then look at how Anthropic basically acqui-hired the entire Bun team. If the CS fundamentals didn't matter, why would they do that?
Even Anthropic needs people who understand CS fundamentals, even though pretty much their entire team now writes code using AI.
And since then, Jarred Sumner has been relentlessly shaving performance bottlenecks off Claude Code. I have watched startup times come way down in the past couple of months.
Sumner might be using CC all day too. But an understanding of those fundamentals (more a way of thinking than specific algorithms) still matters.