Update: it sounds like the author had some life changes and had to stop his YouTube channel, which was the primary motivator for the library, but forks are popping up (https://github.com/canvas-commons/canvas-commons).
---
This notion that an open source library is "dead" and shouldn't be used because it's not being actively updated is a bit odd. You can fork and fix issues you have. It's got years of many people's work put into it. It's a great library and widely used on YouTube and elsewhere.
The Reddit link mentions that they only reported what is now issue #9, and Bitwarden has said it's working as intended, so that's why they were "ignored" 4 years ago.
But wouldn't a less efficient tool simply consume your 5-hour/weekly quota faster? There's gotta be something else: probably telemetry, maybe hoping people switch to the API without a fight, or simply vendor lock-in.
> But wouldn't a less efficient tool simply consume your 5-hour/weekly quota faster?
Maybe.
First, Anthropic is trying to manage user satisfaction as well as costs. If OpenCode or whatever burns through your limits faster, are you likely to place the blame on OpenCode?
Maybe a good analogy is when DoorDash/GrubHub/Uber Eats/etc. signed up restaurants to their systems without permission. When things didn't go well, customers blamed the restaurants, even though it wasn't their fault, since the restaurants had chosen not to support delivery at scale.
Second, flat-rate pricing, unlike API pricing, charges the same for cached and uncached tokens iirc, so even if total token limits are the same, less caching means higher serving costs.
Again, a subscription gives you a fixed allotment of tokens; it doesn't matter whether you consume them with Claude Code or with a third-party tool, both get the same number of tokens and thus cost Anthropic the same.
In fact, it might even be better for Anthropic if people use third-party tools that cache suboptimally, because cache hits don't consume the fixed allotment, so Claude Code users get more of a free ride and thus cost Anthropic more money.
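The cached-vs-uncached serving-cost point above can be sketched with some back-of-envelope arithmetic. All numbers below are invented for illustration; they are not Anthropic's real rates, limits, or cache discounts:

```python
# Illustrative sketch: cost to the provider of serving a fixed subscription
# token allotment at different cache-hit rates. Every constant here is a
# made-up assumption for illustration only.

ALLOTMENT_TOKENS = 1_000_000   # hypothetical tokens per billing window
COST_PER_UNCACHED = 3.0e-6     # hypothetical serving cost per uncached token
CACHED_DISCOUNT = 0.1          # assume a cache hit costs ~10% of a fresh token

def provider_cost(tokens: int, cache_hit_rate: float) -> float:
    """Cost to serve `tokens` when `cache_hit_rate` of them hit the cache."""
    cached = tokens * cache_hit_rate
    uncached = tokens - cached
    return uncached * COST_PER_UNCACHED + cached * COST_PER_UNCACHED * CACHED_DISCOUNT

# A client that caches well vs. a third-party tool that caches poorly,
# both consuming the same flat-rate allotment:
well_cached = provider_cost(ALLOTMENT_TOKENS, cache_hit_rate=0.8)
poorly_cached = provider_cost(ALLOTMENT_TOKENS, cache_hit_rate=0.2)

print(f"80% cache hits: ${well_cached:.2f}")    # $0.84 under these assumptions
print(f"20% cache hits: ${poorly_cached:.2f}")  # $2.46 under these assumptions
```

Under these (made-up) assumptions, serving the same fixed allotment costs the provider roughly 3x more when caching is poor, which is the asymmetry the two comments above are arguing about: whether that extra cost lands on Anthropic or is offset by cache hits not counting against the allotment.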
Presumably most people also don't use their full quota with the official client, whereas third-party clients could be set up to start back up every 5 hours and use 100% of the quota every day and week.
It's the whole "unlimited storage" discussion again.
Terminal scrolling opens a big can of worms for them, I doubt they'll ever implement it. The best you can do is enable scrollbars in opencode so you can quickly jump places.
I don’t think OpenAI gets enough credit for exposing GPT via an API. If the tech had remained only at Google, I’m sure we would have seen it embedded into many of their products, but I wouldn’t have held my breath for a direct API.
Yeah, for all that people make fun of the "Open" in the name their API-first strategy really did make this stuff available to a ton of people. They were the first organization to allow almost anyone to start actively experimenting with what LLMs could do and it had a huge impact.
DeepMind wrote the paper, and while Google's API arrived later than OpenAI's it isn't as late as some people think. The PaLM API was released before the Gemini brand was launched.
Microsoft funded OpenAI and popularized early LLMs a lot with Copilot, which used OpenAI but now supports several backends, and they're working on their own frontier models now.
Yeah, and it was OpenAI that scaled it, initiated the current revolution, and actually let people play with it.
> while Google's API arrived later than OpenAI's it isn't as late as some people think.
Google did not launch an API for PaLM until 2023, nearly 3 years after OpenAI's GPT-3 launch.
Yeah, let's not pretend OpenAI didn't spearhead the current transformer effort, because they did. God knows how far behind we'd be if things had been left to Google.
Looks like there are two main approaches to AI-first development: (i) favor slow responses to produce a high-quality result upfront, or (ii) favor quick responses to enable a faster response-test-query iteration loop. Based on the comments here, it seems Codex isn't a great fit for the latter. Ideally, a developer should be able to switch between the two approaches depending on the problem at hand.